
Updated: February 2026
Intellectual Property Reform
Adapting copyright, patent, and licensing frameworks to clarify rights over AI training data, establish compensation mechanisms for creators whose works train AI systems, and determine the ownership status of AI-generated outputs.
What it is:
Intellectual property reform for AI addresses the fundamental tension between the data-intensive requirements of machine learning and the rights of creators whose works are scraped, often without permission, to train models worth billions of dollars. The core policy questions involve both inputs and outputs. On the input side: whether creators should be compensated when their copyrighted works are used as training data, and through what mechanisms (opt-out provisions, licensing schemes, transparency mandates). On the output side: whether AI-generated content should receive IP protection, and who should own it.
In the context of AI-driven economic transformation, these frameworks determine whether productivity gains from AI accrue primarily to model developers or are shared with the creative labor force whose works made those models possible. Reform proposals range from expanding "fair use" exceptions for training, to mandatory licensing schemes modeled on music-industry royalty systems, to transparency requirements that would let creators verify whether their works were used.
Recommended Reading:
Simon Chesterman
Good models borrow, great models steal: intellectual property rights and generative AI
January 2025
Chesterman identifies two critical policy questions that will determine generative AI's impact on the knowledge economy: whether creators whose works are scraped should be compensated, and who should own AI-generated outputs. He argues that while markets for "legitimate" models trained only on licensed or public domain works are emerging (Adobe's Firefly, Shutterstock's Contributor Fund), most AI development has proceeded without recognition of creators' contributions. He proposes that developers should at minimum disclose the origins of training data, with compensation paid where appropriate, and that a new category of "computer-generated work" could offer a reasonable middle ground between full copyright protection and public domain status for AI outputs.
UK Intellectual Property Office
Copyright and artificial intelligence: statement of progress under Section 137 of the Data Act
December 2025
The UK government's consultation proposes expanding the text and data mining exception to allow commercial AI training, provided rights holders can opt out through standardized technical mechanisms. The consultation articulates three objectives: control (rights holders should be able to license and seek remuneration), access (AI developers should be able to train models lawfully without infringing copyright), and transparency (greater clarity about works used in training and outputs). Following intense debate, the UK's Data (Use and Access) Act 2025 mandated an economic impact assessment of reform options by March 2026, with working groups convened on technical standards, transparency requirements, and licensing frameworks.
TRAIN Act
August 2025
The bipartisan Transparency and Responsibility for Artificial Intelligence Networks Act, introduced by Senators Blackburn (R-TN), Welch (D-VT), Hawley (R-MO), and Schiff (D-CA), establishes an administrative subpoena process enabling copyright holders to compel AI developers to disclose whether their works were used in training data. Modeled on procedures used for internet piracy cases, the bill requires only a good-faith belief that copyrighted material was used, creating a rebuttable presumption of copying if developers fail to comply.
Really Simple Licensing (RSL) Standard
December 2025
The Really Simple Licensing (RSL) 1.0 standard, developed by the RSL Technical Steering Committee (including Yahoo, Ziff Davis, and O'Reilly Media) and supported by Reddit and Medium, establishes a machine-readable framework that augments robots.txt to define AI usage rights. The RSL Collective functions as a collective rights organization (analogous to ASCAP or BMI) to negotiate on behalf of publishers.
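The mechanics of an RSL-style deployment can be sketched in code. The snippet below shows how a crawler might detect a machine-readable licensing pointer published alongside standard robots.txt rules; the `License:` directive name and file location used here are illustrative assumptions, not the published RSL 1.0 schema.

```python
# Illustrative sketch of checking a site's robots.txt for a machine-readable
# licensing pointer, in the spirit of RSL's augmentation of robots.txt.
# NOTE: the "License:" directive and the .well-known path are assumptions
# for illustration; consult the RSL 1.0 specification for the actual syntax.

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

# Hypothetical machine-readable licensing pointer
License: https://example.com/.well-known/license.xml
"""

def find_license_urls(robots_txt: str) -> list[str]:
    """Return the values of any 'License:' lines in a robots.txt document."""
    urls = []
    for line in robots_txt.splitlines():
        line = line.strip()
        if line.lower().startswith("license:"):
            # Split on the first colon only, so the URL's own colon survives.
            urls.append(line.split(":", 1)[1].strip())
    return urls
```

A compliant AI crawler would fetch the referenced license file before training on the site's content, and a collective such as the RSL Collective could negotiate the terms that file encodes on publishers' behalf.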
Real-world precedents:
The music industry's collective licensing infrastructure — where organizations like ASCAP, BMI, and SESAC pool rights to negotiate blanket licenses with radio stations and streaming services — offers a model for aggregating fragmented copyright claims at scale.
In February 2025, Thomson Reuters v. Ross Intelligence became the first case rejecting fair use for AI training, finding that using Westlaw headnotes to train a competing legal research tool was not transformative. However, in June 2025, Kadrey v. Meta and Bartz v. Anthropic found AI training to be highly transformative fair use.
The EU's Digital Single Market Directive established opt-out text and data mining exceptions, while the EU AI Act requires general-purpose AI model providers to publish training data summaries that enable rights holders to exercise their rights.
© 2026 Windfall Trust. All rights reserved.