Sen. Peter Welch, D-Vt., has introduced the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act, a bill aimed at increasing transparency in how generative AI models are trained. The proposed legislation would allow copyright holders to subpoena AI developers to confirm whether their works were used without permission during training.
Under the bill, developers would be required to disclose enough of their training data to verify whether specific copyrighted works were used. Non-compliance would create a legal presumption that the developer infringed the copyright, shifting the burden of proof to the AI company. Welch described the act as a way to ensure that artists, musicians, and other creators can track unauthorized use of their works and receive appropriate compensation.
The rise of generative AI has heightened concerns among artists and creators who fear their work is being used to train models without credit or consent. Cases like the leaked Midjourney spreadsheet, which identified thousands of artists whose work was reportedly used without permission, have intensified calls for regulation. Lawsuits from major entities, such as The New York Times and leading music labels, highlight the growing legal tensions around AI training data.
The TRAIN Act has garnered support from groups like SAG-AFTRA, the Recording Academy, and major music labels. However, with limited time left in this congressional session, the bill may need to be reintroduced next year. This comes amid broader efforts in Congress to regulate AI, including proposed bills addressing deepfakes and personal data use in AI training.