Allegations of Misleading Evidence
In a legal showdown, OpenAI has taken aim at The New York Times, accusing the newspaper of "hacking" its AI products. OpenAI alleges that The Times paid someone to manipulate its systems, including ChatGPT, using deceptive prompts that violated OpenAI's terms of use. The company claims this effort was intended to generate misleading evidence for The Times' copyright lawsuit against OpenAI and Microsoft.
Legal Maneuvering in Federal Court
OpenAI's allegations came to light in a recent filing in Manhattan federal court, in which the company seeks to dismiss parts of The Times' lawsuit. The filing asserts that The Times caused OpenAI's technology to reproduce copyrighted material through deceptive prompting. The Times' attorney, Ian Crosby, counters that what OpenAI calls "hacking" was simply using OpenAI's own products to search for evidence that the companies stole and reproduced The Times' copyrighted work.
The Larger Copyright Battle in AI Training
The Times' lawsuit, filed in December 2023, alleges that OpenAI and Microsoft used millions of its articles without authorization to train chatbots. This legal clash underscores broader concerns about the use of copyrighted material in AI training. Copyright holders, including authors, visual artists, and music publishers, have filed similar lawsuits against tech firms, challenging their practices in AI development.
Fair Use and Copyright Law
The legal dispute raises broader questions about fair use in copyright law, particularly as it applies to AI training. OpenAI has contended that training advanced AI models without incorporating copyrighted works is impossible, and tech firms more broadly argue that their AI systems make fair use of copyrighted material, emphasizing the industry's potential for growth.