Prepare for a worldwide wrangle over copyright, deepfakes and more
The Economist
Science & technology
Start with the litigation. Representatives of nearly every creative industry have filed copyright-infringement complaints against generative-AI companies for using their material, without payment or permission, to train their AI models. Most of the legal action is in America, where OpenAI and Microsoft are being sued by the New York Times, and Anthropic is being pursued by parties including Universal Music Group. In Britain, Stability AI is being sued by Getty Images. All deny wrongdoing.
These and other disputes may be settled out of court: some see the lawsuits as a negotiating tactic by content companies to make tech firms cough up. OpenAI has made at least 29 licensing deals with platforms and publishers, from Reddit to the Financial Times, according to a tally by Peter Brown of the Tow Centre for Digital Journalism at Columbia University. (The Economist Group, our parent company, has not taken a public position.) The value of OpenAI’s deals alone already exceeds $350m, by Mr Brown’s reckoning. A rocky time in court in the coming months could cause that figure to rise.
If claimants resist settling, legal precedents will be set in 2025 that could shape the tech industry for years to come. In America the tech companies are narrow favourites to win. Their “fair use” defence (essentially, that copyrighted material can be used without explicit permission in some cases) has got them off the hook in previous copyright cases, such as a legal complaint against Google Books nearly a decade ago.
However, “if they get to a jury, anything is possible”, cautions Matthew Sag of Emory University’s School of Law. Stability AI faces a harder test: Britain’s copyright law is somewhat stricter than America’s, and Getty is also claiming trademark infringement, after some of Stability’s generated images reproduced its logo.
As courts deliberate over existing laws, legislatures will debate new ones, in particular on “deepfakes”, which use AI to insert a person’s likeness into an existing photo or video, often of a pornographic nature. This is worrying parents (whose children are being harassed with “nudifying” apps), celebrities (whose likenesses are being stolen by con artists) and politicians (who have found themselves the targets of AI-powered disinformation). In March the American state of Tennessee passed the Ensuring Likeness Voice and Image Security (ELVIS) Act, to protect performers from having their image or voice used illegally. California has passed laws to stop political deepfakes.
Copyright law may also be reformed. The European Union, Japan, Israel and Singapore have already introduced exceptions to allow the use of copyrighted material, without permission or payment, in the training of AI models, at least under some circumstances. Some in Silicon Valley worry that tech investment could flow away from America to more relaxed jurisdictions. Yet, so far, no country seems willing to become a regulatory wild west. Japan seems minded to tighten the exceptions it has set, to protect copyright interests. Most countries are coalescing around a moderate position: a “race to the middle” is most likely, believes Mr Sag.
The emerging compromise is that tech companies will have to find ways to allow copyright-holders to opt out of having their content used for training. Tech firms will also have to make AI tools better at handling abstract concepts without regurgitating copyrighted material (for instance, being able to draw a generic superhero without reproducing images of Superman). That may prove easier said than done. Do not be surprised if the year ahead is one in which AI generates more questions than regulators can answer. ■