AI court cases and developments to monitor if you’re a content creator
We’re slowly drowning in AI-generated content, and there’s no end in sight (yet). Whether it’s a blog post produced by ChatGPT or “insightful” LinkedIn comments generated with the help of an AI tool, everyone working in the creative sector will have come across text written with minimal human input.
Many businesses have welcomed the rise of cheap generative AI tools like ChatGPT & Co. Professional content writers, meanwhile, are among those most at risk of being replaced by AI. You only need to read the latest news on the subject to see where this trend is heading.
Yes, it’s now much cheaper (or free) to create marketing copy or even visuals. But there are still good reasons to remain cautious and to consider the potential legal and ethical implications of relying too heavily on these tools.
Let’s look at some recent AI court cases and legal developments that illustrate these issues well.
1. Copyright infringement claims in the US
One thing is clear: law firms across the world will be dealing with a rise in copyright lawsuits. At least three US copyright-infringement class actions have already been filed against OpenAI, the company behind ChatGPT.
AI court cases brought by Pulitzer Prize winner Michael Chabon and other US authors
One of the latest cases was brought by the well-known writer Michael Chabon and a group of like-minded authors. Like Sarah Silverman before them, these plaintiffs claim that OpenAI simply used their copyrighted works without permission to train ChatGPT.
OpenAI has argued that its use of published works in ChatGPT’s training data falls under “fair use” and therefore doesn’t constitute direct copyright infringement. It remains to be seen whether the US judges involved in these cases will accept this argument.
Concerns and potential actions by newspapers, e.g. the New York Times
In the meantime, the New York Times and The Guardian are just two of the high-profile news publications that have blocked OpenAI’s web crawler on their websites. (This is fairly easy to do: find a quick guide here for your own website.)
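For reference, OpenAI publishes the user-agent name of its crawler, “GPTBot”. Assuming you want to keep it off your entire site, a minimal robots.txt rule looks like this:

    User-agent: GPTBot
    Disallow: /

Bear in mind that robots.txt is a voluntary convention: reputable crawlers honour it, but it doesn’t technically prevent access.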
The New York Times also seems to be considering a copyright lawsuit against OpenAI.
2. GDPR/privacy infringement claims in the EU
OpenAI has also come under fire for potential breaches of the GDPR. In late August 2023, Polish privacy researcher Lukasz Olejnik filed a complaint against OpenAI with Poland’s data protection authority.
He argued that the company had committed a number of GDPR breaches, including processing his data “unlawfully, unfairly, and in a non-transparent manner”.
Mr Olejnik claimed he had not been given the opportunity to access his personal data or rectify inaccurate data about him, and he accused OpenAI of violating the principle of privacy by design.
Follow Lukasz Olejnik on Twitter/X for updates on his case. It’ll be interesting to see how the legal industry responds.
3. Demands for transparency in the EU’s proposed AI Act
The EU, meanwhile, has been working on an Artificial Intelligence Act. Currently, there are debates about whether companies behind tools like ChatGPT should openly disclose their use of copyrighted materials. The relevant text discussed by the European Parliament in June 2023 reads, for example:
“Generative AI systems based on such models, like ChatGPT, would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.”
4. Law vs ethics: the human cost of generative AI products
Perhaps US courts will accept the “fair use” argument and find no breach of US copyright law. But surely there are ethical issues with the indiscriminate use of text and data involuntarily provided by millions of users, website owners, journalists, and other content creators?
And what about the rising number of human copywriters, content writers, and editors being replaced by AI?
Just because something may be legal doesn’t make it ethical.
Remember the famous “grandmother monologue” in the BBC drama “Years and Years” (2019)? In that scene, the grandmother laments the rise of machines and cheap products at the expense of human jobs and dignity.
Watch it. I promise you’ll think of her every time you use a self-checkout, let alone a new generative AI tool.
I’m not saying that all AI tools are inherently unethical. There are plenty of useful products that make our lives easier (spellcheckers, for example, though even they can’t replace a human expert). And, luckily, there are still a few tasks that humans do better than AI.
But it would be unwise to switch to new tools without thinking through the potential legal and ethical consequences of our actions.
Quick disclaimer: I’m not a legal expert, just a marketing translator and content writer with an interest in the effect generative artificial intelligence is having on my profession. The court cases I’ve picked here happen to be about ChatGPT, though there are many other artificial intelligence products currently facing similar challenges. I assume no responsibility for any errors or omissions on this site, or for the results obtained from the use of this information.