ChatGPT: what the law says about who owns the copyright of AI-generated content

ChatGPT has generated enormous interest, but is some of its content protected under copyright law? (Image: Shutterstock / Blue Planet Studio)

The AI chatbot ChatGPT produces content that can appear to have been created by a human. There are many proposed uses for the technology, but its impressive capabilities raise important questions about ownership of the content.

UK legislation has a definition for computer-generated works. Under the Copyright, Designs and Patents Act 1988, these are works “generated by computer in circumstances such that there is no human author of the work”. This suggests that content generated by an artificial intelligence (AI) can be protected by copyright. However, the original sources of answers generated by AI chatbots can be difficult to trace, and they might include copyrighted works.

The first question is whether ChatGPT should be allowed to use original content created by third parties when generating its responses. The second is whether only humans can be credited as the authors of AI-generated content, or whether the AI itself can be regarded as an author – particularly when that output is creative.

Let’s deal with question one. The technology underpinning ChatGPT is known as a large language model (LLM). To improve at what it does, it is trained on large datasets, including vast numbers of websites and books.

At the moment, the UK allows AI developers to pursue text and data mining (TDM), but only for non-commercial purposes. OpenAI’s terms of use assign to the users “all its right, title and interest in the output”.

But the company says it’s up to users to ensure the way they use that content does not violate any laws. The terms and conditions are also subject to change, so do not carry the stability and force of a legal right such as copyright.

The only solution will be to clarify laws and policies. Otherwise, every organisation will have to take legal action individually to show that it owns the works used by an AI. And if governments do not act, we are heading towards a situation where copyrighted materials can be used by others without the original authors’ consent.

Question of ownership

Now to question two: who can claim copyright to AI-generated content? In the absence of a claim by the owner of the original content used to generate an answer, copyright in the output of a chatbot could lie with individual users or with the companies that developed the AI.

Copyright law is based on the general principle that only content created by human beings can be protected. The algorithms underpinning ChatGPT were developed at OpenAI, so the company would appear to hold copyright protection over those. But this might not extend to the chatbot’s responses.

One day, AIs could own the copyright to what they produce, but we’re not there yet. (Image: Shutterstock / Liu zishan)

There is another option regarding the ownership of AI-generated content: the AI itself. UK law currently prevents an AI from owning copyright (and does not even recognise an AI as the creator of a work), because an AI is not a human and therefore cannot be treated as an author or owner under the Copyright, Designs and Patents Act. This position is also unlikely to change anytime soon, given the UK government’s response to its consultation on AI and copyright.

UK law does already allow ownership to sit apart from the creator in some circumstances: where a literary, dramatic, musical or artistic work is made by an employee in the course of their employment, their employer is the first owner of any copyright in the work, subject to any agreement to the contrary.

For now, policymakers are sticking to human creativity as the prism through which copyright is granted. However, as AI develops and is able to do more, policymakers might consider granting legal capacity to AIs themselves. This would represent a fundamental shift in how copyright law operates and a reimagining of who (or what) can be classed as an author and owner of copyright.

Such a change would have implications for business as firms integrate AI into their products and services. Microsoft recently announced that it will embed Copilot, a product based on the technology behind ChatGPT, into the company’s software, such as Word, PowerPoint and Excel. Copilot can help users with written communication and summarise large volumes of data.

More developments like this are sure to follow, and early-adopter firms have a chance to capitalise on the current situation by using AI to increase the efficiency of their operations. Firms can often gain an advantage when they are the first to introduce a product or service to a market – a situation known as the “first-mover advantage”.

Future shifts

The UK government recently carried out a consultation on AI and copyright. Two conflicting views emerged. The tech sector believes the copyright to AI-generated content should belong to users, whereas the creative sector wants this content to be excluded from ownership completely. The UK government has not acted on the findings and instead recommended further consultation between the interested parties.

If copyright law shifts away from its focus on human agency in future, one could imagine a scenario where an AI is classed as the author and the developers of that AI as the owners of the output. This could create a situation where a handful of powerful AI companies wield colossal influence.

They could end up owning hundreds of thousands of copyrighted works: songs, texts, images and other digital assets. This could arguably lead to a dystopian situation in which the majority of newly created works are generated by AI and owned by businesses.

It seems logical that such knowledge should remain in the public domain. Perhaps the solution is for each person or company to declare their contribution when they use AI, or for that contribution to be calculated automatically by software. They would then receive credit or financial benefit in proportion to the work they contributed.
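As a rough illustration only, the sketch below shows how such pro-rata credit might be worked out once each contributor’s share of the work has been declared or measured. It is a minimal sketch in Python; the function, the contributor names and the figures are hypothetical assumptions, not a description of any existing system.

```python
# Minimal sketch, assuming each contribution can be expressed in comparable
# "work units" (e.g. words written, images supplied). All names and figures
# below are hypothetical.

def allocate_credit(contributions: dict[str, float], total_payout: float) -> dict[str, float]:
    """Split a payout between contributors in proportion to their declared work."""
    total_work = sum(contributions.values())
    if total_work == 0:
        return {name: 0.0 for name in contributions}
    return {name: total_payout * work / total_work for name, work in contributions.items()}

# Hypothetical example: a prompt author, a rights holder whose material was
# used, and the AI developer each declare a share of the work.
declared = {"prompt_author": 2.0, "rights_holder": 5.0, "ai_developer": 3.0}
print(allocate_credit(declared, total_payout=100.0))
# -> {'prompt_author': 20.0, 'rights_holder': 50.0, 'ai_developer': 30.0}
```

In practice, of course, the hard part would be agreeing how to measure each contribution in the first place, not the arithmetic of dividing the proceeds.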

AI content that is itself based on copyrighted materials remains problematic. An inability to rely on copyrighted materials could undermine an AI system’s ability to answer prompts from end users. But if the content is to be based on protected works, we would need to accept a new era of open innovation in which intellectual property rights no longer matter.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.