Writers and AI

A policy position statement

Intro/background

The development and use of Artificial Intelligence (AI) systems and tools have been gathering pace in recent years.

AI systems use complex algorithms and vast data sets to teach machines how to learn, reason, make decisions and generate content in a way which mimics human activity as closely as possible.

Much of the data ingested to train AI systems can be, and has been, scraped or mined from public-facing internet sources without the owner’s prior knowledge or permission.

The most prominent writing AIs include OpenAI’s ChatGPT and Google’s Bard. These systems are becoming increasingly sophisticated and better able to replicate human-like decision-making and content creation.

AI is now attracting considerable public attention: writers, artists, musicians, academics and others have started highlighting the potential dangers of unregulated AI, and governments across the world are starting to consider how best to regulate it.

Why are writers concerned about AI?

Writers are concerned that:

  1. The Government is not doing enough to protect writers.
  2. AI developers are using writers’ work without their permission.
  3. AI developers are infringing writers’ copyright.
  4. AI tools do not properly identify where AI has been used to create content.
  5. Increased AI use will lead to fewer job opportunities for writers.
  6. The use of AI will suppress writers’ pay.
  7. AI will dilute the contributions made by the creative industry to the UK economy and national identity.

They are also concerned about the expansion of AI-based decision-making into other areas of their working lives, such as benefit assessments.

What are the potential benefits of AI?

If AI is developed and used in an ethical, transparent and responsible way in partnership with civil society, government and trade unions it could be of benefit to writers and their careers.

If AI tools pay writers fairly for the use of their work this could mean that writers are able to diversify and increase their income streams, allowing them to sustain careers. This will also mean that writing is not only an option for those who can afford to get through tough times.

AI can also be used to help identify where copyright infringements are already taking place and help take them down. 50% of respondents to our AI survey ‘somewhat agreed’ or ‘strongly agreed’ that, “AI will be able to help identify where my work has been used.”

Is there a risk AI could replace writers?

While AI systems are not yet sophisticated enough to produce work that matches the standard of writing produced by professional writers, this is a likely future scenario.

However, the WGGB does not believe that AI will be able to replicate the originality, authenticity, enthusiasm and humanity that professional writers put into their storytelling.

An early impact assessment by OpenAI indicated that the exposure risk to poets, lyricists and creative writers was amongst the highest, at 68.8%. 61% of respondents to a recent WGGB survey on AI ‘somewhat agreed’ or ‘strongly agreed’ that, “The increased use of AI could replace writers in their craft area.”

What are the pay issues?

There are concerns that, as well as replacing some writers, AI will be (and in some cases already is) used by some producers, publishers and employers to cut corners and save costs, meaning that the pay of professional writers will be squeezed further. A recent report by KPMG, Generative AI and the UK Labour Market, estimates that 43% of the tasks associated with authors, writers and translators could be automated, with humans ‘fine-tuning’ machine output.

For example, writers could be engaged to ‘polish’ draft scripts written by AI tools rather than develop original work themselves, and be paid less as a consequence.

In a recent WGGB survey 65% of respondents ‘somewhat agreed’ or ‘strongly agreed’ that, “The increased use of AI will reduce my income from writing.”

AI and the creative industries

The creative industries contributed £115.9 billion to the UK economy in 2019 and are a core pillar of the UK’s national identity. Highly skilled UK writers and other creatives produce distinctive British content which is sold around the world. This helps to build and maintain the UK’s global influence through soft power, raises revenue and attracts investment.

There is a risk that if AI developers, many of whom are based in the US, can input content from UK creatives without any restrictions to produce outputs which compete with them, it will undermine the UK creative economy. Furthermore, if UK creatives are no longer able to sustain a career due to a lack of adequate remuneration we will likely see a reduction in people wanting to enter the industry.

AI Issues and copyright

In addition to concerns about pay and job opportunities, there are a multitude of copyright issues relating to permission, moral rights and remuneration when it comes to AI.

Permission to use work

Currently AI developers can use bots to ‘scrape’ or ‘mine’ writers’ work from the web and use it to ‘train’ their tools, without writers’ knowledge or permission. AI systems are also unable to ‘unlearn’ material: once it has been ingested, it will remain in the system indefinitely.

Many AI developers are not transparent about what data has been used to train their tools, meaning writers cannot tell if their work has been used.

The WGGB believes that AI developers should only use writers’ work if they’ve been given express permission to do so. 80% of respondents to our survey ‘somewhat agreed’ or ‘strongly agreed’ that, “AI developers and systems should seek permission from writers before using their material.”

Input transparency 

82% of respondents to our AI survey ‘somewhat agreed’ or ‘strongly agreed’ that, “AI developers should be transparent about what data they have used in creating AI systems, including where they have used writers’ work.”

Therefore, we believe AI developers should also maintain clear and accessible logs of the information used to train their tools and allow writers to check whether their work has been used.

Payment for using work

Writers should be fairly compensated when developers use their work.

AI tools can use a writer’s work indefinitely and on an unlimited number of commercial projects, so it is only right that writers receive payment for this.

This can be done through a voluntary licence arrangement, with writers receiving ongoing payments for the use of their work.

81% of respondents to our AI survey ‘somewhat agreed’ or ‘strongly agreed’ that, “Writers should be paid a fee when their work is used by AI systems.”

Output transparency

Where content has been generated or decisions have been made by AI rather than a human being, this needs to be clearly labelled as such.

In a world of misinformation and deepfakes, audiences deserve to know if the content they are consuming has been produced by a human, or a machine.

There is a concern that AI tools will struggle to distinguish between genuine, factual information and deliberate misinformation, producing results based on flawed assumptions. There is a further risk that AI outputs which are based on incorrect information could themselves be ingested back into machines, leading to further incorrect outputs.

AI tools have been known to ‘hallucinate’, producing incorrect and entirely made-up results and reporting them as facts.

Where AI has been used to create content, AI developers should appropriately credit the authors whose work has been used to create such content.

Additionally, if decisions such as benefit claims are being made by AI, then individuals should be informed of this.

What more can Government do to protect writers?

Regulation

In its white paper AI regulation: a pro-innovation approach, the Government failed to address the rights of content producers, such as writers, in relation to AI developers.

So, the first step government can take to protect writers is to properly regulate AI developers and introduce the measures regarding transparency, payment and labelling as described above.

59% of respondents to our AI survey believed that a new, independent regulator should be set up to oversee and monitor the expansion of AI.

The Government should set up a new regulatory body whose remit specifically covers AI. The new body will need to be appropriately resourced and able to take enforcement action against AI developers who do not adhere to rules governing transparency, remuneration and the right to human review.

This regulation should be applicable to all future and previous AI development work, so that writers and others are able to assert rights regarding work which has already been used without their knowledge or permission.

Maintain and strengthen copyright protections

There are several ways for Government to maintain and strengthen copyright in the UK so that writers can protect their copyright.

Government needs to help ensure that AI developers have a clear understanding of copyright law, preventing inadvertent copyright infringements. This could involve providing guidance to AI developers, copyright holders and the public on how to comply with copyright laws.

The Government should establish a clear definition of what constitutes solely AI-generated work, and what constitutes work made with the intervention of creators.

Government also needs to facilitate discussions between AI developers, creators, Government and regulators so that they are all involved in any policy and decision-making process. By working together, we could develop best practices, standards and guidelines for AI development which fully consider existing copyright law and rightsholders.

The Government should not introduce any copyright exceptions permitting text and data mining for commercial purposes. Such exceptions would allow AI developers to scrape writers’ work from online sources without permission or payment.

There should also be clear, accessible and affordable routes for writers to challenge the practices of AI developers and bring claims regarding the use of their work. The Intellectual Property Enterprise Court (IPEC) small claims track needs to be accessible for writers, many of whom operate as individuals. This means that bringing a claim needs to be inexpensive, easy to understand and quick to resolve.

Right to human review

Where AI has been used for decision-making purposes, organisations should also provide contact details for those who need assistance and detail a clear complaints and appeal/review process which should include human scrutiny of the issue.

There is a risk that AI will base its decision-making on flawed data and assumptions, resulting in incorrect outputs. For example, when awarding benefits, AI may not correctly take into account the highly nuanced ways in which freelance creatives work, and may deny benefits incorrectly.

There is also concern that AI may make decisions based on flawed assumptions regarding equalities data and produce discriminatory results.

Individuals should have the right to both know when decisions have been made by AI and have the opportunity to challenge them through a right to human review.

Policy position statement ends

The WGGB Wales Branch participated in the report A snapshot of workers in Wales’ understanding and experience of AI, which was launched in January 2024.

Read our guideline Using AI as a research and writing tool – the risks 

Read our guide My book has been used to ‘train’ AI – what can I do?

Read the joint position statement on ethical use of AI issued by the International Affiliation of Writers Guilds and the Federation of Screenwriters in Europe

Read our guide What writers need to know about Meta’s new privacy policy

Read our guideline Don’t let Grok use your data!

Read our recommendations on AI in our General Election 2024 manifesto, Putting writers at the heart of the story.

Read the letter we signed to software developers as part of the Creators’ Rights Alliance