Challenges and Changes in 2023 and Beyond

By Yin Nwe Ko 


AI and robots are causing real concern for writers in 2023. Writer Gary Dalkin has been thinking about how AI is changing writing and what might happen next.

Every day, we hear about AI changing writing or publishing. Some recent news includes stories like ‘AI using works by Zadie Smith, Stephen King, and Rachel Cusk without permission’ (from the Guardian), ‘Author using AI to write novels without worrying about losing his job’ (Sky News), ‘Site Prosecraft, which analyzes fiction, closed after people got upset’ (Gizmodo)…

It’s clear that AI is changing things very fast. You’d need to read many articles every day just to keep up with what’s going on. There are two main problems: we don’t know exactly what is happening right now, and we don’t know what might happen in the future. AI is a technology everyone talks about, but even the experts don’t completely understand it.

There are things we don’t know about AI. We don’t know how it might change writing, publishing, or even the world. Technology is growing really fast, and people are always finding new ways to use it. We also don’t know if people will accept AI in their lives or try to limit its impact. Everything is changing so quickly, and we’re in a new frontier where what’s possible with technology, what the law allows, and what big and small companies can do all come together in surprising and confusing ways.

One example of this confusion is a website called Prosecraft, which shut down abruptly. It was run by a man named Benji Smith, who said the site analyzed literature, covering thousands of books by many authors. Smith originally created it to work out how many words different types of fiction contain, and later began breaking down thousands of novels statistically. For instance, he reported that Alice’s Adventures in Wonderland has 26,814 words, a ‘vividness’ score of 83.93 per cent, and uses the passive voice 8.08 per cent of the time. The problem was that alongside old classics Smith had analyzed many new books by living authors, and some of those authors objected. Smith argued that he had used the texts fairly and made no money from the site, saying he was a victim caught up in the writing world’s fight against AI.
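Prosecraft never published its exact methods, but the kind of statistics it reported can be sketched with a few lines of code. The following is a rough illustration only, with my own function names and a deliberately naive passive-voice heuristic (a form of ‘to be’ followed soon after by a past participle); real analysis tools use full grammatical parsers.

```python
import re

# Naive heuristic vocabulary for spotting passive constructions.
BE_FORMS = {"is", "are", "was", "were", "be", "been", "being"}
IRREGULAR_PARTICIPLES = {"written", "taken", "given", "seen", "known", "made"}

def word_count(text):
    """Count word-like tokens in the text."""
    return len(re.findall(r"[A-Za-z']+", text))

def passive_rate(text):
    """Return the percentage of sentences that look passive
    under a simple 'be + past participle' heuristic."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    passive = 0
    for sentence in sentences:
        words = [w.lower() for w in re.findall(r"[A-Za-z']+", sentence)]
        for i, word in enumerate(words):
            # Look for a 'be' form with a likely participle within 3 words.
            if word in BE_FORMS:
                window = words[i + 1:i + 4]
                if any(w.endswith("ed") or w in IRREGULAR_PARTICIPLES
                       for w in window):
                    passive += 1
                    break
    return 100.0 * passive / len(sentences) if sentences else 0.0

sample = ("Alice was beginning to get very tired. "
          "The book was written by her sister. "
          "She peeped into the book.")
print(word_count(sample))               # 19
print(round(passive_rate(sample), 2))   # 33.33
```

Even this toy version shows why such numbers are easy to produce at scale once the full text of a book is on a computer, which is exactly the step that worried authors.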

Smith’s situation may be a simple example of a small operation using computers to gather statistics about books. But once authors’ work has been fed into a computer, it is not a big step to feed it into far more powerful systems: ones that don’t just generate data about existing books, but use those books to teach artificial intelligence to write.

Now, let’s talk about ChatGPT and similar systems such as Google’s Bard, Sudowrite, and Writesonic (mostly based on GPT, the technology behind ChatGPT). The problem with these systems is that copying is central to how they work: they could not do what they do without learning from many books by professional writers. In short, the work of many writers has been taken and analyzed without permission to build technology that competes directly with human authors.

A person named Alex Reisner wrote in the Atlantic that he got hold of a dataset used by Meta to train an AI called LLaMA. This dataset had over 170,000 pirated books, most published in the last 20 years. It was called Books3 and was also used to train BloombergGPT and GPT-J by EleutherAI. This vast collection includes texts by many authors, like Sarah Silverman, Richard Kadrey, Christopher Golden, and more, all taken without asking.

Books3 is part of a bigger dataset called The Pile, which has not just books but other digital texts from places like the European Parliament and Wikipedia, as well as subtitles from YouTube videos. Reisner found over 30,000 books from Penguin Random House, 14,000 from HarperCollins, and 7,000 from Macmillan in The Pile. He found novels by famous authors like Jonathan Franzen, Margaret Atwood, and L Ron Hubbard (who founded Scientology and wrote many pulp novels). Reisner couldn’t figure out what Books1 contains, but he thinks it might be the Project Gutenberg database of around 70,000 old books. Books2 is likely pirated digital libraries with names like Bibliotik, Library Genesis, and Z-Library, which you can find and download using BitTorrent.

Until recently, authors worried about their books being copied, but using pirated books to teach machines that might replace writers is a much bigger concern.

Simon Rich is an American writer who worked on a book called I Am Code: An Artificial Intelligence Speaks, together with an AI named code-davinci-002. He also wrote an article for Time whose title alone, ‘I’m a Screenwriter. These AI Jokes Give Me Nightmares,’ might sound scary. In the article, he explained that AIs like ChatGPT are deliberately limited: OpenAI, the company behind ChatGPT, spent a lot of time and money making it safe and not too creative.

A professor named Rachel Armstrong thinks writing with AI will become as normal as using spell-check. She believes AI will mostly help our writing but might sometimes be annoying, and people may simply turn it off when it doesn’t work well. She thinks we will treat AI like a writing companion, wondering what it thinks without really knowing its biases.

Simon Rich knows about a more advanced AI called code-davinci-002. Thanks to a friend at OpenAI, he used it and found it can even make funny jokes. He shared made-up news headlines like ‘Experts Warn that War in Ukraine Could Become Even More Boring’ and ‘Budget of New Batman Movie Swells to US$200M as Director Insists on Using Real Batman.’ Maybe one of those made you laugh! Rich thinks AI might be better than human writers in about five years.

Professor Armstrong has a different idea. She thinks AI writing will feel safe and predictable to readers, but some will miss the wild and risky writing. She looks forward to a time when writers rebel against the AI’s usual way of writing, using techniques that confuse the AI. She wants a kind of writing that AI can’t fully understand.

A big point here is that no matter how good AI gets at copying human writing, people will still want to read things that come from real human experiences. Computers don’t know what it’s like to be human, and they never will. AI might take over some writing jobs, like corporate writing, but writers will resist, just like in the WGA strike in the USA. Human-written works might even have a special logo to show they are real. Simon Rich ends by saying he’ll keep writing even if technology makes him less needed. He thinks his human need to create something without a clear reason is what makes him special.

Good news and challenges are both part of the story about AI. There is a newer and more natural model, GPT-4, and guides on websites like How-To Geek explain how to make AI text sound more human. But Professor Gary Marcus warns about serious problems. He is unsure whether generative AI, like ChatGPT, will have the big impact people expect, comparing it to past technologies, such as airships, that didn’t live up to the hype. He even suggests the whole field of generative AI might not last long.

Another thing to consider is the gap between what technology can do and what society decides to allow. Laws act as a counterweight to technology. For instance, some lawyers in the US got into trouble for citing fake information produced by ChatGPT in court, which shows that people can’t always trust what AI creates. Meanwhile, some writers are taking legal action because AI systems used their work without permission. On 18 August a US judge ruled that AI-created art can’t be protected by copyright. That means anyone can copy it, and publishers might not want to invest in books that can be freely copied the moment they are published.

This decision applies only in the US, but it could affect the whole world, especially English-speaking countries. The US is a huge market for English-language books, and if AI-created works have no legal protection there, the same may effectively be true everywhere. The judge said copyright only protects works made by humans, not by AI. It is a strong stance in favour of human creativity: even as AI grows, the law keeps human creativity protected.

Things are changing fast, but there are good and not-so-good sides to it. ChatGPT, a smart computer programme, is now on Android phones and is part of Microsoft’s Bing search engine. It can even ‘see’ things and has voice functions. But, when you use it, there’s a warning that it might give wrong information. It’s interesting how ‘inaccurate information’ is now a big deal.

In the European Union (EU), a new law called the Digital Services Act has come into force. It is stricter for big tech companies with many users, which must follow rules about how they use people’s information. Facebook, one of these big companies, added a page to help users control the information its AI uses.

Amazon, the giant online store, limited how many books people could upload through its self-publishing system because AI-generated books were flooding in. But not all AI use is bad: with help from Microsoft and MIT, Project Gutenberg turned 5,000 old books into audiobooks using AI voices.

The Writers Guild of America (WGA) won its dispute with the Hollywood studios, securing terms that stop AI from writing or helping to make movie scripts. The Atlantic magazine is also investigating which books were used to train AI, and has even made a tool to check whether your work was used without permission. I found a book I edited with it!

The Authors Guild is suing OpenAI for using famous writers’ work without permission. People are taking sides, and the pushback shows that change is possible. Even though we’re not sure what will happen, things created by humans will still matter, and AI-made work won’t replace them completely.

(To all writers who are on the same trip and in the same boat.)

Reference: Abridged from the article titled “Robots Write On” in the “Writing Magazine” January 2024