ChatGPT, Jasper, Bard, and the other artificial intelligence (AI) text generators are a groundbreaking force that, in just a few months, has already changed our world for good. If that sounds like a bold statement, consider that ChatGPT alone attracted 1.6 billion visits in June 2023.
I must admit that I occasionally use these tools to help me write content, primarily for books (rather than articles like this one). In the publishing industry, everyone, from writers (first and foremost) to editors, translators, and even advertisers, is caught between the joy of embracing fantastic tools that simplify our lives and the fear of a disruptive technology that could render us all obsolete.
Personally, using ChatGPT to assist me with writing feels like a guilty pleasure. The allure of a tool that effortlessly creates writing frameworks is irresistible, almost addictive. However, there’s an underlying sense of unease, knowing that this unstoppable force may eventually lead to the extinction of my job. It’s a Faustian bargain: embracing progress while acknowledging the potential consequences.
So, what are the primary concerns surrounding the use of AI text generators? Why do they spark controversy? How can we take a stance and ensure their ethical and legal use?
Let’s discuss these issues in this article (which I wrote without the help of AI text generators).
Economic Impact on the Writing Community: Experiencing the AI Onslaught
Will AI lead to a massacre in the job market for writers? It seems likely. As with past highly disruptive technologies, AI is poised to revolutionize numerous industries, and creative writing is particularly vulnerable. Its widespread adoption may displace human writers because it can churn out large amounts of average-to-good quality content in a matter of seconds, with minimal human involvement.
In June 2023, Germany’s leading tabloid Bild announced a €100 million budget cut that led to 200 writers losing their jobs, as their tasks could be “performed by AI and/or automated processes.”
But is this going to make all writers obsolete, leaving them jobless and living off social security checks? Not necessarily. The impact of AI on writers will vary depending on the type of writing they do. As the Bild example shows, writers who primarily produce short, simple articles sourced from other outlets face a higher risk of displacement, because that work is easy to automate. Machines will also likely replace copywriters who mainly engage in word spinning and produce standard, repetitive, or unoriginal content.
One may argue that truly creative minds will be the ones to retain their jobs, as machines may never fully achieve the level of complexity, emotional understanding, and empathy that only human minds can offer. Another point to consider is that if you excel as a writer, you might be able to secure your position while others lose theirs.
In fact, good writers might even see an increase in pay, as finding individuals who can truly surpass AI could become increasingly challenging. Competition is likely to thin out, and specialized minds could become a valuable and scarce resource.
Yet, the assumptions made about the limitations of AI in comparison to human creativity are based on the current state of technology and understanding. The future remains uncertain, and the continuous evolution of technology raises the question of whether it might eventually learn to assimilate “humaneness” to such an extent that it becomes virtually indistinguishable from the work of a real human writer, poet, or musician.
Can the System Eventually Balance Itself on Its Own?
One of the most pressing questions in this debate is the ethics of using AI in writing. Should we, as writers, feel guilty every time we ask an AI for help, even if it’s just to produce a bare-bones draft or outline of our content? How does this differ from drawing inspiration from existing works and then expanding upon them, a practice writers have followed for centuries?
As I said earlier, progress and technological advancements cannot be halted, and AI’s influence will inevitably grow regardless of individual choices. Refusing to embrace AI’s capabilities could hinder a writer’s ability to stay relevant in a rapidly evolving landscape, much like those who, even in the 2010s, still resisted using the Internet to source information.
The widespread use of AI by major corporations to replace content creators for products like TV series, movies, commercial songs, novels, and fiction can be argued to be unethical. This practice throws countless writers under the bus and prioritizes profits at the expense of quality, potentially accelerating the decline of creativity (which some argue is already endangered to some degree). But what will the consequences be? Will the system be able to absorb the impact on its own?
The answer is a tentative “maybe,” and here is why it’s a distinct possibility.
On one hand, AI text generators need to draw from existing content found on the web or inside datasets, content that is, for now, created by humans and offers access to a vast pool of creativity. However, as AI-generated content becomes more mainstream, these models may increasingly draw from pre-existing, AI-written material, leading to a decline in overall quality. If the system’s quality diminishes instead of improving over time, it becomes an unsustainable model. And once again, it would fall to creative humans to come to the rescue (at a much higher cost this time).
Another important aspect of this story is that, right now, AI text generators are a free bounty ripe for the picking. Why? Because these models are still in development, organizations such as OpenAI find it valuable to let people use them for free in order to feed them data. Eventually, however, once they have established themselves in a position of strength, they may start charging steep prices for services that are free today. If only major corporations can afford them, human workers may become the cheaper alternative once again.
Lastly, the consequences of inundating customers with massive amounts of low-quality content are already evident for major entertainment companies. Industry giants like Netflix, Amazon Prime, and Disney are experiencing significant financial losses, amounting to billions of dollars.
This reduction in the overall quality of content is apparent to everyone, and it has proven detrimental even to the most profitable media companies worldwide. If content from slightly below-average human creators couldn’t satisfy audiences, flooding them with subpar AI-generated content is unlikely to improve the situation.
External Intervention Can Be a Last-Ditch Solution
Just like the eternal debate that rages over global financial markets, some argue that external intervention will be necessary to regulate the future of AI before millions of jobs get caught in the middle. Beyond the ethical implications of using AI text generators for content creation at scale, there are also legal reasons why the intervention of regulatory agencies, governments, and policymakers may be necessary rather than merely desirable.
AI text generators excel at emulating human writing, but they do not truly create original content in the sense that humans do. This lack of originality can lead to plagiarism or copyright infringement when a model generates content heavily derived from existing works by human authors.
This raises concerns about determining authorship and giving proper credit for AI-generated content, and it can create ethical and legal issues when copyrighted material is reproduced without permission or attribution. Establishing clear boundaries and guidelines for AI-generated content would be a crucial step in limiting the over-proliferation and misuse of AI text generators.
Policymakers can also establish clear rules to address the AI issue, particularly around public perception. Recently, the European Parliament proposed labeling all content created by AI, enabling people to distinguish artificially generated text from human-created content. Whether AI-generated content proves more enjoyable than human-generated content remains a subject of debate. Nonetheless, new rules and laws can be introduced, such as limiting the profits earned from AI-created content through measures like price caps or increased taxation for companies that rely heavily on AI tools.
While the legal framework surrounding AI might currently be unclear, global powers like China have already taken a position, and it’s only a matter of time before every major country follows suit.
The Bottom Line
There’s no definitive conclusion at this point, as we are currently witnessing the beginning of a transformative change. The future of writing is deeply intertwined with the future of AI, but the rapid pace of developments makes it challenging to predict what will transpire in the coming months, let alone in the next few years or decades.
However, one thing is certain: the landscape for creative individuals, including myself, is bound to change – a lot.