Artificial intelligence (AI) is driving a rapid transformation in the way that news organizations operate. Algorithms that automate the writing process are raising questions about the impact of AI-generated content on journalistic ethics and credibility, and they will increasingly affect human journalists’ roles and job security.
The use of AI to produce content is not new; software that writes news stories from facts and figures has been prevalent in some newsrooms for almost a decade. But generative AI – which automatically produces text and images in response to prompts – has begun to affect journalistic practices in various ways and will shape how the news industry evolves.
How the News Industry Incorporates AI
Newsrooms are increasingly incorporating AI-based tools to research, generate, and distribute content as they try to keep pace with their competitors.
Information Gathering
Machine learning (ML) and natural language processing (NLP) algorithms enable journalists to track breaking news and trending topics instantaneously across a range of sources, such as social media platforms, discussion forums, and blogs. ML can identify trends and patterns in large volumes of data that reporters may not otherwise have time to analyze.
In this way, journalists can stay up to date with the latest developments and uncover stories while providing more detailed context and deeper analysis.
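As a rough illustration of this kind of pattern-spotting, the sketch below compares today’s term counts from a monitored feed against a recent baseline; the sample posts, stop-word list, and thresholds are invented for illustration rather than drawn from any newsroom’s actual pipeline.

```python
from collections import Counter
import re

# Invented examples: yesterday's posts (baseline) and today's posts from a monitored feed.
baseline_posts = [
    "council votes on new transit budget",
    "local team wins regional final",
]
todays_posts = [
    "flooding closes riverside road after storm",
    "storm damage reported across the riverside district",
    "emergency crews respond to riverside flooding",
]

STOP_WORDS = {"the", "on", "to", "of", "new", "after", "across", "and"}

def term_counts(posts):
    """Count lowercase word tokens, ignoring common stop words."""
    words = re.findall(r"[a-z]+", " ".join(posts).lower())
    return Counter(w for w in words if w not in STOP_WORDS)

baseline = term_counts(baseline_posts)
today = term_counts(todays_posts)

# Flag terms that appear repeatedly today but not at all in the baseline.
trending = {w: c for w, c in today.items() if c >= 2 and baseline[w] == 0}
print(sorted(trending, key=trending.get, reverse=True))  # ['riverside', 'flooding', 'storm']
```

A production monitoring system would also weight recency, source reliability, and volume over time, but the core idea of surfacing unusual spikes is the same.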
AI tools are also providing suggestions for compelling headlines, contributing relevant imagery, and assisting fact-checkers in verifying the legitimacy of images, videos, and other content to identify deepfakes. And speech-to-text tools allow reporters to transcribe audio and video files from interviews and events within minutes.
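For the transcription step, a reporter’s setup might look something like the following sketch, which assumes the open-source Whisper speech-to-text library (installed via pip install openai-whisper) and a hypothetical interview recording; it is one possible approach, not a description of any specific vendor’s product.

```python
import whisper  # open-source speech-to-text: pip install openai-whisper

# Load a small pretrained model; larger models trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a hypothetical recorded interview and print the raw text.
result = model.transcribe("interview.mp3")
print(result["text"])

# Segment-level timestamps make it easier to locate quotes while editing.
for segment in result["segments"]:
    print(f'{segment["start"]:.1f}s-{segment["end"]:.1f}s: {segment["text"]}')
```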
Content Generation
As algorithms can analyze data and generate content in seconds, news agencies are increasingly using them to create articles based on data and statistics, such as company financial reports, sports results, traffic reports, and weather forecasts. Generated content can range from templates and outlines to entire articles.
Such AI tools can pull the relevant data and craft accurate articles faster than a human can read through a report, saving time and increasing the number of articles published in a day. This also gives organizations a first-mover advantage in publishing coverage before their competitors.
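Much of this early data-to-text automation is essentially template-driven. The sketch below illustrates the general idea with made-up match data and a hand-written sentence template; real systems add variation, validation, and editorial checks.

```python
# Invented structured data, e.g. parsed from a sports results feed.
match = {
    "home_team": "Rovers",
    "away_team": "United",
    "home_score": 2,
    "away_score": 1,
    "venue": "City Stadium",
    "scorer": "J. Smith",
    "minute": 87,
}

# A hand-written template turns the figures into readable copy in milliseconds.
TEMPLATE = (
    "{home_team} beat {away_team} {home_score}-{away_score} at {venue}, "
    "with {scorer} scoring the decisive goal in the {minute}th minute."
)

def generate_report(data: dict) -> str:
    """Fill the template with the structured data."""
    return TEMPLATE.format(**data)

print(generate_report(match))
```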
AI can also automatically sort information into categories to produce, for example, infographics, visualizations, or multiple versions of a story for publication in different regions.
Content Distribution
News organizations use AI to analyze the behavior and preferences of visitors to their websites and apps. They can track demographics, the most popular content, how users access content, and how long they remain on a specific page. The algorithms then tailor their content and news feeds to keep readers or viewers engaged based on their interests.
Algorithms can recommend related content and relevant advertising, deliver dynamic content in multiple formats, and enhance accessibility. In addition, AI-driven chatbots can answer consumer queries automatically.
This personalization of content delivery and customer service encourages loyalty, increasing advertising or subscription revenue. It also reduces the time that production teams need to spend on content layout and design.
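In its most reduced form, the recommendation side of this personalization can be sketched as content-based similarity. The example below assumes the scikit-learn library and a handful of invented article texts, and it ignores the behavioral signals – dwell time, demographics, access patterns – that a production system would also weigh.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented article bodies already published on the site.
articles = {
    "city-budget": "Council approves transit budget with new funding for buses.",
    "storm-damage": "Storm flooding closes roads and damages homes near the river.",
    "tech-layoffs": "Technology firms announce layoffs amid slowing ad revenue.",
    "flood-relief": "Volunteers organize relief for residents after storm flooding near the river.",
}

# The last article the reader finished serves as the query.
reader_history = "Storm flooding closes roads and damages homes near the river."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(articles.values()) + [reader_history])

# Score every published article against the reader's last article.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranked = sorted(zip(articles.keys(), scores), key=lambda pair: pair[1], reverse=True)
print(ranked[:2])  # most similar stories first: storm-damage, then flood-relief
```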
The Journalistic AI Dilemma
While automated tools can make news gathering and reporting more efficient, they present complex challenges that are fundamental to the future of journalism.
Ethics and Bias
There are arguments that AI helps to reduce human biases in interpreting data and reporting events. However, algorithms are trained by humans on human-generated data and have been shown to produce content that replicates human biases and prejudices surrounding gender, race, and ability.
And the personalization of content delivery can mean that consumers view only content that reinforces their existing viewpoints and rarely see alternative perspectives.
To avoid creating echo chambers, news organizations must find ways to maintain diversity in the content they deliver so that consumers are exposed to a range of views.
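One simple heuristic for keeping a personalized feed from collapsing into a single viewpoint, sketched below with invented topic labels, is to cap how often any one topic appears near the top of the ranking; this is an illustrative approach rather than an industry standard.

```python
# Invented personalized ranking: (article_id, topic), best match first.
ranked_feed = [
    ("a1", "politics"), ("a2", "politics"), ("a3", "politics"),
    ("a4", "science"), ("a5", "politics"), ("a6", "culture"),
]

def diversify(feed, max_per_topic=2):
    """Keep the personalized order but cap how often any one topic appears
    near the top, pushing the overflow toward the end of the feed."""
    counts, kept, deferred = {}, [], []
    for article_id, topic in feed:
        if counts.get(topic, 0) < max_per_topic:
            kept.append((article_id, topic))
            counts[topic] = counts.get(topic, 0) + 1
        else:
            deferred.append((article_id, topic))
    return kept + deferred

print(diversify(ranked_feed))
# [('a1', 'politics'), ('a2', 'politics'), ('a4', 'science'),
#  ('a6', 'culture'), ('a3', 'politics'), ('a5', 'politics')]
```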
An important ethical standard in journalistic practice balances the public’s right to know against an individual’s right to privacy. AI algorithms do not have the human ability to consider the nuances involved in privacy issues, which can make it challenging to ensure they do not violate privacy laws or overstep other ethical boundaries.
The use of generative AI tools and large language models (LLMs) in creating content also raises ethical questions surrounding transparency and attribution. Should news reporters disclose whether they use content created by AI in their work?
Some news organizations have started publishing AI-generated content under generic bylines without clear attribution; some have decided to state when content has been generated by AI; and others have opted not to use AI in their articles at all.
Credibility
The provenance and accuracy of the data sources fed into AI algorithms are unknown unless an organization trains its models internally. Automatically generated content may lack the correct context, contain misplaced facts, or confuse the consumer. The current generation of AI chatbots is prone to producing content containing factual errors, which, if unchecked, can compromise the credibility of the journalists and organizations that publish it. Media outlets can thus become unintentionally responsible for spreading false information.
Because chatbots generate content from the information they are fed, they can reproduce sentences or even paragraphs from their source material, creating content that is plagiarized and unoriginal. This can infringe on the copyright of the original content creators by reproducing their work without consent.
AI-generated text often reads as trite compared with the more sophisticated and complex analysis that human writers can produce.
Publications have also been deceived by hoaxes, publishing content supposedly contributed by human authors but produced by AI and accompanied by a fake profile – damaging the publication’s reputation.
Job Security
AI tools can help to streamline news gathering and production processes, freeing up time for journalists to focus on more complex reporting and analysis that requires human understanding and creativity.
However, there are concerns that the proliferation of automated content will lead to a deterioration in journalistic quality and result in job losses as organizations downsize reporting teams. Some have already begun to make redundancies or limit new hires, opting to rely on AI tools instead of human reporters. Several TV news channels around the world have even begun running news programs with AI-generated anchors.
The nature of journalists’ roles will likely evolve over time, requiring them to incorporate AI tools into their workflows. Those unable or unwilling to work with these tools could also see their jobs threatened.
The Future of AI in Journalism
The use of AI algorithms will shape the future of the news industry and redefine the role of journalists. News organizations need to navigate the opportunities and challenges they face in a way that makes effective use of these tools while preserving a form of journalism that is ethical, unbiased, and factually accurate. Some of the concerns surrounding AI tools can be overcome with human review – and intervention where necessary.
As the technology continues to develop, journalists will need to adapt their workflows to incorporate AI tools – while compensating for their limitations.
It will become ever more important for human journalists to produce unique and insightful commentary, investigative reporting, and exclusive insights. They will need to demonstrate critical thinking and empathy that computer-based systems cannot replace. This will be key to the survival of a vibrant media industry that maintains a core set of values and ethical standards.