To fear or not to fear: The future of journalism in an AI-driven world
As a future journalist entering this AI-driven world, one may question the relevance of pursuing a profession that AI could one day replace. However, human-centred journalism, grounded in empathy, critical thinking, and ethical reporting, cannot easily be replicated by large language model tools such as ChatGPT and Jasper. While these tools are disrupting the market with their speed and cost-effectiveness, they lack journalism’s foundations in human understanding and interpretation of the world.
According to Cyrus Farivar in the article “Don’t fear the future of AI-assisted journalism: It’s already here”, the Los Angeles Times has used AI for over a decade to alert California residents to earthquakes above a certain magnitude. Quakebot, the newspaper’s automated earthquake informant, provides readers with the five journalistic Ws (who, what, when, where, and why) but offers no human analysis or interpretation. For emergencies that demand timely reporting, such as earthquakes, AI has positively revolutionised and automated standard journalistic coverage of life-threatening news.
However, AI’s current capabilities cannot distinguish credible sources from unreliable ones, nor can they analyse and interpret such incidents from a human perspective. AI tools also reflect the biases of their users and can generate a piece of “news” based on whatever information and agenda the prompter supplies.
According to Sander van der Linden in the WIRED article “AI-Generated Fake News Is Coming to an Election Near You”, a study found that an early version of ChatGPT, when given suitable information and prompts, could churn out “thousands of misleading but plausible-sounding news stories”.
Van der Linden and his team tested such fake AI-generated news headlines on American citizens to gauge their susceptibility to believing fake news. Reporting on the study, he wrote: “41 per cent of Americans incorrectly thought the vaccine headline was true, and 46 per cent thought the government was manipulating the stock market. Another recent study, published in the journal Science, showed that GPT-3 produces more compelling disinformation than humans and that people cannot reliably distinguish between human and AI-generated misinformation.”
The study’s alarming results show how AI can be misused in journalism to manipulate an audience in service of an agenda. They underscore the importance of humans in the journalism profession and the necessity of teaching journalists how to use AI tools ethically.
While the use of AI in journalism is not new, the recent availability of free AI tools such as ChatGPT has created fresh possibilities and challenges. AI can enhance and speed up journalistic processes, particularly routine, repetitive tasks, but it must not replace human journalists. Instead, teaching journalists to use these tools ethically can harness AI’s potential in the newsroom as a partner, allay concerns, and help combat the growing spread of misinformation and fake news.