Technology is constantly evolving, and it brings change with it. One of those changes is in the field of news and media: what began with the telegraph evolved into radio, then television, and today it is the internet that rules the world.
Through all of those developments, however, technology served as the medium while human journalists played the vital role of messengers. Enter AI, and everything changes.
Currently, AI machines designed to act as communicators generate news content independently of humans. This means that AI is both the medium and the messenger, providing human journalists with a new synthetic partner programmed to assist in news gathering.
A new study finds that many Americans are unaware of AI’s role in their everyday lives, including in news production.
The findings come at a difficult time for the news media industry, as newsmakers contend with historically low levels of public trust in their product. More Americans than ever report getting their news from the very social media platforms that use AI to deliver it, even as newsmakers warn about the perils AI can pose to privacy, fairness, equality, safety, and security. It’s a paradox.
“So how do we help people trust AI while also reporting on companies that use AI to harm the public?” asks Chad S. Owsley, a doctoral student at the University of Missouri School of Journalism and a coauthor of the study. “How do you tell them apart?”
In an online survey conducted in 2020, Owsley and coauthor Keith Greenwood, an associate professor, found that fewer than half of respondents (48 percent) were sure they had encountered AI in the previous year through something they read, saw, or heard, while another 40 percent could say only that it was possible. Just 25 percent of participants believed AI could write or report the news as well as, or better than, human journalists.
That 48 percent figure was consistent with a European study conducted three years earlier, which Owsley finds interesting given the high rate of technology use reported by participants in the 2020 study; 61 percent, for instance, said they owned a smartphone. He had expected that as people’s use of technology grew, so would their understanding of AI.
“We appeared to have stalled for three years, and it’s worth asking why and how,” Owsley says, adding that the study didn’t seek to answer those questions, which he plans to address in his dissertation. “Some of this is quite geeky. There may be a general disinterest in going into great detail about AI. People may think, I just want it to work. I’m not concerned with how it works.”
Despite this lack of understanding, some forms of AI are replacing journalists, while others assist in news gathering.
In 2018, Forbes launched “Bertie,” a publishing platform that uses AI to assist reporters by identifying trends, suggesting headlines, and matching visual content to relevant stories. The Washington Post and the Associated Press also use AI to perform journalistic tasks; financial reporting is one of the most common applications.
According to Owsley, how an AI machine “thinks,” or operates, depends on structured data: information organized in tabular form, such as spreadsheets and tables. The AI then uses that data to generate a human-language story.
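As a rough illustration of that idea (and not a description of any newsroom’s actual system), the sketch below turns a single spreadsheet-style row of quarterly earnings data into a sentence using a fixed template; the field names, sample figures, and wording are hypothetical.

```python
# Minimal sketch: template-based story generation from structured (tabular) data.
# Field names, sample figures, and wording are hypothetical, for illustration only.

def earnings_story(row: dict) -> str:
    """Turn one spreadsheet-style row of quarterly earnings into a short sentence."""
    change = row["revenue"] - row["prior_revenue"]
    pct = abs(change) / row["prior_revenue"] * 100
    direction = "up" if change >= 0 else "down"
    return (
        f"{row['company']} reported {row['quarter']} revenue of "
        f"${row['revenue'] / 1e6:.1f} million, {direction} {pct:.1f}% "
        f"from the prior quarter."
    )

# One row, as it might appear in a spreadsheet export.
sample = {
    "company": "Example Corp",
    "quarter": "Q3",
    "revenue": 125_000_000,
    "prior_revenue": 118_000_000,
}

print(earnings_story(sample))
# -> Example Corp reported Q3 revenue of $125.0 million, up 5.9% from the prior quarter.
```

Production systems layer many such templates and rules on top of much larger datasets, but the basic pipeline is the same: structured data in, natural-language text out.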
According to Greenwood, a long-standing criticism of such innovation is that news organizations frequently rush into the “next best thing” without considering important questions, such as what their audience thinks and how it will react.
“If organizations are considering adopting these AI technologies, one of the first questions they should ask is, how does this match the expectations of our target audience, or how does it fit with their perception of who we are?” he says. “Instead of thinking, well, this is the future, and we must go that way.”
The research was published in the journal AI & Society.