Transparency and trust: How news consumers in Canada want AI to be used in journalism

Developing clear policies and principles that are communicated to audiences should be an essential part of any newsroom's AI practice. (Shutterstock)

When it comes to artificial intelligence (AI) and news production, Canadian news consumers want to know when, how and why AI is part of journalistic work. And if they don’t get that transparency, they could lose trust in news organizations.

News consumers are so concerned about how the use of AI could affect the accuracy of stories and the spread of misinformation that a majority favour government regulation of how AI is used in journalism.

These are some of our preliminary findings after surveying a representative sample of 1,042 Canadian news consumers, most of whom accessed news daily.

This research is part of the Global Journalism Innovation Lab, which studies new approaches to journalism. Those of us on the team at Toronto Metropolitan University are particularly interested in looking at news from an audience perspective in order to develop best practices.

The industry has high hopes that the use of AI could lead to better journalism, but there is still a lot of work to be done in terms of figuring out how to use it ethically.

Not everyone, for example, is sure the promise of time saved on tasks that AI can do faster will actually translate into more time for better reporting.

We hope our research will help newsrooms understand audience priorities as they develop standards of practice surrounding AI, and prevent further erosion of trust in journalism.

AI and transparency

We found that a lack of transparency could have serious consequences for news outlets that use AI. Almost 60 per cent of those surveyed said they would lose trust in a news organization if they found out a story was generated by AI that they thought was written by a human, something also reflected in international studies.

The overwhelming majority of respondents in our study, more than 85 per cent, want newsrooms to be transparent about how AI is being used. Three-quarters want that transparency to include the labelling of content created by AI. And more than 70 per cent want the government to regulate the use of AI by news outlets.


Organizations like Trusting News, which helps journalists build trust with audiences, now offer advice on what AI transparency should look like and say it’s more than just labelling a story — people want to know why news organizations are using AI.

Audience trust

Our survey also showed a significant contrast in confidence in news depending on the level of AI used. For example, more than half of respondents said they had high to very high trust in news produced just by humans. However, that level of trust dropped incrementally the more AI was involved in the process, to just over 10 per cent for news content that was generated by AI only.

In questions where news consumers had to choose a preference between humans and AI to make journalistic decisions, humans were far preferred. For example, more than 70 per cent of respondents felt humans were better at determining what was newsworthy, compared to less than six per cent who felt AI would have better news judgement. Eighty-six per cent of respondents felt humans should always be part of the journalistic process.

As newsrooms struggle to retain fractured audiences with fewer resources, the use of AI also has to be considered in terms of the value of the products they’re creating. More than half of our survey respondents perceived news produced mostly by AI with some human oversight as less worth paying for, which isn’t encouraging considering the existing reluctance to pay for news in Canada.

This result echoes a recent Reuters study, where an average of 41 per cent of people across six countries saw less value in AI-generated news.

Concerns about accuracy

In terms of negative impacts of AI in a newsroom, about 70 per cent of respondents were concerned about accuracy in news stories and job losses for journalists. Two-thirds of respondents felt the use of AI might lead to reduced exposure to a variety of information. An increased spread of mis- and disinformation, something recognized widely as a serious threat to democracy, was of concern for 78 per cent of news consumers.

Using AI to replace journalists was what made respondents most uncomfortable, and there was also less comfort with using it for editorial functions such as writing articles and deciding what stories to develop in the first place.


There was far more comfort with using it for non-editorial tasks such as transcription and copy editing, echoing findings in previous research in Canada and other markets.

We also gathered a lot of data unrelated to AI to get a sense of how Canadians are tapping into news and the news they’re tapping into. Politics and local news were the two most popular types of news, chosen by 67 per cent of respondents, even though there is less local news to consume due to extensive cuts, mergers and closures.

A lot of people in our sample of Canadians, around 30 per cent, don’t actively look for news. They let it find them, something called passive consumption. And although this is proportionally higher in news consumers under 35, this isn’t just a phenomenon seen in the younger demographic. More than half of those who reported letting news find them were over 35 years old.

Although smartphones are increasingly a primary access point for news, reported by almost 70 per cent of those 34 and under and about 60 per cent of those between 35 and 44, television is where most news consumers in our study said they got their journalism.

Respondents in our survey were asked to select all of their points of news access. More than 80 per cent of participants chose some form of TV, with some respondents picking two TV formats, for example, cable TV and smart TV. Surprisingly to us, half of 18- to 24-year-olds reported TV as an access point for news, although for those 44 and under, that access was more often through a smart TV. As shown in other Canadian studies, TV news still plays an important role in the media landscape.

This is a broad first look at the data we have collected, and our analysis is just beginning. We're going to dig deeper into how different demographics feel about the use of AI in journalism and how the use of AI might impact audience trust.


We will also soon be launching our survey with research partners in the United Kingdom and Australia to find out if there are differences in perceptions of AI in the three countries.

Even these initial results provide a lot of evidence that, as newsrooms work to survive in a destabilized market, using AI could have detrimental effects on the perceived value of their journalism. Developing clear policies and principles that are communicated to audiences should be an essential part of any newsroom's AI practice in Canada.


Nicole Blanchett receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and The Creative School at Toronto Metropolitan University.

Charles H. Davis receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and has received funding from Toronto Metropolitan University.

Mariia Sozoniuk works with the Explanatory Journalism Project which is supported by funding from The Creative School at Toronto Metropolitan University and the Social Sciences and Humanities Research Council of Canada (SSHRC).

Sibo Chen receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and The Creative School at Toronto Metropolitan University.