This event has now finished.
  • Date and time: Friday 29 September 2023, 1pm to 3pm
  • Audience: Open to the public
  • Admission: Free admission, booking required

Event details

This talk examines ChatGPT and generative AI from the perspective of media. 

Using examples, the talk explores how generative AI differs from earlier ‘narrow’ AI, how this difference makes it easier to use, and how different AI models can be used together to automate media production. Both the long-term implications of AI in media and practical guidance for using AI for communication are discussed.

View the recording of this event on our YouTube channel

Q&A transcript

Q: Newsrooms…I’ve noticed that the BBC news on R4 tends to report what they’ve said in the Today programme, for instance, as “News” in the 09:00 and 13:00 news headlines. Seems like news self-generation to me, and it’s affecting my trust levels. How do we ensure that we retain trust in news outlets if this continues?

A: I think this may be an indication of how some traditional news organisations have not yet adequately adjusted to the increased control over news consumption that distribution changes like social media and on-demand video have given to news consumers. Roughly speaking, some traditional news organisations may still act as if news consumers had limited choice, which is no longer true. The BBC’s preferencing of news from its own programming rather than from the wider information ecosystem or from audience demand might be an example of this. Something similar might also be occurring on the production side of news, as evidenced by the recent Conservative boycott of the Today programme – which they are no longer dependent on for communication with the public. In general, this situation may make adjustment to an AI-mediated ecosystem more challenging because AI will likely accelerate the control that consumers have over their own consumption experiences and may further expand the available sources of news. 

Q: 1) Is the main role of AI in the newsroom to increase efficiency, and does it imply job losses? 2) Where are the users/audiences in this? Do they have any input into these news solutions? Are they being informed that AI played a role in the news they see?

A: I think that in the short term most newsroom investment in AI is likely to be focused on efficiency. This does not necessarily imply job loss, but there will certainly be ‘task loss’ as many tasks are automated, and there will also be new tasks related to the control and use of AI. Jobs are therefore very likely to change significantly, but I think it is still unclear whether there will be fewer, the same or even more newsroom jobs in an AI-mediated environment. Academic studies of audience acceptance of automated news production generally show that audiences don’t care how content is produced, as long as it is valuable to them. In almost all cases so far, published content created using generative AI has carried clear labels about its origin on every artefact. Almost all newsrooms that have published guidelines on the use of generative AI have declared that they will be transparent about the origin of AI-produced content, and there are explicit references to transparency in pending legislation in the EU and US and in the terms and conditions of some model providers.

Q: What are your thoughts on the potential (and likely) malevolent use of such software by disinformation groups, which can practically create alternative realities that are much more realistic than most "fake news" we see today? 

A: This is clearly a risk, but we have not seen significant evidence of this so far. Disinformation using AI-generated content may also still be primarily a distribution platform challenge, as it has been with human-generated disinformation. We already have essentially infinite content, and so navigating information is already primarily done via platform algorithms. One major new threat, in my opinion, is the new ability to engage in ‘conversational disinformation’ – i.e. persuasion that occurs in multiple interactions over days or months instead of just via consumption of a social media post or an article. Think of this risk as ‘grooming at scale’ or ‘radicalisation at scale’. So far, to my knowledge, there is no evidence that this is occurring. 

Q: When selecting information to integrate into a final product, does such generative software distinguish between “real” information and AI-generated information?

A: I would challenge the distinction here – AI-generated information can be, and most often is, ‘real’. The accuracy and relevance and accessibility of information is independent of the means by which it is produced, although there may be some specific reasons for preferencing human-produced information (style/voice, accountability, maybe brand position, etc.) in some cases. 

Q: If the simplicity of creating images and videos improves over time, what effect do you think this will have on the entertainment industry? People will be able to make movies out of their favourite books, games, or any other source of media. People could also make games that never existed or continue franchises that ended long ago. There really wouldn’t be as much interest in company-sold entertainment. For example, I could generate several seasons of Takeshi’s Castle even though it ended, or make Dark Souls 4, a game that never existed to begin with, simply by giving some interesting plots and following past gameplay mechanics.

A: Good question. I’m focused on news rather than on entertainment, but I’ll offer a few observations. One is to look at the current ‘fan fiction’ environment, which has become a huge area of content origination, and which now has new tools to create new stories within the narrative worlds of entertainment properties. Another is to consider the changing role of ‘production values’, and especially the increasing acceptance of ‘good enough’ content (YouTube, TikTok, niche channels, etc). A third would be to consider the fundamental basis of entertainment, which may largely be the ‘story’ – in which case these tools might unlock huge quantities of previously untold stories, by empowering many more people to become storytellers. 

Q: Will the use of AI degrade the importance and respect placed upon human creativity if we can all create anything we want within a matter of hours?

A: One of the most surprising things about generative AI is that it’s not cold and calculating and logical (as we expected AI to be), but instead is artistic and poetic and creative. I don’t think that generative AI will approach the creativity of the most talented humans, because of structural limitations in its training data and training process. But I do think that these tools will exceed the creative abilities of most people, if they haven’t already – coding is a good example. I think it will take some time for the impact of this development on human identity to become clear.

About the speaker

David Caswell

David Caswell is the founder of StoryFlow Ltd., an innovation consultancy focused on AI workflows in news production. He was formerly an Executive Product Manager at BBC News Labs, focused on new AI-based product initiatives. He previously led product management for machine learning at Tribune Publishing and the Los Angeles Times, and was Director of Product Management for Automated Content Understanding at Yahoo!. David has also researched and published extensively on computational, structured and automated forms of journalism, including as a Fellow at the Reynolds Journalism Institute at the Missouri School of Journalism.

Contact

Nadine Waehning