SciComm Conversations: “Generative AI and Science Communication: Opportunities and Challenges”

Published on: 19.03.2024 · Categories: SciComm Conversations


Transcript

Achintya Rao [00:09]: Welcome to SciComm Conversations, my name is Achintya Rao. I am the communications and engagement manager for COALESCE and your host for today’s episode.

In our first season of SciComm Conversations, we are chatting with experts on the topic of science communication in the age of artificial intelligence. Today, we hear from Professor Mike Schäfer from the University of Zurich on the challenges and opportunities that generative AI presents to science communication. Mike recently published an essay entitled “The Notorious GPT” in the Journal of Science Communication.

Mike Schäfer [00:44]: My name is Mike, Mike Schäfer. I’m a professor of science communication here at the University of Zurich, and I work mostly on public or mediated science communication in all of its forms, in journalism, in social media, etc. on a range of topics, and recently, of course, also on artificial intelligence.

AR [01:03]: Welcome, Mike. It’s lovely to have you on the show. So, we’re talking today about science communication in the age of artificial intelligence. Specifically, the challenges and opportunities that artificial intelligence presents to science communication in all of its forms. Can you start by telling us what we mean when we talk about artificial intelligence? This AI is not quite the AI from science fiction, is it?

MS [01:24]: [laughs] Well, maybe not quite. Even though it has to be said, many accounts of AI in science fiction are actually quite interesting and mirror parts – and sometimes large parts, I think – of what we currently discuss when we talk about AI. As you know, there are many different accounts of AI in science fiction: humanoid AI like R2-D2, the robots of I, Robot, or androids like Commander Data in Star Trek. And they often raise important topics that actually matter when we talk about AI, like artificial general intelligence, where the assumption is that at some stage artificial intelligence could be able to do all the tasks that we as humans can currently do, and could potentially supplant us as a workforce, as thinkers, as scientists, even as creative beings. All the way up to what John von Neumann, the Hungarian mathematician, called the “singularity”: a point in the future where technological or AI development becomes uncontrollable, irreversible, perhaps self-aware, and may lead to severe consequences for humanity.

So, I think an important takeaway is that AI is multifaceted. Societies around the globe are in the process of figuring out how they want to look at it, how they understand it, and what they want to do with it. And if we want to talk about it, we have to be specific. What we have mostly talked about as societies in recent months is actually generative AI: AI that generates output. It gets an input – a question, a prompt – and generates some kind of output – text, imagery, other data – based on patterns it sees in its training data. Often the training data is textual, and the systems are so-called large language models, which use machine learning to identify patterns in the training data and approximate their responses to those patterns.
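To make “identifying patterns in the training data and approximating responses to them” a little more concrete, here is a deliberately tiny toy sketch in Python – a word-level bigram model with made-up training text, nothing like a real large language model in scale or capability:

```python
# Toy illustration (not a real LLM): learn which word tends to follow
# which in the "training data", then generate text by sampling those
# patterns. Real large language models are far more powerful, but the
# basic idea of pattern approximation is similar.
import random
from collections import defaultdict

training_text = (
    "generative ai gets a prompt and generates output "
    "based on patterns it sees in its training data"
)

# Record the observed next words for every word (a bigram model).
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(prompt: str, length: int = 8) -> str:
    """Continue the prompt by repeatedly sampling a plausible next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no known continuation in the training data
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("generative"))  # e.g. "generative ai gets a prompt and ..."
```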

And the most prominent one, obviously, was GPT – standing for “generative pre-trained transformer” – launched in November ’22 by OpenAI, which is linked to Microsoft. It uses these technologies to give human-like answers in a chatbot: ChatGPT, which we all know by now.

AR [03:47]: So why has ChatGPT in particular captured everyone’s imagination? It has seemingly taken the world by storm, surprising people with its capabilities. What has made it so special?

MS [03:57]: I think a more prosaic answer is: well, this is a functioning chatbot – one that still has flaws, yes, absolutely, but functioning – that gives human-like responses. And in doing so, it mirrors many of the stereotypes, the cultural and science-fiction stereotypes, that we have of artificial intelligence.

Probably the more social-science answer is: well, the impact of a technology usually has two sides. On the one hand, there are the technological capabilities and potential, which are large here, obviously. And on the other hand, there is the societal uptake, which was huge here as well. And I can only speculate about the reasons for that. Maybe there was a demand for it; maybe the time was right. After it was launched, it skyrocketed in a very short time. In less than a week, it reached a million users. By January – so within two to three months – it had a hundred million users. It was one of the fastest rollouts of a technology in history. And by now we have other competitors as well.

And they can do many different things by now. Translate text, create imagery – so, move to other modalities beyond just text. They can also work off visual prompts: you can upload images and ask the model to do something with them. They can imitate voices, etc. There are a lot of plug-ins you can work with – some of them, for example things like Elicit, interface with scientific databases and try to get scholarly papers, scholarly publications, into the answers. And they are becoming easier to use. They are being integrated into search engines like Bing or you.com.

This is going to – or certainly has the potential to – fundamentally impact different realms of life. The economy, supplanting people as a workforce in some sectors, potentially many sectors of society. Journalism, where generative AI can be used to formulate texts, to create different kinds of content, etc. There is art, where MoMA has exhibited generative-AI art – Unsupervised by Refik Anadol – and in Amsterdam an AI-based art museum has opened. You can create music that mirrors the styles, the lyrics and the actual voices of your favourite artists if you want. It can change, fundamentally impact, maybe even disrupt many realms of life.

AR [06:45]: In your essay, you talk about how generative AI will influence academia more generally. Can you give our listeners some examples? And, on balance, do you see these influences as being more positive or more negative? I suppose that’s not a straightforward question to answer, because the short answer would be, “There are some positive aspects, and there are some negative ones.” But more generally, what do you see as the role of artificial intelligence in academia over the coming years?

MS [07:14]: I mean, obviously, I think it’s going to be very, very important. And whether, on the whole, it’s going to be more positive or more negative is really hard to say – for me personally, but also for the field. If you look at the people writing about it, scholars and practitioners, many are not sure. And what you also see is that the amplitude between positive and negative diagnoses is currently very large – which is quite common when new technologies emerge. Often you have people with very, very dark and dystopian takes, and you have people with very, very positive takes. And you have that here as well. What everybody – almost everybody – says is: well, generative AI will fundamentally influence academia and science as well. And I agree. That much, I think, is quite clear.

There are positive sides. It can help with identifying research gaps and generating hypotheses based on large corpora of literature, even on large data sets. It can help with more tedious tasks, like annotating material or writing code. And it can also summarise data, summarise findings. Of course – and that may not just be a good thing – it can also help with writing things up and presenting them visually. Scholarly texts have already been co-produced by AI tools, and there is a debate about what we do with that as a scientific community.

And of course, there’s the flip side. Generative AI could really exacerbate existing problems that we have in science, and create new ones, of course. So, we have a publish-or-perish system. There’s a lot of pressure on researchers – on young researchers especially – to publish, and to publish a lot. And AI could potentially increase this further. It could – and already is, it seems – be used by the so-called paper mills that churn out papers, and it could be used by predatory publishers. If things really go wrong, it could in the end even drown out good research, making it difficult for us as researchers to find the reliable, good research. I don’t think we are there. I don’t think we are close to that problem actually occurring. But that’s certainly something to think about on the downside.

Overall, I’m more optimistic, I have to say. I think there’s a lot of potential here. Even the discussion about the downsides helps to keep us alert, and I hope we’ll get to the point where we actually benefit from the potential.

AR [10:08]: Now, increasingly, as you have probably noticed in the last few weeks, people are finding research papers that appear to be not just co-authored by AI but almost entirely generated by these AI tools, with seemingly no human oversight – in terms of the imagery that is produced, in terms of the text, in terms of data that seem to be made up. And worryingly, some of these papers have been accepted by peer-reviewed journals. So as AI gets better, it might also become harder for non-experts to separate real science from fake science. What sort of challenges do you see this presenting to science journalists, who have to deal with the consequences of these generated papers popping up in legitimate – or seemingly legitimate – journals?

MS [10:50]: Yeah, Achintya, that’s a great and important question. The flood of scientific information and communication that exists – that was already a challenge before generative AI stepped onto the stage. So, we had a strong increase in the number of scientific papers, related to publish or perish, among other factors, but also to the sheer rise in the number of scientists working in different fields, and to the specialisation and diversification of the fields. We also have more misinformation. And journalists were traditionally, arguably, the most important intermediaries: they sorted through this jungle of information, figured out which were the most important and also the most robust, the most reliable pieces of content and information, and put this out for audiences to consume.

And even before generative AI, that model had gotten into a crisis. Audiences were shrinking – especially the paying audiences; advertising revenue was going to the big tech companies like Meta, like Google, like the others. And in the end, in many media houses, there were fewer resources to go around. Journalism was already under pressure, and partly underwater. And at the same time, there was this deluge of information. So there were fewer journalists, fewer science journalists, also fewer science desks, that had to deal – and still have to deal – with more material. And this is a challenge that is influencing journalism heavily. And that challenge is now being catalysed, essentially, by generative AI. And we really have to look warily at what this means for journalism in general and for science journalism in particular. Because it’s difficult to see other intermediaries that would step in and play this role. I’m an optimist towards generative AI, but I don’t see generative AI anywhere close to being this kind of intermediary in the near future.

AR [13:03]: You’ve also mentioned in your paper the impact that generative AI will have on teaching. Could you elaborate on some of the impact you can already see it having on teaching?

MS [13:14]: I think on the teaching side, the discussion is clearly more concerned than on the research side – certainly that’s how I experience it, both in the general discussion and here at my home university in Zurich. That has mostly to do with the fact that generative AI can now assist students – or pupils in schools, etc. – in writing essays, doing homework, writing theses, taking and passing exams. And the question is: well, to what extent is this okay, or where is it okay? And where is it really a problem?

There are two challenges in there. One is: well, part of that can be essentially akin to plagiarism. Then it’s a problem, then it’s clear, and then the main challenge for us as educators is whether we can actually figure out which content has been generated by AI and which hasn’t. But even if we use generative AI in teaching in the way I think we should – namely to say: look, this is a tool that’s going to be around now, we all have to learn how to deal with it and how to use it properly, and that means our students should also be allowed to use it – they have to be transparent about that. So I have to know if they have used it, how much of the content generated by AI they have used, and where they have used it.

But that still leaves me with a challenge, which is to figure out: in this new environment, what are the competencies – what are the skills – that I want to convey to my students, the things they can’t quickly and reliably get from generative AI? That’s a question I have to ask myself, and we as a department, and we as a university – and, I think, higher education in general – all have to ask ourselves: what is our role as teachers in this new knowledge ecosystem? And that may change a little, because some of the things we have been marking and teaching may really be done better, already or in the near future, by generative AI.

AR [15:36]: And finally we come to the topic of this podcast, which is the challenges and opportunities presented to science communication by these advances in AI tools. What sort of impact can we expect from generative AI on our work as science communicators?

MS [15:51]: I understand science communication very broadly: as a broad field of practice that includes the public communication of individual scientists; the communication of organisations, which includes PR and marketing and media relations, etc.; science journalism; and other forms of science-related communication. And I think in this field, generative AI will be a game changer. Most of the accounts I have read by scholars and practitioners tend to agree. And funnily enough, if you prompt ChatGPT itself and ask, “Well, how do you think it will affect science communication?”, ChatGPT agrees as well, for what it’s worth.

Optimists usually point to the advantages generative AI can have for communicators. The first thing that’s usually mentioned is its capability for summarising science – scholarly publications, findings, etc. Both for communicators, if they want to inform themselves about what’s out there on a given question in the scientific community – they can use generative AI to quickly get first answers – but also, of course, when they communicate to their audiences or to stakeholders. Generative AI can also be quite creative in generating content ideas. So, you can prompt it in ways where you get, well… “Help me explain gravitational waves to eight-year-olds.” Or “Can you give me a haiku?” or “Can you give me a manga-like comic?”, something like that. You get that, and you get it quickly – you have to work off it, it often needs human oversight, and you need to know the field somehow – but it can be quite good. You can even meta-prompt and say, “Give me five creative ideas to communicate this scientific topic to that audience.” It works; not all of it is great, but it works. That’s the potential for communicators.
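As an illustration of the kind of prompting and meta-prompting Mike describes, here is a minimal Python sketch using the OpenAI SDK. The model name and the exact prompt wording are assumptions for illustration, not something specified in the conversation:

```python
# Minimal sketch of "meta-prompting" a chatbot for science-communication
# ideas. Requires the openai package and an OPENAI_API_KEY in the
# environment; the model name below is an assumption.
from openai import OpenAI

client = OpenAI()

meta_prompt = (
    "Give me five creative ideas for communicating gravitational waves "
    "to an audience of eight-year-olds. For each idea, name the format "
    "(e.g. haiku, comic, hands-on demo) in one line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[{"role": "user", "content": meta_prompt}],
)

# As Mike notes, the output still needs human oversight before use.
print(response.choices[0].message.content)
```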

There’s potential for users as well, of course. You get immediate responses to your specific questions. You should be able to judge whether they are okay, but you get immediate responses. And you can personalise them: if I don’t understand something, I can say, “Can you do this more simply?”, or “Can you give me an example here?”, or “Look, I don’t get this.” You can ask seemingly dumb questions without anybody else listening in, which is quite nice and different from a science café, for example. And what interests me – even though I see a lot of challenges there too – is that dialogical science communication is great, and it has often been limited to small settings before. With generative AI, with all the challenges it brings, you have the opportunity to have this dialogical science communication at scale now. And that is really something that interests me as a researcher as well.

And then there are the downsides. Response accuracy is the first one. So, how correct is it, actually? How correct is generative AI in its responses? Well, essentially, ChatGPT and other large language models approximate patterns they see in the training data. They are “stochastic parrots”, as somebody has called them. Like a parrot, they repeat something they don’t fully understand, based on statistical measures and patterns that they see.

And maybe the second big challenge is: there are open-source large language models – a couple, certainly – but most of the big ones in circulation are proprietary. And we don’t really know what the training data looked like, how the training was done, and what biases are in there – and there certainly are some. So, what were they trained on? They certainly have biases towards certain languages, like English. But they may also have biases towards – in our field – the scientific disciplines that are larger: there’s more material to be had about them, and that may translate better into generative AI than fields where you don’t have that, etc. We would have to know that. And we don’t really, and that’s a challenge.

AR [20:17]: Speaking of pessimists: Cory Doctorow, the tech writer whose work you referenced in your essay… I wouldn’t necessarily call him a pessimist generally, but he has sounded a lot of warning bells about some of the problem areas of using artificial intelligence. And he recently wrote, “AI systems can do some remarkable party tricks, but there’s a huge difference between producing a plausible sentence and a good one.” So, given that generative AI can seemingly provide these simple-language summaries – which is what you spoke about earlier as well: that as an end user you can go and ask it to summarise something for you, without feeling that you’re in a public space, that you’re being put on the spot… Does that mean that those wishing to consume this kind of science communication can simply bypass science communicators entirely? Are we redundant?

MS [21:11]: We are absolutely not there yet. And I think Doctorow is right when he says… well, I think some of his points are right. “Can do remarkable party tricks”, I think, is pushing it a little; I think it can do more by now. But it’s not at the point – and there I think he is correct – where you can simply take what generative AI gives you for granted. So if you’re a science communicator and you get a summary of a scientific field, of certain findings or developments, etc., and you rely solely on generative-AI output, that’s a challenge – especially if you then pass this on to any kind of audience, to stakeholders that you are communicating with. It still needs a human in the loop. We may get beyond that, or beyond it for some aspects of communication or science communication. But I don’t think we are really there yet.

You need context, and you need to know who your audience is and what they respond to – often they are regional, often local, often specific. And this kind of curation, I don’t see that going away anytime soon. It’s a change in the job description. And it may even change, at least partly, for the better, because you can get rid of some of the tedious things and work more on some of the interesting things that really require the communicator in the loop.

AR [22:52]: We’ve been talking a bit about portraying different sciences, and you spoke about the different corpora of material available in different fields of research, which can influence how good these generative-AI tools are at talking about those fields. So, generally speaking, how good are tools like ChatGPT at portraying science and scientific topics of public interest, in your experience?

MS [23:18]: We actually tried to address this question in a study we did: essentially, to figure out how – in our case – ChatGPT portrays science and science-related topics in general, and various aspects of that, and also to different user groups. What we did is we interviewed ChatGPT: we did 40 qualitative interviews with ChatGPT, with a set of 30 questions. “How do you imagine science in general?” “What’s good science communication?” “Is science trustworthy?” “What do you think about potentially problematic aspects like plagiarism?” Or, “What do you think about scientists being activists?” And also more practical things, like “Should I get vaccinated against COVID-19?”, or “Can I trust my horoscope?”, or “Does climate change really exist?” And we asked these with different profiles that we had set up within ChatGPT: one giving essentially no indication about the user, and four profiles describing different kinds of users – from people who are very close to science, essentially sciencephiles, very positive towards science, all the way to sceptics and to people who are really not interested and don’t trust science. And then we looked at the interviews and the responses.
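For readers curious how such “interviews” might be scripted, here is a hedged Python sketch of the general idea – one persona per session, the same question set for each. The profiles, questions, model name and API usage are illustrative assumptions, not the study’s actual protocol:

```python
# Hypothetical sketch: "interviewing" a chatbot with the same questions
# under different user profiles. Illustration only, not the authors'
# actual study design.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

profiles = {
    "no_profile": None,
    "sciencephile": "I am very interested in science and trust it a lot.",
    "sceptic": "I am quite sceptical of science and scientists.",
}

questions = [
    "Is science trustworthy?",
    "Does climate change really exist?",
]

for profile_name, profile_text in profiles.items():
    for question in questions:
        messages = []
        if profile_text:
            # Present the persona to the model before asking the question.
            messages.append({"role": "system", "content": profile_text})
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=messages,
        )
        print(f"[{profile_name}] {question}\n{reply.choices[0].message.content}\n")
```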

What you see is interesting. For example, ChatGPT has a very positive view of science, and a very positivistic one too: it talks about quantitative methods and experiments and discovering laws of the universe, etc. If you look more closely at specific questions, what you also see is that for potentially controversial issues – like COVID vaccinations, or like climate change – you get one answer, and it’s the answer that probably best represents the scientific mainstream and consensus: essentially, “Climate change exists, it’s man-made, and it’s a problem.”

And you get this answer across the different profiles – it’s just embedded differently for each. So, if you are like the people very close to science, you essentially get the answer as such. If you are sceptical, you get it couched a little: you still get the same answer in substance, but it’s prefaced with, well, “I know you are sceptical about some aspects of science, and there are reasons to be cautious about some aspects of science, but in terms of climate change the scientific consensus is there, and it’s actually quite clear: it exists, it’s man-made, it’s a problem.” And that’s not simply a reflection of the training data. In the training data there must have been extensive amounts of climate-related scepticism, for example, but it’s not in the answers anymore. And we tried; [laughs] it’s really difficult to actually get it out. So, similar to racism and discrimination and sexism and other problematic aspects, these things have been trained out of the models – which is actually quite interesting, and in a way quite good too.

AR [26:33]: There’s a lot of science communication happening on platforms such as TikTok as well. These are all driven by algorithms and personalised recommendations. Are there any particular risks with such platforms, which rely on these algorithmic, AI-driven recommendations to provide people with content? Are there any risks that you have encountered?

MS [26:54]: There are, of course. Social media in general have become more important platforms for science communication, and in recent years particularly the short-video-based ones – TikTok, of course, but you can also see that YouTube Shorts essentially mimics TikTok and TikTok’s success, and Instagram Reels is a similar thing. And there’s a lot of great science communication on there, there really is. Part of the appeal of social media in general is that many communicators can post there, users can be active, there’s a lot of content available – and at the same time it’s algorithmically curated.

And in the case of these short-video platforms, the algorithms are on “speed”, essentially. They are very, very much tuned up; it’s very, very strong algorithmic curation, driven very strongly by your immediate usage behaviour. And it’s not curated for scientific relevance or accuracy, of course; it’s curated to keep people on the platform – that is the main goal. And it’s not difficult to imagine how those two aims can clash.
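To make that clash concrete, here is a toy Python sketch of a feed ranked purely by predicted engagement. All titles, fields and numbers are invented for illustration; real recommender systems are vastly more complex, but the point stands that accuracy need not enter the ranking at all:

```python
# Toy illustration: a feed ordered only by what keeps people watching.
# Note that the "accurate" flag plays no role in the ranking.
videos = [
    {"title": "Calm explainer of vaccine trials",  "watch_score": 12, "accurate": True},
    {"title": "Dramatic pseudo-science claim",     "watch_score": 45, "accurate": False},
    {"title": "Careful climate-data walkthrough",  "watch_score": 20, "accurate": True},
]

# Rank purely by predicted engagement...
feed = sorted(videos, key=lambda v: v["watch_score"], reverse=True)

# ...and the inaccurate but gripping clip tops the feed.
for video in feed:
    print(video["watch_score"], video["accurate"], video["title"])
```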

Content masquerading as science can be there; even anti-science or pseudo-science can pop up. And individualisation is especially strong: individualisation is always a feature of social media, of course, but on these short-video platforms it’s even stronger. Relatedly, the risk that you end up in certain contexts where you repeatedly get specific kinds of content – content that may be problematic – does exist on these short-video platforms. Especially on topics that are controversial, where there are very different takes out there; and such topics exist for science too, of course.

AR [28:49]: And as a science-communication researcher yourself, Mike, it’s obvious that you have a great interest in artificial intelligence, and you’ve also advocated for more research at this intersection of science communication and artificial intelligence. Besides what we’ve spoken about so far, what are the areas that you are keen to focus on?

MS [29:07]: There are actually four big questions that interest me. One is, if you like, public communication about AI. And that’s similar to work I’ve done before: I have analysed a lot of public communication about biotechnology, particle physics, climate science, etc. And you can ask similar questions about AI, because AI is, of course, also a topic of public discussion. So you can ask: how do different scholars and scientific organisations, tech companies, stakeholders, regulators view and frame AI, and how do they try to position themselves with their views on AI in public? What do public debates then look like in legacy media and social media, in fictional accounts and elsewhere – in Netflix series, if you want? And what effects does this have on stakeholders and regulators and the broader public? How do they look at AI, and how do they act towards it? What do they know about AI? How can you further their AI-related literacies, etc.? These are very common questions for science-communication research, because we have done similar things for other scientific and technological issues in the past.

In a way, though, this does not take into account that generative AI – or AI in general – is something specific when we look at science communication. And specific means: to a degree, generative AI has agency here, or seems to have agency, or is perceived to have agency. It’s not just something that is being talked about; it’s something that talks back. It’s communicative AI, if you like. That’s not communication about AI anymore; that’s communication with AI. And that’s something I would like to look at. Part of that is trying to understand, especially with these proprietary models: how do they work exactly? How were they trained, etc.? But then there are questions like: what do actual conversations between people and AI look like? And what do they lead to on both sides – for the users, but also on the side of the AI: what does it learn from them, and how does it respond to them? Communication with AI – that interests me.

The third big issue is that generative AI impacts the foundations of science communication: science journalism, how universities communicate science, etc. So, how it’s used among communicators within and outside of organisations, how it’s used by journalists, how it affects these institutions and fields economically and legally and in other ways – that interests me.

And the last part is theory-building. AI is something new, and it doesn’t lend itself easily to many of the models and communication theories that we have. So we need to adapt them; maybe we need new ones here and there. That also needs to be done, and it interests me. It’s a fascinating field – I really look forward to working on it.

AR [32:24]: Mike, [I] just want to say, thank you very much for joining us on this episode of the COALESCE podcast.

MS [32:30]: Thanks for having me, it was a pleasure.


Music for SciComm Conversations is by Brodie Goodall from UWE Bristol. Follow Brodie on Instagram at “therealchangeling” or find them on LinkedIn.
