SciComm Conversations: “Using AI as Science Communicators and Journalists”

Published on: 29.05.2024 | Categories: SciComm Conversations

Listen to “Using AI as science communicators and journalists. Guest: Mohamed Elsonbaty” on Spreaker.

Transcript

Achintya Rao [00:09]: Hello and welcome to SciComm Conversations. My name is Achintya Rao. I am the Communications and Engagement Manager for COALESCE and your host for this episode.

In our first season of SciComm Conversations, we are chatting with experts on the topic of science communication in the age of artificial intelligence. In our third episode of the season, Mohamed Elsonbaty tells us about his experience training science journalists and science communicators in using generative-AI tools such as ChatGPT.

Mohamed Elsonbaty [00:42]: My name is Mohamed Elsonbaty Ramadan. I am a science journalist and science-communication consultant, and a trainer in science journalism and science communication. I am the vice-president of the Public Communication of Science and Technology network, PCST, co-founder of the Arab Forum of Science Media and Communication, interim board director of the Arab Science Journalist Association and, recently, founder of an initiative called SciComm-AI.

AR [01:10]: Thanks very much for joining us on the podcast, Mohamed. As a science communicator, when did you first become interested in using AI tools professionally? Was it with the launch of ChatGPT? Or were you already dabbling with some of the earlier tools?

ME [01:25]: I would say I had some interest before ChatGPT's release in November 2022. Mainly, I had tested a couple of AI tools that researchers were working on to help science journalists do their job. And I found them interesting, because at that time I felt this was the future. Of course, they were very limited compared to what we have now with ChatGPT.

And then came the release of ChatGPT. There was huge hype. When I started to explore it, I thought, "This is going to change everything related to communication and media. How can I use this in science communication? How can we leverage these great tools in communicating science to the public?" From there I became more and more interested in using different generative-artificial-intelligence tools. And because there were so, so many, I decided to focus on, or at least start with, ChatGPT.

AR [02:27]: Great. So let's talk about ChatGPT. Today we're talking with you about how science journalists and science communicators can use ChatGPT in their work, and you have for some time now offered training on using ChatGPT to communicate science. In your experience, how has ChatGPT changed the nature of science communication?

ME [02:48]: Well, I would say that we haven't yet reached the phase where we can decide, or even evaluate, how ChatGPT has changed the way we communicate science, because we don't have enough data about who exactly is using it and to do what. Most practitioners in science communication, if they work in an organisation or an institution, have to follow its policy, and that differs widely. I have seen some institutions that decided to adopt generative-AI tools in their work, and others that refused completely and decided, no, we will not do that, for different reasons.

So I think we cannot tell yet. But what I think is that it's going to change everything, especially when it comes to communication. We have seen how powerful ChatGPT is at generating text; actually, that is what it does: generate text in a very, very good way. That's why I decided to start the initiative I call SciComm-AI, which tries to help science communicators and science journalists by providing them with the tools, resources and training to use generative-AI tools in their work and in their practice.

AR [04:04]: So you offer these training courses under the banner of SciComm-AI. What do the participants at your workshop learn when it comes to using ChatGPT?

ME [04:13]: ChatGPT can do a lot, but you have to learn how to deal with it, and you also have to understand how it works, what its potentials are and what its limitations are, so that you can develop your own approach when you apply it to your daily tasks. So my training starts with understanding how ChatGPT was developed and how it works, because this is very important. You need to realise that ChatGPT was built in a certain way: it was trained on some data, and we don't know what data it was trained on. We also need to understand that at a certain moment in its development there was human interference, through a process called fine-tuning: people were testing ChatGPT's responses and giving it feedback, to enhance it and make it work better. Because of that, ChatGPT has some biases, and this is one of its limitations that people have to be aware of. So we focus on how it works and how it was developed, then move on to its limitations. And from there we start to develop what our mentality, or our approach, should be when we try to use ChatGPT in science communication.

So, ChatGPT is like having an intern. Every one of us would like an intern helping them in their science-communication or science-journalism work. But of course an intern can make mistakes. An intern needs training, an intern needs supervision, and in any case the work produced by an intern should be reviewed. This is how we should start thinking about ChatGPT: we have an intern helping us. And this intern is a super-intern, because it has amazing capabilities. It has vast knowledge: the data the free model was trained on is equivalent to someone spending a million years reading books. That is the amount of data we are speaking about. So we have an intern with the experience of reading books for a million years, while doing nothing else.

But at the same time it has some restrictions that we have to deal with. Starting from this understanding, we then move on to what we call "prompt engineering", which is about making sure you get from ChatGPT what you want; it's the craft of writing prompts. Some people, when they first try ChatGPT, just say, "ChatGPT is not working that well." That is mainly because we who deal with ChatGPT need to know how to speak to it, how to write prompts: that's prompt engineering. Then we go deeper into something called "prompt patterns". This comes from research by computer scientists who noticed that when you structure your prompts for ChatGPT in certain ways, you get the best output, and that these structures follow patterns. So we learn these patterns and see how to apply them to our science-communication jobs.
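To make the idea of prompt patterns concrete, here is a minimal sketch of one common way to structure a prompt, with persona, task, context and output format as named slots. The slot names and example wording are illustrative assumptions, not a fixed taxonomy from the training.

```python
# A minimal sketch of a structured prompt; slot names and wording are
# illustrative, not an official prompt-pattern taxonomy.
def build_prompt(persona: str, task: str, context: str, output_format: str) -> str:
    """Assemble a prompt from the named slots of a simple prompt pattern."""
    return (
        f"Act as {persona}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Output format: {output_format}"
    )

print(build_prompt(
    persona="a science communicator",
    task="draft talking points about CRISPR gene editing",
    context="a hands-on stand at a public science festival",
    output_format="five bullet points, plain language, no jargon",
))
```

Keeping the structure fixed and varying one slot at a time is also a convenient way to see how each part of the pattern changes the output.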

And I round all of this off with an exercise: using ChatGPT to produce a plan for a science-communication activity, for example developing a hands-on demonstration for a science festival. Together we work through this big task and finish with everything ready to execute in just under 20 minutes, whereas it would usually take a team of two a week to do this job.

AR [07:52]: And there's another fascinating aspect of ChatGPT that goes beyond prompt engineering, which is that you can tell it how it should respond to you: the personality to adopt, what sort of responses you want from it and what you don't want. Have you found anything interesting about the different kinds of responses you get when you feed it these different personalities?

ME [08:15]: Yes. The personality, or what we call the persona pattern, is one of the patterns I focus on; it's usually the first pattern I start with. If you tell ChatGPT, "Explain quantum computing," it can explain quantum computing in a good way. It covers the main aspects and so on, but the result is still written in technical language. But if you add something like, "Act as a science communicator and explain quantum computing," you start to see ChatGPT react differently, explaining the same scientific knowledge as if it were a science communicator: it avoids jargony language, it uses analogies and metaphors, it tries to explain. And you can make it more advanced and say, "Act as a science communicator: explain quantum computing to an eight-year-old child."

So you can even give it this kind of audience persona, and you can see the difference. And we start to play with that: if I say "act as a science writer" or "a science journalist" instead of "a science communicator", how will it differ? You notice a difference in how it writes the text it produces. I remember asking it to explain quantum computing to an eight-year-old child, then specifying further: for an Egyptian eight-year-old child. And you can see how it starts to use different analogies and metaphors. But there is a limitation here that we need to take care of, because ChatGPT is biased, dependent on the data it was trained on, which comes mostly from the internet and is mostly English-language data.

So when it tried to explain quantum computing to an Egyptian child, it used a metaphor that actually doesn't fit an Egyptian child. As an Egyptian, if I were a child I wouldn't find that interesting. But it fits the perception, the Western perception, of an Egyptian child, which I find fascinating. You have to take care of this, and it is very important, because I find the bias in generative-AI tools generally very alarming: you see how they work, how they respond, and how they mostly see the world from just one perspective. Usually science communicators start to notice that: "Hmm, this is interesting, we didn't expect that." I remember one workshop where we had English, French, Dutch, Italian and Arabic speakers, and when they asked ChatGPT to do the same task in different languages, you could see how much its performance varies between languages.
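As an illustration of the persona pattern described above, the sketch below keeps the question fixed and varies only the persona and audience. It assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY environment variable; the model name is a placeholder for whatever free-tier model is available.

```python
# A sketch of the persona pattern, assuming the OpenAI Python SDK and an
# OPENAI_API_KEY set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

personas = [
    "a science communicator",
    "a science journalist writing a news piece",
    "a science communicator explaining to an eight-year-old child",
]

for persona in personas:
    prompt = f"Act as {persona} and explain quantum computing."
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder free-tier model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {persona} ---")
    print(reply.choices[0].message.content)
```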

AR [10:58]: Yeah, I can believe that, because, again, so much of the training is based on the languages we communicate in on the internet, and there is definitely a bias towards majority-spoken languages and cultures in there. But beyond the biases baked into ChatGPT when it comes to languages and cultures, I'm guessing the participants attending your workshops might not necessarily have been super familiar with ChatGPT and all the things you can do with it. What, in your opinion, surprised the participants most at your workshops?

ME [11:29]: I think when they learn more about prompt engineering and prompt patterns, they start to be more surprised by how much they can achieve with ChatGPT and how helpful it can be. They don't expect that. For example, they expect ChatGPT to do tasks like summarisation, maybe translation, brainstorming some ideas. But I use ChatGPT to produce robust work: to develop an activity for a science festival from A to Z, or even write a science article from A to Z. Yes, it takes some time, but that time is around 20 minutes. It's not something you do in three or five minutes; you need 20 or 30 minutes to produce something like that.

But by the end of it, for a science activity, you can have a plan, with a script, with demonstrations and 3D models, instructions for building the models, different kinds of safety cheat sheets, and specifications for building the entire activity and your installation at the science festival, and so on. And you can see it's not only about producing text: it can produce something more than that. So they are surprised by how much they can achieve, because no one expects a science communicator to design a big science activity for a science festival in just under 30 minutes.

AR [12:50]: Yeah, that particular example is quite fascinating, because it shows the depth of what you can achieve with these generative-AI tools: beyond just saying, "Explain this to me," you can have a creative brainstorming session with this tool. Now, we've spoken a lot about what the participants gain from this. But have you personally encountered any challenges when it comes to training science communicators and science journalists to use generative-AI tools like ChatGPT?

ME [13:20]: I would say in the beginning there was resistance, because I think many people working in the field, in communication and in the media generally, even where they are adopting these kinds of tools, remain sceptical about them. And when it comes to communicating science specifically, people become more and more sceptical, because for them scientific accuracy is the top priority. They will not…

AR: … compromise on that.

ME: … compromise on that, exactly. So they are afraid of using ChatGPT in science communication. And here we start to deal with that: explaining how it works, understanding its limitations, until they start to ask, "Hmm, how can we handle this?" or "How can we deal with these kinds of biases or limitations?" So, for example, instead of depending on the scientific data that ChatGPT was trained on, you can feed it with data. You can put in your own material and tell it, "This is the scientific information I would like you to use, and only this, for the purpose of producing this piece." That way you can avoid this kind of problem with scientific accuracy. They also come to understand that ChatGPT can hallucinate, which means it can generate something that looks really nice but in reality is not factually correct. Yes, it can do that; the question is how you minimise it. ChatGPT was designed to produce text, not facts, and they have to remember this all the time. And as we work through these different things, together we develop ways to approach these limitations and make sure that what we produce using ChatGPT is scientifically accurate.
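The "feed it your own data" approach can be as simple as pasting the source material into the conversation and instructing the model to use nothing else. Below is a minimal sketch under the same SDK assumptions as above; the system message reduces, but does not eliminate, the risk of hallucination, so the output still needs the review Mohamed insists on.

```python
# A sketch of grounding ChatGPT in your own source material, assuming the
# OpenAI Python SDK; paper_text is a placeholder for your actual source.
from openai import OpenAI

client = OpenAI()

paper_text = "(paste the abstract or key findings of the paper here)"

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder free-tier model
    messages=[
        {
            "role": "system",
            "content": (
                "You assist a science journalist. Use ONLY the source "
                "material supplied by the user. If it does not contain "
                "the answer, say so instead of guessing."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Source material:\n{paper_text}\n\n"
                "Write a 200-word news summary for a general audience."
            ),
        },
    ],
)
print(reply.choices[0].message.content)
```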

So usually they have this scepticism in the beginning, because they have heard of a lot of people who asked ChatGPT some questions and it started inventing things. And ChatGPT really is good at inventing things. I remember once, when I was playing with it and trying different things, I said, let's see how this would work if, for example, someone tried to use it to write an assignment. I used one of the assignments from my master's degree in science communication. It did a very good job, but it didn't provide any references. When I asked it to rewrite the text and provide references, it produced references that were really, really convincing. I looked at one reference, saw the name of the author, the researcher, and of course the publication, and I thought, yes, yes, yes. One of the authors I know personally, though I wasn't aware he had published work in this specific area. So I searched for the publication, and it wasn't there; it didn't exist! It was that convincing: it invented publications and research that fit the profile of the researcher it attributed them to. Which is fascinating! It is really good even at this kind of hallucination. So I have to be very clear about that.

Also, of course, many of these problems are partially resolved in the paid version of ChatGPT. But not everyone has access to the paid version, so most of my training focuses on the free version that everyone can use. The paid version is on a completely different level: it does a much better job at a lot of things. But usually people find the free version enough to help them in what they are doing.

AR [16:52]: Besides the limitations and biases that ChatGPT has baked into it, including the hallucinations and the generating of text, not facts, as you pointed out, and setting aside the difference between the paid version, which runs GPT-4, and the free version, which runs GPT-3.5, what are some of the things that we as science communicators and science journalists should be mindful of, in your opinion, when using AI tools like ChatGPT to help with our work?

ME [17:22]: I would say that if you understand how it works and what its limitations are, you will avoid at least 90% of the problems you can run into using ChatGPT. But my top advice would be: always review what ChatGPT produces. Don't rely on it completely. It's not here to replace you; it's here to help you. Using ChatGPT is mainly about doing things faster and with higher quality, because your human input can be focused on the things you excel at, while ChatGPT does the tasks that are simply time-consuming and don't need that much thought or focus.

AR [18:07]: And what has the experience been like for you personally, training science communicators and science journalists to use ChatGPT? Was it enjoyable? Was it frustrating? Was it rewarding? What sort of experience have you had with it so far?

ME [18:20]: I would say it was enjoyable and rewarding, because I feel we are discovering something new and fascinating together. Science-communication practice involves a lot of different job profiles and different kinds of activities, and when you teach people, you start to see how they use it, how they can be more creative and use it in different ways. I find that really fascinating: seeing how people apply this in different contexts and how it works, because most of the examples I give come from my personal experience, which is diverse, yes, but I haven't done everything. And it's rewarding because afterwards people come back to me and say, "You know, I did that, I used the persona pattern, and I found it super helpful; it helped me a lot and makes me do my job better."

And sometimes, when we discuss things like the biases of these tools, it makes them talk, think and reflect on their own practice, on how they themselves might be biased. When they see how ChatGPT is biased, they start to realise their own biases and work on them. So sometimes it can lead to more diverse and inclusive science-communication work, because people become more aware of that. This can be a critical moment for science communication, adopting these kinds of tools. As for the future: asking people not to adopt these tools is like asking people 20 years ago not to use the internet, or not to use Google because you are "cheating" by using Google, because it makes things too easy. Actually, this is what generative-AI tools are about: making things easier for us so that we can produce better, more efficient work.

AR [20:06]: We spoke a bit earlier about the biases built into ChatGPT when it comes to language. You've offered training in both English and Arabic. Did you notice any differences in how you chose to approach the trainings in those two languages?

ME [20:23]: Yes, yes, there was a huge difference, because ChatGPT does not do a good job in Arabic at all. At all. From the start of my training in Arabic specifically, I tell the participants that all the prompts we are going to write, and all our communication with ChatGPT, will be in English, because if you do it in Arabic you will get very, very bad responses. Usually, when ChatGPT is stuck with something in Arabic, it translates it into English, answers in English, and translates the answer back into Arabic. And you can tell it is a translated piece: it couldn't generate the text naturally; it is definitely translated from English. So yes, it's different.

There is also one of the very first exercises I use in training, when I try to explain how ChatGPT works, because at the end of the day it doesn't understand what it is saying. It's just a statistical model that tries to predict the next word, word by word in a sequence, based on the data it was trained on. So if I type something like "twinkle twinkle" to ChatGPT, it will write "twinkle twinkle little star" and the whole song. If I write something in French, like "frère Jacques", it will write the whole thing. But if I use a very famous children's song in Arabic, it doesn't know what it is. And every time I try this exercise in a training, I get a different response from ChatGPT. Sometimes it says it doesn't understand what I am saying. Sometimes it just translates the words into English and starts responding in English, even though I put the question in Arabic. And sometimes it completely invents a new song from those two or three words. This is fascinating, and it shows the participants: be careful when you deal with Arabic. So my approach is usually to do everything in English and translate only as the last step, because that way you minimise the errors throughout the whole process.
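The "twinkle twinkle" exercise is next-word prediction in miniature. The toy model below, a deliberately crude stand-in for what ChatGPT does with neural networks and vastly more data, counts which word follows each two-word context in a tiny corpus and then greedily continues a prompt; on a song it has never seen, such as an Arabic nursery rhyme, the counts are simply missing.

```python
# A toy next-word predictor illustrating the principle described above:
# given the words so far, emit the statistically likely next word. Real
# models use neural networks over vast corpora; this just counts pairs.
from collections import Counter, defaultdict

corpus = (
    "twinkle twinkle little star how I wonder what you are "
    "up above the world so high like a diamond in the sky"
).split()

# Count which word follows each two-word context.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict(a: str, b: str) -> str:
    """Most frequent continuation of the context (a, b) in the corpus."""
    options = follows.get((a, b))
    return options.most_common(1)[0][0] if options else "<unknown>"

# "Prompt" the model with "twinkle twinkle" and let it keep predicting.
words = ["twinkle", "twinkle"]
for _ in range(8):
    words.append(predict(words[-2], words[-1]))
print(" ".join(words))
# -> twinkle twinkle little star how I wonder what you are
```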

AR [22:17]: That's really fascinating, because Arabic is one of the most spoken languages in the world!

ME: Exactly.

AR: Anyway, now to something slightly, potentially controversial. Do you see a risk of science desks around the world simply getting rid of trained journalists and replacing their articles with ones written by ChatGPT or other such generative-AI tools? The reason I ask is that, as a science editor, one could in theory simply ask ChatGPT to summarise a research paper or a complex topic (you spoke about being able to provide it with the specific input you want included in a response) and then just edit the text it produces, without requiring a writer on one's staff. So is there a risk that we are simply training people in tools that will replace them?

ME [23:02]: Well, for science journalism, I think ChatGPT has the potential to do that. You can give it a paper and ask it to read it, especially with the paid version, and analyse it, and suggest an angle for a news piece about it, for example. It can suggest the name of a scientist you could approach, and the questions to ask, and then it can write the text from all of these inputs, which you can edit and review. You don't even have to edit the text yourself: you can edit it with ChatGPT. You could produce a whole article in maybe an hour at most, where it would usually take at least a day or more. So yes, it's a possibility.

But I think this mainly depends on how the whole media landscape evolves in the presence of these new generative-AI tools. Yes, we could end up deciding, "No, we'll keep journalists. We need the human element, and journalists can use ChatGPT to help them in their jobs, so that they produce more work, or better work." That is one way it could go. Or it could go in a completely different direction, I wouldn't say the wrong direction, where they decide, "No, we won't need journalists anymore. We will produce content through generative AI, and that's it."

Recently I participated in an exercise to imagine scenarios for the future of science journalism in the presence of ChatGPT, and I decided to go for a dystopian scenario. My scenario was this: you cannot think of science journalists or science media outlets alone; you have to think of the whole landscape. With time, maybe all content will be AI-generated, because it can be better personalised. We would see content generation across the internet become AI-generated; users would get used to that; and then there would be no need for media outlets to produce this kind of content, because AI is doing it. That would apply to science journalists, of course. And you can see how the whole landscape would be affected: the people producing scientific content, the people consuming it, and how it circulates through different kinds of media.

Maybe we will have a new kind of media. We have already seen a shift in the consumption of news on social media: from text on Facebook and Twitter, to pictures on Instagram, and then to short videos on TikTok, YouTube Shorts or Snapchat. You have to think of the big picture: science journalism is not isolated from other types of journalism and media. And in my opinion, if this scenario happens, it will happen to science journalism very early, because usually, when any media outlet worldwide decides on a budget cut, the first thing it gets rid of is the science desk. We have seen this. We saw, when the COVID pandemic started, that there weren't enough science journalists to deal with the situation, because we suddenly needed them after a lot of science departments in different media outlets had been closed in the preceding years.

AR [26:18]: Right. I suppose at some point it will come down to a question of trust, and the people consuming the news will decide whom they trust. Will they trust a science desk with known, recognised humans behind the scenes producing the, in quotes, "content": writing the stories, interviewing people, talking to them? Or one where it is just being spat out by some generative-AI tool? I suppose that's where the decision will be made: if people are happy to consume material that comes from generative AI, then the science desk will almost be justified in saying, "Right, the people want this, they're happy with it, it's good enough."

ME [26:53]: Yes, yes, it's a possibility. One of the things I can imagine is a kind of one-person science outlet: just an editor who uses generative-AI tools to produce a lot of content, then reviews and edits it; that's it. One person. Maybe it could even be fully automated, yes, but maybe we could have a science media outlet of just one person. That's a possibility, and honestly it's very difficult to predict how this kind of science news, or scientific content, will be consumed. The whole media landscape is changing very, very fast. Maybe at a certain moment no-one will be consuming written content at all. So yes, it will be different, and we don't know how it will be different.

I can see within my own family how I, my cousins, who are ten years younger than me, and my daughter use social media in completely different ways. When they try to find scientific information on the internet, everyone has their own behaviour: I go to Google; they go to TikTok. There have been a couple of fascinating articles about how TikTok can be used as a search engine, because people are starting to think more visually; it's very easy. And of course, OpenAI, the company behind ChatGPT, recently announced a new AI model that can generate videos, called Sora, and when you look at its capabilities, it's amazing, and it's just the beginning. So it's really difficult to predict what will happen, but it's a possibility. And at a certain moment it will be a choice that someone has to make.

AR [28:33]: Yeah, well, let's stay on the positive, optimistic side of things and assume that we're just going to use this as a tool to enhance our existing work as science communicators. What is it about something like ChatGPT that excites you the most when it comes to doing science communication?

ME [28:50]: For me, what excites me most is that with ChatGPT I can be really creative. For example, if I want to develop a science show about a certain scientific concept or theme, there are usually a couple of demonstrations that fit it, and you start building your show around those. Usually you end up doing the same demonstrations that others have used in other shows, and you try to build something new in the script, or in the overall theme of the show. That is where the creativity goes.

But with ChatGPT the possibilities are endless, because one of the jobs it excels at is brainstorming. If you know how to use prompt patterns for brainstorming, you can get amazing ideas. I find this really fascinating, because it helps me think, to create something out of the box that I would never have thought of alone! You know the power of brainstorming when you have a couple of creative people in the same room and they come up with a brilliant idea because of the discussion they're having? You can have that with just ChatGPT, because you can ask it to play different roles, to act as different people discussing the same thing from different perspectives, one ultra-creative, one very conservative. And you can ask it to analyse and assess the ideas and judge how feasible they are. It's amazing when you think about it.
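The role-playing brainstorm described here can be condensed into a single prompt asking the model to simulate the whole discussion. A sketch under the same SDK assumptions as above; the topic and personas are invented for illustration.

```python
# A sketch of multi-persona brainstorming, assuming the OpenAI Python SDK
# and an OPENAI_API_KEY in the environment; topic and roles are invented.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Simulate a brainstorming session about a science show on climate "
    "tipping points. Three participants take turns:\n"
    "- an ultra-creative exhibit designer,\n"
    "- a very conservative health-and-safety officer,\n"
    "- a science journalist focused on accuracy.\n"
    "Have them propose and critique three demonstration ideas, then close "
    "with a short assessment of which idea is most feasible and why."
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder free-tier model
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```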

One of the first times I used this kind of technique, I asked it to give me different ways to open a science talk on a certain topic, and it gave me ten different openers. Usually, when I think about it myself, I come up with only three or four. It made me think of different things and started thinking with me: what are the pros and cons of each, which should I use in which context? And it does this brilliantly!

AR [30:49]: It's obvious that tools like ChatGPT have tremendous potential for us as science communicators and science journalists to use in our work, and I'm sure everyone attending the training sessions you offer will learn better ways of using these tools. Thank you so much, Mohamed, for joining us on this episode of the COALESCE podcast. It's been a pleasure having you with us.

ME [31:12]: Thank you. I'm really glad to have chatted with you about ChatGPT today, and I hope our listeners have learned something new, or will try it in their work and see how it goes.


This episode of the podcast was edited by Sneha Uplekar. Music for SciComm Conversations is by Brodie Goodall from UWE Bristol. Follow Brodie on Instagram at “therealchangeling” or find them on LinkedIn.
