SciComm Conversations: “Non-human characters in science storytelling”

17.6 min read | Published on: 18.02.2026 | Categories: SciComm Conversations

Listen to “Non-human characters in science storytelling” on Spreaker.

Transcript

00:09: Welcome to SciComm Conversations, my name is Achintya Rao. We are continuing our series of interviews with science-communication researchers conducted at the PCST conference in Aberdeen last year. In this episode, we hear from Dr Hannah Little, Lecturer in Communication and Media at the University of Liverpool, on the role of non-human characters in storytelling about science. Hannah was interviewed by my colleague Sara Urbani from Formicablu.

Hannah Little [00:39]: So, my name is Dr Hannah Little. I’m a lecturer at the University of Liverpool in the Communication and Media Department. And I’ve been there for about three years. Before that I was a senior lecturer in science communication at UWE Bristol. But I’m originally a linguist and cognitive scientist who sort of accidentally ended up in science communication through a love of science and communicating it.

Sara Urbani [01:04]: And what have you been presenting here at PCST in Aberdeen?

HL [01:08]: Here I’ve been presenting some experimental work that I’ve done looking at storytelling and the use of storytelling within science. So, I’ve been doing what we call cultural-transmission experiments, where it kind of works like the game telephone if anybody’s ever played that, where you give somebody a story and ask them to repeat it from memory. And then their recall of that story gets passed to another participant who then recalls it again and it gets passed on and passed on. And through that method we can look at what information stays within the stories and what is lost. And so, what we’ve been doing is we’ve got lots of stories that have different types of information within them to see if that affects whether information gets retained within those stories. So, the presentation that I gave this afternoon… oh, no this morning it was… was all about social information bias.
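To make the chain structure concrete, here is a minimal Python sketch of how a linear transmission (“telephone”) chain can be organised, assuming a five-person chain as described. The toy_recall() function is purely an illustrative stand-in for a real participant’s written recall, not the study’s actual code or platform.

```python
# A minimal sketch of a linear transmission ("telephone") chain.
# toy_recall() is a stand-in for one participant's recall from memory; here it
# just drops the final sentence each generation to mimic gradual information loss.

def toy_recall(story: str) -> str:
    """Stand-in for one participant's recall: keep all but the last sentence."""
    sentences = [s.strip() for s in story.split(".") if s.strip()]
    if len(sentences) <= 1:
        return story
    return ". ".join(sentences[:-1]) + "."

def run_chain(seed_story: str, chain_length: int = 5) -> list[str]:
    """Pass a story down a chain: each generation reads only the previous
    generation's recall, never the original."""
    generations = []
    current_text = seed_story
    for _ in range(chain_length):
        recall = toy_recall(current_text)
        generations.append(recall)
        current_text = recall  # the next participant sees this recall
    return generations

if __name__ == "__main__":
    story = ("A forest fire broke out near the town. A mother called her daughter "
             "inside because of the smoke. The smoke was linked to a changing climate.")
    for i, text in enumerate(run_chain(story), start=1):
        print(f"Generation {i}: {text}")
```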

So, social information bias is something that is quite a robust finding within the literature already, where we know that stories are more memorable – people remember them more accurately – when they feature human characters, or specifically more than one human character interacting where we understand their motivations, compared to stories that have no human characters or stories with only one human character. So, if there’s some social aspect to the story, it basically sticks in people’s heads. And I was doing some work with some science-communication practitioners, where I was asking them whether knowing about this social aspect of storytelling is useful for their practice, whether they’re already using it within their science communication and their storytelling, or whether they weren’t – and if they weren’t, why not.

And what people kept telling me was that they were struggling because their science doesn’t necessarily have human characters in it. Some sciences are very human. They’re like medicine, or risks around things like volcanoes, or stories about humans or human risk. But a lot of science doesn’t have any humans in it at all. At the most extreme end we have things like maths, which is just numbers, but even a lot of physics, a lot of biology, is about animals rather than humans. And people who are interested in communicating these sciences kept asking me, “Well, [do] the characters within these stories have to be humans? Right, could we have animal characters? Could we have anthropomorphised inanimate objects as characters? So we could have a physics story about atoms that are interacting as characters within a story, or planets in a solar system interacting with each other.” And the answer is nobody’s done work on that, right? Nobody’s done this experiment where we’ve looked at whether social information bias – the bias towards information where we have these interactions – works with anthropomorphised things.

So I was presenting work where I did that. We did an experiment where we had one condition where there was a story with humans interacting: there was a story about a forest fire, where a mother wanted her daughter to come inside because there was smoke from these fires, and how that was connected to climate change. There was a story with animal characters, where the habitat of some rabbits was destroyed and that was removing the food source for some bobcats. And then we had another story where it was the same as the animal story, but they had names. So they were a little bit more humanlike, a little bit more anthropomorphised, because they had names attached to them.

SU [04:48]: What were the names?

HL: So it was Susan the bobcat and Frank the rabbit. And then I had another condition where it was plants that we’d anthropomorphised, and we had Terry the tree, who’d lost his friends in a forest fire. It was very sad. So we gave these stories to some participants, asked them to recall them, and did this game of telephone with them. So we ran chains of participants, where it was five people within a chain, passed on and passed on, and looked at how much information remained in these stories. And basically we found that the stories held up quite well, right, when compared to the human example, but as you might expect people remembered the human story much bet– well, not much better. A little bit better on the graph, a little bit better than the stories with the anthropomorphised animals. When you give them names, again you get a little bit of a boost. But poor Terry the tree – the plants were right at the bottom; people remembered that less well than the animals or the humans. But again, that’s very intuitive, right? So it’s always quite reassuring when the results of an experiment are very intuitive. So, yeah, that was basically me trying to answer this question: does social information bias work when we don’t have humans, or don’t have access to humans, in our stories about science, and instead have these characters who are animals or inanimate objects?
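One simple way to picture the “how much information remained” measurement is to compare each generation’s recall with the seed story. In practice such studies usually hand-code idea units; the lexical-overlap proxy below, the stop-word list and the retention() name are illustrative assumptions only.

```python
# A rough proxy for how much of the original story survives down a chain:
# the share of the original's content words still present in each recall.

import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "was", "were", "is", "in", "that"}

def content_words(text: str) -> set[str]:
    """Lower-case words with common function words removed."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w for w in words if w not in STOPWORDS}

def retention(original: str, recall: str) -> float:
    """Fraction of the original's content words that reappear in a recall."""
    original_words = content_words(original)
    return len(original_words & content_words(recall)) / max(len(original_words), 1)

# Example (using the run_chain sketch above):
# scores = [retention(seed_story, recall) for recall in run_chain(seed_story)]
# Comparing these per-generation scores across conditions gives the kind of
# contrast described here (humans vs animals vs named animals vs plants).
```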

SU [06:09]: And the result says yes.

HL: Yeah, yeah, yeah, but with the caveat that, like, you do lose some memorability.

SU [06:18]: Going back to social information bias, if I recall correctly it’s one of the many cognitive biases. Can you explain a little bit what cognitive biases in general are and maybe give another example?

HL [06:33]: So we need to be careful with this term “cognitive biases”, because quite often when I say cognitive bias, science communicators or scientists get a bit uncomfortable, because we hear the word bias and we’re like, “Oh no, we mustn’t make our science biased, or we shouldn’t make it biased against any one audience or population, or change the results of our science.” But that is not what I’m talking about at all. So when I’m talking about cognitive biases, I’m talking about biases in our brains that make us as humans pay attention to some types of information over other types, right. So we all know as humans we find some stuff really, really interesting and we want to read it and [it] really sticks in our heads, and some stuff is really dull and we forget it the second we read it or hear it. So what I’m basically talking about are the biases that lead us to think the interesting stuff is interesting and lead to that stuff sticking with us.

So yeah, there’s a few different biases that have been identified in the cultural-evolution literature. One of them is a bias towards negative information. So if you have stories where lots of negative events happen, that sticks with people more than if it’s lots of positive stuff. Which isn’t that surprising, right? Because anybody who’s ever kind of woken up in the middle of the night with a memory, it’s almost always a negative one, right? We don’t remember the positive feedback, we always remember the negative stuff. What’s so powerful about the bias towards negative information we have is that even when you give somebody a story which is quite neutral, right – stuff’s happened but it’s not really clear whether it’s positive or negative – when you put it in one of these cultural-transmission experiments, the story changes to make the events negative, right? We take the neutral stuff and we make it negative.

SU [08:16]: To make it memorable.

HL: Well, I don’t think we’re doing it to make it memorable. I think we just interpret information that is relevant as being more negative. Because if you kind of think about this from an evolutionary perspective, information that is negative or potentially dangerous to us – we should pay more attention to that, right, because our survival depends on us paying attention to the stuff that could potentially be dangerous to us. And that leads nicely onto another bias, which is the bias towards survival information: any information that aids our survival, or might help us survive, sticks with us much better within these cultural-transmission experiments. And the last one is counterintuitive information. So stuff that’s surprising, right? And we see this all the time in the news media, right, lots of headlines are, you know, “If you drink a glass of wine a day, it’s better for you than going to the gym.” And that’s really, really counterintuitive. And also not true! [laughs] But you know, the tabloid press really love taking science stories and making them counterintuitive, or really searching out these counterintuitive elements within them to have the headline, right?

So this is, you know, a little bit tricky for science communicators, because it might be tempting to try and find something that’s quite counterintuitive about our research in order to make it more memorable, make people pay more attention to it, but most science isn’t counterintuitive, right? In most science, most of the time, the thing we’re expecting to happen happens. And that’s not newsworthy, and we often need to communicate things that are not surprising. So it’s important for us to communicate that vaccines work, but if I came to you and said, “Oh, I’ve done, you know, some clinical trials and shown that a vaccine works,” you’re going to be like, “Okay, that’s really great, but it’s not, like, surprising, because we’ve been doing clinical trials with vaccines for decades now.” So yeah, that’s another thing I’ve been doing: taking these cognitive biases and asking science communicators where we need to actually be a bit careful with our science communication. Like, when can using these biases be potentially a bit dangerous to what we’re trying to achieve with our communication? Because obviously the integrity of the science is really important to keep, but it’s also important that we engage people and try and get that information sticking in people’s heads.

SU [10:47]: Can I ask you how many people were in the experiment?

HL [10:51]: So for the anthropomorphising one we did 10 chains for each condition and then each chain had five people – this is me doing some quick maths – so that’s four conditions with 50 people in each one, so it was 200 people.
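Spelled out, that quick maths is just the following (a trivial sketch, with the condition labels taken from the stories described above):

```python
# Sample-size arithmetic: 10 chains per condition, five people per chain,
# four story conditions.
chains_per_condition = 10
people_per_chain = 5
conditions = 4  # humans, animals, named animals, anthropomorphised plants

per_condition = chains_per_condition * people_per_chain  # 50 participants
total_participants = per_condition * conditions          # 200 participants
print(per_condition, total_participants)                 # 50 200
```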

SU [11:06]: And were they from any walk of life? How did you choose the sample?

HL [11:12]: It was actually an online experiment. So they were people off the Internet. They were mostly Americans. The only restriction we put on it was that they should be able to read and write in English, because that was a key part of the experiment – that they were reading stories written in English and then writing down the stories that were going to be given to somebody else. We also gave them an English test at the beginning of the experiment, and if they failed the test we threw their data out. So we actually had a lot more participants than originally ended up in the dataset. It’s just that with research that you run online you often get a large amount of junk that you have to kind of sift through.

SU [11:50]: And how long did it take, the whole experiment? I mean, recruiting and…

HL [11:53]: Well, the thing about online experiments is that you can do them very, very quickly. The tricky thing about this was that, because we were running them in generations, we had to have one participant after another, and after each participant we had to check whether they’d done the task properly, because some people just kind of mash their keyboards or don’t follow the instructions. And we were also looking… we had some code that looked at whether people had copied and pasted the stories. And quite often we’d see that, so any copying and pasting, the story got thrown out. Any gibberish, the story got thrown out. Hadn’t followed the instructions, the story got thrown out. Failed the English test, the story got thrown out. So after every single participant I had to go in and check all that. How long did that take? I think we did it in about a week, and that wasn’t like full-time. Because it’s so quick, and you can run chains at the same time as each other, if you’re kind of paying attention to it, one experiment where you’re running maybe five chains at a time probably takes about an hour.
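For illustration, the per-participant checks described here might look something like the sketch below. The similarity threshold, the gibberish heuristic and the helper names (looks_copy_pasted, looks_like_gibberish, keep_response) are hypothetical, not the study’s actual pipeline.

```python
# A sketch of per-participant quality checks for an online transmission-chain study.

from difflib import SequenceMatcher

def looks_copy_pasted(original: str, recall: str, threshold: float = 0.95) -> bool:
    """Flag recalls that are near-verbatim copies of the story the person read."""
    return SequenceMatcher(None, original, recall).ratio() >= threshold

def looks_like_gibberish(recall: str, min_words: int = 10) -> bool:
    """Crude check for keyboard mashing: too few words or too little alphabetic text."""
    words = recall.split()
    alpha_share = sum(c.isalpha() or c.isspace() for c in recall) / max(len(recall), 1)
    return len(words) < min_words or alpha_share < 0.8

def keep_response(original: str, recall: str, passed_english_test: bool) -> bool:
    """A recall only stays in the dataset (and feeds the next generation)
    if it passes every check."""
    return (passed_english_test
            and not looks_copy_pasted(original, recall)
            and not looks_like_gibberish(recall))
```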

SU [13:00]: And so this was basically about short-term memory, because it was so… the generations were so close. Maybe it could be interesting to see after, I don’t know, one month, one year – or one year is too much maybe – how much you recall?

HL [13:15]: Yeah, and that is the thing about these experiments… So we weren’t asking them immediately after they’d read the original story. We’d give them the story, and we didn’t tell them that we were going to ask them to recall it. We just said, “Oh, can you read this story?” And then we gave them some other questions that, you know, acted as a distractor task, which were all… questions about science, and then we said, “Oh, can you recall the story?” Now what I think is interesting is the people who copied and pasted, because they must have kind of cottoned on to what we were going to do. Because I think a lot of these online participants do a lot of these experiments – for some people it’s kind of their whole job, right, they make money from these things – so they preempted it enough to copy it. But yeah, if they’d done that we just threw it out. Which is, you know… we kind of knew that was going to happen, so we’d put some buffers in.

SU [14:08]: So going back to the very basics, this is all about storytelling, right? And storytelling is a human trait that is very ancient in our species. So I know that you have a tool kit that you gave out during your presentation [at PCST]. Can you give us some tips or suggestions about what makes good storytelling?

HL [14:33]: So what the tool kit is doing is it lays out the cognitive biases that I just spoke about, but it’s based on some work that I did with science-communication practitioners, where I was asking them about the biases, and asking them how would this bias be useful for you, but also where might it create risks for your communication. Because what’s crucial to remember about this work is that some information being memorable is not the same thing as it being effective for a certain objective, right? So if we take negative-information bias, for example, which was the most controversial one for science communicators – they often have quite a visceral reaction to that. People are like, “Oh no, we mustn’t make our communication negative, because that will make people feel fear or despair, that might prevent behavioural change…” Or, you know, if you are doing a science-communication initiative because you’re trying to raise aspirations in science, or change policy, or recruit more women into science, or anything like this, then making people feel anxious or sad probably is counter to that objective. Or it can be. And so what the tool kit is trying to do is map out each bias, all of the evidence that we have from the cultural-transmission literature about its effectiveness and how it works, but then it tries to map where these biases might be useful for science communication but also where there are risks. So what I’m trying to do is not come to science communicators and say, “You should make your stories more social,” “You should make your stories more negative,” “You should make your stories more counterintuitive.” What I’m trying to do is say, “Look, the literature shows these things make information more memorable, which is, you know, quite important for a lot of science-communication work, but – big but – what we need to be doing is thinking critically about how these biases are going to interact with your objectives, what you specifically are trying to do.” So the tool kit is trying to give people the tools they need to basically do that critical analysis of: “Is this going to be useful for me, and how can I use it? If I do use it, do I need to be careful?”

SU [16:57]: You started your presentation this morning with a very ancient, very old quote, or citation, from Bartlett in 1932, and that… I was very impressed because, well, you said it’s good to go back in history and look at examples, or good examples, from the past. So can you tell us more about that? Why did you decide to cite it?

HL [17:26]: I decided to cite it because Bartlett was the person who invented these cultural-transmission experiments that I’m using, and it always… you know, you should credit your sources, even though many people have used them in the recent literature. So there’s a big gap in the literature: Bartlett did it in 1932, all of these telephone experiments with stories, and looked at what information stayed and what didn’t. And he was doing it to kind of look at folk stories and why, across different cultures, folk stories look quite similar, even though we think that in some instances there wasn’t necessarily cross-pollination in terms of cultural contact or transmission – why is that, right? And the answer is because some types of stories appeal to humans more than others. And after Bartlett, no-one really did it for 70 years, until Alex Mesoudi came along – a researcher from the University of Durham, now in the anthropology department – and he did this original experiment using social information bias. And since then there’s been lots of work looking at different biases, different frames of stories, and different types of information within stories, to look at what’s effective when it comes to stories being passed on. So yeah, I cited Bartlett because it just seems fair to cite the person who came up with it, even if it was nearly a hundred years ago.

SU [18:51]: And after the rediscovery… because for 70 years, or I don’t know how many, basically nobody paid attention, or it went out of fashion, I don’t know.

HL [19:02]: Well, Alex Mesoudi basically says that’s because psychology got a bit distracted with behaviourism and cognitive psychology. It was paying less attention to these social processes. But, you know, the transmission of stories is a very, very social process, and the process of science communication via storytelling is a very social process, and so it’s really important to look at how these social interactions affect our information, you know. We as humans do not live in little bubbles on our own.

SU: Exactly.


This episode was edited by Sneha Uplekar. Find out more about Sneha’s work on her website, microdragons.co.uk.

Music for SciComm Conversations is by Brodie Goodall. Follow Brodie on Instagram at “therealchangeling” or find them on LinkedIn.

SciComm Conversations, with the exception of the music from Game Changers, is released under the Creative Commons Attribution 4.0 licence.

The COALESCE project is funded by the European Union to establish the European Competence Centre for Science Communication.

Views and opinions expressed on this podcast are those of the guests only and do not necessarily reflect those of COALESCE or of the European Union.
