SciComm Conversations: “Engaging the Public on Artificial Intelligence”

Published On: 30.04.2024
Categories: SciComm Conversations

Listen to “Engaging the Public on Artificial Intelligence. Guest: Jennifer Edmond” on Spreaker.

Transcript

Achintya Rao [00:07]: Hello and welcome to SciComm Conversations. My name is Achintya Rao. I am the Communications and Engagement Manager for COALESCE and your host for today’s episode.

In our first season of SciComm Conversations, we are chatting with experts on the topic of science communication in the age of artificial intelligence. Today, in our second episode, we hear from Professor Jennifer Edmond from Trinity College Dublin on the subject of engaging the public on artificial intelligence. Jennifer has been involved in a number of projects that use theatre and interactive installations to engage people with AI.

Jennifer Edmond [00:44]: My name is Jennifer Edmond. I’m associate professor of digital humanities at Trinity College in Dublin, Ireland. My research interests… If you know anything about digital humanities, you know that it’s generally a complex road through research interests. So, I actually started my training as a scholar of German language and literature. And then I became very interested in the affordances of technology for that work. And I actually ended up kind of working for many years in sort of meta-research, looking at the way that humanities scholars were adopting new technology tools. So that’s where I really became established in sort of the core interests of digital humanities.

But the more I became interested in how people use tools, the more I actually said, “Well this isn’t just a problem for scholars.” So now, much of the work that I do is in what we call critical digital humanities, where I’m looking at not how scholars use technological tools, but how all sorts of different people use technology tools, including big data and AI.

AR [01:38]: It’s lovely to have you with us today. As a researcher in digital humanities, when did the topic of artificial intelligence first interest you?

JE [01:47]: It’s interesting because it really does go back… It actually started more with big data than AI. Anyone who knows these things knows that they’re very entangled, you know, you need the data to train the AI, etc. So, there was a project we did back in 2016/2017 called Knowledge Complexity, which was actually meant to be a humanities look at big data, like kind of industrial, big-data research. And one of the reasons we did that project is because in the work we were doing with humanities research processes, we found what I refer to as a lot of use of “false boundary language”. So the idea of a boundary language is words, terms or ways of expression that allow two different knowledge communities to communicate. And it’s supposed to be the kind of thing that’s stable and standardised across these two communities.

What I was finding is – in particular – words like “data” were being used in such a way that they blocked rather than facilitated communication. So, you kind of have this sort of, “We need the data,” “We gave you the data,” “We need the data,” “We gave you the data.” And the reason why this went on for so long is because people had different impressions of what was meant. So the discourses around data became really useful. And that’s what we were doing in that first project. And I just became very aware of how many instances there were of places where humanities knowledge would become incredibly useful in those discourses around big data, and then ultimately around AI.

I suppose my first foray into AI was… the Volkswagen Foundation had a programme they called a future pilot series. And it was about exploring artificial intelligence from different perspectives: legalistic perspectives, technological perspectives, social science, sociology, science [and] technology studies, all of that. And we started to talk across these boundaries and, you know, we were there long enough that we couldn’t fall back on these false boundary languages. We had to actually figure out what the other person meant. And a group of us from that actually came together to develop this question around “If robotics was going to develop its own culture, what would that look like?” You know, how would we be able to use the competences that we know as scholars of literature and culture about multicultural communication, intercultural interaction? How could we use that to come up with a new paradigm for dealing with AI?

We were looking at AI and thinking, how do we get people to look at this not as a tool and not as a person? So, how could you actually stop people trying to short-circuit the consideration of, well, “What is this thing that I’m dealing with?”, and get people to actually observe it, understand it and not make decisions about how to interact with it until they’d actually come to some kind of “relationship” with it? And I kind of put that in air quotes because we needed people to sort of not fall back on what they were being primed to think about the tool. You know, again, ChatGPT drives me crazy ’cause it refers to itself as “I”. I think you need to kind of have a human experience to be an “I”.

So how could we remove that from the equation? How could we get people to actually understand or engage… encounter these machine others as machine others on their own terms? So that’s kind of where that project came from and unfortunately it wasn’t funded. We did a lot of talking and did a lot of development, but we never got to actually do the experimental part, which is kind of a shame.

AR [05:05]: You’ve been involved in the development of a few projects that use performances, interactive exhibitions, comedy shows, for example, to engage people with these emerging technologies such as artificial intelligence. What led you to adopt that particular approach?

JE [05:20]: So, I view myself as… I’m paid from public taxes, I am a public servant. And I take that very seriously actually. So, I think that where my research has a public-facing interest, it’s kind of… I feel that it’s sort of incumbent upon me to realise that however I can. So I’ve always been kind of interested in public communication and I did actually, many years ago, engage with a programme called Bright Club, which encourages academics to explore their work through stand-up comedy. Luckily there’s no video of that. So that interest was always sort of latent in me, but I’m not, I think, inherently artistic.

However, back, oh gosh, during the pandemic days, Science Gallery in Dublin and the ADAPT Centre, where I am a funded investigator based… again, the Irish national centre for advanced ICT research, they were doing a kind of a co-sponsored exhibition around the question of bias. And what they did, which was really clever, is they had a sort of a brokerage session, where people who were interested from the kind of the university side and people who were interested from the artistic side were given the opportunity to meet up. And I made a really wonderful, strong connection with an artist by the name of Laura Allcorn. She’s based in Portland in the US. And she runs the Institute for Comedic Inquiry. That is kind of her… the umbrella she uses for her activities around using humour to communicate science.

And so the project that we connected on then was called the SKU-Market. And we were kind of really thinking about well, how can we take the experience of engaging with these kinds of technologies, which can be so opaque and can be so woven into our daily lives that we don’t even see them anymore… How can we make it so that people can see and feel this without feeling threatened, but also without feeling sort of like “You need to understand this!” ’cause there’s way too much discourse and rhetoric in technology development about “Well,” you know, “the user should have done that.” You know, like “The user should read the terms and conditions.” Well, that’s… I’m sorry, that’s not actually what’s expected of us. We are not expected to read those terms and conditions.

So how could we get people to not feel that they were being preached to, […] not feel that they were being educated with a capital E, but that they could walk away with a sense of, “Hmm, OK, that’s how that works. I want to actually maybe think about some of my decisions with that experience in mind.” So that’s kind of where the approach came from.

AR [08:04]: That project, which was called SKU-Market, what does “SKU” stand for and what could people buy in the market?

JE [08:14]: Yeah, the SKU-Market. So SKU, you probably know that an S-K-U refers to those bar codes that you find in every supermarket. It’s called a SKU. I actually can’t remember at this moment exactly what SKU stands for, the acronym. But we also like the idea of that also being skewed, S-K-E-W. The idea that if you see yourself through the lens of how these marketers see you, […] it’s kind of like a fun-house mirror: you’re not gonna recognise yourself.

So what we wanted to do was set up an experience where people would, you know, stand in front of what looked like a mini-market shelf, but a slightly weird one because we were aiming to have products that really kind of forced you to sort of look yourself in the face and say, “OK,” you know, “is this the kind of thing I’d really buy?” So like, you know, it was like pints or, you know, CBD gummies, you know, things that kind of would potentially put you in an age bracket or would potentially put you in a demographic of some sort, a salary bracket, something like that, so that you kind of felt exposed by choosing these things. And then, after you chose and scanned with your phone five items that appealed to you, you were given a till receipt. And then it sort of used the products that you chose to create a profile of you.

Now, it wasn’t really artificial intelligence. We were not doing any machine learning in the background, but what we called it was an “artificial artificial intelligence”. [That is], it worked the same way, but with a very, very rough… you know, it wasn’t using a statistically grounded predictive model. It was just kind of saying, “The third thing you buy is the thing you’re going to think most about.” So whatever your third thing is, you’re going to get this message. So it was almost like a Chinese fortune, you know, the kind of fortune you get in a fortune cookie. So it said, you know, “You have notions and are somewhat bougie, but you make up for this by buying too many plants.” You know, it kinda… Funny things like this.

And the thing that was interesting is that people really responded to it. And, you know, we’d always see… there would always be this turn. So, ’cause you’d go through the front half… the back half of the exhibition was all about, “This is what the analytics looked like. This is what you looked like to us. You’re just a number,” is basically what that said. It had… You know, we had the algorithm, which was basically a shredded piece of paper in a transparent box. It’s like, “We’re transparent about our algorithm,” but it’s just a ripped-up piece of paper.

And the last thing that people had to do was pick up a… one of these blue tokens – like you sometimes get in supermarkets so you can give to a charity – and vote on what they wanted to see in the future of AI. And there were three options, kind of slightly, you know, things that people might not really want to get behind, you know, like: “I don’t need to make choices for myself,” you know, “just […] take away my free will and self-determination.” Things like that, not quite in that rough language.

But that moment of turning from the front to the back, people would always stop and they’d look and they’d share their, you know, these kind of fortune-cookie fortunes with each other. And they’d laugh. And that was always the key moment for us, is that, in seeing themselves profiled, they’d laugh, but they’d laugh because it didn’t quite fit. And it’s that feeling of this whole system not quite fitting who they were that we wanted them to leave with, and by and large they did.

AR [11:30]: This is really fascinating because I remember a few years ago, there was this… story about, I think it was a father finding out that his daughter was pregnant by the… by one of the shops sending vouchers for it based on purchases that she’d made or something like that, in the post, and sometimes the profiles that are built, that aggregate over time rather than just like five purchases… So it knows, you know, that I live a very unhealthy lifestyle because I’m buying crisps far too often. And at the moment, it seems that it only ends up being “beneficial”, in quotes, to the user, in the sense that then you get these offers based on your interests and […] we’re willing to sell, effectively, that data in exchange for value. But at some point, you know, if that’s going to go to insurance companies or whatever and they’re like, “This person has an unhealthy lifestyle, raise their premiums, raise the deductibles” – that’s gonna be a whole new world that’s gonna open up.

JE [12:31]: At the bottom of every till receipt, we always had what we called our partner offers. And they were things like, “We’re sorry, you’ve been denied a small-business loan.” “Your health insurance premiums are going to be raised by this amount.” Because we were trying, again very gently trying, to introduce that seed of doubt that, “Oh yeah, this is really fun. But actually, ooh God, you know, I could see that happening and there is this thing I read about where that did happen.”

Surveillance is a problem for people who need… you know, we need to transgress. We don’t know where our edges are unless we transgress. So, being surveilled is one problem, thinking we might be surveilled is another, or not realising when we are surveilled. So there’s all this kind of problem of having too much about us in the public sphere and having too much about us available to companies that simply want to sell things to us ’cause, you know, we’re fragile, we’re fluky, we do things, we don’t know why. Being able to hack that and to manipulate that is really dangerous.

AR [13:28]: And we already touched upon this a little bit, but how did people react when they faced the fact that their purchases, OK, their five purchases, were being used to build these, you know, in quotes, “detailed profiles” of each of them from just these few purchases? Now, you mentioned that that was a bit light in terms of predictive value and you would just pick the third item and make some sort of statement on that. And people laughed, you said, they were having a chuckle when they went behind the exhibit. But did you get any feedback on people thinking about these issues of having these profiles built about them or being surveilled without realising they’re being surveilled?

JE [14:02]: Yeah, I mean, we want to be a part of our own story, right? So when people would see this, they’d often go, “Oh my God, that is so me.” You know, so people wanted to be seen in the story we were telling about them. So there was always some kind of connection there and it was always that literal and figurative turn around the edge of the exhibit where they started to see that actually this isn’t something that is just kinda cute and kinda fun. This is something that actually has a real impact on their lives and can constrain what they’re presented with and how they might see who they are.

It was a physical piece, but it was a narrative journey. It was, “I’m shopping! Look, I’ve got, you know… this is what it means that I was shopping. Oh, this is what it really means that I was shopping. OK, do I approve of this? Do I want to make a statement against this?” And then they walked away and took it with them back into their lives. So we always had that moment of thoughtfulness. And again, if you can bring together the lightness and the thoughtfulness without making it feel oppressive, I think you’ve really accomplished something in terms of science communication.

AR [15:07]: Moving on to something else, which is a little more focused on sort of more recent AI tools that have been emerging over the last couple of years. One of your recent projects was called “Who Wants to Write an Email?” Again, this was with your collaborator Laura Allcorn. This was a show that you had at the Dublin Fringe Festival.

JE [15:24]: Yeah, and this is where, you know… one of the things that Laura always says is you can’t start this kind of collaboration, you can’t start this kind of project from like, “Ooh, we wanna build a game show and put it in the Dublin Fringe.” You don’t start that. You start from “People are not encouraged to actually think of the way they express themselves as like a superpower.” Even if you’re making grammatical mistakes or, you know, if you say something that you just think sounds stupid or you have a weird accent, you know, whatever it is, that’s who you are. And that’s how other people see you. And we’re constantly being pushed towards this sort of centralisation towards a kind of a “correct language”, which is, you know, it is a language that is predetermined by the data that’s out there, which is largely, you know, white [and] western. You know, it’s all these things… And like if we lose that cultural diversity, we’ve lost so much richness. And every time we change the way we speak – because we want to sound more like the algorithm or because we, you know… just put the predictive text in there – sometimes it’s really funny, but sometimes it just, you know… kind of narrows who we are or narrows our self-expression.

So, we wanted to have a… a stage show, to have people be able to engage in what we really came to recognise as an exercise in collective sense-making. So, you know, here’s a new system. Do I want to use it or not? We’re encouraged, we’re kind of primed to see that as a consumer decision. So, “It’s free. It’s good value. So what’s the problem with it?” right? So what we wanted to do instead is put people in a position where they’re making decisions about whether they felt technology was appropriate or not, but doing that in the context of, first of all, having again… and you’ve got to think about it as like Who Wants to Be a Millionaire. So it’s the same atmosphere, but you have a host who is helping you to think through these things, prompting you to be critical. You have an audience, who is there as a lifeline. So you can actually poll the audience and find out what they said. And you also have an AI expert. This is kind of the phone-a-friend, and the AI expert was there to kind of say, “OK, well, you might be seeing certain kinds of biases here.” Or “You might be seeing that the data actually doesn’t cover this kind of situation, so you’re getting a weird proxy.”

So, the model we finally came up with had people kind of guessing how an AI would finish a sentence, an email. And they were, you know, crazy, wonderful scenarios, you know, like trying to write an alluring email to a film star, you know, that your friend happens to know, where like, how does that come out? And there was just a lot of humour that came out of this. But the process of development was quite serious for us because we knew… again, we knew the journey we wanted people to go on, we knew the way […] we wanted them to feel in those interactions. And this is why it was important that it was not a scripted show; it was an interactive piece of theatre. And we never knew what was going to happen on the night. But we really wanted people to work their way through these questions and say, “Do I feel good about this? Do I feel bad about this?”

And we kinda had these check-ins at the middle and at the end where the whole audience would kind of have representatives who would say, “I wouldn’t send something like that.” And always the question we turned around to was, “Would you want to receive an email like that?” ’Cause sometimes we feel much more likely to… Again, in this consumerist mode, it’s like, well, this is a tool, I can use it, I can write an email with it. But if my mother sent me an email that I thought was written by AI, I’d be really hurt.

And all of this came out kind of in the co-creation that we did ahead of time, is that there is this difference between using a tool and receiving something, especially in a sensitive situation. Like, you know, if it was a condolence letter written by AI… I mean, how could we do that to each other? So quite a complex thing. But the interesting thing was, for every show we did, at the end an audience member came up and decided whether or not what they had seen was in alignment with the audience values or out of alignment with the audience values. And every one of those audiences went for the non-alignment. So people left feeling perhaps slightly disturbed, perhaps really excited about what the technology could do, but also with the sense of “We need to make sure that the companies who are building these do better,” because we don’t want to see our human communications constrained by what they think of or what their data thinks of as how we should talk.

AR [19:43]: The whole exercise, this whole endeavour of getting people to think about whether it is in alignment with their values or out of alignment with their values, whether they would like to receive these sorts of emails… Isn’t there a risk that that will soon be incorporated into how these AI emails and other prompts are going to come out?

So, yesterday I read a comment online that said someone, in order to confirm whether they were chatting with a human or a chatbot in one of these customer-service forums, asked the recipient to add two 10-digit numbers. And the response was, “Come on, dude.” And the person said, “OK, this is a human being, because the AI will probably give you the answer within seconds.” And someone else replied to that saying, “Oh, well, in which case, they’re just going to add a condition to the AI response that says, ‘Do not respond to things like this, respond in a way that humans would be expected to respond.’”

So, is there a risk that these sorts of explorations will somehow train the emails on the kinds of emails that you send your mum regularly, and then you can have your personal AI write an email in a style that would be indistinguishable or nearly indistinguishable from what you would normally have sent?

JE [21:00]: Yeah, absolutely. And this is where we really need this kind of three-part engagement. We need technology companies to actually be more aware of what it means to build human-centric products. And that does mean not encroaching upon the human right to self-determination, human individuality; all of that needs to be respected and is not. So we have the technology development, which is currently unregulated. And that’s where the second pillar is around control. So what are the controls that we can apply from the top down, but also from the bottom up…

[There] is a lot of talk about ethics in AI companies. And if you look at the research about it, it’s widely pilloried as just talk. It’s not rewarded. It’s not central to the values. It’s not something that teams feel is kind of a shared value. So, a kind of a control from the bottom up and a control from the top down is really important.

And then we also have… Again, well, I would never say we can make the user responsible for everything, [but] we do need people to be more critical about what they’re doing. Like in the example you gave, sorry, who wants to… why are you using AI to write your mother?

AR [22:08]: I completely agree. I completely agree. But it’s the sort of thing, you know, where… maybe not your mum, but if there’s sort of regular correspondence with a close-enough but slightly distant friend, maybe you’ll find that… I don’t know. Like someone sends you an email at a time when you don’t feel in a position to respond and you let your email client respond on your behalf because you’re too busy to respond to a friend.

JE [22:31]: Well, then you will not have any human relationships. And frankly, we are social animals. So it really becomes a matter of what our priorities are. And this is where I really worry. And this is one of the reasons why I’m so engaged in this space: I worry that we’re being led down a path to forget what we want, because we’re being constantly hit by messages about things that make us more malleable, things that make us better consumers.

How do we back off of that? I think we need that tripartite structure of better technology development, better regulation and more aware consumers or more aware users. And all of that together… But you know, we have to question where the technology is leading us anyway, ’cause otherwise you’re just gonna [get] bots writing to other bots. Like, what’s the point of that?

AR [23:18]: I love the fact that you spoke about humanity there in the context of AI, because one of the things that has been sort of an increasing worry is that when we initially envisioned sort of robotics and technology, the AI systems, it was… You mentioned the word “tool” earlier, but it was meant to be something that takes away the drudgery and the mechanical work and the repetitive tasks, and leaves us with this space to be creative and do things for pleasure and leisure.

And now you have AI tools that are writing songs for you in the style that you want with the lyrics that you want, you have AI that is creating paintings and photo-realistic imagery for you, and creating short video clips and animations. All of the things that we would rather spend our creative time doing, the AIs are now doing, and human beings are sitting there having to moderate horrific comments posted on places like Facebook. So we’re doing really drudgy work while the machines are having, in quotes, “creative fun”. How do we, besides regulation, how do we recentre humanity in this discourse of artificial intelligence?

JE [24:31]: I think the example of the AI art is a really good one. You can view art as a product or as a process. If we’re just looking for a picture of happy people to put in our brochure – so if we’re looking at the kind of commercial art – that view of art as a product makes sense. And of course, so many areas of art like popular music, you know… we have all of this evidence that, you know, Spotify doesn’t play fair. We have all of this evidence of course that, you know, the system of record labels before that didn’t play fair.

And this is the great thing about technology, is that it gives us an opportunity to kind of go back and say, “OK, what would a music ecosystem look like if it did play fair?” You know… And I have a former postdoctoral researcher who is actually working on an AI badge for music that can actually say, “OK, this is a use of AI that actually respects the values of the music ecosystem.” ’Cause at the end of the day, you’re absolutely right, we’d all rather be finding ways to express who we are, to connect across generations, to build, you know, social capital, to… all of that stuff that art does for us. That’s stuff that we need to hold dear.

And I would like to think that even if you can get people to buy, you know, Midjourney-created pictures, it’s such a flood of content that people aren’t going to buy that; they’re gonna look for something that has authenticity, that has an obvious kind of what they call a contagion effect – that’s like, you know, nearness to somebody who I respect – that has a kind of a human effort in it. That’s what we’ll look for in art.

And I’d like to hope that this moment of seeing that, you know, you can just churn out a song… You can churn out videos for kids… Let’s not do that, actually. Let’s create better art for kids, let’s get back to actually thinking about the content that we let our kids interact with. All these pressures to do more, do faster, they’re taking those proper decisions away from us. So this is where we really need to step off of that consumption treadmill and find ways of re-owning the life that we live that technology has kind of sped up.

You see this going back to the 19th century: it was photography that made us think about privacy, you know, it was looms that made us think about factory work. Let’s take this moment and re-evaluate what we want. And again, that’s going to be different in different areas and we have to make sure that it’s not extractive and not exploitative, and all of these wonderful post-human values we can now bring to that… That could be so exciting, if we don’t just get distracted by another video of cute kittens.

AR [27:09]: Yeah, artisanal art almost as opposed to mass-produced art. Like artisanal objects that we like to have in our houses.

JE [27:18]: Absolutely.

AR [27:19]: And as a researcher, from all of these sorts of interactive, performance-based, exhibition-based modes of engaging various publics with artificial intelligence, what have you personally learnt about how people approach topics such as artificial intelligence?

JE [27:38]: It’s incredibly varied. And again, thinking about the audiences that we had, both for our co-creation events and for the actual performances of “Who Wants to Write an Email?”, we had everything from AI developers, we had educators and we had people who just kind of wandered in off the street because it was a rainy Friday night, and they didn’t know quite what to do with themselves and, like, their son was really into this AI stuff. I mean, we really had the range. And I think one of the things we learned is, first of all, not to make assumptions about what people know, because even people who would be, you know, high-end developers and kind of know on some level that there are biases – well, they may know their biases and they have, you know, sat through a training session on intersectionality, but they may not realise that the whole reason, you know, you have problems like Facebook in Myanmar is that it blew up because of a cultural blind spot in a technology company.

So it’s really good not to make assumptions, and it’s also really good to give people that space to think, to think in a way that is integrated and communal and that they feel supported. Because, again, we don’t get that kind of opportunity that often either. So it was really… again, it felt like such a privilege to interact with people in this way because it’s, you know… usually I write articles and I send them off into the world and I never see them again. Somebody sends me some numbers! But actually really watching people in real time figure out what this meant for them and for the people around them was really inspiring.

AR [29:13]: And finally, do you have any words for science-communication researchers and practitioners on this topic of engaging the public with AI?

JE [29:23]: You want to kind of come up with a project that deals with… so, to think about, “OK, well, what do I think is going wrong?” Because if you can start from what’s going wrong, you can start to say, “OK, how can people be brought gently, carefully, sensitively and with a sense of humour – or not! – to a recognition of something that they might be getting led to accept, but might actually not accept if they were given a conscious choice, A or B?” So I think it’s really about starting from that set of questions about, well, “What do I see is wrong? How do I take it away?”

And this is why, you know, Laura and I always say, “We work with ‘artificial artificial intelligence’.” You know, it’s always that derivative version because with that derivative version, we can actually control the narrative a little bit to help people see what’s going on. So I think that would kind of be the key, is to sort of start from that human question. That’s easy enough to do, but actually figuring out how to bring it down to the level where somebody’s going to say, “Ooh, yeah, I hadn’t thought about that, but that is my life right there.” If you can get that right, the rest will fall into place.

AR [30:29]: Jennifer, thank you so very much for joining us on this episode of the COALESCE podcast.

JE [30:32]: You’re welcome. Thank you for having me.


Music for SciComm Conversations is by Brodie Goodall from UWE Bristol. Follow Brodie on Instagram at “therealchangeling” or find them on LinkedIn.
