SciComm Conversations: “The Role of AI in Education”

26.1 min read · Published on: 22.08.2024 · Categories: SciComm Conversations

Listen to “The role of generative AI in education. Guest: Sam Illingworth” on Spreaker.

Transcript

Narrator [00:09]: Hello and welcome to SciComm Conversations. In our first season of the podcast, we hear from experts on the subject of science communication in the age of artificial intelligence.

Your host for today’s episode is Corragh-May White, who is pursuing an MSc in science communication at the University of the West of England in Bristol.

Corragh-May spoke with Sam Illingworth – poet, games designer and Professor of Creative Pedagogies at Edinburgh Napier University – about generative AI and its role in education.

Sam Illingworth [00:44]: Hi Corragh-May, my name is Sam Illingworth. I’m a professor in creative pedagogies at Edinburgh Napier University in the UK, where my work and research revolve around using poetry, games and generative AI as a way of discussing staff and student belonging within higher education, and then beyond that as well, looking at ways in which we can use these creative methodologies to help to diversify and democratise science across various publics.

Corragh-May White [01:19]: In the context of learning and teaching, which is one of your areas of interest in research, what do you tend to focus on when you’re talking about generative AI? So, how can AI be used in learning and teaching?

SI [01:31]: Such a broad question. I think, for me, I should preface this by saying, you know, I’m a GenAI advocate, and I think that it’s a fantastic tool. I guess I think that’s all it is, it’s a tool. I think that, and research shows us as well, that it’s really important that we embrace it. You know, if we look at what the role of the university is, it’s ultimately to prepare our students for graduation, but it’s also to prepare them for society, so they can have a positive impact on that.

And generative AI is here to stay, and it’s going to form a large part of people’s working lives. I don’t think it’s going to steal jobs or anything like that. I think it’s just going to change jobs. And so it’s really important that students and staff are aware of the opportunities and the challenges that generative AI faces, and that within a university environment, no matter what the curricula, we create a space in which those difficult conversations can take place, but also exciting ones as well.

So for me, the role that generative AI can play in learning and teaching is not an addendum, but kind of fundamental to the approach of higher education. You know, all the way from things such as authentic assessment through to student belonging, and even really thinking about how we might use it to create [an] individual student experience as well.

CMW [03:06]: In your research, you’ve explored how generative AI is also good for increasing student engagement through gamification. So, can you tell our listeners a bit more about this?

SI [03:15]: Okay, yeah, I don’t know if I would use the word gamification, just because I see problems with that word. I’ve almost certainly written it somewhere, which is where the question has come from. But basically… Yeah, so I’m a game designer as well. I do love working with analogue games. And gamification for me has issues because it’s really a bit like “playbour”, you know, a portmanteau of the words “play” and “labour”. And I think a lot of the time people use games as a way to trick people into doing work for them, or to do a… play something that isn’t really anything like what I would classify to be a game.

So I think that generative AI in this space could be really interesting. There’s not really much work that’s been done in terms of the relationship between generative AI and gamification. What I think is… where I think there’s huge overlap there, though, is how we use students’ data and how we are responsible with students’ privacy and students’ wishes as well. So, you know, that’s where some of the issues come with the commodification of games or playbour. And oftentimes people are playing them without really realising that their data or indeed their work is being swept up.

And I think there’s the same danger with generative AI that people’s, I guess, personal experiences, but also their personal work and lived experiences as well can be potentially misused. So I think there’s a lot of overlap there that needs to be considered, but I wouldn’t necessarily say that in the first instance, there’s a lot of work that’s been done in relation to gamification and generative AI.

A lot of my work in the field of generative AI is more looking at how students are actually using it, rather than, I guess, the hyperbolic suggestions that come through the mass media with regards to how they might be using it.

CMW [05:28]: As you were saying there it’s safe to say that AI isn’t perfect and we know that it can make mistakes as well. In your research, have you discussed how the biases in AI can shape the curriculum and reinforce harmful stereotypes? Can you tell our listeners a bit about why this happens and what we can do to reduce the negative impact?

SI [05:46]: Yeah, so one of the things I think that’s not really talked about is this exact issue, which is, on the one hand, within higher education we rightly want to try and diversify the curriculum so that it moves away from being white, Western, male into being a much more diverse collection of pedagogies and thoughts and knowledges. But the problem is that generative AI is built on large language models, which themselves are often written by white, Western men, a group in which I include myself, and they’re trained on a dataset which tends to be dominated by white, Western men, i.e. the internet.

So what we really need to be careful of is… there’s no problem with inviting our students to use generative AI to supplement their learning, but they need to be taught that the responses have limitations. So actually, it’s a great way of getting students to think critically and to think that “Yes, this particular response that generative AI has generated for me has been created within the limitations and frameworks of the training data that it has been provided with, so it’s almost certainly going to be biased.” So we need to be really careful about that.

And the way to do that is to be upfront and to talk to our students and to say, “Look, academia has a problem with this anyway; you know, our reading lists are often dominated by white, northern-hemisphere, Western thought. So how can we move away from that?” And actually – providing that we don’t as staff and students just take the responses from generative AI, but instead use them as the starting point for a critical debate – then that can be a really powerful way of exploring what those limitations are and using it to help diversify the curriculum and beyond.

CMW [07:46]: I suppose another issue with using generative AI in education is that not everyone is going to have the same access to technology and to WiFi. Will a movement towards… will a continued movement towards using generative AI in education exacerbate existing disparities in educational equality?

SI [08:09]: Yes, if it’s unchecked. So I guess, like, it’s a great question talking about the digital divide. And you know, we saw this hugely evidenced during COVID, especially where we think about… I mean, and this is just from the UK context, it’s even more pronounced in other countries where there’s an even greater disparity of wealth amongst the populations… But you know, we saw during COVID where there’d be some, like… secondary education is an example, there’d be some family units where there was one laptop between seven or eight members versus other students in the same class that had access to high-speed internet, etc.

So there’s this huge digital divide, and the danger here is that, actually… theoretically, generative AI, just like the internet before it, is a fantastic way to democratise education. You know, if I hold my smartphone up, I theoretically have the whole of human knowledge in my smartphone, which is phenomenal. But what we need to think about is that if some students have access to the paid versions of ChatGPT, Gemini, Claude [from Anthropic] – any of these generative-AI tools – and other students don’t have access to a laptop, a computer, high-speed internet, then there is going to be this great exacerbation of the digital divide.

But again, this comes through… It’s a great opportunity, because what we know with our students – certainly in higher education, and in secondary and primary education as well – is that young people and children in particular have a great capacity for empathy. And actually the only way we can do that is by talking about it. So by creating these opportunities where we talk about the issues without stigmatising them, that is a way that we can hopefully bridge that digital divide. But if we don’t acknowledge it, and if we don’t also make allowances for it…

And that’s where somewhere like myself, you know, who is a great advocate for GenAI needs to be very careful because I might be wanting my students or my colleagues or my collaborators to be using generative AI in their assessments, in their work, whatever, but I need to think about my own positionality and what I have access to as, you know, a middle-class, white man versus what other people might have access to because of their own societal or cultural backgrounds as well.

So, in answer to the question: if left unchecked, there’s a real danger that the advent of generative AI could exacerbate the digital divide. But if we meet it with openness and transparency, it’s actually a great opportunity to interrogate it, to open it up and to hopefully start to bridge that gap as well.

CMW [11:04]: So you were saying there that educating people on digital poverty is one of the ways that we can overcome this. Are there any other ways that we can implement this change into the curriculum to mitigate the issue of exacerbating educational inequality?

SI [11:25]: Yeah, so the other thing apart from educating, like, from the bottom up and having those conversations with classmates and colleagues is to provide resources. You know, this is something you’d expect more within a secondary environment, like, to make sure that all the… all the children, all the students have access to generative AI and those tools, like, from an early age as well, that they feel comfortable with it.

And you know, one of the ways in which they might do that – because it can be expensive – is for institutes to come to arrangements and sponsorships with many of these generative-AI firms, which obviously have large amounts of money. So I would say there kind of needs to be a bottom-up and a top-down approach: this bottom-up approach of discussion, dialogue, opening up the problem, and then this top-down approach of making sure that individual schools, people, universities are resourced effectively.

And then also again, you know, thinking about if people have specific access needs, it might be that not everybody has… is given the same, but some people might require slightly different or slightly more specific resources. So it’s thinking about the needs of the individual and making sure that they’re provided for. And the way in which we do that is you know to – within the university setting – work with the accessibility and inclusion teams to make sure that it’s, you know, part of anybody’s specific personal plan or anything like that and again, really thinking about the needs of the individual there.

CMW [13:00]: I’m curious to know actually how do the students themselves feel about all these changes? Are they kind of intrigued by the shiny new technology? Are they worried about it taking over the world Matrix-style? How do they feel about it?

SI [13:12]: Yeah, it’s really interesting. So we’ve done a preliminary study at Edinburgh Napier University, led by my colleague Dr Louise Drumm and others, where basically we invited students to participate in an “amnesty Padlet” – kind of to post up any of their thoughts about how they’re using ChatGPT or other generative-AI tools – and it’s really interesting. And we did this, I want to say, around spring 2023. So, you know, it was just after… well, ChatGPT came on the market – to the public, that is – in November 2022. So kind of when there was this initial explosion – and it’s obviously still very popular – but looking at the responses now, like a year later… it’s changed, it’s much more nuanced.

I think the students recognise that there are limitations. A lot of them talk about fairness and justice and, you know, making sure that… how will people be judged if they have or haven’t used it? They want greater clarity. Very few of them are using it for plagiarism, actually, and many of them acknowledge that there are limitations.

For me, plagiarism’s a bit of a red herring – like, plagiarism’s always existed, right? And actually ChatGPT democratises plagiarism to an extent. So in the past, you know, you’d be able to plagiarise because you’d go to a family member or a friend who had access to a higher-educational background or privileges. And then… more recently, you know, contract cheating’s been a big thing, where you would pay someone to do it, and now we have ChatGPT and other things. But from the data we’ve been looking at, there’s not that many students that are using it in that way.

They’re more using it to supplement their learning. And, you know, there’s the acknowledgement that university lecturers and professors probably have very different working patterns to a lot of students, and understandably there’s not many – and I would argue there shouldn’t be any – educators that will be checking their emails at, like, two in the morning, which is when a student might really have need of something. So if instead they’re able to go to ChatGPT to supplement that learning, then that can be really beneficial.

I also think there’s huge opportunities in terms of personal welfare – just as an addition, just as a starting point. You know, we know that that transition into higher education, especially for young men, is very, very difficult in terms of mental health and wellbeing. And if we can maybe use generative AI as a way of creating a toolkit or a response – just somebody to talk to, just to guide you over, not as a medical replacement but just for that individual need – that would be really beneficial. And again, that’s something we’re seeing through some of the data that we’re looking at: that’s the way that students want to use it, rather than just using it to regurgitate text that they then put into a response to [an] assessment query.

CMW [16:23]: You said earlier that you don’t think that AI is actually going to take people’s jobs away, that that’s kind of a bit of an over-hype… I don’t think that’s an actual quote but I think I remember you said something along those lines. But if AI can do all of these amazing things and fill all of these roles, why wouldn’t it eventually take over people’s jobs? I think that every time I see AI online, it’s kind of accompanied by this fear of “This is going to be doing our job someday.”

SI [16:53]: Hmm. Well, AI is going to replace some jobs, definitely, but it’s also going to create new jobs as well, you know. The most obvious, direct example is “prompt engineers” – you know, extremely well-paid jobs, people thinking about how they can use it in that way. But it is a tool! Like, the internet didn’t replace jobs, it didn’t remove critical thinking. You know, we need to think about what the barriers are as well as the opportunities, and where those… where those limits are.

But you know, I… One of the things that people always say about the dangers of the jobs that might be replaced is in the creative sector, which, you know, is already not necessarily the best paid and is quite a difficult sector in the first instance. But speaking as a poet, the poetry that’s generated by generative AI is terrible! Like, oh… it’s okay! But it can be a starting point, and you can have these conversations, those dialogues. There’s always going to be a need for a human element in that interaction, because ultimately how large language models work – how generative AIs work – is that they’re just guessing what the next word’s going to be. And yeah, you can train them on a large dataset and everything, but they don’t have the lived experiences, the personal touches that make education, certainly, as a job absolutely essential.

And the way to think of it is instead to think about the banal aspects of your job that you really don’t enjoy – and they’ll be different for different people, right? You know, I’m not massively enamoured when I have 137 emails to answer every day. But ChatGPT can help with that: it can help to construct responses and it can help to do many, many things. And actually I think what will happen is that once people start to realise that ChatGPT – or, sorry, any generative-AI tool; I say ChatGPT because that’s the one I use the most – can be used to do a lot of these tasks, that makes you think, “Well, why are we even doing those tasks if there’s actually a more efficient way of working for all of us?”

So you know, it’s not… it’s not the creatives or the copywriters or any of these, you know, interesting and innovative jobs that are going to be lost. It’s just… it’s just going to be a different way of working. You know, go back to the time of Byron: Lord Byron made his maiden speech in the House of Lords, and it was basically to say that the looms were going to destroy the textile industry. And they changed the textile industry, but then jobs changed and situations changed. And obviously there might be individual losses, but it’s just a change in landscape. And I think the most dangerous thing that we can do is to stick our heads in the sand.

Equal to that is maybe to just allow progress unchecked. And I guess that’s the role of academia – and of students as well as staff – to question, like, how far and how fast are we going, and how far and how fast do we want to go? And alongside that, for me personally, it’s not necessarily the speed, it’s more the openness and the transparency that we really need to think about going forward. And again, I just want to, I guess, frame all of this with the caveat that I am an advocate for it, but people have very different opinions to me, which are equally valid, and that’s because they have different experiences of using it, I guess.

CMW [20:34]: Well, that’s an excellent point about not stealing people’s jobs. I think that’s an interesting perspective on it. Another kind of worry that people have surrounding AI, which I think we kind of touched on earlier, is whether it’s being used for academic plagiarism. But you said earlier that you think that’s a bit of a red herring, which I think is interesting, because I know that’s a big fear that a lot of people have, especially because it’s quite difficult, as far as I know, to detect whether AI has been used to cheat in an essay. So can you tell our listeners a bit about that?

SI [21:13]: Yeah, of course. So for me, I guess what I’m really interested in is that this actually puts the onus back on the educator. If you’re designing assessments that can be completed by ChatGPT, I would argue that the assessment maybe isn’t up to much. So, you know, again: why are we writing 5,000-word essays? Yes, sometimes we need to have the criticality that’s there, but how many of us will go into jobs – even me as an academic, as a career academic pretty much – how many of us will go into jobs where we’re having to replicate that? Not many.

So actually what we need to think about is… we need to think about what is assessment and in particular what is authentic assessment. And you know there’s many definitions of authentic assessment. Whereas for me, I guess I’d look at the work of Jan McArthur and others, thinking about authentic assessment being something that, yes, prepares the graduate for the workforce but not in a way to just go into that workforce but a way to challenge that workforce. So it’s a thinking about assessment that’s relevant to your potential employment status as a graduate but also relevant to the society that you want to change, that you want to be a part of as well.

So, you know, for example, if you were doing an electrical-engineering course, let’s say, you could be asked to do a 3,000-word essay on the importance of electrical engineering in local, community-based projects. Or, alternatively, you could be asked to go and work with a local community to create a solution, using your electrical-engineering skills, to a problem that they have identified. And which of those is going to be the more interesting and fun to do, to mark, to be part of?

And it’s obviously going to be the latter, and I don’t think ChatGPT can be used to cheat at that. But what ChatGPT could be used to do – or, sorry, generative AI could be used to do – in that instance would be to help you identify some community groups, to help you put together a business plan, to help you think about potential solutions. And, like, that then would be using the toolset to actually help you answer an assessment, which would be very, very useful. I mean, you’re not going into an interview situation, for example, where someone says, “Can you give me an example of when you’ve done this?” and you reply, “Yeah, I wrote a 5,000-word essay on it.” No-one’s going to be interested in that.

So I think that actually it’s an opportunity for educators to really re-think assessment and you know, without sounding too trite, to re-think the role of the university. Like, what is our purpose? Is it to create people who can regurgitate information? Or is it to provide students with critical-thinking skills that can enable them to create a more just society? Like, I would hope it would be the latter but that challenges hundreds of years of what certain people think a university might be for.

CMW [24:18]: So, considering all of the things that we’ve discussed so far, do you think that AI is going to have a net positive or a net negative effect overall?

SI [24:30]: I think most people would agree that the internet has been a net positive. Like, there’s lots of issues, not least social… Social media I would say is probably a net positive; it’s just that it exacerbates some of the worst of the human condition. I am, as well as being, I guess, a technophile, also an optimist, and I think it will have a net positive effect. I think it will be different to maybe what people think, but I think that within that you need to have checks and balances.

I mean, social media is a great example, actually, because if you think five, ten, 15 years ago, let’s say – imagine if people had come in with a framework that created, you know, rules of engagement and, like, ways to protect people, ways to remove toxicity, especially male toxicity, ways to support people, ways to stop pile-ons. That would be amazing! Like, ways to hold to account the organisations themselves who run it and who make huge amounts of money off it.

So I think that there’s a danger that we’re not doing that, and that we… I think there’s probably a lot of people – by which I mean governments – running to a lot of the tech companies for their investment and their patronage, without really giving them any checks and balances, and I think that’s a shame. Because I think that if we put those checks and balances in place, then it would be a great way of using it for a genuine net positive. ’Cause a lot of the organisations talk a good game, but ultimately they exist to make money and to commodify the platform. So I think having those open exchanges and thinking about what net positive we’d like to see might be overly optimistic, but that would be a way that we could certainly be in a stronger position to do so.

CMW [26:36]: On to something slightly different now. Your research interests also concern poetry for science communication – I was actually listening to the Red Crater Clock episode of one of your podcasts, The Poetry of Science, earlier. Can you tell our listeners a bit about your work on integrating poetry into pedagogies for enhancing student wellbeing?

SI [26:59]: Absolutely. So I guess, yes, a lot of my work with poetry and science communication… So, I mean, the podcast you’re talking about stems from a blog I set up about a year – er, ten years ago, so a decade ago, wow! And that was, I guess, my first foray into poetry and science communication, and this idea of how can we use poetry to communicate research to a different audience.

You know, the idea being that science is amazing but it’s not always the most accessible, so maybe I could write a poem about a recent piece of research and use that as a different lens for people to be introduced to the research. And I did a lot… I used to do a lot of performance poetry as well. But then, as I was doing it, I was also becoming more au fait with the science-communication literature itself, and realising that actually this one-way dissemination of knowledge is only one way of doing science communication.

But actually science communication exists on this spectrum, right? Of, like, one-way dissemination on one end of the spectrum, working our way through to two-way interactions and dialogue, through to full participation on the other end – and not a hierarchical spectrum, just a spectrum. And I think, for me, I was like, “Great, poetry is this really cool way of telling people about science in this one-way exchange of knowledge.”

But then as… I struggle with using the word poet – I guess I am a poet. But, like, a lot of the poetry-writing workshops that I’ve taken part in are this really powerful and safe space for dialogue, by which I mean, like, a genuine two-way conversation. And so I thought, “OK, cool, maybe we could use poetry as a way to help develop dialogues between scientists and non-scientists.” And in that way, we think about such dialogues being, rather than the scientists telling the non-scientists, “Here’s my research, isn’t it cool?”, instead working with the non-scientists to understand that they have their own expertise, knowledges and lived experiences that can feed into the research process as well.

But when we have these conversations between scientists and non-scientists – of which many publics exist – we have hierarchies of intellect. So, you know, this idea that even though somebody might… The example I always use is flood risk. Like, somebody who has been living in a flood zone for 30 years has a lot of knowledge of how floods work, even if they themselves are not a university professor or whatever. But when we try to engage scientists and non-scientists, sometimes, like I said, we have these hierarchies of intellect, where the non-scientists might think, “Oh, I know a little bit about this topic, but I don’t feel comfortable talking about it.”

So instead, the way that I do it is we use poetry as a starting point. So we write poetry together. And writing poetry works because it creates this sense of shared vulnerability. So, you know, when you see a professor stand up and read a terrible haiku or a filthy limerick or just a badly-constructed-stroke-normal poem, you realise that, okay, we’re all just part of the same society.

And from… And then the other thing about poetry is that it’s incredibly rich and dense, and, you know, a lot of thought goes into every word, into every line, so that you can then start using the poems themselves as a dataset rather than an interview or a focus group. And then… I could go on!

And that’s kind of how a lot of my research works, and I like thinking about how we can use poetry to not just disseminate information but to generate dialogue and to ultimately – hopefully – elicit and promote action as well, as we democratise science for marginalised and underserved communities.

CMW [30:41]: Well, thanks so much for coming out to talk to me today, I really appreciate it.

SI [30:45]: Oh no worries, thanks very much for having me.


This episode of the podcast was hosted by Corragh-May White and was edited by Achintya Rao. Music for SciComm Conversations is by Brodie Goodall from UWE Bristol. Follow Brodie on Instagram at “therealchangeling” or find them on LinkedIn.
