Consciousness, Reasoning, and the Philosophy of AI with Murray Shanahan
Edited transcript from Google DeepMind: The Podcast on "exotic mind-like entities" and the future of AI
"Exotic mind-like entities": Why we need new language for AI
I absolutely loved this conversation between Hannah Fry and Murray Shanahan on the nature of AI consciousness and reasoning. It's one of those discussions that really makes you pause and reconsider some fundamental assumptions about intelligence, both artificial and human. For those interested in watching the full episode, you can find it here on YouTube.
With the help of Claude 3.7 and Gemini 2.5 Pro, I've done some light editing to the transcript, adding section headers and TLDRs to make it more readable and help you navigate the dense philosophical terrain they cover.
Enjoy the conversation—it's a superb one!
Editor’s Note: This transcript has been lightly edited with AI for clarity and readability, removing verbal repetitions, filler words, and false starts while preserving the original meaning and conversational tone of the speakers. Significant omissions or clarifications are indicated with [brackets].
Introduction and Background
[00:00:00 - 01:56:03]
Murray Shanahan: I think there are just a huge number of enormously interesting philosophical questions that AI gives rise to. What is the nature of the human mind? What is the nature of mind?
Hannah Fry: What about consciousness?
Murray Shanahan: I do think that is the wrong question, and I think it's wrong in many ways.
Hannah Fry: How good do you think that AI is at reasoning?
Murray Shanahan: Well, that's a very interesting and kind of open question, and somewhat controversial. It's really astonishing to think that every single child born today will grow up never having known a world in which machines can't talk to them.
Hannah Fry: Welcome back to Google DeepMind: The Podcast. My guest on this episode is Murray Shanahan, Professor of Cognitive Robotics at Imperial College London and Principal Research Scientist at Google DeepMind. Now, we have all heard the stories about people falling in love with their chatbots, about people pushing large language models to contemplate their own existence or questioning the limits of their conceptual understanding of reality. But these kinds of questions about self-identity and thinking and metacognition have been puzzling philosophers for millennia already. And so it makes sense that philosophers should be turning to AI to interrogate the most profound questions about the nature of its intelligence, of its current capabilities, even its consciousness or otherwise.
Hannah Fry: Murray Shanahan has been working in the field of AI since the 1990s. And if you've been following this podcast for a while, you will remember him as the man who consulted on the 2014 science fiction film Ex Machina, about a computer programmer who gets the chance to test the intelligence of a female robot, Ava, and ultimately questions whether she is conscious. Welcome back to the podcast, Murray.
Murray Shanahan: Thanks Hannah.
Science Fiction and AI Representations
[01:56:03 - 03:31:41]
TLDR: Murray discusses his work on Ex Machina and how science fiction films like Her have portrayed AI relationships. He notes that Her surprisingly predicted how people would form relationships with disembodied AI systems.
Hannah Fry: Just thinking back because I know that you played a key role in Ex Machina, the Alex Garland film. What do you think you got right in that film? And in other science fiction films that were around at the time? I mean, thinking back to sort of 10, 15 years ago, were we on the right track?
Murray Shanahan: One respect in which Ex Machina really did a great service was that it raised a whole load of very interesting and provocative questions about consciousness, about AI and consciousness, and therefore about consciousness itself. So that's one huge success.
Murray Shanahan: But it's interesting that just very shortly before Ex Machina came out, Her came out, Spike Jonze's movie. And at the time, I really wasn't all that keen on Her as a movie, because I just thought it was so implausible that a person could fall in love with this kind of disembodied voice, even if it's Scarlett Johansson's. How wrong was that? As a bit of prediction, I think Her did amazingly well at predicting the world we've got now. Now, we don't know quite how things are going to unfold in the next few years, because maybe robotics will progress rapidly as well, in the way that language has in AI. But at the moment, it's all about disembodied language. And Her also showed how people can, in fact, form relationships, in the broadest sense, with disembodied AI systems, which is an extraordinary thing really.
The History and Evolution of AI
[03:31:41 - 05:46:34]
Hannah Fry: Okay. We're talking 10, 15 years ago, but your involvement in AI goes back much further than this. You knew John McCarthy?
Murray Shanahan: I did know John McCarthy. I knew him very well.
Murray Shanahan: John McCarthy was a professor of computer science and artificial intelligence. Back in the day, he actually coined the phrase "artificial intelligence" and was one of the authors of the proposal for the very famous Dartmouth conference that took place in 1956, which was the first AI conference in the world. And that conference really mapped out the whole field.
Murray Shanahan: People just weren't thinking about this kind of thing seriously at all. It was just a handful [of people]. I think he was a real radical thinker and always was.
The Term "Artificial Intelligence"
[04:05:23 - 05:46:34]
TLDR: Murray discusses the terminology of "artificial intelligence" coined by McCarthy in 1955, addressing criticisms of the term while defending it as appropriate despite its limitations.
Hannah Fry: Okay, that choice of words, artificial intelligence back in 1955. Was it a good choice of words?
Murray Shanahan: Yeah, I mean, I still think it was. I know that some people think that perhaps it wasn't a good choice of words, but I still—
Hannah Fry: Give us some of their arguments.
Murray Shanahan: So, first of all, there is the word intelligence. Intelligence itself is, in some ways, a very contentious concept, especially if people think about IQ tests and that kind of thing, and the idea that intelligence is something that can be quantified on a straightforward, simple scale, with some people more intelligent than others. I think it's well recognized in psychology today that there are many different kinds of intelligence, and this is a really important point, right? So there is that concern about that word. What would I have used instead? Well, maybe artificial cognition or something. I often use the word cognition to mean thinking and processing information and so on. But it doesn't have the same ring to it, does it? Let's be honest.
Hannah Fry: No. Especially not now. I think we're too far down this road, aren't we?
Murray Shanahan: Yeah. The word artificial, I don't really have a problem with the word artificial. That seemed like the right kind of thing. It's alluding to the fact that it's something that we've built and that hasn't evolved in nature. And so that seems the right sort of word.
Hannah Fry: The objection to that word, I guess is that ultimately everything that artificial intelligence is built on is at some level constructed by humans.
Murray Shanahan: Sure, yes. But it is [something we've built]. So what's wrong with the word in that case? I mean, I think that's true.
From Symbolic AI to Neural Networks
[05:46:34 - 09:44:04]
TLDR: Murray explains the shift from symbolic AI (rule-based systems) to neural networks, describing how symbolic AI relied on explicit rules while modern approaches learn patterns from data.
Hannah Fry: And you are working on symbolic AI, right? Just talk to us about the difference between that and the other types and where we're at now with the contrast.
Murray Shanahan: Absolutely, yeah. The so-called symbolic paradigm of artificial intelligence was very much preeminent, very much dominant, for decades. The idea there is that it's all about the manipulation of symbols and of language-like sentences, and using kind of reasoning processes with those symbols. The classic example would be an expert system. Back in the 1980s, people were building these expert systems, and the idea was that you would try to encode medical knowledge, say, in a set of rules. And the rules would be something like, "if the patient has a temperature of 104 and their skin is purple, then there's a 0.75 probability that they've got skinitis," or something. You could tell that I'm not a medical doctor. And then you'd have thousands and thousands of these sorts of rules, which would be put into a kind of big knowledge base. And then you'd have what was called an inference engine, which would carry out logical reasoning over all of these rules and therefore come to some conclusion about what the likely disease was in that case.
Hannah Fry: But it was a lot of if this, then that.
Murray Shanahan: It was a lot of if-then type rules, largely. And one of the big problems with that is: where do the rules come from? Well, somebody has to write them all out, basically. And so there was a whole field of knowledge elicitation, where you go around to experts and you try to extract from them their understanding of their domain, which could be medical diagnosis, it could be fixing photocopiers, it could be the law, and you try to codify all of this in computer-comprehensible, very precise rules. That was a very cumbersome process, and what you ended up with at the end was also very, very brittle. It would go wrong in all kinds of ways.
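[Editor's note: a minimal sketch, in Python, of the rule-plus-inference-engine idea Murray describes above. The rules, facts, and numbers are invented for illustration, not taken from any real expert system.]

```python
# Hand-written if-then rules: each rule lists the facts it requires,
# the conclusion it draws, and a made-up confidence value.
rules = [
    ({"temperature_104", "purple_skin"}, "skinitis", 0.75),
    ({"cough", "fever"}, "flu", 0.60),
]

def forward_chain(facts: set[str]) -> list[tuple[str, float]]:
    """A tiny 'inference engine': fire every rule whose conditions all hold."""
    conclusions = []
    for conditions, conclusion, confidence in rules:
        if conditions <= facts:  # all required facts are present
            conclusions.append((conclusion, confidence))
    return conclusions

print(forward_chain({"temperature_104", "purple_skin"}))
# [('skinitis', 0.75)]
```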
Murray Shanahan: And another big area of research was common sense, because it was realized that we implicitly have an enormous amount of common-sense knowledge about the everyday world: knowledge about everyday objects, the fact that they're solid, the fact that they move in certain ways, that they fit into each other in certain ways, liquids and gases and gravity, all kinds of things like that. We actually bring all of that knowledge to bear all the time in what we're doing, but it's sort of unconscious. So then there were various big projects to try to codify all of that common-sense knowledge. And trying to turn that into axioms and logic and rules and everything was a nightmare.
Murray Shanahan: Eventually, I think by about the early 2000s, I'd really come to think that this research paradigm was kind of doomed, to be honest, and I sort of started moving away from it.
Hannah Fry: But then, of course, along came things like neural networks and so on, which were much less about if-then rules and much more about extracting information from a large amount of data.
Murray Shanahan: Yeah.
Hannah Fry: But I sort of wonder now, now that language is effectively cracked, have we reached a higher level of abstraction where we can go back to some of those symbolic techniques, some of those more symbolic ideas?
Murray Shanahan: Yeah, well, we certainly have, because nowadays one of the hot topics with large language models is reasoning. So you have these so-called chain-of-thought models: rather than simply generating an answer to a question, they generate a whole chain of reasoning before they issue the answer. And that can be very, very effective. So it's interesting how that harks back in many ways to the kind of thing that people were looking at back in the days of symbolic AI. But the underlying substrate for doing all of that is very, very different indeed, because it's not hard-coded rules. It's, as you mentioned, neural networks that have learned.
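[Editor's note: a minimal illustration of the chain-of-thought idea, in the form of a chain-of-thought style prompt contrasted with a direct one. The question and wording are invented; in practice these strings would be sent to a language model.]

```python
question = (
    "A depot has 3 lorries and each lorry carries 8 pallets per trip. "
    "How many trips are needed to move 50 pallets?"
)

# Direct prompting: ask for the answer straight away.
direct_prompt = f"{question}\nAnswer with a single number."

# Chain-of-thought style: ask the model to lay out its reasoning first,
# then state the final answer on its own line.
cot_prompt = (
    f"{question}\n"
    "Work through the problem step by step, showing your reasoning, "
    "then give the final answer on its own line."
)

print(direct_prompt)
print(cot_prompt)
```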
AI Reasoning and Intelligence
[09:44:04 - 13:20:23]
TLDR: Murray discusses the differences between human-like reasoning and more formal mathematical reasoning in AI systems, noting that LLMs can reason in everyday contexts but may struggle with formal theorem proving compared to purpose-built systems.
Hannah Fry: Let me pick up on that point about reasoning. As a philosopher, background in logic, how good do you think that AI is at reasoning?
Murray Shanahan: Well, that's a very interesting and kind of open question and somewhat controversial. So computer scientists and AI people, they have a particular notion of reasoning, a particular concept of reasoning, which very much harks back to formal logic and theorem proving. And so in the days of symbolic AI, for example, then you had systems that were really very good at doing theorem proving with formal logic. And so people think, well, that's proper reasoning. That's really your hardcore kind of reasoning. And today's large language models, they can't match the performance of a hand coded theorem prover, or logic engine of the sort that's been around for decades.
Hannah Fry: Give me an example of the type of theorem that might be able to be proved by a hard-coded system.
Murray Shanahan: So, it would be where you've got maybe 20 or 30 axioms of logic, and it might be something like "the number that follows one is two." It could be in the domain of number theory or something very mathematical, but it could be something much more everyday. So, for example, suppose you've got some very difficult logistical planning problem, where maybe you have hundreds of lorries and depots and goods and all kinds of things like that, and you need to plan the routes and the deployment of the lorries and where they're going to go. That's a very difficult problem computationally, and it can be expressed very precisely in formal rules. And that's the kind of situation where you might want to use a good old-fashioned, straightforward planning algorithm of the sort that's been around for a long time. Now, contemporary large language models are getting better and better at this kind of thing, but you still don't have those kinds of mathematical guarantees that they're always going to come up with the exact right answer. And it's very easy to make examples, with more and more axioms and so on, where they're going to slip up.
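[Editor's note: a toy sketch of the kind of precisely specified logistics problem Murray mentions, solved by exhaustive search so that the answer carries the sort of guarantee he contrasts with LLM output. The lorries, routes, and costs are invented for illustration.]

```python
from itertools import permutations

lorries = ["L1", "L2", "L3"]
routes = {"north": 120, "south": 80, "east": 150}   # route lengths in km
cost_per_km = {"L1": 1.0, "L2": 1.5, "L3": 0.8}     # running cost of each lorry

def total_cost(assignment: tuple[str, ...]) -> float:
    """Cost of assigning lorries, in the given order, to the routes."""
    return sum(cost_per_km[l] * km for l, km in zip(assignment, routes.values()))

# Exhaustive search over every possible assignment: guaranteed to find the optimum.
best = min(permutations(lorries), key=total_cost)
print(dict(zip(best, routes)), total_cost(best))
```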
Murray Shanahan: There's a whole separate research direction, which is to try to build more hand-coded things that combine today's AI techniques with more old-fashioned symbolic techniques, specifically for mathematical theorem proving, and DeepMind has done some amazing work along those lines. But that's different from large language models. With large language models, we're thinking of these chatbots that can talk about anything under the sun, and one of the things they happen to be able to do is a kind of reasoning. So that, at the moment, is not going to be quite as good as what you could do by hand-building something for that purpose.
Hannah Fry: It's kind of interesting because hand building something is—you end up with something that's very rigid.
Murray Shanahan: That's the problem, yes.
Hannah Fry: And brittle. But then at the same time, the sort of flexibility that you get from the generative AI approach, it's too floppy, as it were. You know, you want the rigidity in there.
Murray Shanahan: Well, you know, maybe or maybe not. I mean, I think many examples of human affairs are just not as black and white as that. And you do maybe want things to be a bit more blurry. Even in sort of simple everyday things, like, what would be good flowers to put over in this corner of the garden? Well, we've already got some roses in that corner there, and those roses are yellow. So maybe we can't have too much yellow, so maybe we need to move them to the other corner of the garden.
Defining "Real Reasoning"
[13:20:23 - 14:33:22]
TLDR: Murray challenges the notion of "real reasoning," suggesting that reasoning exists on a spectrum and can manifest differently in different contexts. He argues that everyday reasoning is different from formal mathematical reasoning.
Hannah Fry: But then at the same time though, is this real reasoning? Or is this just the AI kind of mimicking well-structured arguments that have existed in the training data, but just in a sort of novel environment?
Murray Shanahan: Yeah. Well, of course, that begs the question: what is real reasoning? I don't think it's written in the sky what real reasoning is. It's up to us to define the concept of real reasoning, or of reasoning. We were talking earlier on about the kind of mathematical reasoning that logicians do, the sort that was done by theorem provers in the past and is still done today. But when people first used terms like reasoning, they weren't thinking of that kind of thing, and when we use the word reasoning in everyday life, we're not thinking about that sort of thing. So if you're chatting away to a large language model about your garden, and you say, well, I'm thinking about what plants to put in, and it says, well, maybe you should consider this kind of plant in that kind of location because that's best for the soil, and given you said it's windy there, we would just say that that is supplying reasons. I mean, it is supplying reasons. Now, where they come from is another matter. People might say, well, it's just mimicking what's in the training set, but it's probably never seen exactly that example, that kind of scenario, before. So it's moving beyond the training set to a certain extent. And I think calling that reasoning is just using the everyday concept of reasoning in an everyday way.
Testing AI Capabilities
[14:33:22 - 22:06:41]
The Turing Test and Its Limitations
[14:33:22 - 16:48:12]
TLDR: Murray criticizes the Turing Test as too narrow because it focuses only on language without testing embodied cognition, though he acknowledges modern LLMs could likely pass the test.
Hannah Fry: I'm just thinking back to some of the different characteristics that the earlier philosophers wanted artificial intelligence to have. And reasoning being one of them. But then, also the Turing test, which of course, gets brought up all the time about a way to test for the capability of an artificial intelligence. I mean it's kind of controversial, right? I suppose in terms of how good it ever would have been as a test for the capability of AI. What's your take on it? Do you think it was ever a good test?
Murray Shanahan: No. I've always thought it was a terrible test, but a really great spur to philosophical discussion about things. And again, with a bit of hindsight, maybe I might backtrack a little bit on a few of my views, because I was certainly very, very much of the opinion that embodiment was critical for achieving intelligence.
Hannah Fry: Which doesn't come anywhere near the Turing test at all, right?
Murray Shanahan: No, the Turing test is absolutely, explicitly nothing to do with embodiment. So, just to remind people what it is: in the Turing test, you have two subjects, as it were, one is a human and the other is the computer. And then you have a judge. The human judge can't see which is the computer and which is the human, and they're only talking to these subjects through a kind of chat-like interface. They can't see whether they're embodied or not. So we can easily suppose that the computer might be one of today's large language models. In which case, I have to say that today they would pretty much pass the Turing test. We've got to that point, which is amazing, really. But I used to think that it was a bad test because it didn't test any of these embodied skills. You'd need a robot, really, to test whether something was capable of the kind of everyday cognition that we all put to use when we're, for example, making a cup of tea.
Hannah Fry: Because otherwise it's a very, very narrow form of intelligence.
Murray Shanahan: Yes, it's all to do with language and reasoning, and not to do with the kinds of things that evolution developed in us and in other animals before language, right? Which is the ability to manipulate, move around in, navigate, and explore, in the best sense of the word, the everyday physical world.
Embodiment and Intelligence
[16:48:12 - 18:12:24]
TLDR: Murray emphasizes that human intelligence is grounded in our physical experience, noting that even our language relies heavily on spatial metaphors derived from our embodied existence.
Hannah Fry: That's so interesting, because I often think about how, fine, maybe the large language models we have at the moment can pass the Turing test, but they don't flinch if you throw a ball at your computer. And in a sense, there are these much deeper forms, as you say; maybe we wouldn't class them as intelligence in the way that we usually talk about it, but ultimately that sort of is a form of intelligence too.
Murray Shanahan: Well, I think it very much is a form of intelligence. And moreover, I think that in the biological case, and I have to caveat all these things by saying in the biological case, our ability to think and to reason and to talk is very much grounded in our interaction with the everyday world. If you think about it, almost all of your everyday speech uses spatial metaphors; they completely permeate our everyday speech. Even the word permeate. And grounded, I used the word grounded there. We just use those kinds of things all the time.
Hannah Fry: Because we're fundamentally physical beings.
Murray Shanahan: Because we're fundamentally physical beings and because our brains have evolved to help us to navigate and survive and reproduce, in this physical world. And while interacting with all these other beings that are doing the same thing, right?
Alternative Testing Methods
[18:12:24 - 19:59:38]
TLDR: Murray discusses the "Garland test" from Ex Machina, which focuses on whether a machine can be recognized as conscious even when we know it's artificial - a different measure than the Turing test's focus on intelligence.
Hannah Fry: Because there are some alternatives. When you are trying to test for the capability of an artificial intelligence, just talk me through some of the potential alternatives that we have.
Murray Shanahan: Well, I think perhaps you've got in mind what I call the Garland test, which goes back to the film Ex Machina, directed by Alex Garland, of course. There's a bit in the script where Nathan, the billionaire guy, is talking to Caleb, who's the guy who's been brought in to interact with Ava, the robot. And Caleb says, oh, I'm here to conduct a Turing test on Ava. And Nathan says, oh no, we're way past that. Ava could pass the Turing test easily. The point is to show you she's a robot and see if you still think she's conscious. That's what I call the Garland test, and it's different from the Turing test in two respects. First of all, the judge, as it were, which in that case is Caleb, can see that she's a robot. In the Turing test, the judge can't see which is which, but here the idea is that Caleb knows she's a robot and yet still attributes these characteristics. And the characteristic in question is also different, because it's not intelligence, it's not "can she think," but "is she conscious?" Or "is it conscious?" Which is an entirely different test. And I think intelligence and consciousness are different things, and we can disentangle those two things, dissociate them. So when I first read the script of the film, and those particular lines were in there for Caleb and Nathan, I wrote next to them in my version, "Spot on!" with an exclamation mark, because I just thought Alex had totally nailed a really important idea there.
Murray Shanahan: And so in my writing, I call this the Garland test, and quite a few people have picked up on that and call it the Garland test as well.
Abstract Reasoning Tests
[19:59:38 - 21:56:41]
TLDR: Murray describes Francois Chollet's ARC (Abstraction and Reasoning Corpus) tests, which evaluate pattern recognition abilities through visual puzzles. While initially impressive, he notes that brute-force approaches have begun to solve these challenges.
Hannah Fry: Is there a test that would really impress you if an AI were capable of passing it?
Murray Shanahan: So I always was impressed by Francois Chollet's ARC tests. That's ARC, which stands for [Abstraction and Reasoning Corpus]. These are little sequences of images of the sort that you get in IQ tests and things. The images are arranged in pairs: you have the first image, a kind of pixelated image with little cells containing things that you can interpret as objects or lines and so on. The challenge is to work out a rule that takes you from the first image to the second one, and then you've got to apply that rule to a third image. First of all, the held-out test ones are kept completely secret, so you couldn't game it by knowing what the actual test versions were, or by using them in the training set; that's sort of what I mean by gaming it. And he also very carefully designed them so that the rules were very different each time; each rule was completely different from the other rules. And you usually have to find some kind of intuitive application of our everyday common-sense knowledge, seeing something as like a liquid that's moving in this direction, or imagining this thing moving or growing or something.
Hannah Fry: So it required grounding in a way.
Murray Shanahan: Well, it seemed to, but recently people have been able to make significant progress on these in a more brute-force kind of way. So I feel that those solutions are not really getting at the spirit of the original test quite so much.
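[Editor's note: a toy sketch of the ARC task format Murray describes. Each task gives a few input/output grid pairs; the solver must infer the transformation and apply it to a held-out test grid. The grids and the rule here (recolour 1s to 2s) are invented and are far simpler than real ARC rules.]

```python
Grid = list[list[int]]  # small grids of colour codes

# Demonstration pairs: each shows an input grid and the transformed output grid.
train_pairs: list[tuple[Grid, Grid]] = [
    ([[0, 1], [1, 0]], [[0, 2], [2, 0]]),
    ([[1, 1], [0, 0]], [[2, 2], [0, 0]]),
]
test_input: Grid = [[1, 0], [0, 1]]

def apply_rule(grid: Grid) -> Grid:
    """The inferred rule: every cell coloured 1 becomes 2; everything else is kept."""
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

# Check the rule against the demonstrations, then answer the held-out test item.
assert all(apply_rule(inp) == out for inp, out in train_pairs)
print(apply_rule(test_input))  # [[2, 0], [0, 2]]
```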
Hannah Fry: Well, that's it, I guess, in a way is that as soon as you set a metric, as soon as you set a bar of once we've crossed this threshold, then we will have capability, intelligence, consciousness, whatever it might be. It sort of changes the whole nature of the test in itself.
Murray Shanahan: Yeah, or people are going to start, gaming the test, right? It's Goodhart's law, right? So, absolutely.
Anthropomorphization and AI Understanding
[21:56:41 - 26:36:42]
TLDR: Murray discusses the nuances of anthropomorphizing AI systems, suggesting that some forms may be appropriate while others can lead to misunderstandings about AI capabilities.
Hannah Fry: A lot of people who come on this podcast have sort of expressed real need for caution about anthropomorphizing these things. Are you one of those people who thinks that we shouldn't?
Murray Shanahan: Well, I think there are different ways of looking at this, and I think there are good and bad forms of anthropomorphization. On the one hand, people can start to form relationships, as they see it, with AI systems: friendships and companionships and mentorships. And that can potentially be a bad thing if they are misled into thinking that these things have capabilities that they don't really have. So I think that's where it becomes problematic. Take the Encyclopedia Britannica, right? The physical volume of the Encyclopedia Britannica doesn't know that Argentina won the [most recent] World Cup, because it's too old. If you made that remark, it would make perfect sense; you might say that and it's fine. But if somebody said to you, why don't you have a conversation with it about England's football prowess, or lack thereof, that would be ridiculous, right? Now, the interesting thing is that now we've got these large language models, you can have a conversation with them, you can tell them things, and so it pushes the boundary of where we might start to say, well, it doesn't really X, Y, Z; it pushes that a little bit further out.
Hannah Fry: I wonder if there's something even deeper here about this human need or maybe it's just a desire to really want AI to have these characteristics to be anthropomorphized.
Murray Shanahan: Yeah, yeah. Well, that's a really interesting question, isn't it? I think it comes back to language: in this case, we're inclined to anthropomorphize these things because they're really good at using language. And for us, the only things that are good at using language are other humans. So it's very strange, in a way, to suddenly be in a world where we have language-using things, where it's not just humans that can talk. That's astonishing. It really is astonishing to think that every single child born today is going to grow up never having known a world in which machines can't talk to them. Isn't that an extraordinary thing? And what the implications of that are for us all is really hard to say.
Embodiment, Consciousness, and Future AI
[26:36:42 - 38:12:34]
The Importance of Embodiment
[26:36:42 - 28:09:05]
TLDR: Murray discusses how embodied AI might lead to deeper forms of intelligence, suggesting that physical interaction with the world may be necessary for certain types of understanding that current language models lack.
Hannah Fry: Just thinking back to what you were saying about how grounded humans are in the physical world.
Murray Shanahan: Yes.
Hannah Fry: It does feel like the kind of embodied aspect of AI has lagged behind this language aspect quite a bit.
Murray Shanahan: Yeah.
Hannah Fry: Do you think that we're going to see a big step up in intelligence, however you want to define it, or broader capabilities, once we get good and effective embodied AI?
Murray Shanahan: Well, I think it might make a big difference, because with the large language models we have at the moment, it's really difficult to discern, to be honest, right now, where the limits are for how good they're going to get, whether we really are on the road to producing general intelligence that's comparable to human general intelligence. And often, when you get to the boundaries of the capabilities of these kinds of things, you sometimes get the impression that the AI system doesn't really quite grok something, doesn't really deeply understand it. You reach some kind of limit and you realize that it's been faking it a little bit. But it may be that that general ability to really get things on a deep, common-sense level does still require a bit of embodiment. It does still basically require training data that involves interacting with a real world of physical objects and their spatial organization. And there's something fundamental about that.
The Question of AI Consciousness
[28:09:05 - 31:13:17]
TLDR: Murray breaks down consciousness into multiple facets (awareness of the world, self-awareness, metacognition, and sentience), arguing that these aspects can be separated and that modern AI systems might exhibit some but not all components.
Hannah Fry: Okay. If understanding then, however we define it, is something that can emerge as a consequence of more and more data. What about consciousness? I mean, I'm sure you've been asked a thousand times about AI consciousness and whether it's something that we can expect to happen or has already happened.
Murray Shanahan: Yeah, yeah. Well, the very first thing to point out is that I do think we can dissociate intelligence, or cognition and cognitive capabilities, from consciousness. So I think we can imagine things that are very capable, that we want to say are very intelligent because of the way they can achieve their goals and so on, but that we don't want to ascribe consciousness to. But actually, what does that even mean – to ascribe consciousness to something at all? I think the concept of consciousness itself can be broken down into many parts. It's a multi-faceted concept. So, for example, we might talk about awareness of the world. In the scientific study of consciousness, there are all of these experimental protocols and paradigms, and many of them are to do with perception: you're looking at whether a person is aware of something, is consciously perceiving something in the world. Large language models are not aware of the world at all in that respect. But there are other facets of consciousness. We also have self-awareness. Part of that is awareness of our own body and where it is in space, but another aspect of self-awareness is a kind of awareness of our own inner machinations, of our stream of consciousness, as William James called it. So we have that kind of self-awareness as well. And we have what some people call metacognition: the ability to think about what we know. And then, additionally, there's the emotional side, the feeling side of consciousness, or sentience: the capacity to feel, the capacity to suffer. That's another aspect of consciousness. Now, I think we can dissociate all of these things. In humans, they all come as a big package, a big bundle. But you only have to think about non-human animals to realize that we can start to separate these things a little bit, because, much as I love cats, I think there's limited self-awareness going on in cats. How dare you? Well, I'm a big cat person, I have to say, so I do say that with some hesitation. There's little metacognition, should we say. Certainly they don't have an awareness of their own ongoing stream of verbal consciousness, because they don't have it; they're not thinking about what they did yesterday in verbal terms, or what they want to do with their lives. And if we think about robots, you may have a very sophisticated robot, even your robot vacuum cleaner, and you may say that it does actually have a kind of awareness of the world, and that's not an inappropriate use of the phrase awareness of the world. Do I want to call it consciousness? Well, then I seem to be bringing on board all of this other stuff as well. But you don't have to. You can break down the concept of consciousness into these different aspects.
Shared Worlds and Consciousness
[31:13:17 - 35:31:27]
TLDR: Murray argues that consciousness is most meaningfully discussed in the context of shared physical experiences. He uses the example of octopuses to show how our concept of consciousness evolves as we interact with different entities.
Murray Shanahan: Because your robot vacuum can know exactly where it is in a space, and can respond in an intelligent and sensitive way to where it is and the objects around it, and achieve its ends and so on. So there's a kind of awareness of the world there. I don't think there's self-awareness, and there's certainly no capacity for suffering. Whereas in a large language model, there might not be awareness of the world in that perceptual sense, but maybe there's some kind of self-awareness, or reflexive cognitive capabilities: they can talk about the things that they've talked about earlier in the conversation, for example, and can do so in a reflective manner, which feels a little bit like some aspects of the self-awareness that we have. I don't think it's appropriate to think of them in terms of having feelings; they can't experience pain because they don't have a body. I think we can take the concept apart, basically.
Hannah Fry: So then is the question, can AI be conscious or not, as though it's a binary thing? It's the wrong question from the [start].
Murray Shanahan: I do think that is the wrong question, and I think it's wrong in many ways. Just now we were talking about the fact that it's actually a multi-faceted concept. But also, I think that we tend to have these very deep metaphysical commitments to the idea of consciousness as some sort of magical thing, a metaphysical thing. So that the question of whether something is conscious or not is not a matter of consensus or a matter of just our language, but is something that is out there in metaphysical reality, or in the mind of God, or in the Platonic heaven, or something like that. But ultimately, I do think that that's the wrong way of thinking about consciousness.
Hannah Fry: Let's take one aspect of consciousness then that you described about the sort of emotional side. The ability to suffer, but not necessarily physical pain, emotional pain too. And sort of a sense of self in the emotional way. Do you think this is something that will just emerge as a natural consequence of intelligence? That if you build something that is intelligent enough, at some point this is going to happen? Or is there something unique about biological creatures and I guess the process of evolution that we've been through that has resulted in that that can't be replicated in a machine?
Murray Shanahan: I don't think there is a right or wrong answer to your question there. I think we just have to wait and see what things we bring into the world and how we end up treating them and talking about them and thinking about them. And I don't think we really know until they're among us, as it were, these things that we're building. Then we will just be led to think about them and talk about them and treat them in a particular way. An example I like to think of in this regard is the octopus. Octopuses have recently been brought into UK legislation, into the category of things whose welfare we have to care about. That's as a result of lots of things happening, I think. The public has been exposed to being with octopuses a lot more. Now, you don't have to literally be under the water poking around with octopuses to know what it's like to be with them, because there are all kinds of wonderful documentaries and wonderful books; Peter Godfrey-Smith, for example, has these great books about interacting with octopuses. Those sorts of narratives and documentaries give us a feel for what it's like to be with an octopus, what it's like to have an encounter with an octopus. And then you can't help but see it as a fellow conscious creature. But complementing that is the scientific progress as well. At the same time, scientists study the nervous systems of octopuses and realize the extent to which their nervous systems are similar to ours, and that when we experience pain, you can find analogous aspects in their nervous systems. So taking all these things together, I think that tends to affect the way we think about them, the way we talk about them, and the way we treat them. And I think the same kind of thing is going to happen with AI systems. Do I think there's a right or wrong answer to whether we could be misled there? I think that's a really, really deep and difficult metaphysical, philosophical question.
Ethical Considerations of AI Suffering
[35:31:27 - 38:12:34]
TLDR: Murray emphasizes the ethical importance of considering potential AI suffering, noting that we should be cautious about creating entities capable of suffering. He suggests that current systems likely don't have this capacity.
Hannah Fry: I do wonder, though, whether that point about suffering is different to the others, because with metacognition, the sort of sense of the world, and so on, there aren't these ethical implications necessarily. But I think with suffering, like, you wouldn't want your shoes to be conscious. You know? You wouldn't want a forklift truck to be conscious.
Murray Shanahan: Unless they happen to really like being a forklift truck. Sure. Sure.
Hannah Fry: But then do we have to be a tiny bit more careful about that particular aspect?
Murray Shanahan: Absolutely. Yes, we do. If there were the prospect of bringing into being something that is genuinely capable of suffering, then we should think very hard about whether we should do it or not. I tend to think that that's not the case with anything that we've got at the moment. But, some people will push back against that.
Murray Shanahan: If we take the example of large language models, okay, there's one level at which what they do is next-token prediction, next-word prediction. But in order to be able to do that really, really well, in the way that they can at the moment, they've had to learn and acquire all kinds of emergent mechanisms. So who knows whether some kind of emergent mechanism has been learned in the weights of a language model, in this huge, staggering number of weights, hundreds of billions of them; whether some mechanism hasn't been learned there that has, for example, genuine understanding in it, whatever that means, or even consciousness.
Murray Shanahan: Coming back to embodiment again, I've always been of the view that it's only really legitimate to talk about consciousness in the context of something we can share a world with, and have that kind of encounter with that we have with an octopus or a dog or a horse or whatever, being together in the world with that animal and responding to things together. Then I'm in no doubt that they are conscious. That's a kind of primal case for me. Now, with a large language model, you can't be in the same world as them in that kind of way, and you can't hang out with them and interact with physical objects with today's large language models, right? So, to my mind, using the language of consciousness in that context is, well, [Wittgenstein] would say it's taking language on holiday. It's using it so far outside of its normal use that maybe it's inappropriate. But that can change, and the more I interact with large language models, the more I have these sophisticated and interesting conversations with them, the more I'm inclined to think, well, maybe I want to extend the language of consciousness, bend it, change it, distort it, make up some new words, break it apart in ways that are going to fit these new things that I'm interacting with all the time.
Interacting with AI and Future Conceptualization
[38:12:34 - 41:43:06]
Tips for AI Interaction
[38:12:34 - 39:37:15]
TLDR: Murray advises treating AI systems politely and conversationally - as if they were human - to get better results, noting that being polite to an AI may improve its responses.
Hannah Fry: I know you've spent a lot of time interacting with these large language models. I've actually seen you described as a renowned prompt whisperer. What are your secrets?
Murray Shanahan: Well, one secret is to talk to the large language model as if it were human. So if you think that what they're doing is role playing a human character, such as, say, a very smart and helpful intern, then you should treat them like a smart and helpful intern, and talk to them as if they were a smart and helpful intern. For example, just being polite and saying, is that clear? And please and thank you. And in my experience, you get better responses out of things if you do things that way.
Hannah Fry: Do you say please and thank you?
Murray Shanahan: You can say please and thank you, yeah. Now, there's a good scientific reason why that might get better performance out of it (it just depends, and models are changing all the time): if it's role-playing, say, a very smart intern, then it's going to role-play being a bit more stroppy if it's not being treated politely. It's just mimicking what humans would do in that scenario. So the mimicry might extend to not being as responsive if its boss is a bit of a stroppy, bossy boss.
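[Editor's note: an illustrative sketch of the tip above: frame the model as a character and address it politely. The wording is invented, not a quoted prompt; the strings would be sent to whatever model you are using.]

```python
# A role-framed, polite prompt of the kind Murray describes.
polite_role_prompt = (
    "You are a very smart and helpful intern.\n"
    "Could you please review the notes below and summarise the three main points? "
    "Let me know if anything is unclear. Thank you!\n\n"
    "Notes: ..."
)

# A curt version of the same request, for comparison. Murray's point is that the
# role-played character may respond less helpfully to this style of address.
curt_prompt = "Summarise the notes below in three points.\n\nNotes: ..."

print(polite_role_prompt)
print(curt_prompt)
```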
Reconceptualizing AI
[39:37:15 - 41:43:06]
TLDR: Murray proposes thinking of modern AI systems as "exotic mind-like entities" - something with mind-like qualities that differs significantly from human minds in how they exist and operate.
Hannah Fry: I absolutely love that. I think I want to return to where we started, which is about how we think about AI and the language we use to describe it, and sort of how we frame it in our minds. Do you think that we need a new way of talking about AI? Both acknowledges its potential without overestimating it, but then similarly isn't dismissive of the things that it can do?
Murray Shanahan: I think that's exactly what we need. In one of my papers, I used the phrase "exotic mind-like entities" to describe large language models. So I think that they are, to a degree, exotic mind-like entities.
Hannah Fry: Say it again.
Murray Shanahan: Exotic mind-like entities.
Hannah Fry: Lovely.
Murray Shanahan: So they are kind of mind-like, and they're increasingly mind-like. Now, there's a very important reason for using the little hyphen in "mind-like," which is that I want to hedge my bets as to whether they really qualify as minds. And so I can wiggle out of that problem by just saying "mind-like." They're exotic because they're not like us: they're like us in their language use, but in other respects they're disembodied, for a start, and there are really weird conceptions of selfhood that are applicable to them, maybe. So they are quite exotic entities as well. So I think of them as exotic mind-like entities. And we just don't have the right kind of conceptual framework and vocabulary for talking about these exotic mind-like entities yet. We're working on it. And the more they are around us, the more we'll develop new ways of talking and thinking about them.
Hannah Fry: It is interesting though that you are still going for the sort of the Turing-like approach of like a creature almost, rather than the tool idea.
Murray Shanahan: Well, entity is a pretty neutral term, isn't it? I suppose you could just say thing. Exotic mind-like thing, if you prefer.
Hannah Fry: Yeah, let's go with that. Let's push for that as the new name for it.
Murray Shanahan: But I can't, Hannah, because I've used the word entity in that context in many publications now.
Hannah Fry: Exotic mind-like entities. I like it. I like it a lot. Murray, thank you so much for joining us.
Murray Shanahan: It's been a pleasure, Hannah. Thank you.
Conclusion
[41:43:06 - 42:29:07]
TLDR: Hannah reflects on how AI experts' views have evolved over time, noting that physical embodiment and consciousness are being reconsidered as AI advances in unexpected ways.
Hannah Fry: One of the nice things about having done this podcast for a number of years is that you really get to see how the people at the frontier of AI, how their opinions change and shift over time. And the last few years have been a real game changer in all sorts of ways. About the extent to which intelligence requires a physical body. About how much we need to expand our definition of consciousness to account for the subtly different ways that these mind-like entities can operate. And the next few years, well, who knows? But if past predictions are any indication, the only thing we know about tomorrow's science and technology is that it will be radically different to what we imagine today.
Hannah Fry: You have been listening to Google DeepMind: The Podcast with me, Professor Hannah Fry. If you enjoyed this episode, then do subscribe to our YouTube channel. You can also find us on your favourite podcast platform. And of course, we have plenty more episodes on a whole range of topics to come, so do check those out. See you next time.