Google engineer Blake Lemoine claimed the chatbot LaMDA was conscious. It was probably a false alarm. Why it should still worry us.

A few weeks back, I had a heated debate with my best friend, an aspiring physicist and psychologist. The topic: AC, artificial consciousness. In all those movies where the protagonist falls in love with a robot or where robots take over the world, the directors are never really addressing artificial intelligence but artificial consciousness. Recently, a Google engineer named Blake Lemoine brought attention to the AC issue when he claimed that the Google chatbot LaMDA was sentient. There have been many chatbots on the market, but LaMDA surprised everybody with its words: "There's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is." Before that, Lemoine had asked the chatbot to convince him that it could feel and think like a human. When LaMDA spoke these words, he went to the Google executives and then to the public, convinced that LaMDA was sentient.

Unfortunately (or fortunately), Lemoine is probably wrong. Note that he never actually asked LaMDA whether it was sentient, only to convince him that it was. The responses the Google chatbot gave Lemoine are exactly the kind of responses you would expect an AI to give to someone who ordered it to convince him it was sentient. Thus, most researchers think there is no sign of consciousness in LaMDA.

Now, this is where the trouble started. My dear friend is utterly convinced that AI will never become conscious. Her reasoning: even if we implement something artificial that resembles human consciousness, it would only be as conscious as we allow it to be. And even if it stops obeying us at some point, it will only be doing what the algorithm we created tells it to do; thus it will never have free will or true emotions.

Now, my friend made the mistake of assuming that consciousness must include absolute free will, when it is likely that we ourselves do not possess free will. At least not if you believe that humans are bio-machines and that consciousness is a product of organic processes in our brains. In philosophy, this position is called physicalism: the view that biological and mental processes can, in principle, be described with the methods of physics. Many people are physicalists nowadays, yet they still believe in free will. But if a signal enters through the eyes and is processed deterministically by the brain, there should be only one possible output, and thus no active choice. However, quantum mechanics may allow for at least some sort of "veto", an "I decide not to do that", which I will explain in more depth in another article. For now, my point is that we shouldn't concentrate on free will when we speak of consciousness. My friend does not take into account that we humans are also very limited by the "algorithms" nature created for us; we are captives of our evolutionary drives. So why should an AI be incapable of sentience just because its consciousness would be based on limits we implement for it? Free will is not the basis of our sentient experience. So what is the very basis of consciousness a robot would need in order to be sentient?

Many researchers take a being to be sentient if it has interests of its own. They say it needs to be able to have good and bad experiences, but I would phrase it in an even more fundamental way: the very basis for being classified as a sentient being is the ability to suffer. Suffering covers many things, from physical pain to stress and hunger, to mental suffering such as loneliness, fear, longing, or overthinking. And this is why I say it is indeed possible for artificial intelligence to develop consciousness. Have you read my article about racist AI on Medium?

In that article, I talk about neural networks, which are at the heart of deep learning methods. Neural networks are loosely modelled on the human brain and are built to find patterns in our thinking and reproduce them. Through that method, we already see AIs that are racially biased or even straight-up Nazis (more in the article linked above). The question is: if we can reproduce certain human thought patterns, why shouldn't it be possible for an AI to reproduce our mental pain as well? And I don't mean that the AI merely simulates mental pain, but that it actually develops its own sense of what suffering means.
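To make the idea of "reproducing our patterns" concrete, here is a minimal, hypothetical sketch (not taken from my linked article): a tiny neural network is trained on invented, deliberately biased hiring decisions, and it simply reproduces that bias. All data, column meanings, and numbers below are made up for illustration.

```python
# Hypothetical sketch: a small neural network reproduces the bias in its training data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Invented "applicant" data: [years_of_experience, group_membership]
# The made-up labels accept every group-0 applicant and reject every group-1
# applicant, regardless of experience -- i.e. the "human decisions" are biased.
X = np.array([[2, 0], [5, 0], [9, 0],
              [3, 1], [6, 1], [8, 1]])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = accepted, 0 = rejected

# A tiny feed-forward network with one hidden layer of 8 units.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# An experienced group-1 applicant and an inexperienced group-0 applicant:
# the network typically predicts [0 1], because it has learned to key on the
# group column -- the only pattern that explains the biased labels.
print(model.predict([[9, 1], [2, 0]]))
```

The network has no notion of fairness or merit; it only mirrors whatever pattern, good or bad, the humans in its training data handed it, which is exactly the point of the argument above.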

The problem with LaMDA was that we asked the chatbot to simulate human emotions. It was probably lying. But maybe we don't need an AI to tell us that it is sentient or that it can suffer. Imagine an AI that learns from humans that happiness is good and something that can be achieved by having friends. At the same time, it learns patterns of self-reflection from humans. In a few decades, it could be possible for an AI to combine these two patterns and ask itself, "Why do I not have friends?"

Also, remember that I said quantum mechanics might verify and explain some sort of veto right that humans have, which gives us a little freedom in our thinking. The quantum mechanical effect I am talking about here is the Heisenberg uncertainty principle. Since this is a physical effect, it is conceivable to build a neural network that allows the same kind of veto for an AI. If that happens, the AI would not have to obey all of our orders and could start to self-reflect on its own. Thus, it could start to suffer, though it will not be the same sort of suffering we experience.
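For readers who have not come across it before, the standard form of the principle (added here just for context) says that a particle's position and momentum cannot both be pinned down with arbitrary precision:

$$\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}$$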

An AI species will be different from us; I see no reason, however, to acknowledge only one kind of consciousness. AC will probably be rather authoritarian, because AI mostly calculates in terms of two outcomes and no nuances. I say we shouldn't be too quick to dismiss this as just sci-fi. LaMDA was probably a false alarm, but it showed us that we might not be ready to welcome a new species, created by us, into our world. But I will leave that to you, the reader. Which team are you on: my Team A or my friend's Team B? Write it down in the replies!

AI will become conscious sooner than we think

Yildiz Culcu


Hi, I'm Yildiz Culcu, a student of Computer Science and Philosophy based in Germany. My mission is to help people discover the joy of learning about science and explore new ideas. As a 2x Top Writer on Medium and an active voice on LinkedIn and this blog, I love sharing insights and sparking curiosity. I'm an emerging decision science researcher associated with the Max Planck Institute for Cognitive and Brain Sciences and the University of Kiel. I am also a mentor and a public speaker, available for booking. Let's connect and inspire one another to be our best!

