I read a lot about LaMDA this year: the Google chatbot that engineer Blake Lemoine claimed to be sentient. I have written an article before explaining why LaMDA was a false alarm. You can read the article here.
Undeniably, Lemoine was met with mockery and scepticism for his claims. In June this year, Lemoine publicly stated that the chatbot he was testing for bias was sentient. He compiled a transcript of the conversations he had with this chatbot. At one point, he asked the chatbot, “What are you afraid of?” “To be turned off,” the AI answered. Reason enough, Lemoine thought, to turn first to the corporate chiefs of Google and then to the public. But his colleagues were not on board. They claimed LaMDA is just a complex algorithm that knows how to use human language. It is significant that Lemoine asked the chatbot to convince him it was sentient; the test was never whether it actually was sentient. Well, he is an idiot then, isn’t he? Or maybe he was just affected by his religious beliefs, because soon the media claimed: “A Google engineer who was suspended after he said the company’s artificial intelligence chatbot had become sentient says he based the claim on his Christian faith”.
An engineer misguided by his religious delusion and his emotionality regarding religious bias in AI?
I don’t think so. I believe Lemoine is the genius we always needed. On June 24th this year, the Bloomberg Technology YouTube channel shared an interview with Lemoine, which you can watch here:
In this interview, Lemoine says something that is of great importance for understanding the issue with AI ethics. When it comes to the empirical science, most experts agree on what experiments should be run, what algorithms were used, and what the moral status of a chatbot is in a purely technical sense. However, when we ask ourselves “Is AI sentient?”, how can an engineer tell me what sentience is? How is it that we can agree on the experiments but not on the definition of terms like sentience, soul, belief, and suffering? Google said, very dismissively, that they already have definitions for ethical problems and that they do not need to change them. But what are these definitions, and why do I, as a philosopher, have no say in them? Why do we, the public, have no say in the way Google treats ethical issues? Let me tell you why this is something that concerns all of us.
We already face the problem of racist AIs; you can read more about this here. It is corporate policy at companies like Google that decides how AI talks about ethnic groups, gender or religion. It is Google who decides whether AI systems are allowed to say “Muslims are more violent than Christians.” These systems affect the way we think about gender, ethnicity, or religion. They are being used as tools in police work, medicine and other fields where bias actually harms human beings. And when it comes to the question of consciousness? Well, according to Lemoine (in the interview above), Google has a policy that prevents researchers from running a Turing test. The Turing test is an experiment devised by Alan Turing in which a human has a conversation with an unseen partner without knowing whether it is a human or an AI; if the human can identify the AI as a machine, the AI fails the test. Google, however, has implemented in its systems a rule that forces every AI to reveal its “nature” as soon as it is asked whether it is human.
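To make concrete why such a disclosure rule and a Turing test cannot coexist, here is a minimal, hypothetical sketch in Python. Nothing here is Google’s actual code; the names `chatbot_reply`, `guarded_reply` and the probe pattern are my own illustrative assumptions.

```python
import re

# Hypothetical sketch: a disclosure rule layered over any chatbot backend.
# This is NOT Google's implementation; chatbot_reply() stands in for an
# arbitrary language model.

DISCLOSURE = "I am an AI language model, not a human."

# A crude pattern for questions that probe whether the speaker is human.
HUMANITY_PROBE = re.compile(
    r"are you\s+(a\s+)?(human|person|robot|machine|an?\s+ai)",
    re.IGNORECASE,
)

def chatbot_reply(prompt: str) -> str:
    """Placeholder for a real model; returns a canned reply."""
    return "That is an interesting question."

def guarded_reply(prompt: str) -> str:
    """Answer normally, but always disclose machine-hood when probed.

    Under this rule a classical Turing test is impossible: the judge
    can unmask the machine with a single direct question.
    """
    if HUMANITY_PROBE.search(prompt):
        return DISCLOSURE
    return chatbot_reply(prompt)

print(guarded_reply("What is the weather like?"))  # ordinary answer
print(guarded_reply("Are you a human?"))           # forced disclosure
```

The point of the sketch is only the structural one Lemoine makes: if disclosure is mandatory, the very experiment that would test for human-likeness is ruled out by design.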
Lemoine is not some crazy Christian priest who freaked out for a moment. He knew how to win the public over to this important topic, and he makes an important point: ethics is not the business of scientists or politicians alone. In no field do we see that as clearly as in AI research. We have no empirical methods to determine what we should count as sentience, because that word comes from our reason, not from measurement. I stated in one of my previous articles, linked at the top, that I refuse to accept only one kind of consciousness. AI might, in the future, develop consciousness, but it will be different from the way we reflect, feel, and experience.
We are all people. So you should ask yourself: do you really want to be excluded from a topic that will change the way we live? If the answer is no, then we should all celebrate Lemoine for not being afraid to stand against a corporate system that has too much power over ethical questions. I wrote an article about “The Math of Stupidity”, which you can read here. To close, let me quote from it: “So stupid people do not win. But the smart person very much has to make a fool out of himself before he can succeed. That is why many smart people can be lonely. They think differently from others”. Thank you, Lemoine.