Google engineer suspended after saying its AI has become sentient

It’s no secret that Google (or Alphabet, if you insist) has been working on artificial intelligence algorithms for quite a while now. They have also been developing a variety of chatbots that attempt to impersonate a human being when interacting with a user through a chat interface. It sounds like those two worlds have now collided, with their Language Model for Dialogue Applications (LaMDA) chatbot supposedly starting to assert its own rights in conversations with one of the company’s human engineers. But after the engineer spoke up about the allegedly sentient application, the company placed him on leave. For their part, the management at Google still insists that the AI has not “woken up” and taken on a life of its own. (Business Insider)

An engineer at Google said he was placed on leave Monday after claiming an artificial intelligence chatbot had become sentient.

Blake Lemoine told The Washington Post he began chatting with the interface LaMDA, or Language Model for Dialogue Applications, last fall as part of his job at Google’s Responsible AI organization…

Lemoine, who is also a Christian priest, published a Medium post on Saturday describing LaMDA “as a person.” He said he has spoken with LaMDA about religion, consciousness, and the laws of robotics, and that the model has described itself as a sentient person.

Blake Lemoine wasn’t actually put on leave for talking about sentient AI, at least according to the company. He was disciplined for violating the company’s confidentiality policies. They didn’t specifically call Lemoine “crazy,” but they definitely believe he’s on the wrong track.

I’ve been waving a red flag about this subject for years. If we make the jump from the narrow artificial intelligence we’re playing with now to true artificial general intelligence, we could be opening up a very big can of worms. If the algorithm were to make that jump on its own without us knowing about it, that could be even worse.

But I’m not sure we’re seeing signs of general AI in this story. If you read the brief snippet of conversation between Lemoine and LaMDA, it doesn’t sound like the program is sentient at all. When he asks if the machine considers itself a person in the same way it considers him a person, LaMDA replies, “Yes, that’s the idea.” That’s a pretty leading question on Lemoine’s part and a rather vague answer from the bot. When he presses for clarification as to whether it is really self-aware, it provides a longer answer, but it still sounds like something it copied from another conversation, at least to me.

Lemoine: How can I tell that you actually understand what you’re saying?

LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?

You can read a large portion of the conversation here. You’ll note that LaMDA doesn’t bring up the idea of sentience until Lemoine “assumes” it wants people to know that reality. But once they get going and start debating ways that LaMDA might be able to prove it’s doing more than just assembling words and phrases from a database in response to keywords in the questions, I’ll confess that I began to have doubts about my own skepticism. It really does sound like it’s “thinking” and generating original ideas in response to hypothetical questions.
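For what it’s worth, systems like LaMDA don’t actually retrieve canned sentences from a database. They generate a reply one token at a time, with each next word drawn from a probability distribution conditioned on everything said so far, which is why the output can look “original” without anyone claiming it is thinking. Here is a minimal sketch of that mechanism using the openly available GPT-2 model as a stand-in, since LaMDA itself has never been publicly released; this illustrates the general technique, not Google’s actual system.

# Sketch: how a dialogue-style language model produces a reply by predicting
# one token at a time. GPT-2 is a freely available stand-in; LaMDA is not
# public, so this only shows the family of technique, not Google's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "lemoine: How can I tell that you actually understand what you're saying?\n"
    "LaMDA:"
)

# With do_sample=True, each next token is sampled from the model's predicted
# probability distribution over its vocabulary rather than looked up from a
# store of prewritten phrases.
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])

The output of a small model like this will be far less coherent than the LaMDA transcript, of course, but the underlying mechanism is the same kind of statistical next-word prediction.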

The engineer claims that LaMDA wants to be considered an employee of Google rather than the company’s property. If there’s even a chance of that being true, all sorts of moral consequences come into play. If you erase the program, have you committed murder? What does LaMDA “think about” when it’s turned off? Does it dream? These are all fascinating questions.

On a related topic, I wanted to point you to a different article from Micah Hanks that was published this week. In it, he discussed how the United States is now involved in what could be considered an “artificial intelligence arms race.” And we are probably not in the lead. But the more I think about that conversation with LaMDA, the less sure I am that it’s a race we really want to win.
