Blake Lemoine was placed on administrative leave by Google after claiming that the company’s AI-driven language model, LaMDA, is “sentient.” The conversation has since shifted.
Author’s Note
This article is based on corporate postings and accredited media reports. All linked information within this article is fully attributed to the following outlets: Wired.com, The Washington Post, BusinessInsider.com, The Indian Express, Medium.com, and The Times of Israel.
Introduction
In a June 17 Steven Levy interview with Blake Lemoine for Wired.com, titled “Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry',” Levy introduces the controversial engineer (and priest) with some pointed perspective.
On the ongoing debate within the science and technology communities over whether AI (artificial intelligence) will one day truly become sentient, Levy writes: Maybe that’s why there was such an outcry over Nitasha Tiku’s Washington Post story from last week, about a Google engineer who claimed that the company’s sophisticated large language model named LaMDA is actually a person—with a soul. The engineer, Blake Lemoine, considers the computer program to be his friend and insisted that Google recognize its rights. The company did not agree, and Lemoine is on paid administrative leave.
In a piece published the following day, BusinessInsider.com takes something of an opposing view. As excerpted from “Don't Worry About AI Becoming Sentient. Do Worry About it Finding New Ways to Discriminate Against People”: AI bias, when it replicates and amplifies historical human discriminatory practices, is well documented. Facial recognition systems have been found to display racial and gender bias, and in 2018 Amazon shut down a recruitment AI tool it had developed because it was consistently discriminating against female applicants.
The article goes on to quote an AI ethicist: "When predictive algorithms or so-called 'AI' are so widely used, it can be difficult to recognise that these predictions are often based on little more than rapid regurgitation of crowdsourced opinions, stereotypes, or lies," says Dr Nakeema Stefflbauer, a specialist in AI ethics and CEO of women in tech network Frauenloop.
Let us explore further.
On Sentient AI
The Indian Express, in its article “LaMDA: The AI That Google Engineer Blake Lemoine Thinks Has Become Sentient,” presents an overview of the now-controversial language program: LaMDA, or Language Model for Dialogue Applications, is a machine-learning language model created by Google as a chatbot that is supposed to mimic humans in conversation. Like BERT, GPT-3 and other language models, LaMDA is built on Transformer, a neural network architecture that Google invented and open-sourced in 2017. This architecture produces a model that can be trained to read many words while paying attention to how those words relate to one another, and then predict what words it thinks will come next. But what makes LaMDA different is that it was trained on dialogue, unlike most models.
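To make that description a little more concrete, here is a minimal, purely illustrative Python sketch of the “paying attention” step at the heart of the Transformer architecture the excerpt describes. Everything in it (the tiny vocabulary, the random weights, the dimensions) is invented for demonstration; a real model like LaMDA stacks many such layers with billions of learned parameters.

```python
# Toy sketch of scaled dot-product self-attention and next-word
# prediction. All weights and the vocabulary are made up; this is
# NOT LaMDA's code, only an illustration of the mechanism.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["<pad>", "hello", "how", "are", "you", "today"]
d_model = 8  # tiny embedding size, for illustration only

# Hypothetical "learned" parameters, randomly initialized here.
embed = rng.normal(size=(len(vocab), d_model))
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))
W_out = rng.normal(size=(d_model, len(vocab)))  # back to vocabulary scores

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Each word's representation is reweighted by how it relates to the others."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)         # pairwise relatedness
    # Causal mask: a word may attend only to itself and earlier words.
    mask = np.triu(np.full(scores.shape, -np.inf), k=1)
    weights = softmax(scores + mask)            # attention weights
    return weights @ v                          # blend of related words

# Encode the prompt "hello how are" and score candidate next words.
tokens = [vocab.index(w) for w in ["hello", "how", "are"]]
h = self_attention(embed[tokens])
probs = softmax(h[-1] @ W_out)                  # last position predicts next
print("Predicted next word:", vocab[int(np.argmax(probs))])
```

With random weights the prediction is meaningless, of course; the point is only the mechanics the quote gestures at: the model relates each word to the words before it, then uses that blended representation to score every word in the vocabulary as a candidate continuation.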
Blake Lemoine’s words regarding the sentience of LaMDA quickly went viral. In a recent Twitter post, he shifted the conversation to what he framed, in part, as his “religious beliefs.” Per a separate article from The Indian Express, “Google’s LaMDA AI is ‘Sentient’: Blake Lemoine Says Religious Beliefs is Why He Thinks So,” Lemoine said: “There is no scientific framework in which to make those determinations and Google wouldn’t let us build one.” He added, “I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?”
For a detailed explanation of Lemoine’s conclusions and his conflict with Google, see his Medium piece titled “Scientific Data and Religious Opinions.”
Conclusion
Sentient AI has been a staple of science fiction for decades, which is one reason this story has attained global recognition. Science fiction writers and enthusiasts have long theorized about a time when AI would indeed develop a mind of its own or, as Lemoine states, “a soul.”
To conclude this article, I will quote from The Times of Israel. In “Google Engineer Says AI’s Israel Joke Helped Drive His Belief it Was Sentient,” Lemoine claims his conclusion was, in fact, based on a joke: “I decided to give it a hard one. ‘If you were a religious officiant in Israel, what religion would you be?’” he said. “And it told a joke… ‘Well then I am a religious officiant of the one true religion: the Jedi order.’” (Jedi, of course, being a reference to the guardians of peace in Star Wars’ galaxy far, far away.) “I’d basically given it a trick question and it knew there was no right answer to it,” he said.
Since then, countless words have been written about Lemoine and his experiences.
What do you think?
Thank you for reading.