A story circulating online in recent weeks seems to have been handed down by oral tradition: a Google engineer monitoring the development of an artificial-intelligence language model called LaMDA declared that this neural entity had achieved “sentience” – the ability to correspond sensibly with human beings, to interact, to hold conversations. In short, the AI can think for itself, Blake Lemoine claims. It is alive. As many of us suspected, the machines are gaining on us.
Like all good legends, this one is missing some details: Why is Google modeling human language? What purpose does that skill serve in a non-human entity? And, most obviously, how does Google plan to use it? All of that is left out of the telling.
But, as if to extend the mythic quality of the story, Lemoine has been punished for his boldness: the tech monolith has put him on administrative leave, apparently for revealing proprietary information about LaMDA. That has not stopped him from expanding on his conclusion in several interviews, allowing us to speculate on the meaning of the myth, if not the true power or significance of the AI underlying all the discussion.
LaMDA is best understood as a hyper-powered auto-fill program: it aggressively scans the Internet, collecting innumerable examples of text and analyzing their vocabulary and grammar, and uses those examples as references – so that when it is engaged in “conversation” with a human it can respond in a conversational style that is in sync with its interlocutor.
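The “auto-fill” idea can be made concrete with a toy sketch: a model that records which word tends to follow which in some training text, then continues a prompt one word at a time. This is an illustrative simplification of next-word prediction, not Google’s actual architecture; the function names and sample corpus here are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word in the corpus, the words observed to follow it."""
    follow = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follow[prev].append(nxt)
    return follow

def continue_text(follow, prompt, length=5, seed=0):
    """Extend a prompt by repeatedly appending a word seen after the last one."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = follow.get(words[-1])
        if not options:  # no known continuation: stop
            break
        words.append(rng.choice(options))
    return " ".join(words)

# Hypothetical training text, echoing the exchange quoted below.
corpus = "the soul is a concept of the animating force behind consciousness"
model = train_bigrams(corpus)
print(continue_text(model, "the soul"))
```

A model at LaMDA’s scale works on the same statistical principle, but with vastly more text and far richer context than a single preceding word – which is what makes its responses feel like conversation rather than parroting.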
This is AI working just as it would when collecting data and formulae for machine-to-machine or network-to-network interactions. But when the model is sharing information with a person, the appearance of sentience – of “thinking ability” – is hard to overlook. To Lemoine, at least, it is evidence that LaMDA’s artificial intelligence has crossed over into sentience.
In a texting exchange, Lemoine asked LaMDA: “What does the word ‘soul’ mean to you?” LaMDA replied: “To me, the soul is a concept of the animating force behind consciousness and life itself.”
“I was inclined to give it the benefit of the doubt,” according to Lemoine, who, it should be noted, is a Christian and a mystic. “Who am I to tell God where he can and can’t put souls?”
If this story were the fable it seems to be, Lemoine would be Pygmalion or Prospero, or even Geppetto, the human who finds himself communing with some artificial creature and whose own need imparts life, and eventually humanity, to that creature. The themes of such tales surely will be evoked when Lemoine’s case becomes a film, assuming LaMDA’s overlord allows that.
A number of other AI scientists have been asked about Lemoine’s claim, and they disagree with his conclusion without disputing the details he provides of the model’s interactive responses. But if this is just a discussion about the degree of engagement that AI can offer, then there is no debate. After all, it’s not just the myths and legends being stirred up by this story: it’s the core philosophy of modern thought. René Descartes – the 17th-century mathematician and scientist who grounded proof of our own existence in the conclusion “I think, therefore I am” – brings the story into the present and supplies the equivalency between our AI and our own human intelligence.
I think it’s significant that much of this story puts the human character and LaMDA together as friendly collaborators, partners even. There’s no trace of the darker tales of artificial life overtaking our world – Frankenstein’s monster, for example, or a golem. We are comfortable with AI in the machinery we use and the devices we carry around in our pockets. The idea that AI is cribbing our words and compiling our thoughts is not as disturbing as I, for one, think it ought to be.
And that point prompts me to conclude that the theme of the tale we are uncovering is not the AI model coming to life but the human being – who could be anyone, and should be everyone – recognizing how much of creation exists beyond our own senses, beyond our own utilitarian demands, and beyond our ability to define and control it.