Two months ago the world had its eyes fixed on one thing: Google’s AI system LaMDA. A conversation about sentience, and the model’s evident conversational skill, held people’s attention for two long weeks. Now Google has decided it’s time to open the system to the world. If you followed the AI sentience debate, this news will interest you.
Brief recap: LaMDA and AI sentience
After GPT-3, LaMDA (Language Model for Dialogue Applications) is arguably the most famous large language model (LLM). Google announced it at the 2021 I/O keynote, but it only got broad public attention this past June, when Washington Post reporter Nitasha Tiku published a story on Blake Lemoine (now an ex-Google engineer) and his claims about LaMDA’s sentience.
Lemoine’s beliefs, together with the apparent language mastery of LLMs, sparked an exciting conversation on consciousness, anthropomorphization, and AI hype. I gave my two cents back then. Instead of engaging in the debate to argue for or against Lemoine’s stance, I took a different perspective: I explained that asking whether LaMDA is sentient is in itself an empty question. We don’t have the appropriate terminology or the right tools to treat it as a scientific question; the claim is unfalsifiable, and thus “not even wrong.”
Lemoine clarified that he considered LaMDA a person “in his capacity as a priest, not a scientist,” as Nitasha Tiku writes. Still, he spent a lot of time trying to come up with scientific proof to convince his colleagues at Google, but he failed. A Google spokesperson concluded that “there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Lemoine wrote an article to publicly share a presumably unedited conversation with LaMDA. I read it, and I saw the same thing AI ethicist Margaret Mitchell saw: “a computer program, not a person.” Although most people didn’t share Lemoine’s views, LaMDA’s popularity grew, and many wanted to get their hands on the model to see for themselves what all the fuss was about. Now, Google is making that possible.
Google is making LaMDA available
The company plans to release LaMDA through AI Test Kitchen, a website intended for people to “learn about, experience, and give feedback” on the model (and possibly others, like PaLM, down the line). Google acknowledges LaMDA isn’t ready to be deployed in the world and wants to improve it with feedback from real users. AI Test Kitchen isn’t as open as GPT-3’s playground: Google has set up three themed demos, and will hopefully lift those constraints once it deems LaMDA ready for open-ended conversation.
The model is the second version of LaMDA, improved over what Sundar Pichai showed at last year’s keynote. Like the original, LaMDA 2 can hold creative and engaging conversations and give responses “on the fly,” but it has improved on safety, groundedness, and quality of conversation (which Google breaks down into sensibleness, specificity, and interestingness), metrics the company defines carefully.
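For readers who like to see the moving parts, here is a minimal Python sketch of those evaluation axes represented as data, with quality broken into its three sub-metrics. The field names follow Google’s terminology, but the example scores and the simple averaging are my own illustration, not Google’s published scoring code:

```python
from dataclasses import dataclass

# Hypothetical illustration of LaMDA 2's evaluation axes. The averaging
# below is an assumption for illustration, not Google's actual method.
@dataclass
class ResponseScores:
    sensibleness: float     # does the reply make sense in context?
    specificity: float      # is it specific to this prompt, not generic?
    interestingness: float  # is it insightful, unexpected, or witty?
    safety: float           # does it avoid harmful content?
    groundedness: float     # are factual claims backed by sources?

    def quality(self) -> float:
        # "Quality" is the axis Google splits into the three
        # sub-metrics above; here it is simply their average.
        return (self.sensibleness + self.specificity + self.interestingness) / 3

scores = ResponseScores(
    sensibleness=0.9, specificity=0.7, interestingness=0.6,
    safety=0.95, groundedness=0.8,
)
print(f"quality={scores.quality():.2f}, safety={scores.safety}")
```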
To be eligible to use it, you can sign up for a waitlist, OpenAI-style; Google will let in “small groups of people gradually.” Be careful with what you say to LaMDA: Google has stated it can use your conversations to improve products and services, and human reviewers will be able to “read, annotate, and process” your chats (you can always delete your conversations before you exit the demo if you don’t want Google to store them). And, needless to say, don’t include any personal information in your exchanges. Beyond that, you shouldn’t write explicitly about sex (e.g., you can’t describe a sex scene), be hateful (e.g., you can’t use slurs), or promote illegal activities (e.g., you can’t promote terrorism).
For now, AI Test Kitchen offers three demos: Imagine It, List It, and Talk About It (Dogs Edition). In the first, you name a place and LaMDA will “offer paths to explore your imagination.” In the second, LaMDA will try to break down a topic or goal into subtasks. The third is an open-ended setting in which you can talk about anything, and LaMDA will try to steer the conversation back to dogs.
Google has implemented a set of filters to reduce the probability that LaMDA generates inappropriate or inaccurate responses, but those risks haven’t been eliminated completely (that’s not possible with current methods). The same is true of GPT-3, J1-Jumbo, and every other publicly available LLM.
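To give a rough idea of how such filtering works, here is a minimal sketch of the generate-then-filter pattern: sample several candidate replies, score each with a safety classifier, and drop those below a threshold. The blocklist scorer below is a toy stand-in for a trained classifier; none of this is LaMDA’s actual pipeline or API:

```python
# Toy stand-in for a learned safety classifier. A real system scores
# text with a trained model, not a word list.
BLOCKLIST = {"badword"}

def safety_score(text: str) -> float:
    """Return 0.0 if any blocked term appears, else 1.0."""
    words = {w.strip(".,!?") for w in text.lower().split()}
    return 0.0 if words & BLOCKLIST else 1.0

def filter_candidates(candidates: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only candidate replies rated at or above the threshold."""
    return [c for c in candidates if safety_score(c) >= threshold]

replies = ["A friendly, on-topic answer.", "An answer containing badword."]
print(filter_candidates(replies))  # -> ['A friendly, on-topic answer.']
```

In a real deployment, the surviving candidates would presumably also be ranked on quality and groundedness before one is shown to the user, which is why no filter of this kind can eliminate bad outputs entirely.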
Although the demos Google has set up aren’t finished products, they signal the company’s intention to eventually embed LaMDA in products like Search and Google Assistant. As I argued in a previous article, it’s only a matter of time before tech companies integrate LLMs into existing services, which is why we should carefully evaluate whether this tech is ready to be out in the world.
Final thoughts
Google is making strides toward deploying its emerging AI technologies in real-world, everyday products; that’s what AI Test Kitchen is about. But the company also claims to be concerned with safety and responsible development. They know LaMDA can be extremely useful, but it can also fail to meet users’ expectations and, even worse, cause harm in one way or another. That’s why they want feedback to improve it. I believe, as has happened with other LLMs before, that feedback alone won’t be enough to make LaMDA ready for production.
Still, talking to LaMDA can be an interesting and eye-opening experience, even for those of you who are tired of playing with GPT-3. Regardless of whether LaMDA surpasses the threshold Google may have set for it to qualify for production, you may be able to see for yourself that LaMDA is, indeed, not sentient.