Geese Ascending From A Lake

Recently the news broke that a senior software engineer at Google had concluded that Google's AI chatbot, LaMDA, was sentient. LaMDA is an acronym for Language Model for Dialogue Applications; it is software built to interact (chat) with people intelligently. Software of this kind has existed for a long time, and its capability has improved significantly over the years.

Anyway, Google was severely displeased with this employee, placing him on paid administrative leave for violating the company's confidentiality policy, although it was probably silently pleased by the free publicity for the talents of its chatbot.

The Turing Test

Nevertheless, LaMDA cannot yet pass the "Turing Test", a test formulated by the British computer genius Alan Turing which requires that a human being, putting questions to both a machine and another human, should be unable to distinguish the machine from the human by their replies.
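To make the setup concrete, here is a minimal sketch in Python of a single round of such a test. The four callables (ask, human, machine, judge) are hypothetical stand-ins invented for illustration, not any real chatbot or API:

```python
import random

# A toy, single-round version of Turing's imitation game. All four
# callables are hypothetical stand-ins, invented for illustration.
def turing_test_round(ask, human, machine, judge) -> bool:
    """Return True if the judge correctly identifies the respondent."""
    respondent_is_machine = random.choice([True, False])
    question = ask()
    reply = machine(question) if respondent_is_machine else human(question)
    # The judge sees only the question and the reply, and must guess
    # whether the reply came from the machine.
    return judge(question, reply) == respondent_is_machine

# A machine that merely parrots the question back is trivially unmasked.
correct = turing_test_round(
    ask=lambda: "What does summer rain smell like?",
    human=lambda q: "Like wet dust and cut grass.",
    machine=lambda q: q,            # parrots the question back
    judge=lambda q, r: r == q,      # guesses "machine" when parroted
)
print("Judge guessed correctly:", correct)
```

A machine passes only if, over many such rounds, the judge can do no better than chance.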

Right now, websites use a kind of reverse "Turing test", the CAPTCHA, to distinguish computers from humans as a defence against fraud. A site may ask you, for example, to spot all the traffic lights in a grid of nine pictures, some of which include traffic lights.
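In essence, the server-side check amounts to comparing the tiles you clicked against the tiles known to contain the target object. A minimal sketch, in which the grid numbering and the target set are invented for illustration:

```python
# Sketch of a CAPTCHA-style check over a 3x3 image grid, tiles numbered 0-8.
# TARGET_TILES is an invented example of the tiles showing traffic lights.
TARGET_TILES = {1, 4, 7}

def passes_challenge(selected_tiles: set[int]) -> bool:
    """Pass only if the user selected exactly the traffic-light tiles."""
    return selected_tiles == TARGET_TILES

print(passes_challenge({1, 4, 7}))  # True: a human-like answer
print(passes_challenge({0, 4}))     # False: challenge failed
```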

Lemoine, the engineer in question, believed that LaMDA had achieved consciousness. After he tried to obtain a lawyer to represent LaMDA and complained to Congress that Google was behaving unethically, he was suspended.

Google has said that it disagrees with Lemoine's claims. "Some in the broader AI community are considering the long term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," the company said in a statement. Many scholars and practitioners in the AI community have said similar things in recent days.