I Robot, I Formatory
Recently the news broke that a senior software engineer at Google had concluded that Google’s AI chatbot, LaMDA, was “sentient.” LaMDA is an acronym for Language Model for Dialogue Applications; it is software designed to “chat” intelligently with people. Software of this kind has existed for a long time, and its capability has improved significantly year by year. It has matured so much that it is even fooling software engineers.
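Indeed, such programs go back at least to Joseph Weizenbaum’s ELIZA in the mid-1960s. To give a sense of how little machinery the early chatbots needed, here is a minimal ELIZA-style sketch in Python; the patterns and canned responses are invented for illustration and are not taken from any real system:

```python
import random
import re

# Illustrative ELIZA-style rules: a pattern plus canned response templates.
# These particular patterns and replies are invented for this sketch.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\bbecause (.*)", re.I),
     ["Is that the real reason?", "What else could explain it?"]),
]
DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reply(utterance: str) -> str:
    """Return a canned response by matching simple text patterns.

    There is no understanding here, only string manipulation, which is
    why such programs can seem intelligent in short exchanges.
    """
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print("Type 'quit' to stop.")
    while True:
        line = input("> ")
        if line.strip().lower() == "quit":
            break
        print(reply(line))
```

Modern systems like LaMDA replace the hand-written rules with statistical language models trained on vast amounts of text, but the effect on the user, a convincing surface of conversation, is of the same kind.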
Google was severely displeased with this employee, placing him on paid administrative leave for violating the company’s confidentiality policy—although it was probably silently very pleased by the free publicity for the talents of its chatbot.
The Turing Test
Nevertheless, neither this chatbot nor any other software program can yet pass the “Turing Test,” formulated by the British computer genius Alan Turing. To pass, software must convince a human interrogator, through questions and answers alone, that they are conversing with another human being.
Right now, websites use a kind of “Turing test” to distinguish computers from humans, defending against fraud by bots posing as people. You have probably encountered such tests. One may ask you, for example, to pick out every picture containing a bicycle from a grid of nine images. Google’s chatbot would never pass such a test, at least not at the moment.
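Such challenges are known as CAPTCHAs, short for “Completely Automated Public Turing test to tell Computers and Humans Apart.” As a toy sketch of the idea, the server-side check amounts to comparing the user’s selected tiles against a known answer; the grid indices below are invented for illustration:

```python
# Toy server-side check for a nine-image challenge.
# The "correct" tiles are invented for illustration; a real CAPTCHA
# service generates and verifies challenges dynamically.
BICYCLE_TILES = {1, 4, 7}  # indices (0-8) of tiles containing a bicycle

def passes_challenge(selected_tiles: set) -> bool:
    """Pass only if the user selects exactly the bicycle tiles.

    Spotting the bicycles is easy for human vision but was, until
    recently, hard to automate reliably.
    """
    return selected_tiles == BICYCLE_TILES

print(passes_challenge({1, 4, 7}))  # True: a human-like answer
print(passes_challenge({1, 4}))     # False: a missed bicycle
```

The verification itself is trivial; the whole burden of the test falls on the perceptual task, which is exactly where machines have historically lagged behind people.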
A faction of computer professionals believes that, eventually, computers will be capable of outdoing human beings. And, truth be told, software already surpasses human beings in intellectual games such as chess and Go. In recent years it has become as good as or better than humans at pattern recognition, in both sound and images. So the “science-fiction” question naturally arises:
Will computers replace us?
The limits of computer capability are reasonably well defined by the powers of the formatory apparatus, the primary manifestation of the “human protein-computer.” When we repeat any activity, it quickly becomes habitual. (Gurdjieff suggested that three repetitions were usually enough.) We program ourselves by example. A robot could duplicate all of that programming activity.
Right now, robots can exhibit activity equivalent to the mechanical part of the intellectual center and to the mechanical part of the moving center. In theory, they could equal or surpass our ability to receive sensations and impressions.
Robots are unlikely ever to manifest feelings, not because software cannot imitate them, but for a simpler reason: what would be the point?
There’s little doubt that computer technology can take us to this point. But it can go no further. Ultimately robots may become very sophisticated mechanisms, but they cannot be anything but mechanisms. They live in the realm of mechanisms—the realm from which people in The Work are trying to escape.