
How Google’s LaMDA Resolved an Old Conflict in AI

Will two conflicting views always be in opposition? Or can they sometimes be resolved at a higher level?

In the movie Fiddler on the Roof, there is a debate at one point. After listening to the cases made, a listener agrees with the conclusions of both sides of the conflict. Someone points out that “they can’t both be right!” to which the agreeable listener replies, “You know, you are also right.”

Interestingly, the claim that two sides of an issue must be in opposition does not always hold. The two sides can be in apparent conflict and both be right. Sometimes, but not always. The classic example is the blind men and the elephant. After feeling the elephant’s leg, one blind man says the elephant is like a tree. After feeling the elephant’s tail, another says the elephant is like a rope. The blind men can argue, but both are right.

John Polkinghorne, Cambridge University physics professor turned Anglican priest, used this observation to reconcile apparent discrepancies between science and faith. As an example, he points out the debate in physics about whether light is a particle or a wave. The seemingly irreconcilable conflict was resolved by quantum mechanics, which showed that light has both particle and wave properties. Both sides were right.

Here’s another example. In Christianity, there is an ongoing debate about whether we are predestined (Calvinism) or have free will (the Arminian view). Are the two sides reconcilable? Some argue that the debate is resolved when we take perspective into account.

Most agree that God exists outside of time. So God has access to the whole timeline. He knows exactly where you and I will be and what each of us will be doing one year from now. This is predestination. You and I, on the other hand, are constrained to flow with time and are free to make choices. From our viewpoint, we have free will. These two different perspectives, some claim, provide a resolution at a higher level to the seeming conflict between free will and predestination.

A conflict arose in artificial intelligence (AI) in the 1980s. At that time, AI did not refer to neural networks but only, narrowly, to so-called expert systems. Expert systems, basically, queried human experts and coded their responses. Follow-up questions enabled the construction of decision trees to arrive at final answers. Neural networks, on the other hand, learn from lots of training data without any elaboration by experts.
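To make the contrast concrete, here is a minimal sketch in Python. The toy diagnosis problem and every name in it are hypothetical; the point is only that the expert system’s rules are written down by hand, while the perceptron learns its weights from labeled examples.

```python
# Expert-system style: rules elicited from a human expert and hand-coded
# as a small decision tree. No training data is involved.
def expert_system_diagnose(has_fever: bool, has_cough: bool) -> str:
    if has_fever:
        return "flu" if has_cough else "infection"
    return "healthy"

# Neural-network style: no hand-coded rules; a single perceptron learns
# the same mapping from labeled examples instead.
def train_perceptron(examples, epochs=100, lr=0.1):
    """Train one perceptron on (features, label) pairs."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict, then nudge the weights toward the correct answer.
            activation = bias + sum(w * x for w, x in zip(weights, features))
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Training data replaces the expert: (fever, cough) -> 1 means "flu".
data = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
weights, bias = train_perceptron(data)
```

Both arrive at the same answers here, but by very different routes: one from an expert’s head, the other from data.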

The neural network community and sister communities at the time wanted to separate their identity from expert systems, so they came up with “computational intelligence” as the name for their discipline. I wrote an editorial in 1993 explaining the difference between computational and artificial intelligence as it stood at that time.

As outlined in my book Non-Computable You, the battle between expert systems and neural network proponents in the 1980s was fierce. In the expert systems camp, Marvin Minsky (1927–2016) and Seymour Papert (1928–2016) wrote Perceptrons, a scathing assessment of neural networks first published in 1969 and reissued in an expanded edition in 1987. Minsky, at MIT, had clout. He played a big part in founding what is known today as the MIT Computer Science and Artificial Intelligence Laboratory. The conflict eventually dried up funding for both sides of the argument in the United States and Europe and led to what some call the first AI winter. Quoting again from Fiddler on the Roof, “If you spit in the air, it lands in your face.” Minsky and Papert spat into the air, and funding, including their own, evaporated.

Let’s now turn to Google’s impressive chatbot LaMDA. The acronym stands for Language Models for Dialog Applications. As its name indicates, the chatbot was designed specifically for dialog. That’s why dialog with LaMDA is so good. Dialog with humans is what LaMDA was trained to do.

In an informative paper, coauthored by over fifty people including Ray Kurzweil, LaMDA is described as “a family of Transformer-based neural language models specialized for dialog.” A transformer is a type of neural network used frequently in natural language processing.
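For readers curious about what happens inside a transformer, here is a minimal sketch of scaled dot-product self-attention, the operation at its core. It is illustrative only: real transformers such as LaMDA’s add learned query, key, and value projections, multiple attention heads, and many stacked layers.

```python
import math
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x has shape (sequence_length, model_dim)."""
    d = x.shape[-1]
    # Each token's vector serves as query, key, and value here.
    scores = x @ x.T / math.sqrt(d)                 # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ x                              # mix values by attention

tokens = np.random.randn(5, 8)       # five tokens, eight-dimensional embeddings
contextual = self_attention(tokens)  # same shape; each token now "in context"
```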

LaMDA is trained using a merger of human expertise with neural networks that makes the historical conflict between expert systems and neural networks look silly today. Here’s what happens. After pretraining, LaMDA is fine-tuned with human dialog. The humans, dubbed crowdworkers in the paper, had thousands of conversational back-and-forth dialogs with LaMDA. For example, “we collect 6400 dialogs with 121K turns by asking crowdworkers to interact with a LaMDA instance about any topic.” Crowdworkers were asked to interact with it “in a safe, sensible, specific, interesting, grounded, and informative manner.” The crowdworkers were also asked to rate the effectiveness of LaMDA’s responses. LaMDA was updated in accordance with heuristic measures of the responses such as sensibleness, specificity, groundedness, interestingness, and informativeness.
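The paper gives the full details, but the flavor of this rating-driven fine-tuning can be sketched in a few lines of Python. Everything below is hypothetical, including the function names, the 0-to-1 rating scale, and the 0.8 threshold; it illustrates only the idea of filtering dialogs by crowdworker scores on those qualities before further training.

```python
# Hypothetical sketch of rating-driven selection of fine-tuning data.
QUALITIES = ["sensible", "specific", "interesting", "grounded", "informative"]

def rate(ratings: dict) -> float:
    """Average the per-quality ratings (0.0 to 1.0) a crowdworker assigned."""
    return sum(ratings[q] for q in QUALITIES) / len(QUALITIES)

def select_for_finetuning(dialogs, threshold=0.8):
    """Keep only dialog turns whose responses scored above the threshold."""
    return [
        (context, response)
        for context, response, ratings in dialogs
        if rate(ratings) >= threshold
    ]

# Each entry: (dialog context, model response, crowdworker ratings).
collected = [
    ("Tell me about elephants.",
     "Elephants are the largest land animals.",
     {q: 0.9 for q in QUALITIES}),
    ("Tell me about elephants.",
     "I like turtles.",
     {q: 0.2 for q in QUALITIES}),
]
finetune_set = select_for_finetuning(collected)  # keeps only the first dialog
```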

LaMDA was not the first AI to combine topical experts with neural networks, but it is the most obvious example. So-called fuzzy expert systems have been reduced to practice in air conditioners, washing machines, vacuum cleaners, rice cookers, microwave ovens, clothes dryers, electric fans, and refrigerators. Parameters for such devices can be initialized heuristically and then fine-tuned like a neural network for optimal performance.
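Here is a minimal sketch of that idea for a hypothetical fan controller. The membership breakpoints and rule outputs below are invented for illustration: a human expert sets them heuristically, and they could later be tuned against measured data, much as a neural network’s weights are tuned during training.

```python
def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Degree of membership (0 to 1) in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fan_speed(temp_c: float, params) -> float:
    """Blend 'cool' and 'hot' rules into a fan-speed percentage."""
    cool = triangular(temp_c, *params["cool"])  # rule: cool -> slow fan (20%)
    hot = triangular(temp_c, *params["hot"])    # rule: hot -> fast fan (90%)
    total = (cool + hot) or 1.0
    return (cool * 20.0 + hot * 90.0) / total   # weighted defuzzification

# Heuristic initialization by a human "expert"; a tuner could later
# adjust these breakpoints against measured comfort data.
params = {"cool": (0.0, 15.0, 25.0), "hot": (20.0, 32.0, 45.0)}
print(fan_speed(28.0, params))  # a warm room gets a fast fan
```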

As initially disjoint disciplines in the large arena of AI studies mature, they broaden. Eventually they can intersect. Such is the case with the formerly disjoint AI areas of expert systems and artificial neural networks.

Here’s the scene from Fiddler on the Roof: [embedded video]

