Mind Matters Natural and Artificial Intelligence News and Analysis

Language experts: Three reasons AI doesn’t model human language


Last March, three language experts wrote a letter to the top science journal Nature arguing that the way chatbots (large language models, or LLMs) are developed — however successful for that purpose — should not be confused with the way humans acquire language:

First, LLMs are probabilistic models of externalized language data, whereas human language is truly generative: it yields an unbounded number of hierarchically structured expressions (M. B. A. Everaert et al. Trends Cogn. Sci. 19, 729–743; 2015). Second, language acquisition in infants does not depend on massive amounts of input data, but includes knowledge of language’s generative nature. Therefore, children can acquire any language rapidly with minimal linguistic input (C. Yang et al. Neurosci. Biobehav. Rev. 81, 103–119; 2017).

Bolhuis JJ, Crain S, Fong S, Moro A. Three reasons why AI doesn’t model human language. Nature. 2024 Mar;627(8004):489. doi: 10.1038/d41586-024-00824-z. PMID: 38503912.

The third issue they note is that LLMs can generate “impossible” languages — ones that violate principles which, as far as researchers are aware, govern all human languages. LLMs can learn these impossible languages just as readily as an actual human language, and they cannot tell the difference (A. Moro et al. Cortex 167, 82–85; 2023).

The researchers have nothing against chatbots (LLMs); they simply want to be clear that there are important differences between a chatbot’s statistical reproduction of language and actually understanding and speaking a language.
