Robert J. Marks, computer scientist and host of the Mind Matters podcast, has a new article out in Newsmax on AI’s inability to meaningfully replicate itself or build on its own programming without a human hand. He calls the process “AI inbreeding” and notes that it always ends in “idiocy,” writing,
But what happens if someday much of the content of the web is written by generative AI? Many web scrapings will be from LLMs and not creative humans. The generated material will be inbred and suffer from early signs of model collapse.
Unchecked, the web might contain a lot of content that resembles a blubbering idiot.
LLMs like ChatGPT produce spectacular results. Under the hood, LLMs impressively manipulate relational syntax to do their magic. They learn arrangements of words and phrases to create well-formed documents.

Humans, on the other hand, are motivated by semantics – the meaning of words and phrases. We pay attention to syntax, but the meaning of the message is of primary importance.

– Robert J. Marks, “AI Inbreeding Produces Artificial Idiocy,” Newsmax.com
While AI is impressive, notes Marks, it needs the creative intervention of human beings to stay fresh and relevant. Because it cannot understand the meaning of what it generates, it will eventually propagate nonsense. If you put something like GPT-4 and DALL-E in “conversation,” the result will ultimately be model collapse. Marks links to an example of this in the text. Read the whole article here.
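The “inbreeding” dynamic Marks describes can be illustrated with a toy sketch (this is my illustration, not Marks’s): pretend the “model” is just a Gaussian fit to data, and each new generation is trained only on samples produced by the previous generation. With no fresh human data entering the loop, the fitted distribution drifts and its variance tends toward zero over many generations – the statistical analogue of the tails (rare, creative content) disappearing. The parameters below (generation count, sample size, seed) are arbitrary choices for the demo.

```python
import random
import statistics

def collapse_simulation(generations=300, sample_size=10, seed=0):
    """Toy model-collapse loop: each 'generation' of the model is fit
    only to samples drawn from the previous generation's model.

    The 'model' here is just a Gaussian (mean, stdev); real LLMs are far
    more complex, but the loss-of-diversity dynamic is analogous."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "human-written" data distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        # Train the next model only on the previous model's output.
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        variances.append(sigma ** 2)
    return variances

variances = collapse_simulation()
print(f"generation 0 variance:   {variances[0]:.3f}")
print(f"generation 300 variance: {variances[-1]:.3g}")
```

Run repeatedly with different seeds and the fitted variance almost always collapses toward zero: each refit loses a little of the tail, and with no outside data there is nothing to restore it. That is the “blubbering idiot” endpoint in miniature.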