Internet Pollution — If You Tell a Lie Long Enough…
Large language models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space.

ChatGPT, Bing, Bard, and other large language models (LLMs) are undeniably astonishing. Initially intended to be a new-and-improved autocomplete tool, they can generate persuasive answers to queries, engage in human-like conversations, and write grammatically correct essays. So far, however, their main successes have been in providing entertainment for LLM addicts, raising money for fake-it-till-you-make-it schemes, and generating disinformation efficiently.

Earlier this year Jeffrey Funk and I predicted a potentially debilitating feedback loop for LLMs: as the internet they train on becomes increasingly polluted with LLM hallucinations and disinformation, LLMs may become increasingly prone to generating hallucinations and disinformation. I recently saw a concrete, personal example of this.

One embarrassment for the early versions of OpenAI’s ChatGPT was that it kept…