Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Emily Bender

Concept image to explain how AI hallucinations occur when an AI model generates false or illogical information that isn't based on real data or events, but is presented as fact.

Just When Human Reason is Most Productive — AI Makes Things Up

In Part 2, we see how the ability to handle only one type of truth limits AI: AI models are fundamentally untethered from reality.
The gap between human and machine ways of knowing can be thought of as the correspondence horizon. When AI crosses that horizon, it produces plausible bullshit. Read More ›
Concept of artificial general intelligence achieving human-level understanding

Machine Intelligence and Reasoning: We Are Not on a Path to AGI

AI guru François Chollet’s Abstraction and Reasoning Corpus (ARC) proves we’re not on a path to AGI
A reality check for LLMs is inevitable, and with OpenAI, “Project Strawberry,” and the new hype cycle — thanks to Chollet and others — it’s already here. Read More ›
Man standing in a high place, looking up in wonder at the Milky Way galaxy. Small silhouette of a man under the Milky Way and the magical starry sky. Concept of human smallness.

Time for Artificial General Intelligence? Not So Fast, OpenAI

OpenAI CEO is ambitious about the company's direction, but are his hopes profoundly misguided?

OpenAI CEO Sam Altman is ambitious about his company’s future, promising the world that it is developing “artificial general intelligence” (AGI) that will supposedly compete with human intelligence, per a recent Futurism piece. However, the ambition is misguided; more than that, it is simply delusional. AI is “not even close” to attaining the creativity and intelligence of human beings, and Altman shouldn’t be parading OpenAI products as if it were. Victor Tangermann writes, “In reality, however, LLMs have a very long way to go until they’re able to compete with the intellect of a human being — which is why several experts are calling foul on Altman’s recent blog post, calling it meaningless and misleading. After all, AGI…” Read More ›

Inner Life of Super Human AI

GPT-3 Is “Mindblowing” If You Don’t Question It Too Closely

AI analysts sound unusually cautious in pointing out that it doesn’t live up to a lot of the hype

Last week, Jonathan Bartlett wrote about the somewhat misleading buzz around OpenAI’s new third-generation software, GPT-3 (Generative Pretrained Transformer). And now—for a change—much of the industry has begun to seem socially distant, so to speak, from the reckless hype that has accompanied other releases. For example, one article starts off breathlessly: “The artificial intelligence tool GPT-3 has been causing a stir online, due to its impressive ability to design websites, prescribe medication, and answer questions… Its predecessor, GPT-2, made headlines for being deemed ‘too dangerous to release’ because of its ability to create text that is seemingly indistinguishable from that written by humans. While GPT-2 had 1.5 billion parameters which could be set, GPT-3 has 175 billion parameters. A…” Read More ›