Category: Artificial Intelligence
About the claim that chatbot Claude 3 showed self-awareness…
Has anyone noticed the resemblance between the conviction that an AI project thinks like a human and that extraterrestrials are visiting us?

Pink Dancing Hippos: Don’t Tell ChatGPT-4 Not to do Something
Can OpenAI fix this flaw in GPT-4? Absolutely. Time will tell if they do or not.

If AI Speeds Up Science, Does It Risk Squashing Some Parts?
A Yale anthropologist and a Princeton psychologist warn of the dangers of overreliance on AI in science

How Materialism Handicaps Us in Understanding AI’s Limits
Sabine Hossenfelder acknowledges AI’s limits, yet she is convinced that it will become conscious

In “Scientists warn of AI collapse,” theoretical physicist Sabine Hossenfelder warns, “We’ve all become used to AI-generated art in the form of text, images, audio, and even videos. Despite its prevalence, scientists are warning that AI creativity may soon die. Why is that? What does this mean for the future of AI? And will human creativity be in demand after all? Let’s have a look.” She discusses the problem that chatbots and other generative AI create; they end up reprocessing and degrading their own information, essentially eating their own tails: [1:28] The more AI eats its own output, the less variety the output has. For example, in a paper from November, a group of scientists from France tested this for… Read More ›
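To make that mechanism concrete, here is a minimal toy sketch in Python. It is my own illustration with an invented token "vocabulary," not the French team's experiment: each generation of the model is trained only on samples of the previous generation's output, so any token that happens never to be sampled is gone for good.

```python
# Toy model collapse: each "generation" trains only on the previous
# generation's samples. A token that is never drawn can never reappear,
# so the vocabulary (the model's "variety") can only shrink.
import random

random.seed(42)
corpus = list(range(100))  # generation 0: 100 distinct "tokens" of real data

for generation in range(1, 11):
    # The next generation's training data is just the previous model's output.
    corpus = random.choices(corpus, k=len(corpus))
    print(f"generation {generation:2d}: {len(set(corpus))} distinct tokens remain")
```

The distinct-token count never rises. Real generative models collapse more subtly than this, but the one-way loss of rare material is the same basic mechanism.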
Wrestling with AI: Making More and Better Disciples
AI may have "knowledge," but it lacks wisdom.

The Singularity — When We Merge With AI — Won’t Happen
Futurist predictions depend on the assumption that the human brain is like a machine, says computer scientist Erik Larson. But it isn’t

Sora: Life Is Not a Multiple-Choice Test
With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.

Programmer: How We Know Computers Won’t Surpass the Human Mind
Winston Ewert points out that we can only devise a “halting detector” less powerful than the ones our own minds have

Over a Cliff? It’s That Bad for Venture-Backed Startups?
Jeffrey Funk and Gary Smith think that much high-tech today is not producing value. Chatbots? Their “main successes have been in generating disinformation and phishing scams”

Google Gemini Presents a Past That Never Happened
You can't trust a bot to give you a history lesson, it turns out.

Is AI the Triumph of Left-Brained Thinking? What Follows?
Psychiatrist Iain McGilchrist argues that it is and asks us to consider what its cultural lean toward the “left brain” is doing to us

Iain McGilchrist, psychiatrist and author of The Matter With Things (Perspectiva 2021), defends the left-brain/right-brain distinction often made in psychology. But his view is far more careful and nuanced than what’s offered in the pop psych books on the flea market table. In an essay just published at First Things, which started out as a lecture delivered at the 2022 World Summit AI in Amsterdam, he warns against the growing AI dominance over our lives — which he interprets as left-brained: The things that used to alert us to the inadequacy of our reductionist theories are fading away. They were: the natural world; the sense of a coherent shared culture; the sense of the body as something we live… Read More ›
AI and Wall Street’s Hype Curve
Almost all new tech has a hype curve. Here are the stages.

Astrophysicist: Don’t Say That Chatbots “Hallucinate”
Adam Frank points out that human-type “hallucination” is not at all what drives a chatbot to claim that the Russians sent bears into space

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?
With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions and neither do they

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal — to make them think like people — elusive. This brings us to the second problem, which ended up spawning an entire field, known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it, they can’t in general explain to their designers or users why they made such-and-such a decision. They’re a black box; in other words, they are obstinately opaque to any attempts at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that, with image recognition tasks like facial recognition, the network can’t explain… Read More ›
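As a rough illustration of the "black box" point, here is a minimal sketch using standard scikit-learn tools; the toy dataset and the probe are my own choices, not anything from the post. A small neural network classifies well, yet the only "reason" it can offer for any decision is its weight matrices. A post-hoc probe such as permutation importance, one of the workarounds the Explainable AI field produces, ranks inputs by influence but still gives no conceptual explanation.

```python
# A small neural net decides accurately but cannot say why.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

print("accuracy:", net.score(X, y))      # the network decides...
print("weights:", net.coefs_[0].shape)   # ...but these numbers are its only "reason"

# Post-hoc probe: shuffle one feature at a time and measure the score drop.
# This ranks inputs by influence; it is still not a conceptual explanation.
result = permutation_importance(net, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```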