Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Erik J. Larson

The Singularity — When We Merge With AI — Won’t Happen

Futurist predictions depend on the assumption that the human brain is like a machine, says computer scientist Erik Larson. But it isn’t
Larson tells his EP podcast host that the real danger is that powerful AI in the hands of bad actors could bring down banking systems and cripple grids. Read More ›

Once a Supporter, Science Writer Airs Doubts About Free Will

ChatGPT has set John Horgan thinking about whether he is just "ChatGPT-Me" himself
Horgan’s underlying doubts about the reality of his free will and his mind, really, seem rooted in his passionate belief in Darwinian evolution. Read More ›

Against the Tyranny of Data

Computer scientist and tech entrepreneur Erik J. Larson is launching his own Substack channel dedicated to promoting human flourishing in the computer age
We are not machines and were made for more than dataistic input and output. It’s to that central idea that Larson is dedicating his new project. Read More ›

The Myth of Artificial Intelligence

Tech entrepreneur Erik J. Larson on why the AI hype is profoundly misplaced

In today’s featured COSM video, watch author Erik J. Larson discuss ideas underlying his book, The Myth of Artificial Intelligence, as well as what he is exploring in his next book, which focuses on the history of the 21st century so far. Here’s the summary of the book from Amazon: Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren’t really on the path to developing intelligent machines. In fact, we don’t even know where that path might be. A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape Read More ›

AI Should Be Less Perfect, More Human

Authors Angus Fletcher and Erik J. Larson point us toward a more sustainable future working alongside artificial intelligence

Artificial intelligence is fragile. When faced with the ambiguity of the world, it breaks. And when it breaks, our untenable solution is to erase ambiguity. This means erasing our humanness, which in turn breaks us. That’s the problem Angus Fletcher and Erik J. Larson address in their piece published this week in Wired. AI can malfunction at the mildest hint of data slip, so its architects are doing all they can to dampen ambiguity and volatility. And since the world’s primary source of ambiguity and volatility is humans, we have found ourselves aggressively stifled. We’ve been forced into metric assessments at school, standard flow patterns at work, and regularized sets at hospitals, gyms, and social-media hangouts. In the process, we’ve lost Read More ›

Harvard U Press Computer Science Author Gives AI a Reality Check

Erik Larson told COSM 2021 about real limits in getting machines that don’t live in the real world to understand it

The speaker told the audience that although computers can do many impressive things, they will never achieve artificial intelligence. Who is “they” in the sentence you just read? The audience or computers? You immediately know the answer. It’s computers, because we know that researchers are struggling to figure out how to endow computers with AI. It makes no sense to talk about an audience having artificial intelligence. You intuitively understand the meaning of “they” in the sentence without even having to think about it. What if the sentence had read: The speaker told the audience that although computers can do many impressive things, they will be sorry if they bought one of this year’s models. Again, it is obvious Read More ›

Erik Larson To Speak at COSM 2021, Puncturing AI Myths

A programmer himself, he is honest about what AI can and can’t do

If you’ve ever gotten the sense that we are all being played by the people marketing “Soon AI will think just like you or me!”, you may want to catch Erik J. Larson’s talk at COSM 2021 (November 10–12). Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021), is a computer scientist and tech entrepreneur. The founder of two DARPA-funded AI startups, he is, we are told, currently working on core issues in natural language processing and machine learning. He has written for The Atlantic and for professional journals and has tested the technical boundaries of artificial intelligence through his work with the IC2 tech incubator at the Read More ›

Isn’t It Time for an Artificial Intelligence Reality Check?

Why do we think we’re so close to artificial general intelligence (AGI) when there are so many obstacles to overcome?

The Singularity is coming! The Singularity is coming! If you’re getting tired of hearing that “strong AI” is just around the corner, you’re not alone. The Stephen Hawkings, Ray Kurzweils, and Elon Musks of the world have been putting humanity on notice with predictions of machines overtaking humans for decades. It’s either the dawn of utopia or the start of a nightmare, depending on who’s talking. And every time they’re issued, the media jumps on them, because being on the cusp of a new era of intelligent beings is news. What’s missing from these confident claims, however, is a realistic assessment of the problems that rank-and-file computer scientists wrestle with every day — namely, the problem of intelligence. In their Read More ›

Danaylov: Right on Technology, Wrong on AI

Danaylov's confidence in the future of AI super-intelligence is exaggerated

Our future is determined by the stories we tell ourselves. So says futurist Nikola Danaylov in his online series exploring the years and decades to come for humanity. In our previous posts, we introduced you to Danaylov and examined his perspective on science. Now we will turn to his treatment of technology and artificial intelligence. The Technology Story Like his perspective on science, Danaylov brings a balanced understanding to technology. Technology “is not an end-in-itself,” he says. “Instead, technology is merely a means-to-an-end, a tool.”  Jonathan Bartlett has also written about technology as a tool. In 2019, Elon Musk and Jack Ma shared a stage to debate the future of technology and artificial intelligence. Here’s what Bartlett had to say about it: For Ma, Read More ›

How Much of Your Income — and Life — Does Big Tech Control?

Erik J. Larson reviews the groundbreaking book Surveillance Capitalism, on how big corporations make money out of tracking your every move

In a review of Shoshana Zuboff’s groundbreaking Surveillance Capitalism (2019), computer science historian Erik J. Larson recounts a 1950s conflict of ideas between two pioneers, Norbert Wiener (1894-1964) and John McCarthy (1927–2011). Wiener warned, in his largely forgotten book The Human Use of Human Beings (1950), about “new forms of control made possible by the development of advancing technologies.” McCarthy, by contrast, coined the term “artificial intelligence” (1956), implying his belief in “the official effort to program computers to exhibit human-like intelligence.” His “AI Rules” view came to be expressed not in a mere book but in — probably — hundreds of thousands of media articles warning about or celebrating the triumph of AI over humanity. If you are skeptical Read More ›

How Erik Larson Hit on a Method for Deciding Who Is Influential

The author of The Myth of Artificial Intelligence decided to apply an algorithm to Wikipedia — but it had to be very specific

Here’s another interview (with transcript) at Academic Influence with Erik J. Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021). The book was #2 at Amazon as of 11:00 am EST today in the Natural Language Processing category. In this interview, Larson talks about how he developed an algorithm to rank people by the amount of influence they have, using Wikipedia. That was one of the projects that got him thinking about myths of artificial intelligence. It began with his reading of Hannah Arendt, a philosopher of totalitarianism: Excerpt (0:04:25.0) Erik Larson: And she has a whole philosophy of technology that I was reading as background to write The Myth of Artificial Read More ›

Here’s a Terrific Video Featuring Myth of AI Author Erik Larson

Larson, an AI professional, explains why the popular noise we hear about AI “taking over” is hype

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, here, here, and here. Here’s a terrific video interview that Larson did with Academic Influence. It was done before his book was released and gives a succinct summary of the book. It’s short (15 minutes, compared to the hour-long interview with Brookings described in my previous post). For the full video of this interview with Larson, as well as a transcript of it, go to the Academic Influence website here. For a nice period-piece video on Joseph Weizenbaum’s ELIZA program, check out this YouTube video:

Why Computers Will Likely Never Perform Abductive Inferences

As Erik Larson points out in The Myth of Artificial Intelligence, what computers “know” must be painstakingly programmed

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, here, and here. Larson did an interesting podcast with the Brookings Institution through its Lawfare Blog shortly after the release of his book. It’s well worth a listen, and Larson elucidates in that interview many of the key points in his book. The one place in the interview where I wish he had elaborated further was on the question of abductive inference (aka retroductive inference or inference to the best explanation). For me, the key to understanding why computers cannot, and most likely will never, be able to perform abductive inferences is the problem of underdetermination of explanation by data. This may seem like a mouthful, but the idea is straightforward. Read More ›

Are We Spiritual Machines? Are We Machines at All?

Inventor Ray Kurzweil proposed in 1999 that within the next thirty years we will upload ourselves into computers as virtual persons, programs on machines

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, and here. The event at which I moderated the discussion about Ray Kurzweil’s The Age of Spiritual Machines was the 1998 George Gilder Telecosm conference, which occurred in the fall of that year at Lake Tahoe (I remember baseball players Sammy Sosa and Mark McGwire chasing each other for home run leadership at the time). In response to the discussion, I wrote a paper for First Things titled “Are We Spiritual Machines?” — it is still available online at the link just given, and its arguments remain current and relevant. According to The Age of Spiritual Machines, machine intelligence is the next great step in the evolution of intelligence. That man Read More ›

A Critical Look at the Myth of “Deep Learning”

“Deep learning” is as misnamed a computational technique as exists.

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, and here. “Deep learning” is as misnamed a computational technique as exists. The actual technique refers to multi-layered neural networks, and, true enough, those multiple layers can do a lot of significant computational work. But the phrase “deep learning” suggests that the machine is doing something profound and beyond the capacity of humans. That’s far from the case. The Wikipedia article on deep learning is instructive in this regard. Consider the following image used there to illustrate deep learning: Note the rendition of the elephant at the top and compare it with the image of the elephant as we experience it at the bottom. The image at the bottom is rich, Read More ›

Artificial Intelligence Understands by Not Understanding

The secret to writing a program for a sympathetic chatbot is surprisingly simple…

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my two earlier posts, here and here. With natural language processing, Larson amusingly retells the story of Joseph Weizenbaum’s ELIZA program, in which the program, acting as a Rogerian therapist, simply mirrors back to the human what the human says. Carl Rogers, the psychologist, advocated a “non-directive” form of therapy where, rather than tell patients what to do, the therapist reflected back what the patient was saying, as a way of getting patients to solve their own problems. Much like Eugene Goostman, whom I’ve already mentioned in this series, ELIZA is a cheat, though to its inventor Weizenbaum’s credit, he recognized from the get-go that it was a cheat. Read More ›

Automated Driving and Other Failures of AI

How would autonomous cars manage in an environment where eye contact with other drivers is important?

Yesterday I posted a review here of philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. There’s a lot more I would like to say. Here are some additional notes, to which I will add in a couple of future posts. Three of the failures of Big Tech that I listed earlier (Eugene Goostman, Tay, and the image analyzer that Google lobotomized so that it could no longer detect gorillas, even mistakenly) were obvious frauds and/or blunders. Goostman was a fraud out of the box. Tay was a blunder that might be fixed, in the sense that its racist language could be mitigated through some appropriate machine learning. And the Google image analyzer — well, that was just pathetic: either retire the image Read More ›

Artificial Intelligence: Unseating the Inevitability Narrative

World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI

Back in 1998, I moderated a discussion at which Ray Kurzweil gave listeners a preview of his then forthcoming book The Age of Spiritual Machines, in which he described how machines were poised to match and then exceed human cognition, a theme he doubled down on in subsequent books (such as The Singularity Is Near and How to Create a Mind). For Kurzweil, it is inevitable that machines will match and then exceed us: Moore’s Law guarantees that machines will attain the needed computational power to simulate our brains, after which the challenge will be for us to keep pace with machines. Kurzweil’s respondents at the discussion were John Searle, Thomas Ray, and Michael Denton, and they were all to varying degrees critical of his strong Read More ›

Why Did a Prominent Science Writer Come To Doubt the AI Takeover?

John Horgan’s endorsement of Erik J. Larson’s new book critiquing AI claims stems from considerable experience covering the industry for science publications

At first, science writer John Horgan (pictured), author of a number of books including The End of Science (1996), accepted the conventional AI story: When I started writing about science decades ago, artificial intelligence seemed ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions. John Horgan, “Will Artificial Intelligence Ever Live Up to Its Hype?” at Scientific American (December 4, 2020) But that year, 1984, ushered in an AI winter, in which innovation stalled and funding dried up. By 1998, problems like non-recurrent engineering Read More ›