Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Erik J. Larson


How Much of Your Income — and Life — Does Big Tech Control?

Erik J. Larson reviews Shoshana Zuboff’s groundbreaking book The Age of Surveillance Capitalism, on how big corporations make money by tracking your every move

In a review of Shoshana Zuboff’s groundbreaking The Age of Surveillance Capitalism (2019), computer science historian Erik J. Larson recounts a 1950s conflict of ideas between two pioneers, Norbert Wiener (1894–1964) and John McCarthy (1927–2011). Wiener warned, in his largely forgotten book The Human Use of Human Beings (1950), about “new forms of control made possible by the development of advancing technologies.” McCarthy, by contrast, coined the term “artificial intelligence” (1956), signaling his commitment to “the official effort to program computers to exhibit human-like intelligence.” His “AI Rules” view came to be expressed not in a mere book but in probably hundreds of thousands of media articles warning about, or celebrating, the triumph of AI over humanity. If you are skeptical…


How Erik Larson Hit on a Method for Deciding Who Is Influential

The author of The Myth of Artificial Intelligence decided to apply an algorithm to Wikipedia — but it had to be very specific

Here’s another interview (with transcript) at Academic Influence with Erik J. Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021). The book was #2 at Amazon in the Natural Language Processing category as of 11:00 am EST today. In this interview, Larson talks about how he developed an algorithm that uses Wikipedia to rank people by how much influence they have. That was one of the projects that got him thinking about the myths of artificial intelligence. It began with his reading of Hannah Arendt, a philosopher known for her work on totalitarianism: Excerpt (0:04:25.0) Erik Larson: And she has a whole philosophy of technology that I was reading as background to write The Myth of Artificial…


Here’s a Terrific Video Featuring Myth of AI Author Erik Larson

Larson, an AI professional, explains why the popular noise we hear about AI “taking over” is hype

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, here, here, and here. Here’s a terrific video interview that Larson did with Academic Influence. It was recorded before his book was released and gives a succinct summary of the book. It’s short (15 minutes, compared to the hour-long interview with Brookings described in my previous post). For the full video of this interview with Larson, along with a transcript, go to the Academic Influence website here. For a nice period-piece video on Joseph Weizenbaum’s ELIZA program, check out this YouTube video:


Why Computers Will Likely Never Perform Abductive Inferences

As Erik Larson points out in The Myth of Artificial Intelligence, what computers “know” must be painstakingly programmed

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, here, and here. Larson did an interesting podcast with the Brookings Institution through its Lawfare Blog shortly after the release of his book. It’s well worth a listen, and Larson elucidates in that interview many of the key points in his book. The one place in the interview where I wish he had elaborated further was the question of abductive inference (aka retroductive inference, or inference to the best explanation). For me, the key to understanding why computers cannot, and most likely never will, perform abductive inferences is the problem of the underdetermination of explanation by data. This may seem like a mouthful, but the idea is straightforward.…


Are We Spiritual Machines? Are We Machines at All?

Inventor Ray Kurzweil proposed in 1999 that within the next thirty years we will upload ourselves into computers as virtual persons, programs on machines

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, and here. The event at which I moderated the discussion about Ray Kurzweil’s The Age of Spiritual Machines was the 1998 George Gilder Telecosm conference, held that fall at Lake Tahoe (I remember baseball players Sammy Sosa and Mark McGwire chasing each other for the home run lead at the time). In response to the discussion, I wrote a paper for First Things titled “Are We Spiritual Machines?” — it is still available online at the link just given, and its arguments remain current and relevant. According to The Age of Spiritual Machines, machine intelligence is the next great step in the evolution of intelligence. That man…


A Critical Look at the Myth of “Deep Learning”

“Deep learning” is as misnamed a computational technique as exists.

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, and here. “Deep learning” is as misnamed a computational technique as exists. The actual technique refers to multi-layered neural networks, and, true enough, those multiple layers can do a lot of significant computational work. But the phrase “deep learning” suggests that the machine is doing something profound, beyond the capacity of humans. That’s far from the case. The Wikipedia article on deep learning is instructive in this regard. Consider the following image used there to illustrate deep learning: Note the rendition of the elephant at the top and compare it with the image of the elephant as we experience it at the bottom. The image at the bottom is rich,…
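The “multi-layered neural networks” the excerpt refers to can be sketched in a few lines. Here is a minimal forward pass, assuming nothing beyond NumPy; the layer sizes and weight scales are arbitrary illustrative choices, not taken from any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers
    return np.maximum(0.0, x)

# A "deep" network is just stacked layers of weighted sums and nonlinearities.
sizes = [4, 8, 8, 3]  # input -> hidden -> hidden -> output (illustrative)
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Pass the input through each hidden layer in turn
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    # Final layer is a plain linear map
    return x @ weights[-1] + biases[-1]

out = forward(np.ones(4))
print(out.shape)  # (3,)
```

The “depth” is nothing more than the number of stacked layers; nothing in the mechanics suggests profundity beyond repeated matrix arithmetic.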


Artificial Intelligence Understands by Not Understanding

The secret to writing a program for a sympathetic chatbot is surprisingly simple…

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my two earlier posts, here and here. On natural language processing, Larson amusingly retells the story of Joseph Weizenbaum’s ELIZA program, in which the program, acting as a Rogerian therapist, simply mirrors back to the human what the human says. Carl Rogers, the psychologist, advocated a “non-directive” form of therapy in which, rather than telling patients what to do, the therapist reflected back what they were saying, as a way of getting them to solve their own problems. Much like Eugene Goostman, which I’ve already mentioned in this series, ELIZA is a cheat, though, to his credit, its inventor Weizenbaum recognized from the get-go that it was a cheat.…
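The mirroring trick the excerpt describes really is surprisingly simple; it can be sketched in a few lines of Python. The patterns and word reflections below are illustrative stand-ins, not Weizenbaum’s original script:

```python
import re

# Swap pronouns so the patient's statement can be mirrored back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

def reflect(text: str) -> str:
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # A couple of ELIZA-style pattern rules, then a catch-all mirror.
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return f"You say: {reflect(statement)}. Tell me more."

print(respond("I am worried about my job"))
# Prints: How long have you been worried about your job?
```

The program “understands” nothing; it pattern-matches and echoes, which is exactly the cheat Weizenbaum himself acknowledged.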


Automated Driving and Other Failures of AI

How would autonomous cars manage in an environment where eye contact with other drivers is important?

Yesterday I posted a review here of philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. There’s a lot more I would like to say. Here are some additional notes, to which I will add in a couple of future posts. Three of the failures of Big Tech that I listed earlier (Eugene Goostman, Tay, and the image analyzer that Google lobotomized so that it could no longer detect gorillas, even mistakenly) were obvious frauds and/or blunders. Goostman was a fraud out of the box. Tay was a blunder that might be fixed, in the sense that its racist language could be mitigated through some appropriate machine learning. And the Google image analyzer — well, that was just pathetic: either retire the image…


Artificial Intelligence: Unseating the Inevitability Narrative

World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI

Back in 1998, I moderated a discussion at which Ray Kurzweil gave listeners a preview of his then forthcoming book The Age of Spiritual Machines, in which he described how machines were poised to match and then exceed human cognition, a theme he doubled down on in subsequent books (such as The Singularity Is Near and How to Create a Mind). For Kurzweil, it is inevitable that machines will match and then exceed us: Moore’s Law guarantees that machines will attain the needed computational power to simulate our brains, after which the challenge will be for us to keep pace with machines. Kurzweil’s respondents at the discussion were John Searle, Thomas Ray, and Michael Denton, and they were all to varying degrees critical of his strong…


Why Did a Prominent Science Writer Come To Doubt the AI Takeover?

John Horgan’s endorsement of Erik J. Larson’s new book critiquing AI claims stems from considerable experience covering the industry for science publications

At first, science writer John Horgan (pictured), author of a number of books including The End of Science (1996), accepted the conventional AI story: When I started writing about science decades ago, artificial intelligence seemed ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions. John Horgan, “Will Artificial Intelligence Ever Live Up to Its Hype?” at Scientific American (December 4, 2020) But that year, 1984, ushered in an AI winter, in which innovation stalled and funding dried up. By 1998, problems like non-recurrent engineering…