Influencer Marketing, Concept of Information and Influence Propagation in Social Networks

Erik Larson To Speak at COSM 2021, Puncturing AI Myths

A programmer himself, he is honest about what AI can and can’t do

If you’ve ever gotten the sense that we are all being played by the people marketing “Soon AI will think just like you or me!”, you may want to catch Erik J. Larson’s talk at COSM 2021 (November 10–12). Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021), is a computer scientist and tech entrepreneur. The founder of two DARPA-funded AI startups, he is, we are told, currently working on core issues in natural language processing and machine learning. He has written for The Atlantic and for professional journals and has tested the technical boundaries of artificial intelligence through his work with the IC2 tech incubator at the University of Texas at Austin.

But now here’s the fun part: Larson hit on a method for deciding who is influential, especially in academia. His algorithm, applied to Wikipedia, needed to be carefully constructed. Influence depends on subtler measures than, say, one post going viral and collecting a million views. What counts as influence?

… A two-headed kitten may attract a lot of attention without having any influence. As Larson puts it, “The problem is that you can have extremely influential people who are not, by and large, popular in modern or in broad media terms, right? Like you can have somebody that’s an expert in string theory or something, but they’re not sort of… They don’t have a huge Instagram following.” [08:52.00 EL]. No, but they may dominate a field in science that transmits basic ideas about our universe to the public. Sometimes such people, Stephen Hawking for example, are well-known. Often they are not. Mind Matters News (April 30, 2021)

Finding out who they are would help us interpret cultural changes better.
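Larson’s actual algorithm is not spelled out here, but the general idea of scoring influence by who links to or cites whom, rather than by raw view counts, can be sketched with an off-the-shelf link-analysis measure. The toy graph, the node names, and the choice of PageRank (via the networkx library) below are illustrative assumptions, not a description of Larson’s method:

```python
# Illustrative only: a link-based influence score (PageRank) on a made-up
# citation/link graph. An edge A -> B means page A links to (or cites) page B.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("pop_science_blog", "string_theorist"),
    ("textbook_page", "string_theorist"),
    ("string_theorist", "foundational_paper"),
    ("viral_kitten_page", "pop_science_blog"),
])

# PageRank weighs a page by the rank of the pages that link to it, so a node
# with a few "heavy" inbound links can outrank a merely popular one.
scores = nx.pagerank(G, alpha=0.85)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:20s} {score:.3f}")
```

On this toy graph the string_theorist node scores well above viral_kitten_page, even though the kitten page would win any view-count comparison; that is the rough distinction between influence and popularity the interview is drawing.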

Some items from reviews of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do give us a glimpse of Larson’s book:

A useful starting point is to first get a grasp on the problems with AI as it stands today. Computer intelligence tends to be very “narrow” in scope, and this is by design: a chess-playing AI, because of its high degree of specialization, can’t also play checkers. An extreme case of this is what the author calls the “brittleness” problem: not only can a narrow AI not accomplish other tasks, but even slight deviations in the setup, which wouldn’t even register to humans, messes up computer output completely. Consider an AI that can play the game Breakout perfectly, which requires moving a paddle back and forth to bounce a ball back to the bricks. Moving the paddle a few pixels closer to the bricks wouldn’t drastically affect the performance of a human player, yet do the same for an AI and its “entire system falls apart”. The same goes with image detection software: they usually have a very high rate of success, but just changing a few pixels here and there messes up the system completely.

Hassan uz-Zaman, “Can computers think like humans? Reviewing Erik Larson’s ‘The Myth of Artificial Intelligence’” at Medium (June 13, 2021)
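To make the brittleness point concrete, here is a minimal, invented sketch (not an example from Larson’s book): a hard-coded linear “classifier” that answers one way on an image, then flips its answer after a handful of pixels are nudged by an amount no person would notice. The weights, the image, and the labels are all fabricated for illustration.

```python
# Toy sketch of the "brittleness" problem described in the review above.
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "trained" linear classifier over a 28x28 grayscale image.
w = rng.normal(scale=0.05, size=784)     # made-up weights
image = np.full((28, 28), 0.5)           # a plain gray test image
b = 0.01 - image.flatten() @ w           # place the image just barely on the "cat" side

def predict(img):
    """Hard-coded linear decision rule: 'cat' if the score is positive."""
    return "cat" if img.flatten() @ w + b > 0 else "dog"

print("original :", predict(image))      # cat

# Nudge only the five most weight-sensitive pixels by 0.1 each:
# visually negligible, but enough to cross the decision boundary.
perturbed = image.copy()
idx = np.argsort(np.abs(w))[-5:]
perturbed.flat[idx] -= 0.1 * np.sign(w[idx])
print("perturbed:", predict(perturbed))  # dog
```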

Larson’s central chapters concern a problem that can be illustrated with an example from a paper of 1979 by the philosopher John Haugeland: Jones, entering, says: “I left my raincoat in the bathtub because it was wet”. Smith effortlessly understands Jones to have said that his raincoat was wet, not that the bathtub was, although his utterance of “it” might grammatically have either referent. How does Smith do this? And how could a computer do it, as it must if it is to engage in normal conversation? Deductive logic seems not to be the tool for this job.

Although computers are exceedingly good at applying deductive rules, those rules can only generate lines of reasoning that are as watertight as mathematical proofs. That is not what is needed here: Jones was probably talking about the wetness of his raincoat, but there is no deductive guarantee. Nor is the problem made tractable by finding patterns in large sets of data. Computers are good at that, too, but the statistics may point in the wrong direction: the wetness of bathtubs may have been talked about more frequently than the wetness of raincoats.

Christopher Mole, “Famous wet raincoat” at Times Literary Supplement (June 25, 2021)
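The reviewer’s point that statistics can point in the wrong direction can be shown with a deliberately skewed mini-corpus. The corpus, the counting rule, and the “resolver” below are invented for illustration; no real language system works this crudely, but the failure mode is the same in spirit:

```python
# Toy sketch: resolving "it" in "I left my raincoat in the bathtub because
# it was wet" by co-occurrence counts alone, over a corpus where wet bathtubs
# happen to be discussed more often than wet raincoats.
from collections import Counter

corpus = [
    "the bathtub was wet after the shower",
    "she scrubbed the wet bathtub",
    "the bathtub stays wet for hours",
    "he hung up the wet raincoat",
]

def cooccurrence(word, candidate):
    """Count sentences in which both words appear."""
    return sum(1 for s in corpus if word in s.split() and candidate in s.split())

candidates = ["raincoat", "bathtub"]
counts = {c: cooccurrence("wet", c) for c in candidates}
print(counts)                       # {'raincoat': 1, 'bathtub': 3}

# A purely statistical resolver picks the antecedent most often seen with
# "wet": here that is "bathtub", the reading a human effortlessly rejects.
print(max(counts, key=counts.get))  # bathtub
```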

If you want to hear and talk to Larson at COSM 2021, you can save hundreds of dollars by registering before October 1, 2021.

Note: Peter Thiel, Carver Mead, James Tour and Babak Parviz will all be there in person too, along with other tech people who make things happen.


You may also wish to read:

How Erik Larson hit on a method for deciding who is influential. The author of The Myth of Artificial Intelligence decided to apply an algorithm to Wikipedia — but it had to be very specific. Many measures of influence depend on rough measures like numbers of hits on pages. Larson realized that influence is subtler than that.

and

New book massively debunks our “AI overlords”: Ain’t gonna happen. AI researcher and tech entrepreneur Erik J. Larson expertly dissects the AI doomsday scenarios. Many thinkers have tried to stem the tide of hype but, as an information theorist points out, no one has done it so well.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
