In “Hyping Artificial Intelligence Hinders Innovation” (podcast episode 163), Andrew McDiarmid interviewed Erik J. Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021), on the way “Machines will RULE!” hype discredits — and distracts attention from — actual progress in AI.
Erik Larson has founded two DARPA-funded artificial intelligence startups. In the book he urges us to go back to the drawing board with AI research and development.
This portion begins at 01:59 min. A partial transcript with notes, Show Notes, and Additional Resources follow.
Andrew McDiarmid: Can you paint a picture first for us of what the AI landscape looks like today and why it’s not heading in the right direction?
Erik Larson: Basically, starting around the year 2000, AI went from the old way of doing things to a kind of data-driven way. We’re dealing now with what I call Big Data AI, the idea that AI works best when you have massive data sets.
It put certain things on steroids: image recognition, face recognition on Facebook, personalizing news feeds. But it’s also resulted in AI being done as induction, basically.
In the old way of doing AI, we would write rules. I actually came out of this camp and then ended up working in the modern AI … the year 2000. At that time, it was just taking off. Google was pretty much unknown except for little pockets in California and there was no Facebook, there was no Web 2.0 in the year 2000, and AI was in one of these winter periods where people had lost a little bit of confidence, funding dried up, and so on.
So 20 years ago when I started, the field was still trying to basically write rules or use what I call in the book a deductive approach, right? Deduction in philosophy is well understood.
The classic example is
All men are mortal.
Socrates is a man.
So Socrates is mortal.
It’s a way of specifying premises and then reaching a conclusion. Those inferences are rule-based because you actually write or specify the knowledge and then you reach the conclusion that way. At the first company I worked at, we were still using that method. In fact, it’s a famous AI company in Austin, Texas.
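Note: The rule-based approach Larson describes can be sketched in a few lines of code. The facts and rule below encode the Socrates syllogism from above; the forward-chaining loop is an illustrative toy, not the actual machinery of any company Larson worked for.

```python
# A minimal sketch of rule-based (deductive) inference.
# Facts are (predicate, subject) pairs; a rule says "premise implies conclusion."
facts = {("man", "Socrates")}          # Socrates is a man.
rules = [("man", "mortal")]            # All men are mortal.

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# The pair ("mortal", "Socrates") is derived because a human wrote the rule,
# not because the system learned anything from data.
```

The knowledge lives entirely in hand-written rules, which is exactly the property the data-driven methods later abandoned.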
But the web had taken off a few years earlier. I mean I think the first commercial company was on the web in 1994. But the traffic really started to accelerate in terms of the growth of pages on the World Wide Web in 1995 to 2000. So all of a sudden there was all this data.
Current AI technology actually dates back to the 1950s; the big change after 2000 was more data
Erik Larson: The old methods… have been around for a long time. Deep learning is based on something called the neural network and a neural network actually is one of the older machine learning algorithms in AI. I think the original neural networks were called perceptrons and they actually emerged in the 1950s pretty much at the inception of the field itself. The learning algorithms weren’t new going into this next decade in the 2000s, but the amount of data available to feed them was just exponentially growing because web pages are basically text and images, right?
Deep Learning for instance is basically a stacked neural network, so you take more than one neural network and you stack them on top of each other to create a deep hidden structure for the learning algorithm.
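Note: The 1950s-era perceptron Larson mentions is simple enough to write out in full. The sketch below trains a single unit to reproduce logical OR; deep learning stacks many layers of such units, but this one-neuron ancestor shows the basic learning rule. The data and learning rate are illustrative choices.

```python
# A tiny perceptron: one "neuron" learning a linear decision rule from examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # classic perceptron update:
            w[0] += lr * err * x1          # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from its four possible inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(*x) for x, _ in data])  # → [0, 1, 1, 1]
```

Nothing here is new to the 2000s; what changed, as Larson says, is the volume of data available to feed such learners.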
So we ended up with what we used to call empirical techniques or learning techniques with 10x, 100x, 1,000x, 1,000,000x the amount of data — and all of a sudden they were doing things that they couldn’t do before. So there was a period where the AI community kind of slowly, and then really all at once, dropped the rule-based approaches and adopted the empirical methods or the machine learning approaches. So the strength is that you can do a lot with data. The weakness is that you need a lot of data to do anything, basically, right?
Note: Larson offered a classic example of deductive reasoning above. Traditionally, computers were programmed to do deductive reasoning. Inductive reasoning combines observations with information to reach a conclusion. For example, “if you review the population information of a city for the past 15 years, you may observe that the population has increased at a consistent rate. If you want to predict what the population will be in five years, you can use the evidence or information you have to make an estimate.” – Indeed Career Guide
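Note: The population example above is just trend-fitting, which can be sketched directly. The figures below are invented for illustration; the point is that the "model" is nothing but a generalization from past observations.

```python
# Inductive inference as in the Indeed example: observe a trend, fit a model,
# extrapolate. Fifteen years of (made-up) population figures, growing 2,000/year.
years = list(range(2010, 2025))
pops = [100_000 + 2_000 * i for i in range(15)]

# Least-squares slope and intercept, in plain Python.
n = len(years)
mx = sum(years) / n
my = sum(pops) / n
slope = sum((x - mx) * (y - my) for x, y in zip(years, pops)) / \
        sum((x - mx) ** 2 for x in years)
intercept = my - slope * mx

def predict(year):
    return slope * year + intercept

# Extrapolate five years past the last observation.
print(round(predict(2029)))  # → 138000
```

The conclusion is only as good as the assumption that the observed trend continues — the characteristic risk of induction.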
Andrew McDiarmid: Yeah. The switch from deductive to inductive was basically to deal with the amounts of data that we were acquiring?
Erik Larson: Basically yes. Empirical approaches (the inductive approaches) started to show immense promise because there were huge data sets to feed them.
Andrew McDiarmid: Okay, so was the next progression…
Erik Larson: I’ll just give you an example. On Facebook, if you want to personalize your newsfeed, that means that basically it’s going to be looking at what you’re looking at, what news you’re clicking on. By looking at lots of prior examples, it comes up with what’s called a model, a predictive model, of what you’re likely to want in the future. So that’s the AI system that’s actually personalizing your news. Inductively, it’s looking at what you will prefer, what your preference will be in the future. That’s basically how all of the AI today works on the web. You could go to other examples but that’s the basic model for how AI works today.
My field is natural language processing, information extraction. So I deal with text, how to extract information from text, how to classify text. Everyone sort of went from this old way of doing AI to this data hungry way of doing AI because those methods worked better basically.
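Note: Larson’s newsfeed example can be reduced to a toy: tally which topics a user clicked in the past and rank future stories by those learned preferences. The click history below is invented, and real recommender systems are vastly more elaborate, but the inductive shape — past behavior predicting future preference — is the same.

```python
from collections import Counter

# A user's (hypothetical) click history.
clicks = ["sports", "tech", "sports", "politics", "sports", "tech"]

model = Counter(clicks)                    # "training": tally past behavior
total = sum(model.values())
prefs = {topic: count / total for topic, count in model.items()}

def rank_stories(candidates):
    """Order candidate story topics by the user's inferred preference."""
    return sorted(candidates, key=lambda t: prefs.get(t, 0.0), reverse=True)

print(rank_stories(["politics", "tech", "sports", "weather"]))
# → ['sports', 'tech', 'politics', 'weather']
```

No rule anywhere says "this user likes sports"; the ordering falls out of the data, which is Larson’s point about induction.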
Andrew McDiarmid: Okay, so we have this landscape with largely inductive reasoning and inference. We have tons of data, but we’re nowhere close to the general intelligence that we’re being promised by futurists. We’ll touch on that as we go. Another basic question, how did you get interested in AI? Did you always look like you were going to study that in school and have that as a job?
How did Erik Larson go from philosophy and math to computer science?
Erik Larson: I started out studying philosophy and math actually. Those were my two majors as an undergraduate, and I actually didn’t take very much computer science as an undergraduate… I started in a PhD program in philosophy. And then I found myself getting interested in the core philosophy of AI issues, like can a machine actually show, simulate, or reproduce human intelligence? What are the limits of machines? And there’s all kinds of interesting explorations and inroads into those questions in math and in computer science and in philosophy.
Erik Larson: So I started really focusing on the philosophy of AI as a PhD student in philosophy, and then at some point… frankly, the true answer is that my wife got pregnant with Brooke, our first child who is now 21, and I was in a philosophy program making pennies teaching as a teaching assistant.
So I basically taught myself how to program and ended up getting a job — which was a huge break for me — at Cycorp and transitioning from an academic into a professional computer scientist…
Andrew McDiarmid: So if you hadn’t taken the philosophy classes, you might be a millionaire now, because you’d have the math, you’d get into the tech and you’d make millions. But you took the philosophy, you decided to think about it as well.
Erik Larson: Well I did start two companies and sell them, so –
Andrew McDiarmid: That’s right, you are a tech entrepreneur as well, and that is cool. That’s what’s cool. One of the cool things about your voice is you’re not just, “Hey, we should watch out for this stuff!” You’re working in it, you’re innovating in it, you’re not afraid of what’s to come, but you want to caution people and make sure we’re on the right path and doing it the right way.
Next: One really important way minds differ from machines
Here is the whole discussion:
- How AI changed — in a very big way — around the year 2000. With the advent of huge amounts of data, AI companies switched from using deductive logic to inductive logic. Erik Larson, author of The Myth of Artificial Intelligence (Harvard 2021), explains the immense power using inductive logic on Big Data gave to Big Tech firms.
- Did Alan Turing’s change of heart set AI on the wrong path? Erik Larson, author of The Myth of Artificial Intelligence, thinks Turing lost track of one really important way minds differ from machines. Much interaction between humans requires us to understand what is being said and it is not clear, Larson says, how to give AI that capability.
- Why Big Data can be the enemy of new ideas. Copernicus could tell us how that works: Masses of documentation entrench the old ideas. Erik Larson, author of The Myth of Artificial Intelligence (2021) notes that, apart from hype, there is not much new coming out of AI any more.
- Understanding the de facto Cold War with China. High tech is currently a battlefield between freedom and totalitarianism. At a certain point, Andrew McDiarmid thinks, it’s time to just turn it all off. But then, what’s left?
You may also wish to read: Harvard U Press Computer Science author gives AI a reality check. Erik Larson told COSM 2021 about real limits in getting machines that don’t live in the real world to understand it. Computers, he said, have a very hard time understanding many things intuitive to humans and there is no clear programming path to changing that.
- 00:44 | Introducing Erik Larson
- 01:59 | What is the AI Landscape?
- 04:03 | How did Erik become interested in AI?
- 12:39 | Mind and Machine
- 16:40 | The Simplified World
- 20:48 | Different Types of Reasoning and AI
- 29:53 | Lessons from the Past
- 34:02 | The Human Brain Project
- 38:23 | AI in the Future
- 42:27 | AI and Big Tech
- 53:58 | Turn it Off
- 57:41 | Stuck in the Modern World
- 58:51 | Human Exceptionalism
- Buy Erik Larson’s book: The Myth of Artificial Intelligence.
- Andrew McDiarmid at Discovery.org
- Erik Larson at Discovery.org
- The difference between deductive, inductive, and abductive reasoning.
- The Turing Machine
- The Turing Test
- The Human Brain Project
- Buy Shoshana Zuboff’s book: The Age of Surveillance Capitalism
- Buy Jaron Lanier’s Ten Arguments for Deleting Your Social Media Accounts Right Now