
Understanding the de Facto Cold War With China

High tech is currently a battlefield between freedom and totalitarianism

In “Hyping Artificial Intelligence Hinders Innovation” (podcast episode 163), Andrew McDiarmid interviewed Erik J. Larson, programmer and author of The Myth of Artificial Intelligence (Harvard University Press, 2021), on the wrong path he thinks some analysts are taking regarding what machines can and can’t do. Plus, they discussed Alan Turing’s change of heart. Then they looked at the critical fact that Big Data can easily be the enemy of new ideas. And now, the future …?

This portion begins at roughly 38:23 min. A partial transcript, Show Notes, and Additional Resources follow.

They began with: How do we determine that we are not simply manipulated by social media? Larson recommended a book by computer scientist Jaron Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now (2018).

Erik Larson: He’s got some other stuff out. The first book I think he wrote is one of his best; it’s called You Are Not a Gadget [2010], and it’s more of a deeper dive philosophically. In Ten Arguments, he … never passively let algorithms determine what you’re doing next. In other words, actively search for the movie and do the research like you said, right? So it’s just a way of stopping this seemingly inevitable slide into being these passive sort of data points for profit-making companies in California.

Andrew McDiarmid: And that’s honestly something I want to help people do. Don’t be passive with your tech, with big tech, and with these algorithms; be active. Don’t be afraid to step outside the model. Make mistakes, be accidental, yeah. That whole free thinking and thinking outside the box, we need a lot of that.

I was chuckling the other day, I saw Facebook experimenting with a select group where they’ll facilitate prayer requests. Have you heard of this?

Erik Larson: No.

Andrew McDiarmid: Obviously, if they’ve gathered the data that you might be a faith believer or religious of some sort, they’ll say, “Hey, do you have prayer requests? We’ll help you share them.” Which is sort of funny because obviously they only care about us connecting so that they can make money with advertising. But what’s ironic is that that is how prayer works. Not only are we bringing the prayer to the Creator, we’re also sharing it with the community, and they can lift it up in prayer as well. So it’s just kind of a funny example of how AI is being used in the religious sense.

Erik Larson: I would even go so far as to say that artificial intelligence, especially the way it’s used today, is really just a convenient marketing term. It makes it sound very progressive, like we’re creating these intelligent entities and we’re on the forefront of science and so on, and it gives it this really interesting, exciting, futuristic sheen. But it’s really functioning as just big-data-crunching computers everywhere in your life. There is no intelligence in inductive AI. That’s not what’s going on. It’s really number crunching, and so we can get into examples of that. But it does get a little creepy.

There’s a system, GPT-3 from OpenAI, trained on just millions and millions of example texts and so on. It’s just this really complicated language model that uses deep learning to generate text … to create readable, understandable, comprehensible prose.

So there’s this big push: how coherent and interesting and meaningful is the prose that this mindless language model, trained on billions of documents, can produce, right? …
So we’re just using giant supercomputers and number crunching to mimic writing, the exercise of writing, right?
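
Note: For a concrete sense of what a “language model that generates text” looks like in practice, here is a minimal sketch in Python using the openly available GPT-2 (a smaller predecessor of GPT-3) via Hugging Face’s transformers library. GPT-3 itself is reachable only through OpenAI’s API, so GPT-2 stands in for illustration, and the prompt is arbitrary.

```python
# Minimal sketch: text generation with an open pretrained language model.
# GPT-2 stands in for GPT-3, which is available only through OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is"
# The model does nothing but predict a plausible next token, over and over;
# fluency, not understanding, is the whole trick.
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```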

Note: We tried some text from Mind Matters News on GPT-3. And well, whatever: the champagne flute, for example.

Also: Did GPT-3 really write that Guardian essay without human help? Fortunately, there’s a way we can tell when the editors did the program’s thinking for it.

GPT-3 is “mindblowing” if you don’t question it too closely. AI analysts sound unusually cautious in pointing out that it doesn’t live up to a lot of the hype.

New AI can create—and detect—fake news. But how good is it at either task? We tested some copy.

Erik Larson: Yeah, and people are saying that if it gets good enough, we’re going to … I’m really worried that the tech companies are just kind of reaching this “good enough” phase where it can be put to all kinds of nefarious uses, but there’s no understanding going on behind it. So yeah, there are kind of two intertwined thoughts there. One is: so what if a language model running on a supercomputer can spit out a sequence of word tokens and every now and then we say it produced a whole paragraph and not gibberish? It’s sort of like, so what? It doesn’t know … It’s not writing, right? And there’s this weird question: doesn’t it bother anybody that all these research dollars and all this intellectual effort are going into creating an illusion using computation?

Andrew McDiarmid: Well, in all of this, the thing I think about is the value of just turning it off … Just turn the bloody thing off and do your thing as a human. My writing on tech is loosely called Authentic Technology, and my little logo for it, at least on socials and stuff, is the universal symbol for the power button. I’m basically just reminding people of the power of just turning it off and living.

Sure we can innovate; sure we can create. I mean, I believe God gave us the ability to work with our hands and our minds to do awesome things with technology. That’s one of the things I’ve been communicating lately: the word technology basically means the art and skill that we have. But sometimes we’ve just got to turn off the devices and get back to what’s important: family, connection with others, doing good, making the world a better place before you leave it.

Erik Larson: Obviously I agree. The AI question is tricky because professionally I want to make it better. But keep in mind that if we get away from the really data-driven AI, we may be able to make movie recommendations without manipulation. If I don’t have to collect all the data about you to provide you some valuable service online, then we have an advance in AI, and we also move away from this creepy, manipulative data-collection model. So we have all kinds of reasons, as scientists or just lay people, the public, to get away from it.
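
Note: As an illustration of a recommendation service that needs no personal data at all, here is a minimal content-based sketch in Python: it matches movies by their own descriptions rather than by a user’s viewing history. The titles and blurbs are made-up placeholders.

```python
# Minimal sketch: a content-based recommender with no user data.
# Similar items are found by comparing item descriptions alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up titles and plot blurbs, purely for illustration.
movies = {
    "Space Odyssey":  "astronaut crew ship artificial intelligence mission",
    "Desert Heist":   "thieves plan a daring casino robbery in the desert",
    "Robot Uprising": "machines with artificial intelligence rebel against humans",
}

titles = list(movies)
tfidf = TfidfVectorizer().fit_transform(movies.values())

def recommend(title: str) -> str:
    """Return the movie whose description is most similar to this one's."""
    i = titles.index(title)
    sims = cosine_similarity(tfidf[i], tfidf).ravel()
    sims[i] = -1.0  # exclude the movie itself
    return titles[sims.argmax()]

print(recommend("Space Odyssey"))  # -> "Robot Uprising"
```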

Within AI science itself, I think we’ve sort of saturated; we’ve reached the limits of its ability to perform on all these different tasks we want. With self-driving cars in 2016, we thought they were right around the corner, if you go back and read the stuff written about self-driving cars. I saw that Elon Musk just the other day came out and basically backpedaled and said, “Well yeah, it turns out that these problems were harder.”

He’s just discovering that there are problems … in this case, in navigation, but they’re really cognition problems or thinking problems, right? There are problems that just aren’t captured with AI.

I’m not a fan of self-driving cars, by the way. I follow your point about just turning it off. I’d rather just drive the car, or have somebody else drive the car. I grew up around cars; my dad used to race cars. It’s just like … I just have a cultural … No, I’m just not a fan of that.

But we’re not going to see them anytime soon, because the tech just isn’t as smart as people think it is. It has limits that I tried to explain really clearly in the book, limits that we don’t know how to overcome. I do see merit in improving AI, especially away from this data-driven model. But I don’t see a world where we’re not stuck with what we call artificial intelligence.

For one, China invests billions and billions of dollars in it, and whether we like it or not, we’re in a kind of de facto cold war with China. We don’t really want to lose that war, so we kind of have to keep developing the advanced computational technologies that we call AI.

Andrew McDiarmid: That’s okay. I mean we don’t all have to move to Montana and live in a cabin off-grid in order to appreciate life. But we do have to draw the right boundaries and have the right ethics around those things that we’re creating.

It seems like a human exceptionalism kind of idea. Let’s be respectful of human beings and who they are and how they’ve been designed, and let’s by all means create things with our hands and minds, but let’s be careful about it. So yeah, this has been a great conversation, Erik. I appreciate you taking more time than I thought I’d take from you today, but this is all good stuff.

I started tuning into AI and this futuristic talk when I saw that robot dog. It would get to an obstacle and jump over it, or turn around and go around it. Watching that video, I think it was from MIT or some other lab, kind of woke me up to, “Huh. Maybe it’s becoming smarter.” But what’s behind that? Just out of curiosity, what is behind that dog robot being able to jump over something? Is it just knowing what’s ahead of it and knowing what to do about it, or is there something else going on?

Erik Larson: Yeah, I mean, I don’t work in robotics per se, but … part of the big challenge with robotics is that the system has to actually know what’s in front of it. That’s actually a hard problem. It’s called visual object recognition, for the obvious reason: recognizing objects. Part of the problem with robots is that unless they’re in very constrained environments, say a manufacturing arm that just has to see a box and move it to some other position, it’s very difficult for them to tell what’s in front of them. It seems so trivial to us, having actual biological eyesight, but it’s very difficult for automated systems to do that.
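
Note: For a sense of what “visual object recognition” amounts to in code, here is a minimal sketch in Python using an off-the-shelf pretrained ResNet-50 from PyTorch’s torchvision (version 0.13 or later); the image file name is a hypothetical placeholder. The network labels a photo with the most likely of its 1,000 ImageNet categories, and that is all it does.

```python
# Minimal sketch: image classification with a pretrained ResNet-50.
# Requires torch, torchvision >= 0.13, and Pillow.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize as in training

img = Image.open("photo.jpg").convert("RGB")  # hypothetical file name
batch = preprocess(img).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1).squeeze()

top = probs.argmax().item()
print(weights.meta["categories"][top], float(probs[top]))
```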

Gary Marcus is an AI researcher, and he remarked once that if you want to show how stupid AI is, or if you’re worried about killer robots, all you have to do is close and lock your door, because they have a notoriously difficult time just locating the doorknob and manipulating it correctly to open the door. They may stand there for hours, just trying to figure out how to open a door.

So we might not have to worry about the Terminators yet. But yeah, what’s behind that is just advances in visual object recognition, and there’s no magic to it. They’re just training on larger and larger data sets where they detect edges and surfaces and so on in a plane or a visual space. In terms of the mechanics, I don’t know, because I work in language processing, so I don’t know how the actual mechanics of the legs and all that stuff are working. But the robotics stuff is proceeding at a snail’s pace. It’s not clear, but they’re going to hit these limits that I talk about with these inductive systems. At some point, they’re going to need something more insightful going on, or they’re just creating kind of canned behavior.
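
Note: As a glimpse of the low-level “edges” signal Larson mentions, here is a minimal sketch in Python of classical edge detection using OpenCV’s Canny detector. The deep-learning systems he describes learn such features from data rather than using a hand-built detector like this, and the file names are hypothetical placeholders.

```python
# Minimal sketch: classical edge detection with OpenCV's Canny detector.
# Requires the opencv-python package.
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, 100, 200)  # low/high hysteresis thresholds
cv2.imwrite("edges.png", edges)   # white pixels mark detected edges
```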

And by the way, with this example you mentioned, they probably did that in the background 50 times before they actually filmed it. Or 150 times, right? So yeah. I mean, keep in mind that what you’re seeing, if you see it at all, is marketing.


Here is the whole discussion:

  1. How AI changed — in a very big way — around the year 2000. With the advent of huge amounts of data, AI companies switched from using deductive logic to inductive logic. Erik Larson, author of The Myth of Artificial Intelligence (Harvard 2021), explains the immense power that using inductive logic on Big Data gave to Big Tech firms.
  2. Did Alan Turing’s change of heart set AI on the wrong path? Erik Larson, author of The Myth of Artificial Intelligence, thinks Turing lost track of one really important way minds differ from machines. Much interaction between humans requires us to understand what is being said and it is not clear, Larson says, how to give AI that capability.
  3. Why Big Data can be the enemy of new ideas. Copernicus could tell us how that works: masses of documentation entrench the old ideas. Erik Larson, author of The Myth of Artificial Intelligence (2021), notes that, apart from hype, there is not much new coming out of AI anymore.
  4. Understanding the de facto Cold War with China. High tech is currently a battlefield between freedom and totalitarianism. At a certain point, Andrew McDiarmid thinks, it’s time to just turn it all off. But then, what’s left?

You may also wish to read: Harvard U Press Computer Science author gives AI a reality check. Erik Larson told COSM 2021 about real limits in getting machines that don’t live in the real world to understand it. Computers, he said, have a very hard time understanding many things intuitive to humans and there is no clear programming path to changing that.

Show Notes

  • 00:44 | Introducing Erik Larson
  • 01:59 | What is the AI Landscape?
  • 04:03 | How did Erik become interested in AI?
  • 12:39 | Mind and Machine
  • 16:40 | The Simplified World
  • 20:48 | Different Types of Reasoning and AI
  • 29:53 | Lessons from the Past
  • 34:02 | The Human Brain Project
  • 38:23 | AI in the Future
  • 42:27 | AI and Big Tech
  • 53:58 | Turn it Off
  • 57:41 | Stuck in the Modern World
  • 58:51 | Human Exceptionalism

Additional Resources

Podcast Transcript Download

