
Oxford’s John Lennox Busts the “Computer Takeover” Myth

AI is here to stay, he says, but in addition to doing a great deal of good, it raises vast problems we must address

Earlier this month, we looked at claims that robots are going to scarf up everyone’s jobs. That was a bonus feature in the Science Uprising series. In another bonus interview, “John Lennox on the Transhumanist Claim AI Will Turn Humans into Gods” (October 17, 2022), Oxford mathematician Lennox talks about claims that 1) computers are taking over and that 2) we will merge with them (transhumanism). Lennox is the author of 2084: Artificial Intelligence and the Future of Humanity (2020). This is the first of two parts, where he talks mainly about narrow AI but then gets into the topic of artificial general intelligence (AGI). A partial transcript and notes follow:

John Lennox: The typical AI system consists of a very powerful computer with a large database and an algorithm that recognizes particular patterns within that database. To take a simple example, we’re in the midst of a COVID pandemic at the moment and it affects people’s lungs. (0:59)

So let’s imagine that our database consists of one million x-rays of diseased lungs. They are labeled with their respective diseases by the top medical people in the world. That’s our database. Now, a photograph is taken of my lungs if I catch Covid-19, and the algorithm compares the photograph that’s taken of my lungs with the million in the database and comes up very rapidly with the diagnosis. (1:34)
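
What Lennox describes is, at its simplest, nearest-neighbor pattern matching: compare a new scan against a labeled database and return the closest expert label. A minimal Python sketch under that reading (the random scan data, the label names, and the diagnose helper are all invented for illustration):

```python
import numpy as np

# Toy stand-in for the system Lennox describes: a "database" of labeled
# lung scans and a nearest-neighbor lookup. The data here is random
# noise purely for illustration; real systems learn far richer features.
rng = np.random.default_rng(42)
database = rng.normal(size=(10_000, 64))   # 10,000 scans as feature vectors
labels = rng.choice(["covid", "pneumonia", "healthy"], size=10_000)

def diagnose(new_scan: np.ndarray) -> str:
    """Return the expert label of the most similar scan in the database."""
    distances = np.linalg.norm(database - new_scan, axis=1)
    return str(labels[np.argmin(distances)])

print(diagnose(rng.normal(size=64)))   # e.g. "healthy"
```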

Now, these days, that diagnosis is likely to be much more accurate than I would get from my local hospital, and systems like this are being employed to find vaccines for Covid-19 and do many other wonderful things. So there’s a very positive side to this kind of AI. (1:58)

The second thing to mention about it is that, characteristically, such a system does one single thing that normally requires human intelligence. That’s why it’s called artificial intelligence. It’s not real intelligence. The machinery, the computer, doesn’t think. It’s not conscious. There’s intelligent input from the doctors and the computer scientists and the people that develop the algorithms. What comes out looks as if it’s been made by human intelligence but it hasn’t. It’s been made by this sophisticated piece of equipment. (2:40)

Note: Doctors won’t be obsolete anytime soon, as IBM’s recent Watson blunder shows. Used properly, AI can help medics a lot. Used improperly, it can be a source of phantom patterns. Either way, the machine is not doing any thinking at all.

John Lennox: Another example is — if we stick with pattern recognition, which is enormously popular — facial recognition technologies. They have developed to a very advanced state and you can see that they’re very useful for police forces to pick out a terrorist or a criminal in a crowd and be used therefore to capture people who are a danger to society. (3:10)

But there’s a downside because facial recognition technology is being used at the moment in certain parts of the world to invade the privacy, not only of individuals but of whole people groups, and actually to control and suppress them. (3:29)

Note: That’s Bridge Man’s world in China today, where words disappear and constant surveillance means that dissent must take ever new creative forms.

John Lennox: Now, I mentioned that example to say that, very rapidly, AI — narrow AI — raises huge ethical questions. Now remember, this is the stuff that’s actually working: self-driving cars, autonomous vehicles. [There’s an] AI system built in there, but you have to build into it some kind of ethical decision-making. If the car sensors pick up an old lady crossing the road and it can see, so to speak, that if it misses her, it’ll hit a queue of children waiting in line for a bus, what does it decide to do? (4:09)

So you’ve got to program that ethically. And of course the ethical norms that will be built in are the ethics of the people doing the programming. That raises huge questions. Even more difficult is the matter of autonomous weapons. (4:27)
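
Lennox’s point that the built-in ethics are “the ethics of the people doing the programming” can be made concrete: in code, an ethical norm ends up as a number somebody chose. A deliberately crude, hypothetical sketch (every class name and weight below is invented):

```python
# Deliberately crude sketch of Lennox's point: in a self-driving car,
# an "ethical norm" is ultimately a number someone chose. Every class
# and weight below is invented for illustration.
HARM_WEIGHTS = {
    "elderly_pedestrian": 10.0,
    "child_pedestrian": 10.0,
    "property": 1.0,
}

def choose_maneuver(options: dict[str, list[str]]) -> str:
    """Pick the maneuver whose predicted harms carry the lowest total weight."""
    def cost(harms: list[str]) -> float:
        return sum(HARM_WEIGHTS[h] for h in harms)
    return min(options, key=lambda maneuver: cost(options[maneuver]))

# Lennox's dilemma, encoded: braking straight risks the old lady,
# swerving risks the queue of children.
print(choose_maneuver({
    "brake_straight": ["elderly_pedestrian"],
    "swerve": ["child_pedestrian"] * 3,
}))  # -> "brake_straight", purely because of the weights chosen above
```

The point is not which answer the function returns, but that the answer follows mechanically from weights a programmer set.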

Now, the final example I would give you is the fact that we’re all involved in AI — that is any of us who own a smartphone. It’s tracking us all the time. (4:38)

What many of us don’t realize is that, for example, we make a purchase on Amazon. A few days later we’ll get a pop-up saying: People that bought this book were interested in that book. And they’re trying to encourage you to buy something else, okay? (4:55)

We put up with that, but what we don’t often realize is, we are feeding a vast amount of information into that system. And large companies are selling it on to third parties without our permission, which has become a real problem. This is what Shoshana Zuboff, an expert on the subject, calls “surveillance capitalism” in her book of that name, The Age of Surveillance Capitalism. (5:29)
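
At its crudest, the “people that bought this book” pop-up Lennox mentions is co-occurrence counting over past orders, and every such query adds to the purchase profile he is warning about. A toy sketch with invented order data:

```python
from collections import Counter

# Minimal sketch of "people that bought this book were interested in
# that book": count which other items co-occur with a purchase in past
# orders. The order data below is invented.
orders = [
    {"2084", "Homo Deus"},
    {"2084", "The Age of Surveillance Capitalism"},
    {"2084", "Homo Deus", "Superintelligence"},
]

def also_bought(item: str, n: int = 2) -> list[str]:
    co_counts: Counter[str] = Counter()
    for order in orders:
        if item in order:
            co_counts.update(order - {item})
    return [title for title, _ in co_counts.most_common(n)]

print(also_bought("2084"))
# ['Homo Deus', 'The Age of Surveillance Capitalism'] (ties may reorder)
```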

Note: Just yesterday, we addressed the fact that Google’s Incognito mode tracks users, despite what they may think, and — just for example — Beijing-owned TikTok surveils US citizens. When you tell anyone anything online, Big Tech is listening.

John Lennox: So we have surveillance in that sense, which is economically vastly profitable. We’re contributing to it voluntarily, even though what we contribute is then used without our permission. This is eventually going to hit us really hard, and we are beginning to see that there are problems with it. (5:48)

What I’m saying here is, AI is here to stay. It is rapidly becoming — if it is not already — a trillion-dollar-per-annum business, and it raises vast problems in every direction, as well as doing a lot of good. (6:05)

And, as a Christian, I encourage young, scientifically and computer-literate people to get into AI. Firstly, to do the good stuff and develop, say, medical research that will help humanity, but also to be there to make a contribution to the ethics, which is very necessary. That’s a huge topic on its own. So that’s “narrow AI.”

The quest for superintelligence (transhumanism)

AGI [artificial general intelligence], on the other hand, is basically represented by the quest for superintelligence. That has two parts. Firstly, the idea of enhancing current human intelligence by bioengineering, by cyborg technologies, by drugs and all kinds of things, to create a superintelligence, starting with human beings. (7:01)

That’s the agenda of one of the best-known people on this topic, who is not a scientist: Yuval Noah Harari. In his book Homo Deus (“the man who is god”), what he is trying to do is achieve the transhumanist agenda. (7:19)

First, he feels that the problem of human death will be solved by technology fairly soon and therefore the main effort of artificial intelligence and related technology will be to enhance human happiness so that we turn human beings into gods — “small g” Greek-style gods — having superintelligence. (7:48)

Now, the other research is on trying to replicate human intelligence in silicon, to make it independent of organic material. And it’s in that kind of context that you get notions of people hoping that their brains will one day be capable of being uploaded onto silicon and so they live forever. And we’re all aware of people who go to great expense to have their brains frozen on death so that they hope that one day they will be woken up and their brain contents will be put onto silicon. (8:29)

Note: See “Inside a cryonics facility preserving terminally ill people (and pets) to wake up in the future” (October 12, 2022).

John Lennox: Now, all of that is highly speculative. It is much more science fiction than science.

They tell us that there’s a Singularity coming. That’s Ray Kurzweil, who’s a guru in this field. This is not the singularity of the original Big Bang; this singularity is when human beings become merged with technology and the technology essentially takes over. (9:40)

Does the rapid growth in computer processing contribute to these ideas of technology taking over?

Moore’s Law says that computing power roughly doubles every 18 months or so. There will be an eventual limit to that. But the idea is that it will be able to go so far that it would be able to solve all these problems. (10:13)
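
That doubling rate compounds quickly; a one-line check of what “doubles every 18 months” implies over a dozen years (the 18-month figure is the informal statement of Moore’s Law that Lennox cites, not an exact law):

```python
# Growth factor implied by doubling every 18 months: 2 ** (years / 1.5)
for years in (3, 6, 9, 12):
    print(f"{years:>2} years -> x{2 ** (years / 1.5):.0f}")
# prints x4, x16, x64, x256 -- eight doublings in a dozen years
```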

But I have very big questions about that. One of the deep questions in this whole field is the connection between intelligence and consciousness… (11:21)

John Searle, at Berkeley, pointed out some years ago that a system could appear to be intelligent without being intelligent at all. This is the famous Chinese Room thought experiment. (11:40)

What is John Searle’s Chinese Room experiment?

Well, the idea is that you have a closed room and a man inside it, and he has got various instructions as to how to deal with Chinese symbols. He doesn’t speak any Chinese and people come up to a hatch in the door and they feed in a question in Chinese. The man in the room doesn’t understand it but he compares it with all the instructions he’s been given. And the instructions say, if you see this, then hand out that. And he hands out something he doesn’t understand either, which is the answer to the question that was fed in:

John Lennox: So the people outside think that they’ve had an intelligent being responding to their question when they have no such thing. (12:26)
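
The room reduces to a lookup table, which is what makes Searle’s point so vivid; a toy sketch (the rule-book entries and the man_in_the_room helper are invented):

```python
# Toy Chinese Room: the "man in the room" matches symbols against a
# rule book he cannot read. The rule entries below are invented.
RULE_BOOK = {
    "你好吗？": "我很好。",          # "How are you?" -> "I am well."
    "今天星期几？": "今天星期二。",    # "What day is it?" -> "It is Tuesday."
}

def man_in_the_room(symbols: str) -> str:
    # Pure pattern matching: no meaning is consulted anywhere.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please repeat that."

print(man_in_the_room("你好吗？"))  # 我很好。 -- looks fluent, understands nothing
```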

Will a computer ever be conscious?

How could you answer something like that when we have no idea what consciousness is? Consciousness is the huge barrier: no one that I’ve ever spoken to or read has any idea what it is. So speculation is irrelevant. I doubt it very much because computers appear to [have] certain limits. Roger Penrose is very interesting on this topic. He argues that you will never be able, even in principle, to simulate the activity of the human mind on a computer, because the human mind can do things that a computer cannot do. They’re not computable. One of them, for example, is the proof in mathematics of Gödel’s famous theorem that arithmetic is not complete. (13:21)
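
For reference, the result Penrose’s argument leans on, Gödel’s first incompleteness theorem, is standardly stated as:

```latex
% Gödel's first incompleteness theorem (standard statement): for any
% consistent, effectively axiomatized theory T that interprets basic
% arithmetic, there is a sentence G_T that T can neither prove nor refute:
\exists\, G_T \;:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
```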

Now, these are very complex things, but it’s very interesting to see that someone as bright as Penrose, with his recent Nobel Prize, argues that no, it isn’t computable, so computers will never be able to do that. (13:42)

Next: Transhumanism’s problem is already solved

You may also wish to look at: As the robot fry cook takes over the kitchen … Business prof Jay Richards’ new short in the “Science Uprising” special dismisses philosophical claims that humans are mere “meat machines.” With any luck, people using new technology can increasingly design the jobs they want to do and have the skills for.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
