In last week’s podcast, Oxford mathematician John Lennox talked about AI surveillance and the danger of misusing the technology for purposes of suppression. He said,
But there’s a downside because facial recognition technology is being used at the moment in certain parts of the world to invade the privacy, not only of individuals, but of whole people groups and actually control them and suppress them. Now, I mentioned that example to say that very rapidly AI, narrow AI raises huge ethical questions.
Now remember, this is the stuff that’s actually working: self-driving cars, autonomous vehicles with AI systems built in. But you have to build into them some kind of ethical decision making. If the car’s sensors pick up an old lady crossing the road and it can see, so to speak, that if it misses her it’ll hit a queue of children waiting in line for a bus, what does it decide to do? So you’ve got to program that ethically. And of course, the ethical norms that will be built in are the ethics of the people doing the programming. That raises huge questions. Even more difficult is the matter of autonomous weapons, and so on and so forth.
In this article, we reported on a study that found that China was exporting facial recognition technology to politically unstable countries. Countries that lean authoritarian have a vested interest in creating a surveillance state. Meanwhile, Lennox asks the obvious question: Who’s going to program ethics into AI?