
How To Fool Facial Recognition

Changing a couple of pixels here and there can stump a computer

In the most recent Mind Matters.ai podcast, “Bingecast: Is Cheese Consumption Causing Deaths From Tangled Sheets?”, Pomona College economics professor Gary Smith joins Walter Bradley Center director Robert J. Marks to talk about what data analysis can and can’t do to help us understand our world more clearly. Here are the Podcast Notes. A partial transcript follows.

Partial transcript:

38:29 | Fooling face recognition

Robert J. Marks: You also mentioned face recognition and how it can be fooled. I’ve heard recently that Apple is under pressure from some of its employees, or maybe it’s the stockholders, to stop sharing the facial recognition software that it has developed. But you said that some of this facial recognition software can actually be relatively easily fooled.

Gary N. Smith: The general problem, again, is that what these AI programs, these deep neural networks, are doing is just looking for patterns. And it turns out that you can change a couple of pixels here and there and the computer is totally at a loss as to what it’s looking at. And so you can take a stop sign and change a pixel or two, which we would not even notice, and the computer doesn’t know it is a stop sign any more.
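To make the idea concrete, here is a minimal sketch (a hypothetical linear “classifier” in Python, not the actual deep networks or attacks discussed here): because the model only scores pixel patterns, nudging a handful of the pixels it weighs most heavily is enough to flip its answer, even though the picture looks essentially unchanged to us.

```python
# Toy illustration: a linear "classifier" flipped by changing just a few pixels.
# (Hypothetical model; pixel changes exaggerated for clarity.)
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((10, 10))            # a fake 10x10 grayscale image
weights = rng.normal(size=(10, 10))     # stand-in for a trained model's weights

def predict(img):
    score = float(np.sum(weights * img))
    return ("stop sign" if score > 0 else "not a stop sign"), score

label, score = predict(image)
print("original:", label)

# Nudge only the 3 pixels the model weighs most heavily, each in whichever
# direction pushes the overall score across zero.
adversarial = image.copy()
target = -np.sign(score)
for idx in np.argsort(np.abs(weights), axis=None)[-3:]:
    r, c = np.unravel_index(idx, weights.shape)
    adversarial[r, c] += target * np.sign(weights[r, c]) * 5.0

print("after changing 3 of", image.size, "pixels:", predict(adversarial)[0])
```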

On the facial recognition side, there were these guys, I think it was at Carnegie Mellon, and they showed a computer a picture of a guy and a picture of a woman and they trained it: look at this picture and call it “Guy” and look at this picture and call it “Gal,” and it worked perfectly. And they put some goofy glasses on the guy and the computer thought the guy was the gal.

Robert J. Marks: Now was it glasses with the funny eyes and the big rubber nose?

Gary N. Smith: No, it was glasses and the frames were kind of goofy, with blue colors and yellow colors and orange colors. And humans, looking at them, we know what glasses are and we look behind the glasses and see the face and we know it’s still a guy. And the computer somehow, inside that black box, got very confused in its pixel matching and it decided that the guy wearing glasses was actually the gal. Again, it’s just a reflection of the fact that computers don’t understand pictures; they don’t understand the world. They look at mathematical patterns.

One of the examples in my books: I do a little picture of a wagon, just a simple wagon with wheels and a handle, and you and I would recognize it immediately. We know what it can be used for and we know it would be dangerous to ride it downhill, etc., etc. And I showed it to a deep neural network algorithm and it came back saying it was a business. And I showed it to another one and it came back saying it was a badminton racket.

Robert J. Marks: Now, these were trained deep neural networks, is that right?

Gary N. Smith: Trained deep neural networks, and somehow the pixels they had been trained on didn’t match the wagon I’d drawn, even though it was clearly a wagon. And they ended up thinking it was a badminton racket. Even crazier, there were guys, I think it was at the University of Wyoming, and they showed a deep neural network horizontal black and yellow lines and the computer came back and said it was a school bus. It didn’t have wheels, didn’t have windows, didn’t have anything resembling a school bus, yet somehow the black and yellow pixels confused it. Again, that’s just a reflection of the fact that, in the current state of AI, what you do is you match letters or sounds or pixels and you find patterns there, but the computer doesn’t actually know what it is that it is looking at.

41:36 | Fooling humans with random data

Robert J. Marks: That is fascinating. But humans have this same problem. You went through an example in your book where you generated data at random and you gave it to a quant person, and the quant person was just like a computer. He applied his algorithms and said, yes, I can make money on this, even though it was just randomly generated. So I guess humans have this problem too, and maybe they’ve just transferred it into computers.

Gary N. Smith: Yeah, that’s how I first got interested in this thing. I had my students at Yale flip coins and we randomly generated these fake price charts. And some technical analyst on Wall Street thought it was real and wanted to know what stock it was so he’d buy it. That’s how I got interested in this idea of coincidental patterns. And the problem now is that computers are so good at finding coincidental patterns. It’s much worse than it used to be because computers are so good at it and so bad at deciding whether it’s a joke or real.
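Readers can reproduce the spirit of that classroom exercise in a few lines of code (a minimal sketch; the function name and parameters are illustrative, not Smith’s original procedure): each day’s price move is nothing but a coin flip, yet the resulting chart can look like a plausible stock history, complete with “trends” that mean nothing.

```python
# A coin-flip "price chart": pure chance, no information about any real stock.
import random

def fake_price_chart(days=250, start=50.0, step=0.5, seed=None):
    """Each day the price moves up or down by `step` on a coin flip."""
    rng = random.Random(seed)
    prices = [start]
    for _ in range(days):
        flip = rng.choice([+1, -1])   # heads: up, tails: down
        prices.append(prices[-1] + flip * step)
    return prices

chart = fake_price_chart(seed=42)
print(f"start {chart[0]:.2f}, end {chart[-1]:.2f}, "
      f"high {max(chart):.2f}, low {min(chart):.2f}")
```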

42:41 | The bigger the data, the more spurious correlations

Robert J. Marks: And you mentioned, the bigger the data, the more spurious correlations you’re going to get.

Gary N. Smith: There’s actually a mathematical proof of that.

Robert J. Marks: Oh really! What’s it called?

Gary N. Smith: It’s pretty much what you said. It’s just that, the more data you look at, the more likely you’ll find a coincidental pattern. And of course, with big computers and big data, you can find lots, hundreds, thousands, even billions of coincidental patterns.
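A quick simulation illustrates the point (the sample sizes and the 0.4 cutoff are arbitrary choices for this sketch): among series that are nothing but random noise, the number of “impressive-looking” pairwise correlations grows rapidly as more series are searched.

```python
# Spurious correlations in pure noise: the more series searched, the more "hits."
import numpy as np

rng = np.random.default_rng(1)
n_obs = 30          # observations per series
threshold = 0.4     # |correlation| we (wrongly) treat as meaningful

for n_series in (10, 100, 500):
    data = rng.normal(size=(n_series, n_obs))            # unrelated random series
    corr = np.abs(np.corrcoef(data))                     # pairwise sample correlations
    hits = int(np.sum(np.triu(corr, k=1) > threshold))   # pairs i < j over the bar
    print(f"{n_series:4d} series -> {hits} spurious 'strong' correlations")
```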

Robert J. Marks: So you need a human on the output to question the answers.

Gary N. Smith: And one of the problems with these black boxes is, you don’t know what’s going on inside. And so if it comes out and says, “Buy Apple!”, you don’t know if it’s because it looked at the temperature in Curtin, Australia, or if it looked at Apple’s dividends.

Robert J. Marks: Yes. You actually mentioned that you had a student who came to you and said he was going to be a billionaire and he was going to land a helicopter. I thought it was a great story. Could you share that?

Gary N. Smith: Well, it was one of my students in my investing class, and I was making fun of technical analysis and finding patterns in data, like I always do. He went off and went to work, ironically, for IBM, I think it was. It might have been Adobe. In his spare time, he was looking at futures contracts and finding patterns there. And he found all these patterns there. And he phoned me up and told me he was going to get rich and he was going to fly a helicopter down from Silicon Valley to my building, where I teach, land on the lawn outside the building, come inside, and tell my class the truth about technical analysis. I tell the story every year I teach my class and then I go look outside the window… there’s still no helicopter.

Robert J. Marks: I knew a guy named Jack Marshall. He was a professor. He called himself a financial engineer because he did all the derivatives and math for things like that. And he said he was approached all the time by people who said “I can beat the stock market” or “I’ve come up with a neural network” or “I’ve come up with these quant rules to do it,” and Jack said he doesn’t even have to talk to the guy or look at the software. He just has to ask him a simple question: “What kind of car do you drive?” That cuts through all the garbage and says whether their technique has been reduced to practice or not.


Earlier discussions between Robert J. Marks and Gary Smith:

Why was IBM Watson a flop in medicine? Robert J. Marks and Gary Smith discuss how the AI couldn’t identify which information in the tsunami of medical literature actually mattered.

Why an AI pioneer thinks Watson is a “fraud.” The famous Jeopardy contest in 2011 worked around the fact that Watson could not grasp the meaning of anything.

Can AI combat misleading medical research? No, because AI doesn’t address the “Texas Sharpshooter Fallacies” that produce the bad data.

AI delusions: A statistics expert sets us straight. We learn why Watson’s programmers did not want certain Jeopardy questions asked.

and

The US 2016 election: Why Big Data failed. Economics professor Gary Smith sheds light on the surprise result.


