Mind Matters Natural and Artificial Intelligence News and Analysis
Retro old beige fridge in loft style wooden kitchen

#11: A Lot of AI Is As Transparent As Your Fridge

A great deal of high tech today is owned by corporations

As the year draws to a close, our Walter Bradley Center director Robert J. Marks interviews fellow computer nerds and members of our Brain Trust, Jonathan Bartlett and Eric Holloway, on their picks for the most overhyped AI of the year.

Hey, great stuff happened in AI this year. But well, lots of “stuff” happened too. And it’s time to have some fun! So here’s #11! Corporate types insist that they believe in transparency. But a lot of AI is as transparent as your fridge. Our team has the story:

#11 starts at about 9:08. A partial transcript and Show Notes follow.

A recent article in top science journal Nature pointed out that AI developments that matter today are often not transparent. That doesn’t mean that people untrained in computer science don’t understand them; it means that people who are trained in computer science don’t get the information to evaluate them:

Eric Holloway: The problem now is that AI is not merely a research project; it’s also a product, and it’s a product of some really big companies like Google… They don’t actually release anything that people can use to reproduce the results.

They just say, “Hey, we ran these massive neural networks on these massive datasets with massive amounts of compute and we got super great accuracy scores. And you can use our model to get our same scores, but we’re not going to really tell you how we did it. We might hint at it, but we don’t give you enough specifics that you can repeat it.” And it’s actually out of reach of pretty much anybody who’s not Google, because these computations cost millions of dollars and use massive computer farms.

Robert J. Marks: There’s an old saying in engineering that “In theory, theory and practice are the same. In reality, they are not.” And I think when you reduce something to practice, that’s where the rubber meets the road. That’s what’s going to be important. On the other hand, Eric, doesn’t Google make available to the public this incredible software platform they call TensorFlow and other AI sort of software that they can use? But you’re not talking about that, are you?

Eric Holloway: No, it’s not the tooling. Well, they don’t even release all their tooling. They give us bits and pieces of it, enough that other people will start getting addicted to Google, but not enough that we can really do what they do. Yeah, they released TensorFlow, but there’s always a difference between the tools that Google releases to the public and what they actually use.

TensorFlow is a framework that makes it easier to write these AI algorithms, but the actual algorithms and models themselves, that is the secret sauce that Google is not really releasing.
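The distinction Holloway draws, open tooling versus closed models, can be sketched in a few lines. This is a hypothetical illustration, not Google code: the generic layer function stands in for the public framework, while the trained weights stand in for the unreleased “secret sauce.”

```python
# A framework (like TensorFlow) publishes generic building blocks
# anyone can run, for example a single fully connected layer:
def dense(weights, inputs):
    # weighted sum of the inputs -- the core of a neural-network layer
    return sum(w * x for w, x in zip(weights, inputs))

# Anyone can run the public building block with their own numbers...
public_output = dense([0.5, -0.25], [2.0, 4.0])

# ...but the trained weights that make a headline model work are the
# part a company need not publish (hypothetical, never released):
# proprietary_weights = load("private_checkpoint")
```

The framework is reproducible; the result built on it is not, which is exactly the gap the Nature article flags.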

Robert J. Marks: I see. “It works; just trust us.”

Note: Eric Holloway is discussing the conflict between a business model and a science model in the computer industry. Obviously, a business like Google wants to keep its trade secrets secret. But problems develop when the business also wishes to be regarded as, say, a world leader in computer science. In science, reproducibility (replication) is critical for establishing the truth of a proposition.
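What reproducibility means in code can be shown with a minimal sketch (a toy “training” function, not any real lab’s method): if the procedure and its random seed are published, anyone can rerun it and get the identical result.

```python
import random

def train_toy_model(seed):
    """Deterministic stand-in for a training run: with a fixed,
    published seed, every rerun produces the exact same 'model'."""
    rng = random.Random(seed)
    # three stand-in weights drawn from a seeded generator
    return [rng.uniform(-1, 1) for _ in range(3)]

# Two independent runs with the same published seed agree exactly,
# which is the replication standard the article says closed AI
# results fail to meet.
assert train_toy_model(42) == train_toy_model(42)
```

When the seed, data, and procedure are withheld, this check is impossible, and readers are left taking the reported accuracy scores on faith.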

Earlier today, for example, we noted that a number of “AI can read your mind!” studies are in jeopardy because researchers became suspicious when they could not replicate the dramatic results. But at least they knew how those results were acquired so they could identify the point of error. Absent such transparency, why believe anything at all? Is it even science?

Eric Holloway: There’s an even bigger-picture reason why AI is not scientific, and that gets back to its fundamental assumption that everything a human mind can do, a computer can do. Everyone in the AI field just takes that for granted. They’re like, “Oh yeah, of course.”

Robert J. Marks: You’re saying that AI doesn’t follow the scientific method?

Eric Holloway: The very premise of the field is unscientific. Science is all about questioning your assumptions and testing them before you accept them as valid. But AI is the complete opposite. They take their assumption, treat it as valid, and then do all their research based on that assumption.

Note: There are reasonable grounds for believing that the human mind is immaterial and that, in any event, the human brain is nothing like a computer. The assumption that a computer can do everything a mind can do might be fun for late-night talk shows, but it will not help much in the real world.

Stay tuned for our #10 hype, coming up soon!


In our countdown for the Top Twelve AI Hypes of 2020…

And here’s #12! AI is going to solve all our problems soon! While the AI industry is making real progress, so, inevitably, is hype. For example, machines that work in the lab often flunk real settings.


Show Notes

Additional Resources


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
