Mind Matters Natural and Artificial Intelligence News and Analysis
Photo: hole or tunnel in dark wall, by Stefano Ghezzi on Unsplash

6: AI Can Even Exploit Loopholes in the Code!

AI adopts a solution from the allowed set, though maybe not the one you expected

Rule: The customer must not hear anything that may give offense.

In the same paper in which researchers purported to find examples of AI creativity, we also read the following statement about problems with performance: “Exacerbating the issue, it is often functionally simpler for evolution to exploit loopholes in the quantitative measure than it is to achieve the actual desired outcome.”

One example they offered of this type of gaming the system was a walking digital robot that moved more quickly by somersaulting than by using a normal walking gait. That was a very interesting result. But again, recognized or not, somersaults were allowed in the solution set offered by the programmer. By contrast, learning to fly, without additional programming, could give the robot a low pass on the Lovelace test for creativity (doing something the programmer cannot account for). But there does not appear to have been any contingency in the program that allowed the robot to learn to fly, and it didn't.
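The point can be made concrete with a toy sketch (all names and numbers here are hypothetical, not from the paper): an optimizer can only select from the solution set the programmer defined, so even a "surprising" winner was built in from the start.

```python
# Toy illustration: the "solution set" is fixed by the programmer.
# The search, however clever, can only pick from the moves we defined.
MOVES = {               # metres gained per time step for each allowed gait
    "walk": 1.0,
    "hop": 1.3,
    "somersault": 2.1,  # surprising, perhaps, but it was always in the set
}
# "Fly" is not a key in MOVES, so no amount of optimization can find it.

best = max(MOVES, key=MOVES.get)
print(best)  # somersault
```

The optimizer's choice may surprise us, but it never steps outside the dictionary we wrote.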

I once worked at a startup company called Financial Neural Networks. One of our software goals was to forecast the futures market for the S&P 500 (an index of the performance of 500 large publicly traded companies in the stock market).

One of our first results looked astonishing! The forecast price curve tracked the actual price curve remarkably well. Too remarkably, it turned out. On closer inspection, our S&P forecast lagged the actual S&P price by one tick. The neural network had learned that the best forecast of the next hour's price was this hour's price. And forecasting with minimum error was what we had asked it to do. Yes, this simplistic solution gamed us. But then we had asked for minimum error, and that is what we got.

We were really gaming ourselves.
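The trap is easy to reproduce. Here is a minimal sketch, using synthetic random-walk prices rather than real S&P data, of how a "persistence" forecast (next price = current price) scores a low mean-squared error while adding no predictive value at all:

```python
import random

# Simulate an hourly price series as a random walk (synthetic data,
# not real S&P 500 prices).
random.seed(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))

# "Persistence" forecast: next hour's price = this hour's price.
# This is the degenerate solution a minimum-error objective rewards.
forecasts = prices[:-1]
actuals = prices[1:]

mse = sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)
print(round(mse, 3))  # small error, yet the "forecast" is useless
```

Plotted, the forecast curve would follow the actual curve almost perfectly, shifted by one tick, exactly the seductive chart we saw.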

Note on solutions: See "AI can solve any problem, as long as we are not too persnickety about the solution" for some classic examples of unintended (but inadvertently allowed) programming outcomes, for example, "Agent kills itself at the end of level 1 to avoid losing in level 2 (2017)."

See also: 2018 AI Hype Countdown 7: Computers can develop creative solutions on their own! AI help, not hype: Programmers may be surprised by which solution, from a range they built in, comes out on top. Sometimes the results are unexpected and even surprising. But they follow directly from the program doing exactly what the programmer programmed it to do. It's all program, no creativity.

2018 AI Hype Countdown 8: AI Just Needs a Bigger Truck! AI help, not hype: Can we create superintelligent computers just by adding more computing power? Some think computers could greatly exceed human intelligence if only we added more computing power. That reminds me of an old story…

2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”? The thrill of fear invites the reader to accept a metaphorical claim as a literal fact.

2018 AI Hype Countdown 10: Is AI really becoming "human-like"? AI help, not hype: Here's #10 of our Top Ten AI hypes, flops, and spins of 2018. A headline from the UK Telegraph reads "DeepMind's AlphaZero now showing human-like intuition in historical 'turning point' for AI." Don't worry if you missed it.

Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University. Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics, coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.

