Fake bugs on yellow background (Photo by rawpixel on Unsplash)

2018 AI Hype Countdown 7: Computers can develop creative solutions on their own!

AI help, not hype, with Robert J. Marks: Programmers may be surprised by which solution, from a range they built in, comes out on top

From the Bullies and Dweebs simulation

To be truly creative, AI would need to pass the Lovelace test, as proposed by Selmer Bringsjord of Rensselaer Polytechnic Institute: it must be able to perform a task that cannot be accounted for by its programmer. In that sense, AI will never be creative.

In March, a group of AI researchers, “The Thirty Three,” compiled a collection of anecdotes intended to demonstrate that their evolutionary AI software was creative. The group included world-class experts from Apple, the University of Texas, Cornell, Imperial College London, the University of Colorado, and Columbia.

Evolutionary software? In evolutionary programming, programmers set a goal and see how close they can get to achieving it. They propose billions, even trillions of trillions, of possible solutions to the problem. No computer can analyze them all just by motoring through the numbers. So the programmers develop evolutionary search algorithms, that is, algorithms that search intelligently for a solution based on a bias imposed by the programmer. This bias guides the program toward one or more solutions close to the desired goal.
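To make the idea concrete, here is a minimal sketch of an evolutionary search, with a toy fitness function standing in for the programmer's goal. The target value, step sizes, and population settings below are illustrative assumptions, not drawn from any of the systems discussed here:

```python
import random

# Toy sketch of an evolutionary search. The "bias imposed by the
# programmer" shows up as the fitness function and the mutation scheme,
# both chosen in advance by the programmer.

TARGET = 42.0  # the programmer's goal, fixed ahead of time

def fitness(candidate):
    # Closer to the target is better; this is where the programmer's
    # bias steers the search toward the desired goal.
    return -abs(candidate - TARGET)

def mutate(candidate):
    # Small random tweak; the step size is another programmer choice.
    return candidate + random.gauss(0, 1.0)

def evolve(pop_size=50, generations=200):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, then refill the population with
        # mutated copies of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best candidate: {best:.3f}")
```

Every candidate the search can ever return lies inside the space the programmer defined, which is the point of the argument that follows.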

Sometimes the results are unexpected and even surprising. But they follow directly from the program doing exactly what the programmer programmed it to do. It’s all program, no creativity.

For example, in my own work in swarm intelligence, we coded a predator-prey problem. We called the swarm of prey Dweebs. The Dweebs were pursued by a swarm of predators called Bullies. We maximized the lifetime of the Dweeb swarm using evolutionary programming. A surprising result was Dweeb self-sacrifice. Bullies would chase a single Dweeb in circles while the rest of the Dweebs huddled in the corner. When the sacrificial Dweeb was killed, there was temporary chaos, followed by the identification of another sacrificial Dweeb. This process was repeated until the Dweeb swarm was decimated.

The “sacrificial Dweeb” strategy maximized the lifetime of the Dweeb swarm. That result was surprising. But it was one of the possible solutions we had programmed into the Bullies and Dweebs evolutionary algorithm. The algorithm had not arrived at it independently.
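The original Bullies and Dweebs code is not reproduced here, but the shape of the fitness calculation is easy to sketch: run the predator-prey simulation with a candidate Dweeb behavior and score it by how long the swarm survives. The behavior parameters and toy dynamics below are hypothetical stand-ins, not the original simulation:

```python
import random

# Hypothetical sketch of a "swarm lifetime" fitness for a
# Bullies-and-Dweebs style experiment. The dynamics are invented
# placeholders; the real simulation models pursuing Bullies and
# fleeing Dweebs explicitly.

def swarm_lifetime(dweeb_behavior, n_dweebs=20, max_steps=1000):
    flee_weight, huddle_weight = dweeb_behavior
    alive = n_dweebs
    for step in range(max_steps):
        # Stand-in for Bullies catching Dweebs: better-tuned behaviors
        # lose swarm members less often.
        catch_prob = max(0.0, 0.05 - 0.02 * flee_weight - 0.01 * huddle_weight)
        if random.random() < catch_prob:
            alive -= 1
        if alive == 0:
            return step  # swarm wiped out
    return max_steps

# A behavior vector such as (flee_weight, huddle_weight) can then be
# plugged into an evolutionary loop like the one sketched earlier, with
# swarm_lifetime as the fitness to be maximized.
```

Whatever strategy that loop converges on, sacrificial or not, is one of the candidates the programmers made available to it.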

Similarly, there are no real surprises in the results offered by the Thirty Three. One example of creativity they offered was the evolution of a walking bug.1 The bug evolves various gaits for walking on six legs.

Interesting and sometimes unexpected gaits emerged from the evolutionary search, depending on the conditions imposed by the program. For example, if a bug had a bum leg, it would learn to walk without it.

When turned upside down, the bug walked on its elbows. An unexpected result, yes. But the programmers had developed an algorithm to explore various gaits for their digital bug and that is what they got.
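The algorithm in the cited paper is MAP-Elites, which keeps the best solution found for each distinct kind of behavior rather than a single winner, so many different gaits are illuminated at once. Here is a minimal sketch of that idea, with a toy fitness and a toy one-number behavior descriptor standing in for a real six-legged gait simulation:

```python
import random

# Minimal sketch of the MAP-Elites idea (Mouret & Clune 2015): maintain
# an archive of the best solution found in each behavior "cell". The
# fitness and descriptor below are toy stand-ins, not the walking-bug
# simulation itself.

def evaluate(genome):
    # Returns (fitness, behavior cell). Here the descriptor is a single
    # number binned into 10 cells; a real gait descriptor might be, say,
    # how much each leg is used.
    fit = -sum((g - 0.5) ** 2 for g in genome)
    cell = min(9, int(sum(genome) / len(genome) * 10))
    return fit, cell

def mutate(genome, sigma=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in genome]

def map_elites(genome_len=6, iterations=5000):
    archive = {}  # behavior cell -> (fitness, genome)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # Usually vary an existing elite...
            parent = random.choice(list(archive.values()))[1]
            child = mutate(parent)
        else:
            # ...occasionally try a brand-new random genome.
            child = [random.random() for _ in range(genome_len)]
        fit, cell = evaluate(child)
        # Keep the child only if it beats the current elite in its cell.
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, child)
    return archive

if __name__ == "__main__":
    elites = map_elites()
    for cell in sorted(elites):
        print(cell, round(elites[cell][0], 4))
```

The variety of gaits, including the elbow walk, comes from the programmers' decision to reward the best performer in every behavior cell, not from anything the bug thought up on its own.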

What would make their bug program creative? What if, without any programming whatsoever, the digital bug learned to jump a chasm as wide as its own body length? Remember, we are not asking the programmers to develop such a feature for the program. We are asking the bug to develop that creativity without any prior programming. Otherwise, it’s the programmers’ creativity, not the bug’s.

Yes, we are still waiting for that.

1 Mouret JB, Clune J. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909. 2015; p. 1–15.

See also: 2018 AI Hype Countdown 8: AI Just Needs a Bigger Truck! AI help, not hype: Can we create superintelligent computers just by adding more computing power? Some think computers could greatly exceed human intelligence if only we added more computing power. That reminds me of an old story…

2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”? The thrill of fear invites the reader to accept a metaphorical claim as a literal fact.

2018 AI Hype Countdown: 10. Is AI really becoming “human-like”? Robert J. Marks: AI help, not hype: Here’s #10 of our Top Ten AI hypes, flops, and spins of 2018. A headline from the UK Telegraph reads “DeepMind’s AlphaZero now showing human-like intuition in historical ‘turning point’ for AI.” Don’t worry if you missed it.

Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University. Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics, coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.
