Image: Robotic head made of metallic chrome cubes, surrounded by shiny steel boxes, with a liquid metal effect (3D rendering illustration, licensed via Adobe Stock)

Marks: AI Looks Very Intelligent — While Following Set Rules

In an excerpt from Chapter 2 of Non-Computable You, Larry Nobles reads Robert J. Marks’s account of evolving AI “swarm intelligence” for Dweebs vs. Bullies (transcript included below)

In Podcast 211, Larry Nobles reads an excerpt from Chapter Two of Robert J. Marks’s Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). The book is now available in audiobook form as well as Kindle format and, of course, paperback.

Chapter Two addresses the question, “Can AI be creative?” Pablo Picasso didn’t think so. He is reported to have said, “Computers are useless. They can only give you answers.”

Nobles reads Dr. Marks’s account of how he and colleague Ben Thompson got a “swarm” of little programs (Dweebs) to evolve a solution to a problem that required a good deal of creativity on their part, but none on the part of the swarm or the computer:

A partial transcript follows:

The Office of Naval Research contracted Ben Thompson of Penn State’s Applied Research Lab and me and asked us to evolve swarm behavior. As we saw in Chapter One, simple swarm rules can result in unexpected swarm behavior, like stacking Skittles. Given simple rules, finding the corresponding emergent behavior is easy. Just run a simulation. But the inverse design problem is a more difficult one. If you want a swarm to perform some task, what simple rules should the swarm bugs follow?

To solve this problem, we applied an evolutionary computing AI. This process ended up looking at thousands of possible rules to find a set that gave the closest solution to the desired performance. One problem we looked at involved a predator–prey swarm. All action took place in a closed, square virtual room. Predators called Bullies ran around chasing prey called Dweebs. Bullies captured Dweebs and killed them. We wondered what the performance would be if the goal was maximizing the survival time of the Dweeb swarm. The swarm’s survival time was measured up to the moment the last Dweeb was killed.
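For readers who want to see the shape of such a search, here is a minimal sketch in Python of the kind of evolutionary rule search the excerpt describes. It is emphatically not Marks and Thompson’s code: every name is hypothetical, the rule set is reduced to a bare parameter vector, and the fitness function is a toy placeholder standing in for a real predator–prey simulation that would return how long the Dweeb swarm survives.

    import random

    def simulate_survival_time(rules):
        # Placeholder fitness: a real version would run the Bullies-vs-Dweebs
        # simulation under this rule set and return the time step at which
        # the last Dweeb is killed.
        return -sum((r - 0.5) ** 2 for r in rules)

    def mutate(rules, rate=0.1):
        # Perturb each rule parameter slightly to explore nearby rule sets.
        return [r + random.gauss(0, rate) for r in rules]

    def evolve(pop_size=50, generations=200, n_params=8):
        # Start from random candidate rule sets.
        population = [[random.uniform(-1, 1) for _ in range(n_params)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Rank rule sets by how long the swarm survives under them.
            ranked = sorted(population, key=simulate_survival_time, reverse=True)
            parents = ranked[: pop_size // 5]  # keep the top 20 percent
            # Refill the population with mutated copies of the survivors.
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
        return max(population, key=simulate_survival_time)

    best_rules = evolve()

The sketch makes the excerpt’s later point concrete: the search is exhaustive bookkeeping. The computer scores candidate rule sets against a fixed measure and keeps the best ones; the surprise, when it comes, is in the result, not in any departure from the program.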

After running the evolutionary search, we were surprised by the result. The Dweebs submitted themselves to self-sacrifice in order to maximize the overall life of the swarm. This is what we saw: a single Dweeb captured the attention of all the Bullies, who chased it in circles around the room. Around and around they went, adding seconds to the overall life of the swarm.

During the chase, all the other Dweebs huddled in the corner of the room shaking with what appeared to be fear. Eventually, the pursuing Bullies killed the sacrificial Dweeb and pandemonium broke out as the surviving Dweebs scattered in fear.

Eventually, another sacrificial Dweeb was identified, and the process repeated. The new sacrificial Dweeb kept the Bullies running around in circles while the remaining Dweebs cowered in the corner. The sacrificial Dweeb result was unexpected. A complete surprise. There was nothing written in the evolutionary computer code explicitly calling for these sacrificial Dweebs. Is this an example of AI doing something we hadn’t programmed it to do? Did it pass the Lovelace test? Absolutely not.

We had programmed the computer to sort through millions of strategies that would maximize the life of the Dweeb swarm, and that’s what the computer did. It evaluated options and chose the best one. The result was a surprise, but it does not pass the Lovelace test for creativity. The program did exactly what it was written to do, and the seemingly frightened Dweebs were not in reality shaking with fear. Humans tend to project human emotions onto non-sentient things. The Dweebs were rapidly adjusting to stay as far away as possible from the closest Bully. They were programmed to do this.
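The rule described in that paragraph is mechanical through and through. As an illustration only (a minimal sketch, not the book’s code; the function name, coordinates, and speed parameter are invented for the example), a Dweeb that stays as far away as possible from the closest Bully fits in a few lines:

    import math

    def flee_step(dweeb, bullies, speed=1.0):
        # Move the Dweeb one step directly away from the nearest Bully.
        nearest = min(bullies, key=lambda b: math.dist(dweeb, b))
        dx, dy = dweeb[0] - nearest[0], dweeb[1] - nearest[1]
        norm = math.hypot(dx, dy) or 1.0  # avoid division by zero
        return (dweeb[0] + speed * dx / norm, dweeb[1] + speed * dy / norm)

    # Example: a Dweeb at (2, 3) fleeing the nearer of two Bullies.
    print(flee_step((2.0, 3.0), [(0.0, 0.0), (6.0, 6.0)]))

Nothing in that function models fear; it is distance arithmetic. That is the excerpt’s point about humans projecting emotions onto non-sentient things.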

If the sacrificial Dweeb action … [does] not pass the Lovelace test, what would? The answer is anything outside of what the code was programmed to do.

Here’s an example from the predator–prey swarm simulation. The Lovelace test would be passed if some Dweebs became aggressive and started attacking and killing lone Bullies, a potential action we didn’t program into the suite of possible strategies. But that didn’t happen. And because the ability of a Dweeb to kill a Bully is not written into the code, it will never happen… But remember, the AlphaGo software as written couldn’t even provide an explanation of its own programmed behavior, the game of Go.

Note: An excerpt from Chapter One is also available here, as read by Larry Nobles (October 6, 2022). A transcript is available there as well.

Additional Resources

  • Non-Computable You: What You Do That Artificial Intelligence Never Will by Robert J. Marks at Amazon
  • Robert J. Marks at Discovery.org

Podcast Transcript Download


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
