
Iron Law of Complexity: Complexity Adds But Its Problems Multiply

That’s why more complexity doesn’t mean more things will go right; without planning, it means the exact opposite. The math is scary

In “Bad news for artificial general intelligence” (podcast Episode 160), Justin Bui and Sam Haug from Robert J. Marks’s research group at Baylor University joined him for a look at how AI can go wrong — whether it’s an inconsequential hot weather story or imminent nuclear doom. Now, in Episode 161, they start by unpacking the significance of an ominous fact: When we increase complexity by adding things, we multiply the chances of things going wrong. Never mind getting an advanced machine to solve all our problems; it can’t solve its own:

A partial transcript and notes, Show Notes, and Additional Resources follow.

Robert J. Marks: I recently vetoed a family member’s suggestion that we put a lock on our home that could be opened using a cell phone app. I didn’t want it. Why? There is just too much that could go wrong. An old-fashioned key lock is simple and reliable. I was unsure about cell phone apps and haven’t had the best of luck with some of them.

Robert J. Marks

The more complex the system, the more that can go wrong. Artificial General Intelligence, or AGI, will be complex. For all the stuff it’s expected to do, it has to be complex. And as complexity increases linearly, the number of ways things can go wrong increases exponentially.

Sam, you’re the first author on a peer-reviewed paper that showed the reasoning behind this exponential explosion of contingencies. Can you explain, in as simple a way as possible, why contingencies increase exponentially as the complexity of a system increases linearly?

Let’s consider a very simple washing machine. It has two settings: it either washes the clothes for a long time or it washes them for a short time. In addition to those two settings, it has a single sensor to figure out how long it should wash the clothes. And this sensor is going to measure how heavy the load is.

Sam Haug

In this very simple example, there’s only one sensor keeping track of one variable. It yields two possible outcomes: either washing for a short time or washing for a long time. So if this washing machine is correctly designed to handle these loads well, it needs to be tested for a heavy load and a light load. And if it handles both of those scenarios correctly, then you have designed the perfect washer for the design project you have.

The assumption is that you’ll begin with a prototype, you will test that prototype to see how well it handles the contingencies that you’re expecting. And if it handles those contingencies well, you’re done with your design process. If it does not, then you’ll need to make some tweaks to make sure that it does. And so this is the framework that we use in the paper to discuss complexity of design.
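
As a rough illustration of the test-and-tweak framework described here, the following is a minimal Python sketch; the function names are hypothetical placeholders, not anything taken from the paper:

    # Minimal sketch of the design loop: test the prototype against every
    # expected contingency; if any case fails, tweak the design and retest.
    # `handles` and `tweak` stand in for real engineering work.
    contingencies = ["light load", "heavy load"]   # one binary sensor, two cases

    def design(prototype, handles, tweak):
        while True:
            failures = [c for c in contingencies if not handles(prototype, c)]
            if not failures:
                return prototype                    # handles every case: done
            prototype = tweak(prototype, failures)  # revise, then test again

The point of the sketch is that the work grows with the length of the contingency list, and that list is what explodes as sensors are added.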

Now let’s talk about a slightly more complex example. We still have the same washer; it is still only able to discern a few variables using its sensors. Now we’re going to add one additional sensor, which measures how dirty the load is, that is, its turbidity. If it is a very turbid load, it’s very dirty, so you’d need to wash it for a long time. And if it is not turbid, if the water in the washing machine is very clear, then you don’t need to wash it for as long.

And you can do this by simply putting in something like an LED light and seeing how much attenuation there is from the light to the sensor. The more turbid the water, the more attenuation there will be… Okay, go ahead.

Robert J. Marks

So here, our design is getting a little bit more complex. And now we have, instead of two possible input loads, four possible input loads. We could have a light, clear load; a light, turbid load; a heavy, clear load; or a heavy, turbid load. So now we have doubled the number of possible input loads. And we’ll begin to refer to these possible inputs as contingencies.

Sam Haug

So in order to design the perfect washer that now has two sensors, you need to test four possible loads. And if this washer correctly handles all four of those loads, then you’ve finished your job, you have designed the perfect washer… In this case, every variable you add doubles the number of contingencies that your washer will need to account for.

If we increase the number of sensors on this washing machine to 20, so that it keeps track of 20 different variables (heaviness would be one, turbidity another, and you could go through any number of other possible examples), the numbers explode. With a still very simple system with only 20 sensors (each one can only be on or off, there’s no range of inputs), there are already over a million contingencies that you would need to design your washer for, which is just incredible.
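
A quick sanity check on that doubling, as a sketch assuming on/off sensors only: n binary sensors give 2^n contingencies.

    import itertools

    # Two binary sensors (weight, turbidity) -> 2**2 = 4 contingencies.
    cases = list(itertools.product(["light", "heavy"], ["clear", "turbid"]))
    print(len(cases), cases)   # 4 combinations to test

    # At 20 on/off sensors the count is already over a million.
    print(2 ** 20)             # 1048576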

It gets even more complex when we are talking about more complex programming, such as image recognition software.

Looking at a somewhat more complex system — image recognition software, for example — one example would be the wolf and dog classification that we talked about last time, where you feed a neural network a picture of either a dog or a wolf and it tells you which it is. If you wanted to fully characterize the performance of this system, you would have to test every single combination of pixels in the image size that it’s going to be fed.

Sam Haug

So for a small 100 by 100 pixel image, that’s 10,000 pixels that you need to test. And each of those pixels has 256 gray levels and three color values, the RGB (red, green, and blue) values for each pixel. In this still relatively small design example, if you wanted to fully test the performance of any image classification software you’re designing, you would have to test it 10^29,000 times. That number is so large, it’s difficult to imagine.

As a bit of a ballpark estimate here, the number of atoms in the known universe is estimated to be around 10^80, which is an incredibly large number. But the number of contingencies with this small 100 by 100 image is just unfathomably larger than that: 10^29,000, which is just bigger than anything we could probably imagine.
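
To see where a figure like 10^29,000 comes from, one plausible reading of the arithmetic (a reconstruction, not from the podcast itself) treats each pixel as having 256 × 3 = 768 possible values, raised to the 10,000 pixels of a 100 by 100 image:

    import math

    # Number of distinct 100 x 100 images, with 768 possible values per pixel.
    states_per_pixel = 256 * 3          # 256 gray levels times 3 color values
    pixels = 100 * 100
    exponent = pixels * math.log10(states_per_pixel)
    print(f"about 10**{exponent:.0f} possible images")   # about 10**28854

That works out to roughly 10^28,854 images, which rounds to the 10^29,000 order of magnitude quoted above.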

As we say in Texas, it’s bigger in Dallas.

Robert J. Marks

That’s right.

Sam Haug

It’s just an enormous, enormous number. Now, of course, I think that testing all possible images would probably not be wise, and we’re going to talk later about how you reduce the number of contingencies by reducing the problem a little bit.

Robert J. Marks

Software engineers want to design systems like AI to avoid the problems of the contingency explosion that you just talked about. For example, with the image classifier, we wouldn’t want to do all of those 10-to-the-very-big-number tests. So what are some ways to avoid unintended contingencies?

One of the primary ways that we can mitigate the effects of the exploding contingencies is with what we call domain expertise, which is a designer’s intimate knowledge of the design that he’s creating. So for example, in the area of self-driving cars, which are extremely complex, some domain expertise might be familiarity with traffic laws and familiarity with the physics of acceleration, braking, turning, and such. Domain expertise is just ground-level knowledge of the environment that you’re going to be placing your design in.

Sam Haug

In the example of the image recognition design, some domain expertise might be recognizing that your image recognition software will not be exposed to random static noise, for example. And so it may not be as important for you to test all of the possible combinations of static noise for your image, but rather to focus on the images that will probably be presented to your design, such as pictures of wolves and pictures of dogs, and to make sure that those are classified correctly. So that’s domain expertise.
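
One way to picture how domain expertise tames the explosion, as a hedged sketch (the helper names here are hypothetical): rather than enumerating every possible pixel combination, you estimate performance from a sample of inputs the system will realistically see.

    import random

    # Exhaustive testing of ~10**28854 images is impossible. Domain expertise
    # says the classifier will only see wolf and dog photos, so we estimate
    # the error rate from a realistic labeled sample instead.
    # `classify` and `load_labeled_photos` are hypothetical placeholders.
    def estimate_error_rate(classify, load_labeled_photos, n=1000):
        photos = load_labeled_photos()         # realistic (image, label) pairs
        sample = random.sample(photos, min(n, len(photos)))
        misses = sum(1 for img, label in sample if classify(img) != label)
        return misses / len(sample)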

Okay. I use this example a lot. I used it in a podcast with Ola Hössjer and Daniel Díaz. But one of the great illustrations of the need for domain expertise is Formula 409. Have either of you heard of Formula 409?

Robert J. Marks

The cleaning solution?

Justin Bui

The cleaning solution. Okay. I asked Daniel, who is from Colombia, and I asked Ola, who is from Sweden, if they had ever heard of it. And they said, no, no, no. They must use something different. Well, the reason it’s labeled Formula 409 is that it took 409 experiments to arrive at that final result. And that required domain expertise. I’m sure it was done by chemists. I’m sure it wasn’t done by junior high students, for example. And in fact, if it had been done by a total novice, it would be called Formula 2,642,000, or something like that. So yes, domain expertise really can be used as a technique to reduce the unintended contingencies. And that’s what they did.

Robert J. Marks

We know about Thomas Edison testing thousands of different filaments when he developed the light bulb. And Nikola Tesla, who was kind of a nemesis of Edison, came along and dissed Edison. He said, “You don’t need to test all these 10,000 different combinations of filaments. If you just had a little bit of book learning, you could get this down to 100 or 200.” Because some of the things that Edison was testing, Tesla considered kind of stupid. So that’s another example of the need for domain expertise.

Another one that I like to use is WD-40, which stands for water displacement, perfected on the 40th try. And this was done by an industrial chemist; I think his name was Larsen. And if he had not had domain expertise, we would be using WD-5 Million or something like that…

Robert J. Marks

Note: According to the company that makes it, WD-40 literally stands for “Water Displacement, 40th attempt.” That’s the name straight out of the lab book used by the chemist who helped develop WD-40 back in 1953. Norman Larsen was attempting to concoct a formula to prevent corrosion, a task which is done by displacing water. Norm’s persistence paid off when he perfected the formula for WD-40 on his 40th try…

WD-40 was first used to protect the outer skin of the Atlas Missile from rust and corrosion. When it was discovered to have many household uses, Larsen repackaged WD-40 into aerosol cans for consumer use and the product was sold to the general public in 1958.

Two of the craziest purposes for WD-40 include a bus driver in Asia who used it to remove a python which had coiled itself around the undercarriage of his bus and police officers who used WD-40 to remove a naked burglar trapped in an air conditioning vent. ThoughtCo (November 17, 2019)

Justin, do you have any thoughts on this?

Robert J. Marks

The addition of subject matter expertise really does reduce the complexity of things. A lot of my research [involves] image recognition and classification. Some of the techniques that are implemented in a lot of the larger scale systems deal a lot more with traditional computer vision techniques: histogram correction, color matching and correction, image resizing, and so on. The more you can do on the front end, on the pre-processing side of things, the simpler your AI system can be. And it aligns quite well with some of the increasing complexities that you all have documented in your paper.

Justin Bui

Fascinating stuff. I think I’ve learned from you, Justin, that there’s a lot of standardization of the AI. In other words, there’s a sort of conformity that is used in order to sculpt the input to deep learning so you don’t have to consider so much. Is that fair to say?

Robert J. Marks

I think so. In fact, my intuition and my gut feel say that’s where a lot of the subject matter expertise is actually best used. Let’s keep running with the example of an image classifier. If you can get the best possible, most standard-looking data, the cleanest, most precise data, the development of your AI system will be that much simpler. You can reduce the impacts of noise, color mismatch, and lighting variations all in the input pipeline, meaning that you can minimize and optimize the implementation of your AI system.

Justin Bui
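
As an illustration of the kind of front-end standardization Bui describes, here is a sketch using OpenCV; the particular steps and sizes are assumptions for illustration, not anything specified in the podcast:

    import cv2
    import numpy as np

    def standardize(image_bgr: np.ndarray) -> np.ndarray:
        """Normalize an input image before it reaches the classifier, so the
        model itself need not cope with size, lighting, or contrast variation."""
        img = cv2.resize(image_bgr, (100, 100))        # fixed input size
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # remove color variation
        gray = cv2.equalizeHist(gray)                  # histogram correction
        return gray.astype(np.float32) / 255.0         # scale to [0, 1]

Every variation removed in this pipeline is a family of contingencies the downstream model no longer has to be designed and tested for.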

Yes. So standardization reduces the contingencies by decreasing the complexity of the problem that you’re trying to solve. Sam, there are some other obstacles facing development of complex AI. We talked about, for example, Watson repeating an incorrect answer in Jeopardy. And those are covered very interestingly by a quote made popular by Donald Rumsfeld. I’d like you to talk about that for a second…

Robert J. Marks

Next: The knowns, the unknowns, and the Pareto tradeoffs

Here are Parts 1 and 2 of Episode 159, featuring Robert J. Marks and Justin Bui: If not Hal or Skynet, what’s really happening in AI today? Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that much more innovation now depends on who gets to the finish line first. Marks and Bui think that will spark creative competition.

and

Have a software design idea? Kaggle could help it happen for free. Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software. Computer engineer Justin Bui discourages “keyboard engineering” (trying to do it all yourself). Chances are, many solutions already exist at open source venues.

In Episode 160, Sam Haug joined Dr. Marks and Dr. Bui for a look at what happens when AI fails. Sometimes the results are amusing. Sometimes not. They look at five instances, from famous but trivial right up to one that nearly ended the world as we know it. As AI grows more complex, risks grow too.

In Episode 161, Part 1, Marks, Haug, and Bui discuss the Iron Law of Complexity: Complexity adds but its problems multiply. That’s why more complexity doesn’t mean more things will go right; without planning, it means the exact opposite. They also discuss how programmers can use domain expertise to reduce the numbers of errors and false starts.

and

In Part 2 of Episode 161, they look at the Pareto tradeoff and the knowns and unknowns:
Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background with some uncertainty. Constraints underlie any engineering design — even the human body.

Show Notes

  • 01:10 | Introducing Justin Bui and Sam Haug
  • 01:28 | Exponential Explosion of Contingencies
  • 08:20 | Avoiding Contingency Explosions
  • 08:43 | Domain Expertise
  • 13:09 | Standardization of the AI
  • 15:15 | Four Types of Knowns
  • 19:43 | Human Contingencies
  • 22:25 | Pareto Trade-Off
  • 24:41 | Domain Expertise and Forecasting

Additional Resources

Podcast Transcript Download

