
The Pareto Tradeoff — Choosing the Best of a Mixed Lot

Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background with some uncertainty

In the first part of podcast Episode 161, “Bad news for artificial general intelligence”, Robert J. Marks and colleagues Justin Bui and Sam Haug from his research group at Baylor University looked at a fundamental reality of complex systems: Complexity adds but its problems multiply. More advanced AI would be faster but capable of bigger and more complex goofs. That leads to the world of knowns and unknowns and the Pareto tradeoffs that enable us to make decisions about artificial intelligence. So now Dr. Marks begins by asking about the late Donald Rumsfeld‘s notion of the knowns and unknowns:

This portion begins at 15:15 min. A partial transcript, Show Notes, and Additional Resources follow.

Sam Haug: This quote is from former Secretary of Defense Donald Rumsfeld: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

Robert J. Marks: The funny part is it sounds like double talk if you read it real quickly. But if you sit down and examine it, it’s really meaningful and applicable to the sort of thing that we’re considering.

Sam Haug: Yes. We’ve lumped them into four categories. One of them is the known knowns. These are the tests that we have conducted on our design and evaluated the result. We’re very sure that these are correct knowns, because we’ve actually done the testing, we’ve seen how it performs, and there’s not much more to know.

The next would be the known unknowns. These are the tests that we have not conducted. And we know that we have not conducted these tests. So we are aware of our lack of knowledge in these particular environments and circumstances.

Another type of unknowns is the unknown knowns — things that should be obvious, but have been overlooked by the designer. Going back to some of our examples from the previous podcast, the example of IBM Watson repeating an incorrect answer given by a human contestant would be one of those unknown knowns. The designer, watching the contest, would give themselves a facepalm because they know they should have foreseen this particular contingency, but they didn’t. These are contingencies that are obvious but just have not been included.

The final classification is the unknown unknowns. These are the most troubling situations and circumstances: even a designer with expertise in the domain did not foresee the possible outcome. This would be, for example, self-driving cars attempting to classify plastic bags that are moving rather than stationary. The designers probably would not facepalm if their car encounters a flying plastic bag that it is unable to classify correctly, because they didn’t foresee that. And it’s not something extremely obvious that they should have foreseen; it was something that just couldn’t have been foreseen, even by a designer with domain expertise.

Robert J. Marks: Fascinating. I guess the unknown unknowns are really a big problem. So I think in the Oko example, where sunlight reflecting off clouds was interpreted as incoming missiles — that was probably an unknown unknown that wasn’t even considered in the design of Oko, which is unfortunate.
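To make the four categories concrete, here is a minimal sketch (not from the podcast; the predicates "tested", "anticipated", and "obvious_in_hindsight" are labels of our own) that sorts a contingency into the four bins as they apply to testing a design:

```python
from enum import Enum

class Category(Enum):
    KNOWN_KNOWN = "tested, and the result evaluated"
    KNOWN_UNKNOWN = "identified as a test, but not yet conducted"
    UNKNOWN_KNOWN = "obvious in hindsight, but overlooked by the designer"
    UNKNOWN_UNKNOWN = "not foreseeable even with domain expertise"

def classify(tested: bool, anticipated: bool, obvious_in_hindsight: bool) -> Category:
    """Sort a contingency into the four categories discussed above."""
    if tested:
        return Category.KNOWN_KNOWN      # we ran the test and saw how it performs
    if anticipated:
        return Category.KNOWN_UNKNOWN    # on the test list, so we know what we don't know
    if obvious_in_hindsight:
        return Category.UNKNOWN_KNOWN    # the facepalm case: Watson repeating a wrong answer
    return Category.UNKNOWN_UNKNOWN      # the moving plastic bag, or sun glare off clouds

# Watson repeating a contestant's wrong answer: never tested, never anticipated,
# but obvious once it happened.
print(classify(tested=False, anticipated=False, obvious_in_hindsight=True))
# -> Category.UNKNOWN_KNOWN
```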

But what about the design of human beings?

Robert J. Marks: Here’s a counterargument: We see highly complex systems that operate reliably. An example of that is you and me. We’re human beings, we are put together, we are very complex, but we still seem to work well. Why? What is going on here?

Sam Haug: I definitely agree that human beings are extremely complex and extremely well made. I personally believe that this is because humans were created by a creator with an extremely large depth of domain expertise, who is able to…

Robert J. Marks: That is a great phrase, Sam. I appreciate that. Our creator has a deep — how did you put it? — a deep knowledge of domain expertise. That’s great. That’s funny.

Sam Haug: Our designer doesn’t just have expertise in the domain; he created the domain. And he has an infinite depth of foresight and predictiveness, where he is able to design these incredibly complex systems, foresee all possible events that they will ever encounter, and design a human being who is able to overcome and adapt to a lot of these circumstances.

Robert J. Marks: Even so, I’m thinking of the design of human beings. We’re still not perfect. I don’t know if there are unintended contingencies or not, but take things like COVID, for example. We weren’t designed to handle COVID, especially old people like me, or even something similar, like eating hemlock, the way that Socrates was killed. We also see birth defects, diseases such as cancer, and things of that sort. Isn’t this an example of contingencies which we would prefer not to see in the design of humans?


Note: The great philosopher Socrates (470–399 BC) drank hemlock after being condemned for corrupting young people by encouraging them to ask too many questions.

Sam Haug: The way I like to think about how human beings fail in certain circumstances falls into two categories. The first category is that our creator intentionally did not design us to withstand this particular contingency. When designing a human being or any incredibly complex system, there are some design trade-offs. You can design a human being to be able to resist the effects of eating hemlock, for example, but the cost for doing that may be large.

For example, you would need to include an entirely new metabolic pathway to account for that particular poison. And doing that for any number of poisons may just not be feasible in the size of the human body. I don’t claim to know all the design implications of making a human being, but I’m sure that there was some level of intentionality in not designing human beings to withstand some things, for trade-off reasons.

And then the other category of things in which humans fail — or the human design does not withstand — would be due to the Fall. I believe in the God of the Bible, who designed us perfectly, and we sinned and fell. As a result of that Fall, the perfect design that God had made was corrupted. And for all of the contingencies that he had foreseen, some of the mitigating factors to avoid or overcome those contingencies may have been affected by the corruption of the Fall. That is where I think diseases and things of that nature come from, because I don’t believe that those were intended pre-Fall.

And now… the Pareto tradeoff

Robert J. Marks: Whatever the cause, we do have something in design — engineers know this — called a Pareto trade-off. This is a trade-off between competing measures of performance.

I worked my way through my master’s degree as a disc jockey. And one of the things we used to do is cut commercials. Sometimes the copy for the commercials came from the sponsors.

And we had one — I remember it because it’s so hilarious. It was a place called Charlie’s Fish Market. At the time, there was an explosion in the price of meat like pork and beef. The copy was

good meat ain’t cheap and cheap meat ain’t good. So eat fish.

Robert J. Marks: That was the ad for Charlie’s Fish Market, and it illustrates a Pareto trade-off. In our world now, there’s a trade-off in performance.

I’ll give you an example with cars: Safe cars aren’t cheap and cheap cars aren’t safe. That’s just like Charlie’s Fish Market. You have to do a Pareto trade-off between a cheap car and a safe car. If you want a safe car, drive around in a Humvee that has extra armor plating. And if you want to go cheap, get a little scooter and don’t wear a helmet. But you have this entire gamut.

The Pareto trade-off says, “For a certain price, there’s the best safety that you can get in a car.” I think, if the only criteria for buying a car were safety and price, and if you’re like me, you would set the price and then see the maximal safety that you can get. This is inherent in design, at least the design that we experience today. I agree with you, Sam; I don’t think it was applicable before the Fall. But certainly, today it is. So this is something that we are certainly stuck with. Okay, any final thoughts?
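As a hypothetical illustration of that buying rule (the cars and numbers below are invented, not from the episode), here is a short Python sketch: first discard every car that some other car matches or beats on both price and safety, then, with the budget fixed, take the safest car that remains.

```python
# Invented (price, safety-score) pairs; higher safety is better.
cars = {
    "scooter":   (1_000, 1.0),
    "hatchback": (18_000, 6.0),
    "sedan":     (25_000, 7.5),
    "lemon":     (26_000, 5.0),   # dominated: costs more than the sedan and is less safe
    "humvee":    (90_000, 9.5),
}

def pareto_front(options):
    """Keep only undominated options: no other car is at least as cheap
    and at least as safe while differing in price or safety."""
    return {
        name: (price, safety)
        for name, (price, safety) in options.items()
        if not any(
            p <= price and s >= safety and (p, s) != (price, safety)
            for p, s in options.values()
        )
    }

def safest_under_budget(options, budget):
    """Marks's rule: set the price first, then take the maximal safety you can get."""
    affordable = {n: v for n, v in pareto_front(options).items() if v[0] <= budget}
    return max(affordable, key=lambda n: affordable[n][1]) if affordable else None

print(sorted(pareto_front(cars)))         # the 'lemon' drops out
print(safest_under_budget(cars, 30_000))  # -> 'sedan'
```

The survivors form the Pareto front: along it, the only way to gain safety is to pay more, which is exactly the Charlie’s Fish Market logic.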


Note: Vilfredo Pareto (1848–1943) was an Italian economist and sociologist who is known for his applications of mathematics to economic analysis.

Domain expertise and the design process

Sam Haug: I have just a little bit more on how domain expertise can help in the design process.

I did mention that domain expertise can be used to kind of reduce the number of tests that you need to perform. There are some circumstances where you don’t really care how your design performs because you don’t expect it to be put in that circumstance. But another way that domain expertise can help in the design process is by forecasting what the result of a test would probably be.

This saves a lot of time in doing the actual physical testing because the designer is able to very quickly look at an environment and say, well, I know that it will perform well there or I know that this particular aspect of the environment will cause it to perform poorly. The designer has enough domain expertise to know how it would perform.
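One way to picture that use of domain expertise (a sketch with invented numbers, nothing from the episode) is to attach an expert-predicted probability of passing to each test scenario and spend the physical-testing budget only where the expert is least certain:

```python
# Expert's predicted probability that the design passes each scenario (invented numbers).
predictions = {
    "dry pavement, daylight": 0.99,   # expert is confident it performs well: skip the test
    "heavy rain at night":    0.70,
    "blowing plastic bag":    0.50,   # a coin flip to the expert: test this first
    "stationary plastic bag": 0.90,
    "sun glare off clouds":   0.55,
}

def tests_to_run(preds, budget):
    """Run the physical tests where domain expertise forecasts least reliably,
    i.e. where the predicted outcome is closest to 50/50."""
    by_uncertainty = sorted(preds, key=lambda scenario: abs(preds[scenario] - 0.5))
    return by_uncertainty[:budget]

print(tests_to_run(predictions, budget=2))
# -> ['blowing plastic bag', 'sun glare off clouds']
```

Everything the expert can forecast confidently is settled without a physical test, which is the time saving Haug describes.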

Robert J. Marks: You design, you test, and then you redesign. That’s the reason we talked about WD-40 and Formula 409; it was an iterative loop. Not only does the design have to be there for AGI; the software engineer has to know what they’re doing. But there is also intelligent testing, where you go out and test the AGI and then do variations in order to improve it as you find out the different places where it works.

Justin Bui: To build on that too, writing the testing and verification is its own area of subject matter expertise, I think one that’s often overlooked. It’s funny; to give everybody an example of subject matter expertise: I ordered a new Bronco last year…

Robert J. Marks: The car, not the horse.

Justin Bui: Yeah. The horse probably would have shown up by now. But it’s very interesting, because if you’ve followed along with the release of that vehicle, they had a roof issue with all the hardtops. It turns out that they decided to replace all of the hardtops that were built or previously issued up to, I believe, August. And when you observe what happened and how it got to that scenario, it turns out that they had some type of QC [quality control] lapse that permitted faulty hardware to get into the loop.

And you think about it: well, from a testing perspective or a verification perspective, that’s something that should have been caught. But maybe they just didn’t know what to look for. It ties very well into the testing expertise for an AGI system. We talk about the known knowns, the known unknowns, and the unknown unknowns being kind of major hurdles. And the unknown unknowns are the most dangerous kind, because we don’t know that we don’t know them.

It’s one of those things where, when you start looking at verifying a system, you could almost argue that it requires more expertise than developing it, in some cases. And so I think that’s going to be a topic that you see more and more of as we continue to dive into these areas and as more and more AI systems are deployed in the real world. You get these scenarios, like the Uber self-driving car that struck and killed a pedestrian, and pretty much every engineer is probably sitting there saying, well, okay, what are the circumstances that could have led to this? It’s such a complex system, with so many different subject matter expertise requirements, that when you look at it in an unbiased light, it’s quite a bit to overcome.

And so I think, to tie things together, AGI is becoming less general and more specific. And I think that’s the direction we’ll see a lot of this head in the foreseeable future — a lot more specificity, a step away from general application.


Here are Parts 1 and 2 of Episode 159, featuring Robert J. Marks and Justin Bui: If not Hal or Skynet, what’s really happening in AI today? Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that much more innovation now depends on who gets to the finish line first. Marks and Bui think that will spark creative competition.

and

Have a software design idea? Kaggle could help it happen for free. Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software. Computer engineer Justin Bui discourages “keyboard engineering” (trying to do it all yourself). Chances are, many solutions already exist at open source venues.

In Episode 160, Sam Haug joined Dr. Marks and Dr. Bui for a look at what happens when AI fails. Sometimes the results are amusing. Sometimes not. They look at five instances, from famous but trivial right up to one that nearly ended the world as we know it. As AI grows more complex, risks grow too.

In Episode 161, Part 1, Marks, Haug, and Bui discuss the Iron Law of Complexity: Complexity adds but its problems multiply. That’s why more complexity doesn’t mean more things will go right; without planning, it means the exact opposite. They also discuss how programmers can use domain expertise to reduce the number of errors and false starts.

and

In Part 2 of Episode 161, they look at the Pareto tradeoff and the knowns and unknowns:
Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background with some uncertainty. Constraints underlie any engineering design — even the human body.

Show Notes

  • 01:10 | Introducing Justin Bui and Sam Haug
  • 01:28 | Exponential Explosion of Contingencies
  • 08:20 | Avoiding Contingency Explosions
  • 08:43 | Domain Expertise
  • 13:09 | Standardization of the AI
  • 15:15 | Four Types of Knowns
  • 19:43 | Human Contingencies
  • 22:25 | Pareto Trade-Off
  • 24:41 | Domain Expertise and Forecasting

Additional Resources

Podcast Transcript Download

