Dr. Paul Werbos calls it “a soap opera you wouldn’t believe”: the story of how a young Werbos was inspired by the pioneering computer scientist Marvin Minsky to pursue the development of artificial neural networks, and how Minsky later declined to support the effort, convinced that its many problems had no solution.
In this week’s podcast, Dr. Robert J. Marks interviewed Dr. Paul Werbos, famous for his 1974 dissertation, which proposed training artificial neural networks through backpropagation of errors. The two discuss Werbos’s journey in the development of artificial neural networks and the role Marvin Minsky played throughout.
This portion begins at 04:25. A partial transcript, Show Notes, and Additional Resources follow.
Robert J. Marks: Could we start, could you give a high-level, nutshell overview of your algorithm, error backpropagation, which is the dominant algorithm, used 99.9% of the time, for training artificial neural networks?
Paul Werbos: So backpropagation really came from me trying to understand how brains work and how you could build something that works like a brain. And when I was growing up, I read a lot of books I was excited by. There’s a book called Computers and Thought, which was the start of the whole artificial intelligence world. And believe it or not, I was inspired by a chapter by Marvin Minsky who said we could build human-like intelligence by using something he called reinforcement learning. And I said, “Wow. It would be nice to build something that can do it.”
And then later I met Minsky, and he said, “Nah, that idea never worked. I couldn’t figure out how to do it.” Nobody could figure out how to do it. And I said, “I can figure out how to do it,” because I knew the math, and a lot of these people were glorified hackers, they were looking at themselves in the mirror and how proud they were, how clever they were. And they didn’t go to the math.
Robert J. Marks: Well, I call these people keyboard engineers. They just sit down and they go to the keyboard and that’s where they try to get all their answers, as opposed to understanding the underlying, deeper mathematics.
Paul Werbos: That’s exactly the key thing. We need to understand the math to get it right. And so I spent a lot of time reading John von Neumann, and von Neumann had a lot of really good thoughts about how to do it. And I was amazed people didn’t follow up on some of these thoughts. So I decided, well, okay, I’ll take the mathematical approach. I’ll solve these mathematical problems. Here’s how to do it. And believe it or not, reinforcement learning was the first thing, how to come up with a system that could learn to act and achieve goals.
And then I realized, okay, to make this work, I need to have a subsystem that learns how the world works, and that’s what they now call backpropagation. But that backpropagation, which I developed, was actually part of a larger design for intelligent systems.
And in 1972, I presented that to my Harvard PhD thesis committee, and I said, “Okay, this is what I want to do my PhD thesis on.” I actually posted that thesis proposal in a weird place called viXra, and there it is, from 1972. Here’s how to use backpropagation to learn how the world works dynamically over time, how to use that as part of an optimal decision system, and here’s how it fits the brain. And when I presented that to the Harvard faculty, their response was, “There’s enough material here for a thesis. In fact, maybe there’s too much. How about you take a little piece of it, the piece we can understand, and write your thesis on that piece.” So I said, “Okay.” So they didn’t believe backpropagation at first.
Robert J. Marks: Well, in fact, you talked about Marvin Minsky and I think he was one of the people that did not like neural networks.
Paul Werbos: That’s true.
Robert J. Marks: He wrote a classic book with Papert called Perceptrons, which largely killed funding for neural networks and ended one of the early waves of neural network research. So it’s interesting that your inspiration came from Marvin Minsky, who didn’t like that. He liked the rule-based way of looking at artificial intelligence.
Paul Werbos: Well, he started out believing in reinforcement learning and maybe he even believed in neural nets, but he couldn’t make it work. He couldn’t find anyone who could make it work. And then he said, “Okay, we’ll play with something else. If I can’t do it, it must be impossible.” That was his basic problem.
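For readers curious what the algorithm under discussion actually does, here is a minimal sketch of error backpropagation in pure Python. It is an illustrative modern textbook version, not Werbos’s original 1974 formulation; the tiny network size, the XOR task, and all the names and hyperparameters below are our own choices for the example. The key idea is the backward pass: the chain rule assigns each weight its share of the output error, which is then used to adjust the weight.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w_h, w_o, x):
    """Forward pass through a 2-input, one-hidden-layer, 1-output net."""
    xi = x + [1.0]                                   # append bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, xi))) for row in w_h]
    y = sigmoid(sum(w * v for w, v in zip(w_o, h + [1.0])))
    return xi, h, y

def train(data, hidden=3, lr=0.5, epochs=5000, seed=0):
    rng = random.Random(seed)
    # Hidden weights: one row per hidden unit (2 inputs + bias);
    # output weights: one per hidden unit + bias.
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    for _ in range(epochs):
        for x, t in data:
            xi, h, y = forward(w_h, w_o, x)
            # Backward pass: propagate the output error back through
            # the chain rule ("backpropagation of errors").
            d_y = (y - t) * y * (1 - y)
            d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j, v in enumerate(h + [1.0]):
                w_o[j] -= lr * d_y * v
            for j in range(hidden):
                for k in range(3):
                    w_h[j][k] -= lr * d_h[j] * xi[k]
    return w_h, w_o

# XOR: the classic task a single perceptron cannot learn,
# but a multilayer net trained by backpropagation can.
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w_h, w_o = train(xor)
error = sum((forward(w_h, w_o, x)[2] - t) ** 2 for x, t in xor)
print(f"squared error after training: {error:.4f}")
```

XOR is a fitting demonstration here, since it is exactly the kind of problem Minsky and Papert showed a single-layer perceptron could not solve, while a multilayer network trained this way can.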
Show Notes

- 01:19 | Introducing Dr. Paul Werbos
- 02:33 | Werbos’s error backpropagation algorithm
- 07:39 | Marvin Minsky and neural networks
- 10:34 | recurrent neural networks and feedback
- 12:34 | Harvard & MIT’s Cambridge Project
- 13:36 | Werbos’s “flash of genius”
- 18:36 | Pushback from the PhD committee
- 22:16 | Dynamic feedback
- 26:01 | Does a version of error backpropagation occur in the human brain?
- 29:24 | Changing the game
Additional Resources

- Dr. Paul Werbos at IEEE.org
- Paul Werbos’s website
- Paul Werbos’s PhD dissertation introducing error backpropagation, used today to train artificial neural networks
- Talking Nets: An Oral History of Neural Networks at Amazon.com
- Paul Werbos’s 1972 Proposal to Harvard for Backpropagation and Intelligent Reinforcement System
- Artificial Intelligence in the Age of Neural Networks and Brain Computing on Amazon.com
- NSF Award granted to Andrew Ng and Yann LeCun
- “Mind, Brain and Soul From the Viewpoint of Mathematical Realism” at www.werbos.com
- Computers and Thought on Amazon.com
- John von Neumann, Hungarian-American mathematician, physicist, and computer scientist
- Perceptrons: An Introduction to Computational Geometry by Marvin Minsky and Seymour A. Papert on Amazon.com
- Paul Erdős, Hungarian mathematician
- Alonzo Church, American mathematician
- Alan Turing, British mathematician and logician
- Jean-Baptiste Fourier, French mathematician and physicist
- György Buzsáki, Biggs Professor of Neuroscience at New York University School of Medicine
- Barry Richmond, Principal Investigator at NIH