Mind Matters Natural and Artificial Intelligence News and Analysis
Image Credit: shchus - Adobe Stock

Are We Now Giving Nobel Prizes to Cheerleaders?

Even though both John Hopfield and Geoffrey Hinton are brilliant researchers, there were much more obviously deserving recipients

The 2024 Nobel Prize in Physics has been awarded to two neural network researchers, John Hopfield and Geoffrey Hinton.

I confess I first lost respect for the Nobel Prize when Barack Obama won the Peace Prize in 2009. He had only been President for nine months. The status of the prize was cheapened in the eyes of many. I recall chuckling at a contemporary gas station sign, “Free Nobel Prize With Oil Change.” To the Committee’s credit, the Nobel Prize Secretary in 2015 expressed regret for the award.

Donald Trump recently said that he would have gotten the Nobel Peace Prize for the Abraham Accords if his name were Obama and not Trump. This is closer to the truth than not.

Here’s why I am concerned about this year’s Nobel Prize in Physics: Even though both John Hopfield and Geoffrey Hinton are brilliant researchers, there were much more obviously deserving recipients. The deserving neural network recipients, as I will describe, are Paul Werbos and Bernie Widrow, both of whom invented foundational AI tools in use today. Hopfield was a popularizer of neural networks, and Hinton skillfully applied the inventions of others.

Here are some details.

The Hopfield Neural Network

Let’s first talk about John Hopfield’s neural network. In my 2022 book Non-Computable You, I have a section titled “Hopfield and His Neural Network of Little Worth.” The amazing AI of today, including deep fakes and transformer models like ChatGPT, does not make use of Hopfield neural networks.

I published papers about the Hopfield neural network during its popularity, but soon became disillusioned with its performance. For one thing, Hopfield’s neural network did not scale well. An example of scaling in humans is a 100-pound man doubling in height. His weight does not double; it increases eightfold, to 800 pounds.1 Humans are designed to be roughly 5 to 6 feet tall and to weigh between about 100 and 200 pounds. Outside of this range, humans begin to scale poorly. That’s why overly tall, heavy men have knee problems and elephants can’t jump.

Hopfield’s proposed neural network scaled poorly. If there are n artificial neurons, there are about n² interconnects among neurons. So, if the number of neurons increased from 10 to 100, the number of neural interconnects would increase from about 100 to 10,000.

Hopfield’s proposed neural network scaled poorly in another way. Doubling the number of neurons in Hopfield’s neural network fell short of doubling its performance. Scaling issues limit the use of Hopfield’s original neural network to problems of moderate size.
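The quadratic interconnect growth described above is easy to see in code. Here is a minimal sketch of a classic Hopfield network (my own illustration, not drawn from Hopfield’s papers): n binary neurons, fully connected, so the weight matrix holds n² entries.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W is the sum of outer products of stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Update neurons until the state settles into a stored pattern."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one 8-neuron pattern, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # flip one bit
restored = recall(W, noisy)   # recovers the stored pattern
print(W.size)                 # 64 = 8² interconnect weights
```

Doubling the neurons from 8 to 16 quadruples the weight matrix from 64 to 256 entries; that is the scaling burden described above.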

I confess that I have not followed the development of Hopfield’s neural network in recent years. Hopfield’s papers have been cited over 90 thousand times and I have not perused even a fraction of these papers. Possibly the limitations I cite have been overcome. Even so, the Hopfield neural network applications have not had anywhere near the impact of the AI we see today.

So why did John Hopfield receive the Nobel prize? As I write in Non-Computable You, “With evangelistic zeal, Hopfield spoke loudly and frequently about what is today known as Hopfield neural networks.”

Hopfield was an evangelist for his work and attracted a lot of followers, including me. His popularity was apparently sufficient for someone to enthusiastically nominate him for the Nobel Prize.

Geoffrey Hinton

Geoffrey Hinton, the co-recipient of the Nobel Prize in Physics, is a gifted researcher. He popularized the application of error backpropagation, so-called deep learning, and convolutional neural networks (CNNs). Although Hinton did great things with CNNs, his highly cited 2012 paper cites five other CNN papers by other authors. He is not the creator of CNNs. My colleagues and I used convolution in neural networks in the late 1980s, work mistakenly referenced by some as a CNN.

Error backpropagation is a mathematically beautiful neural network training algorithm used to train today’s deep fake generators and transformers like ChatGPT. Hinton used but did not invent error backpropagation. Harvard PhD student Paul Werbos did so in 1974. Hinton mistakenly claims that his frequent coauthor, David Rumelhart, was the inventor.
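For readers curious what error backpropagation actually does, here is a toy sketch (my own illustration, not Werbos’s original notation): the chain rule pushes the output error backward through a small two-layer network, producing the gradient used to adjust each weight.

```python
import numpy as np

def train_xor(steps=5000, lr=0.5, seed=0):
    """Train a tiny 2-4-1 network on XOR with backpropagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(steps):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        losses.append(float(np.mean((out - y) ** 2)))
        # backward pass: the error derivative flows layer by layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent weight updates
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return losses

losses = train_xor()
print(losses[0], losses[-1])  # squared error shrinks as training proceeds
```

The same backward sweep of the chain rule, scaled up to billions of weights, is what trains the transformers mentioned above.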

I am unable to find who first dubbed Hinton the “Godfather of AI.” The title ignores the monumental contributions of AI patriarchs like Marvin Minsky, Bernie Widrow, Frank Rosenblatt and John McCarthy.2

Who Should Have Won the AI Nobel Prize?

Cheerleaders work hard at promotion, but should not be given championship trophies. Here’s an analogy from science: Einstein’s relativity theory is used in both GPS calibration and cancer-treating proton therapy. But Nobel prizes in physics should go not to those who apply relativity, but to the person who discovered relativity.3

Paul Werbos in 1991 by Rolf Kickuth
Own work, CC BY-SA 4.0

From this perspective, the obviously better candidates for the AI Nobel prize are Bernie Widrow and Paul Werbos. Both are still living — a requirement for receiving the award.

Paul Werbos: The error backpropagation training algorithm was first reported in 1974 by Werbos in his Harvard PhD dissertation.4 I interviewed him about his invention. I suspect that error backpropagation would make the list of the top ten most commonly used algorithms in the world. When I asked Paul Werbos whether he agreed, his response was simply “Easily.” Here is a transcript of the interview.

The Nobel Prize for relativity should have been given to Einstein who discovered it. The Nobel prize for AI should likewise be given to the discoverer of the training algorithm used to train almost all of the amazing AI we see today.

Bernie Widrow: Stanford’s Bernard (Bernie) Widrow trained neural networks in the late 1950s and early 1960s. He trained neural networks to

  • Forecast weather better than the local weatherman,
  • Optimally play the game Blackjack (a.k.a. 21), and
  • Do voice recognition.

Because computers are better today, more sophisticated methods can be applied. But Widrow was the first. He was the motivating genius behind much of the AI we see today.

Luckily, we can offer a fun black and white video of young Professor Widrow presenting some of the applications of his neural networks.

Never Hesitate to Nominate

Nobel prizes are given through a process of nomination. I have a colleague who brags about being nominated for the Nobel prize numerous times. Although being nominated itself means little, you can’t hit a bullseye without throwing a dart.

I wish I had known that the Nobel committee thought that neural network algorithms were physics. I still have problems seeing the connection. Had I known, I would have nominated Paul Werbos and Bernie Widrow for the awards. If the award committee gave my nominations close scrutiny, Widrow and Werbos would have won the Nobel prize for AI.

Notes

1 For scaling in humans, think of a cube one inch on a side. If each side is doubled to two inches, the cube has eight times the volume (2³ = 8).

2 McCarthy coined the term “artificial intelligence” in the mid-1950s.

3 Interestingly, Einstein received a Nobel prize, but not for relativity. This is a drastic oversight on the part of the Nobel prize committee.

4 Paul J. Werbos, “Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences,” (PhD diss., Harvard University, 1974)
