Hinton and Hopfield Win Nobel Prize in Physics
Hinton warns against neglecting AI safety

Geoffrey E. Hinton, a pioneer in the field of artificial intelligence who helped develop “neural networks,” has been awarded the 2024 Nobel Prize in Physics alongside John J. Hopfield. The two scientists won the award for their groundbreaking work in machine learning, which paved the way for a revolutionary new way to use computers.
Hinton attracted attention just last year when he departed from Google and started warning the public about the potential dangers of new AI systems. He likened the AI revolution to the Industrial Revolution of the nineteenth century, except that this time it won’t be our physical capacities that get trumped by the machines, but our intellects. The New York Times reports,
“It will be comparable with the Industrial Revolution,” he said. “Instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”
The fallout of AI hype has already proved substantial. Everything from freshman composition papers to whole magazines turning out ChatGPT plagiarism owes itself to the neural networks Hinton helped make possible some fifty years ago. Artists and writers alike have banded together, sued, and decried the intrusion of AI-generated content into their creative territory. In short, the concerns over intellectual and artistic copyright have merit.
Well-Earned Authority?
Also according to The New York Times, Hinton thinks that winning the Nobel will cause people to listen more intently to what he says, including his warnings.
With every technological invention come possibilities and dangers; the question now is whether we’ve fully understood the range of AI’s impacts and whether we will take the moral responsibility to use it well. As for Hinton’s concern about machines overtaking human intellect, the philosophy of mind community needs to have some deep talks about what it means to have an “intellect” in the first place. Is mental reasoning comparable to a computational algorithm? Is the human brain that deterministic? Or does the long-contested element of free will shape how we understand intelligence?
Altman Cares More About the Money, Says Hinton
Hinton also made a short video thanking the former students who helped him on his journey toward the Nobel, and he took the time to call out Sam Altman, CEO of OpenAI, for caring less about AI safety (which Hinton said OpenAI was originally founded to promote) and more about profits. Last November, Altman was ousted from his post by the board, only to be reinstated just days later.
Kylie Robison writes at The Verge,
OpenAI launched with a famously altruistic mission: to help humanity by developing artificial general intelligence. But along the way, it became one of the best-funded companies in Silicon Valley. Now, the tension between those two facts is coming to a head.
The Battle Continues
According to Robison’s report, Hinton’s suspicions of Altman are merited: three of the company’s top employees have departed in the year since Altman’s temporary termination. “So what is OpenAI becoming?” Robison continues.
All signs point to a conventional tech company under the control of one powerful executive — exactly the structure it was built to avoid.
The AI battle will continue to pit Big Tech’s penchant for money grubbing against the need for regulation, caution, and, in some cases, moral resistance. When AI startups allow dead people to be turned into chatbots, as happened recently through Character.AI, it’s time to start wondering how on earth such ethical violations can be avoided. Can they be, given the technology and its lack of boundaries? Even if Hinton warns and warns about the tech he helped create, will it make the AI giants in Silicon Valley care?