
OpenAI disbands team that was preparing for human-level AI


Miles Brundage, the lead of OpenAI’s AGI Readiness team, has resigned, warning that “neither OpenAI nor any other frontier lab is ready, and the world is also not ready” for human-level AI (AGI). He plans to start an outside group to lobby for various changes and protections. Meanwhile, his team is being reassigned.

How to understand all this?

To start with, his is not the only high-profile exit from OpenAI; the company is realigning toward a more profit-oriented and less research-oriented focus. But if the human-level AI threat is as serious as Brundage seems to think, it is unclear why the readiness team is being disbanded rather than reinforced.


Is human-level AI really that big a threat? If Bill Dembski’s recent foray into getting OpenAI’s chatbot, ChatGPT-4o, to rewrite James Joyce in the manner of James Joyce is any guide, the literary world, at least, is not at much risk.

It’s reasonable to think that the problems Dembski describes are general and may be fundamental to the way chatbots operate. Many industry chiefs may sense as much but don’t want to go there.

At his Substack, freelance journalist Garrison Lovely tells us something else that may be of interest:

I think the next major fault line in the AI debates will be posture toward China. Altman, Anthropic CEO Dario Amodei, and former OpenAI researcher Leopold Aschenbrenner have prominently advanced varieties of the idea that democracies must ‘prevail’ over China in a race to build superhuman AI systems.

Others, like Brundage, prefer a cooperative approach to China.

“End of an Era,” October 23, 2024

Ah yes, China. The problem is that China is not so much interested in human-level AI as in total mass surveillance.

For the Western world, tech employee comfort with total mass surveillance is likely to be a bigger problem than the chatbots supposedly coming for our jobs.

You may also wish to read: Bill Dembski: When a chatbot tried improving on a literary genius. If what matters is computer engineering and a supermassive databank, the chatbot should improve on the masters, right? Information theorist Dembski argues that the statistical averaging the bots depend on means their output will necessarily regress to the mean and result in mediocrity.
