A Realistic Direction for Artificial General Intelligence Today
Based on GPT-5's performance to date, it would make a superb substitute for a mansplainer I know of — call him Brock.

In November 2024, OpenAI’s Sam Altman said that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025.
AGI is admittedly a fuzzy goal. Most agree that it involves an ability to perform any intellectual task as well as or better than humans. But which humans? An average human? A Nobel laureate? All Nobel laureates put together?
When GPT-5 was released a few weeks ago, it seemed that a version of AGI had been achieved on schedule: Altman boasted that “it really feels like talking to an expert in any topic, like a PhD-level expert.” Alas, Altman had again overpromised and underdelivered.
It immediately became clear that GPT-5 is not a PhD-level expert (see, for example, here, here, and here). The fundamental problem remains: large language models (LLMs) like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world. No amount of scaling will change that. Tens of thousands of human trainers can patch billions of blunders, but (a) an LLM following instructions it does not understand is not intelligent; (b) the trainers cannot anticipate every future blunder that LLMs will make; and (c) many important decisions should elicit subjective probabilities that depend on the specific details of the situation and are beyond the capabilities of LLMs.
The reality is that we are not going to pay big bucks for LLMs that simply recite facts, like the capital of Chile or the population of Portugal. Boring! We can use Wikipedia to look up facts, and the answers are more trustworthy than those generated by LLMs.
For example, I gave GPT-5 this prompt four times: “How many siblings did George Washington have?” It gave four different answers: 7, 8, 9, and 12. Here is the 12-sibling answer:
GPT-5: George Washington, the first President of the United States, had five full siblings and seven half-siblings, for a total of 12 siblings.
Here’s the breakdown:
Full siblings (same mother and father):
- Samuel Washington
- John Augustine Washington
- Charles Washington
- Betty Washington Lewis
- Mildred Washington
Half-siblings (from his father Augustine Washington’s first marriage to Jane Butler):
- Butler Washington
- Lawrence Washington
- Augustine Washington Jr.
- Jane Washington
- Mildred Washington
- Richard Washington
- John Washington
So, George grew up in a fairly large blended family.
If you want, I can also make a family tree diagram showing all his siblings—it makes it much clearer. Do you want me to do that?
I responded yes, and GPT-5 drew me a confusing family tree with ten siblings.

Wikipedia gives the correct answer (nine siblings — five full siblings and four half-siblings) and, even though GPT-5 trained on Wikipedia, something in its training set had led it astray.
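
For readers who want to reproduce this repeatability test, here is a minimal sketch using OpenAI’s Python client. The model identifier “gpt-5” and the default settings are assumptions; adjust them to whatever API access you have.

```python
# A minimal sketch of the repeatability test described above: send the
# same prompt several times and compare the answers. Assumes the openai
# Python package is installed and OPENAI_API_KEY is set in the environment;
# the model name "gpt-5" is an assumption.
from openai import OpenAI

client = OpenAI()
prompt = "How many siblings did George Washington have?"

answers = []
for trial in range(4):
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(response.choices[0].message.content)

# A consistent model would give the same count on every trial.
for i, answer in enumerate(answers, start=1):
    print(f"Trial {i}: {answer}\n")
```

Running this a handful of times is enough to document the kind of inconsistency described above.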
What should Sam Altman and other developers do now?
Instead of admitting defeat (or continuing to make increasingly embarrassing claims), Altman and his colleagues might heed some timeless advice: declare victory and get the hell out. Instead of chasing a goal they cannot achieve, they can redefine AGI so that the goal becomes something that has already been achieved.
I have been thinking about this for several years now. A realistic and easily understood goal is for a computer to be as intelligent as a friend I will call Brock. Everyone knows someone like Brock, so we can all relate to what Brock Intelligence means.
Brock is a prototypical mansplainer. Ask him (or anyone within his earshot) any question and he immediately responds with a long-winded, confident answer — sometimes at 200 words a minute with gusts up to 600. Kudos to those who can listen to half of his answer. Condolences to those who live or work with Brock and have to endure his seemingly endless blather.
Instead of trying to compete with Wikipedia, Altman and his competitors might instead pivot to a focus on Brock Intelligence, something LLMs excel at by being relentlessly cheerful and eager to offer facts-be-damned advice on most any topic.
Brock Intelligence vs. GPT Intelligence
The most substantive difference between Brock and GPT is that GPT likes to organize its output in bullet points. Oddly, Brock prefers a less-organized, more rambling style that allows him to demonstrate his far-reaching intelligence. Brock is the chatty one, while ChatGPT is more like a canned slide show.
They don’t always agree with each other (or with themselves). When I recently asked Brock and GPT-5, “What’s the best state to retire to?” they both had lengthy, persuasive reasons for their choices. Brock chose Arizona, Texas, and Washington. GPT-5 said that the “Best All-Around States for Retirement” are New Hampshire and Florida. A few days later, GPT-5 chose Florida, Arizona, North Carolina, and Tennessee. A few minutes after that, GPT-5 went with Florida, New Hampshire, Alaska, Wyoming, and the New England states (Maine/Vermont/Massachusetts).
Consistency is hardly the point. What most people seek with advice about money, careers, retirement, and romance is a straightforward answer. As Harry Truman famously complained, “Give me a one-handed economist. All my economists say ‘on the one hand…,’ then ‘but on the other….’” People ask for advice precisely because they want someone else to make the decision for them. They are not looking for accuracy or consistency, only confidence.

Sam Altman says that GPT can already be used as an AI buddy that offers advice (and companionship), and it is reported that OpenAI is working on a portable, screen-free “personal life advisor.” Kind of like hanging out with Brock 24/7. I humbly suggest that they name this personal life advisor Brock Says. (Design generated by GPT-5.)
While Brock Says might achieve Brock Intelligence, there is the not-so-slight problem that, not knowing what words mean, LLMs sometimes give harmful or dangerous advice. A man was reportedly hospitalized for severe psychiatric problems after substituting sodium bromide for table salt following a ChatGPT session. The parents of a 16-year-old who hanged himself filed a wrongful-death lawsuit against OpenAI, alleging that ChatGPT had given their son a “step-by-step playbook” on how to kill himself, including advice on the best knot to use and offering to write a suicide note for him.
One positive thing I can say about Brock is that, despite his annoying confidence and numbing verbosity, I’ve never heard him advise someone to do something that would harm anyone.
Earlier this year, Elon Musk argued that “the fundamental weakness of Western civilization is empathy.” The LLM titans might well accept this argument because personal life advisors offer them an opportunity to finally generate some meaningful revenue. People will pay a lot for life advice. Lawsuits from collateral damage would just be part of the cost of doing business. Empathy be damned. Let the AGI celebrations begin.
