Kathleen Walch, Principal Analyst at Cognilytica, asks, “Is AGI really around the corner, or are we chasing an elusive goal that we may never realize?” It was an oddly blunt question from someone in her industry. But then she was right to expect Ben Goertzel, CEO and Founder of the SingularityNET Foundation, to reassure her that all is well when she interviewed him at OpenCogCon.
Ben Goertzel, a leading expert in the pursuit of Artificial General Intelligence (AGI)—computers that can think like humans—thinks that we are now at a “turning point” where AGI will see rapid advances:
Over the next few years he believes the balance of activity in the AI research area is about to shift from highly specialized narrow AIs toward AGIs. Deep neural nets have achieved amazing things but that paradigm is going to run out of steam fairly soon, and rather than this causing another “AI winter” or a shift in focus to some other kind of narrow AI, he thinks it’s going to trigger the AGI revolution.

– Kathleen Walch, “Is Artificial General Intelligence (AGI) On The Horizon? Interview With Dr. Ben Goertzel, CEO & Founder, SingularityNET Foundation” at Forbes
His hopes are large:
He states that “any other problem humanity faces – including extremely hard ones like curing death or mental illness, creating nanotechnology or femtotechnology assemblers, saving the Earth’s environment or traveling to the stars — can be solved effectively via first creating a benevolent AGI and then asking the AGI to solve that problem.”

– Kathleen Walch, “Is Artificial General Intelligence (AGI) On The Horizon? Interview With Dr. Ben Goertzel, CEO & Founder, SingularityNET Foundation” at Forbes
He discussed with Walch some of the projects he is working on, including a search for AGI via AI pioneer Marvin Minsky’s “Society of Mind” approach, in which a number of simple agents interact to produce AGI: “I intend to create AGI and when I roll out this AGI, I want it to be rolled out in a decentralized and democratically controlled way, rather than in a manner that allows it to be controlled by any one person or corporate or governmental entity.”
The fact that AI is working in precisely the opposite direction in China—where it is enabling the most comprehensive totalitarianism ever known—doesn’t come up for discussion.
Goertzel also works on OpenCog, an architecture for AGI “based on a sophisticated mathematical theory of general intelligence, which tells us how the general nature of general intelligence manifests itself in the specific case of human-like cognition.” He sees the main obstacles as lack of funding and the inadequacy of the current computer infrastructure. But he believes that his company’s software can close the gap.
He told Walch that the other major issue is that current computing infrastructure is not well tailored for AGI. But what about Microsoft (OpenAI) and Google (DeepMind)? They don’t lack money and they can build different infrastructures. He believes that these firms are burning up resources pursuing “intellectual dead ends.”
Nonetheless, he sees a bright future for AGI:
Dr. Goertzel is of the opinion that we have between five and twenty years to achieve human-level AGI, with less than three years after that achieving super-human level AGI. In fact he believes his company “can fairly likely get to human-level AGI in 5-7 years if we can accumulate reasonable funding into our TrueAGI project, which is based on OpenCog Hyperon, the new version of the OpenCog AGI architecture we’re building now.”
As for skeptics, he tells Walch, “they want to believe that human cognition is more special and elusive than it actually is.” He has little time for those who worry about technocracy and such: “… these reactions are probably going to look very silly to people a few decades from now as they go about their lives which have been made tremendously easy and happy and fascinating compared to 2020 reality…”
Other AI specialists are much more cautious. François Chollet notes that an emphasis on mastery of specific skills like chess or Go obscures the fact that humans’ main skill is generalizing from the known to the unknown, something computers are not very good at. David Watson of the Alan Turing Institute provides an informative assessment of the weaknesses of AI. He proposes fixes, but more in the manner of a mechanic than a prophet. The fixes may work; they may not. Also, as Roman Yampolskiy reminds us, AI will fail the way everything eventually does, and when it does fail, it will create problems, maybe big ones.
Another question is the extent to which human intelligence is bound up with consciousness, a subject we really do not know very much about. Efforts to study consciousness with the tools of science appear mainly speculative at present. Some sense of this can be gained from the increasing popularity of the idea that even electrons are conscious. While that idea, if true, might hearten those who seek true AGI, it doesn’t shed much light on what consciousness is—which could leave them in a bind if consciousness turns out to be truly necessary for general intelligence and they do not know how to enable it in computers.
In a sense, AGI is perhaps a bit like SETI’s space aliens. It must be Possible for many of the same reasons that the space aliens must be Out There. In other words, we believe what our view of ourselves and our reality requires us to believe, and Ben Goertzel has taken the position that human consciousness is not really all that special or elusive.
You may also enjoy:
Which is smarter? Babies or AI? Not a trick question. Humans learn to generalize from the known to the unknown without prior programming and do not get stuck very often in endless feedback loops.
AI expert: Artificial intelligences are not electronic people. AI makes mistakes no human makes, so some experts are trying to adapt human cognitive psychology to machines. David Watson of the Alan Turing Institute fills us in on some of the limitations of AI and proposes fixes based on human thinking.
AI will fail, like everything else, eventually. The more powerful the AI, the more serious the consequences of failure: “Overall, we predict that AI failures and premeditated malevolent AI incidents will increase in frequency and severity proportionate to AIs’ capability.”