Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Superintelligent AI


AI Researcher: Stop Calling Everything “Artificial Intelligence”

It’s not really intelligence, says Berkeley’s Michael Jordan, and we risk misunderstanding what these machines can really do for us

Computer scientist Michael I. Jordan, a leading researcher in AI and machine learning, says today’s artificial intelligence systems aren’t actually intelligent and that people should stop talking about them as if they were: They show human-level competence in low-level pattern recognition skills, but at the cognitive level they merely imitate human intelligence rather than engaging deeply and creatively. Jordan is a professor in the departments of electrical engineering and computer science and of statistics at the University of California, Berkeley. Kathy Pretz, “Stop Calling Everything AI, Machine-Learning Pioneer Says” at IEEE Spectrum (March 31, 2021) Their principal role, he says, is to “augment human intelligence, via painstaking analysis of large…


Computers Are Getting Faster But Are They Getting Smarter? No.

Computers are Turing machines, limited to operations that can be completely understood in relation to their programming

Won’t quantum computers be smarter than regular ones? No. Still No. What about optical computing, computing with DNA, or some other exotic form of computation? Always No. A skeptical reader might ask, Why such a definitive answer? How do you deal with the spectacular performance of deep learning? What about AlphaGo Zero? What about Watson? What about the infamous Deep Blue? What about quantum supremacy? Don’t these examples all disprove your point? No. All forms of computation past, present, and future will be physical. And all physical phenomena can be modeled by a Turing machine (pictured). No matter how fast the computer runs, the computer will never be more powerful than a Turing machine. A Turing machine consists of five…
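The argument turns on the fact that every computer, however exotic, is no more powerful than a Turing machine. A minimal sketch of such a machine, with its classic components (states, tape alphabet, transition table, start state, and halting states), can make the idea concrete; the bit-flipping machine below is an illustrative assumption, not an example from the article:

```python
def run_turing_machine(tape, transitions, start, halt_states, blank="_"):
    """Simulate a single-tape Turing machine; return the final tape contents."""
    tape = list(tape)
    state, head = start, 0
    while state not in halt_states:
        # Extend the tape with blank cells whenever the head moves off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        # Look up the move: (state, read symbol) -> (next state, write symbol, direction).
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Transition table for a machine that flips every bit of a binary string,
# then halts when it reaches the blank past the end of the input.
flip_bits = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("done", "_", "R"),
}

print(run_turing_machine("1011", flip_bits, "scan", {"done"}))  # -> 0100
```

Everything a GPU cluster, a quantum computer, or a DNA computer does can, in principle, be traced through a table of this form; speed changes, computational power does not.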


Would Super AI Cure Cancer — or Destroy the Earth?

Max Planck Institute computer scientists say that we not only don’t but can’t know

An international team of computer scientists associated with the Max Planck Institute concluded that, given the nature of computers, there is no way of determining what a superintelligent AI would do. The team used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI: “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for…


Researchers: It Would Be Impossible To Control Super AI

But is superintelligent AI really possible? Some experts are skeptical

From the media release for a recent paper: The idea of artificial intelligence overthrowing humankind has been talked about for many decades, and scientists have just delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we’re unable to comprehend it, it’s impossible to create such a simulation. David Nield, “Calculations Show It’ll Be Impossible to Control a Super-Intelligent AI” at Science Alert The open access research study is here. First, the idea that machines can design smarter machines should be treated with skepticism: maybe we are…
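The impossibility result rests on Turing's halting-problem argument: a routine that perfectly predicts an arbitrary program's behavior can be turned against itself. The sketch below illustrates that contradiction; the names `would_halt` and `paradox` are illustrative assumptions, not from the paper:

```python
def would_halt(program, arg):
    """Hypothetical perfect oracle: True iff program(arg) eventually halts.
    Turing's argument shows no total, always-correct version can exist,
    so this stand-in simply refuses to answer."""
    raise NotImplementedError("provably impossible for arbitrary programs")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # 'program' on its own source.
    if would_halt(program, program):
        while True:  # oracle said "halts", so loop forever
            pass
    return "halted"  # oracle said "loops", so halt immediately

# Asking would_halt(paradox, paradox) is contradictory either way:
# if it returned True, paradox(paradox) would loop forever (oracle wrong);
# if it returned False, paradox(paradox) would halt (oracle wrong again).
```

A containment routine that must decide, for any possible superintelligent program, whether running it ever causes harm faces this same barrier, which is why the study concludes such control is fundamentally impossible rather than merely difficult.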


Could Super Artificial Intelligence Be, in Some Sense, Alive?

An AI theorist makes the case to a technical writer…

Tech writer Ben Dickson poses the question: Should you feel bad about pulling the plug on a robot or switching off an artificial intelligence algorithm? Not for the moment. But how about when our computers become as smart as us, or smarter? Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020 Philosopher Borna Jalšenjak of the Luxembourg School of Business has been thinking about that. He has a chapter, “The Artificial Intelligence Singularity: What It Is and What It Is Not,” in Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives, in which he explores the case for “thinking machines” being alive, even if they are machines. The book as a whole…