On the second day of the COSM 2021 conference, speakers asked — with appropriate skepticism — whether we could ever produce true Artificial General Intelligence (AGI). But the final day of the conference hosted a conversation on the realistically achievable forms of AI and quantum computing that may pose existential threats to modern life.
Robert J. Marks, Director of the Walter Bradley Center for Natural and Artificial Intelligence (which hosted COSM) — also Distinguished Professor of Electrical and Computer Engineering at Baylor University — spoke first. The title of his 2020 book, The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI, provides an unsubtle hint at his position.
Marks thinks that AI “will never be sentient. It will never understand what it is doing. And, currently, it has no common sense.” In other words, don’t expect AGI anytime soon, if ever.
AI apologists disagree, naturally. They often look to the future and say, “If computers can do X today, who’s to say they won’t do Y and Z tomorrow?” Marks has a ready response: there are fundamental, theoretical limits to what AI can accomplish:
“Anything you can do on computers today can be done on Turing’s original machine,” he explained; it would just take billions of times longer than on your home PC. But this also means that if you find theoretical limits to Turing’s original computer, those limits apply to the computers “not just of today, but also tomorrow.”
Once you understand the inner workings of computers, these limitations become clear. But Marks thinks that many tech titans today who bullishly hype AGI in the media really don’t appreciate the underlying computer science.
Just because AGI is unrealistic doesn’t mean that AI and quantum computers won’t be able to do some amazing things. He reminded the audience of Vladimir Putin’s famous remark that “artificial intelligence is the future, not only of Russia, but of all mankind,” and that “whoever becomes the leader in this sphere will become the ruler of the world.”
In other words, there’s an AI arms race going on. If we don’t get some skin in the game then America will lose to its global competitors. AI can help us develop smarter bombs that inflict more damage upon targets and less upon civilians, improved encryption to protect our secrets from enemies, drones that keep our warriors out of harm’s way, and anti-drone technology to protect us from the AI of foreign combatants.
Yet not everyone agrees we should develop this tech. In 2018, some 4,500 AI experts and other authorities, including 26 Nobel laureates and the UN Secretary-General, said, as Marks put it, that “killer robots are politically unacceptable, morally repugnant, and should be banned by international law.”
But he thinks a much stronger moral case can be made in favor of military AI. “Tech wins wars,” he said, noting further in his book that “Advanced technology not only wins wars but gives pause to otherwise aggressive adversaries.” History seems to prove him right.
Consider World War II. Early AI tech such as the Norden bombsight helped American bombers hit their targets like never before. Radar helped England defend against Nazi bombers. And the atomic bomb, as horrific as it was, won the peace. These innovations may be different from modern AI, but they show that if you don’t invest in military tech, you lose.
Marks is therefore mystified that some people want to ban AI in military tech: doing so weakens us and opens us to attack from those who do develop AI for military purposes. He thinks this opposition to AI stems from a mistaken view of human nature.
“One of the things that is taught in the Judeo-Christian foundation is that we are fallen and there’s always going to be fallen people,” Marks explained. As a result, he argues, we should assume that people are going to do bad things and be prepared to defend against them.
But Marks warned that we’re not doing a good job of this. He reported that a senior cybersecurity specialist at the Pentagon resigned recently because “right now it was impossible for the US to compete with China on AI” and the US wasn’t doing enough to close the gap. Nicolas Chaillan became the U.S. Air Force’s first chief software officer in 2018. He quit September 2:
Speaking to the Financial Times in his first interview since leaving, Chaillan said China was far ahead of the US.
“We have no competing fighting chance against China in fifteen to twenty years. Right now, it’s already a done deal; it is already over in my opinion,” he said.

Bill Bostock, “Pentagon Official Says He Resigned Because US Cybersecurity Is No Match for China” at Military.com/Business Insider (October 12, 2021)
Arthur Herman, a senior fellow at the Hudson Institute, followed Marks by giving additional compelling reasons why we must close this gap. It’s all about the advent of quantum computing.
Classical computers use bits, each of which is in either a 0 state or a 1 state. Quantum computers use qubits, which can exist in a superposition of 0 and 1; a register of n entangled qubits is described by 2^n amplitudes at once. This exponentially increases the state space a computation can work with.
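The exponential growth behind that claim can be illustrated with a short classical simulation (a sketch of my own, not from the conference talks): tracking n qubits on an ordinary computer requires storing 2^n complex amplitudes, which is part of why even modest quantum machines are hard to emulate.

```python
# Sketch: a classical simulation of an n-qubit register, illustrating that
# its state vector holds 2**n complex amplitudes. Applying a Hadamard gate
# to every qubit puts the register into an equal superposition of all
# 2**n bitstrings.
import numpy as np

def uniform_superposition(n):
    """State vector after applying H to each of n qubits, starting from |00...0>."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0  # start in the all-zeros basis state
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    for q in range(n):
        # Apply H to qubit q by treating the state as an n-dimensional tensor
        state = state.reshape([2] * n)
        state = np.tensordot(h, state, axes=([1], [q]))
        state = np.moveaxis(state, 0, q).reshape(-1)
    return state

s = uniform_superposition(10)
print(len(s))                   # 1024 amplitudes for just 10 qubits
print(round(abs(s[0])**2, 6))   # each outcome has probability 1/1024
```

Doubling the register to 20 qubits would require about a million amplitudes, and 300 qubits would need more amplitudes than there are atoms in the observable universe, which is the sense in which qubit counts translate into raw computing power.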
We’re not there yet, but if and when quantum computing is realized, it promises to make many common forms of encryption used today obsolete. This will require a revolution in cybersecurity.
Herman predicted that quantum computers that can crack current encryption methods will require 3,000 to 4,000 qubits, but the best quantum computers developed so far only have about 130 qubits.
And guess who currently holds that record? China.
The main limitation is developing the right materials to accurately perform quantum computation and measure its results. He predicts that, within 5 to 10 years, quantum computing power will hit the capacity needed to crack current encryption methods.
Does this spell the end of all secrets on the planet? “Don’t panic,” Herman tells us, because quantum computing will also bring us quantum cryptography — i.e., vastly better methods of encryption than we currently have. “In fact,” he notes, “the National Institute of Standards and Technology has been working on and reaching their final nominees for the five different mathematical formulas for making quantum resistant algorithms.”
While we’re not there yet, this means that when quantum computing arrives, everyone will need to up their encryption game.
But again, China is currently winning this game. And will we even know when the game is won? Because quantum computing is so powerful, Herman warns, when a quantum cyberattack happens “it’ll be stealthy because it will be able to disguise itself as a user of the system.” We could lose all our secrets and we wouldn’t even know it.
The message is that unless the West wants to find itself behind the Great Firewall and lose the computing arms race, we need to develop military AI and quantum computing, and we need to do it fast. If Herman is right, there are only a few years left before the race is over.
You may also wish to read:
Marks: We can’t do without autonomous killer robots in combat. As an expert in swarm intelligence, he thinks drone swarms offer specific advantages. A recent article at Wired suggests that the U.S. military is heeding such advice and developing autonomous drone swarms for combat.
Book at a Glance: Robert J. Marks’s Killer Robots. What if ambitious nations such as China and Iran develop lethal AI military technology but the United States does not? Many sources (30 countries, 110+ NGOs, 4500 AI experts, the UN Secretary General, the EU, and 26 Nobel Laureates) have called for these lethal AI weapons to be banned. Dr. Marks disagrees: Deterrence reduces violence, he argues.