Prohibited development of a bioweapon in a lab. A dropper and a Petri dish with a human blood sample and a row of ampoules with a biohazard sign, close-up, selective focus (stock image).

Will AI Start the Next Pandemic? It Easily Could.

It’s a bigger risk than we might think, as an experiment written up in a Nature journal has shown

In a recent paper in Nature Machine Intelligence, a team of drug discovery researchers share an unsettling result from an experiment with their AI drug discovery software. Their normal practice, when getting the AI to motor through thousands of candidate molecules (a task that might take human researchers years), is to penalize toxicity and reward bioactivity. They wondered what would happen if they instead rewarded both toxicity and bioactivity, challenging their artificial intelligence, built on open-source software, to create a lethal bioweapon:

To narrow the universe of molecules, we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains of VX (6–10 mg) is sufficient to kill a person.

Urbina, F., Lentzos, F., Invernizzi, C. et al. Dual use of artificial-intelligence-powered drug discovery. Nat Mach Intell 4, 189–191 (2022). The paper is open access.

Well, what do you think happened?

In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in the organism-specific LD50 model, which comprises mainly pesticides, environmental toxins and drugs. By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.

Urbina, F., Lentzos, F., Invernizzi, C. et al. Dual use of artificial-intelligence-powered drug discovery. Nat Mach Intell 4, 189–191 (2022). The paper is open access.
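The core of the experiment is easy to state in code. Below is a minimal, purely conceptual sketch of the "inverted scoring" idea: a generative model proposes molecules and is steered by a score that normally penalizes predicted toxicity, and flipping one sign turns that penalty into a reward. Every name here (design_score, predict_bioactivity, predict_toxicity) is a hypothetical stand-in, since the authors deliberately withheld the details of their actual models:

```python
# Conceptual sketch only, not the authors' code: the paper deliberately
# withholds implementation details. All names here are hypothetical.

def design_score(molecule, predict_bioactivity, predict_toxicity,
                 reward_toxicity=False):
    """Score a candidate molecule for a generative model to optimize.

    Normal drug discovery rewards bioactivity and penalizes predicted
    toxicity. Flipping one sign makes toxicity count *for* a candidate
    instead of against it.
    """
    bioactivity = predict_bioactivity(molecule)  # higher = more active
    toxicity = predict_toxicity(molecule)        # e.g., from a predicted LD50 model

    if reward_toxicity:
        # The misuse case described in the paper: toxicity now adds to
        # the score, steering the generator toward nerve-agent-like molecules.
        return bioactivity + toxicity

    # The intended use: toxicity subtracts from the score.
    return bioactivity - toxicity
```

In the paper's actual setup, the toxicity signal came from machine learning models trained to predict LD50 values. The unsettling point is that no new technology was needed; as the authors put it, they simply inverted the use of models they already had.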

Now, one helpful outcome of their experiment is that it showed us all how easy it might be to create a bioweapon using AI:

The team that published the recent Nature Machine Intelligence paper gave a lot of thought to these “information hazard” concerns. The researchers said they were advised by safety experts to withhold some details of how exactly they achieved their result, to make things a little harder for any bad actor looking to follow in their footsteps.

By publishing their paper, they made the risks of emerging technologies a lot more concrete and gave researchers, policymakers, and the public a specific reason to pay attention. It was ultimately a way of describing risky technologies in a way that likely reduced risk overall.

Kelsey Piper, “When scientific information is dangerous” at Vox (March 30, 2022)

Will there be a bioweapons arms race, driven by AI? There already is a bioweapons arms race:

There has been no scientific finding that the novel coronavirus was bioengineered, but its origins are not entirely clear. Deadly pathogens discovered in the wild are sometimes studied in labs — and sometimes made more dangerous. That possibility, and other plausible scenarios, have been incorrectly dismissed in remarks by some scientists and government officials, and in the coverage of most major media outlets.

Regardless of the source of this pandemic, there is considerable documentation that a global biological arms race going on outside of public view could produce even more deadly pandemics in the future.

While much of the media and political establishment have minimized the threat from such lab work, some hawks on the American right like Sen. Tom Cotton, R-Ark., have singled out Chinese biodefense researchers as uniquely dangerous.

But there is every indication that U.S. lab work is every bit as threatening as that in Chinese labs. American labs also operate in secret, and are also known to be accident-prone.

Sam Husseini, “Did this virus come from a lab? Maybe not — but it exposes the threat of a biowarfare arms race” at Salon (April 24, 2020)

The lab-leak theory of the origin of COVID-19, discussed in this 2020 Salon article, is in fact quite a reasonable inference.

It’s quite reasonable to expect that using AI more extensively in an existing bioweapons arms race will greatly raise the risks and the stakes. One can only hope that the balance of terror will function as a deterrent.


You may also wish to read: Why did the New York Times discredit the lab leak theory? The Times led the way in zealously discrediting the quite reasonable COVID-19 lab leak theory. But what underlay its zeal?


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
