
If AI Speeds Up Science, Does It Risk Squashing Some Parts?

A Yale anthropologist and a Princeton psychologist warn of the dangers of overreliance on AI in science

Yale anthropologist Lisa Messeri and Princeton psychologist M. J. Crockett published an open-access paper in Nature earlier this year discussing the various ways researchers see AI functioning in their work. They may see it as Oracle, Surrogate, Quant, or Arbiter. The researchers hope AI will “improve productivity and objectivity by overcoming human shortcomings.” However they see it, and whatever their hopes for it, Messeri and Crockett warn:

But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.

Messeri L, Crockett MJ. Artificial intelligence and illusions of understanding in scientific research. Nature. 2024 Mar;627(8002):49-58. doi: 10.1038/s41586-024-07146-0. Epub 2024 Mar 6. PMID: 38448693.

Embedding a value system without noticing it


At Ars Technica, science writer Jennifer Ouellette pinpointed one of the authors’ key concerns:

“The goal of scientific knowledge is to understand the world and all of its complexity, diversity, and expansiveness,” Messeri told Ars. “Our concern is that even though we might be writing more and more papers, because they are constrained by what AI can and can’t do, in the end, we’re really only asking questions and producing a lot of papers that are within AI’s capabilities.”

Jennifer Ouellette, “Producing more but understanding less: The risks of AI for scientific research” Ars Technica, March 6, 2024

In an interview with Ouellette, Messeri stresses that this is not a problem of AI gone rogue or being used for malevolent purposes. Even if those problems were fixed or prevented, scientists would still face this one:

Many AI tools reawaken the myth that there can be an objective standpoint-free science in the form of the “objective” AI. But these AI tools don’t come from nowhere. They’re not a view from nowhere. They’re a view from a very particular somewhere. And that somewhere embeds the standpoint of those who create these AI tools: a very narrow set of disciplinary expertise—computer scientists, machine learning experts. Any knowledge we ask from these tools is reinforcing that single standpoint, but it’s pretending as if that standpoint doesn’t exist.

Ouellette, “The risks of AI for scientific research”

In short, if we rely on AI to do the heavy lifting, we may be able to fix problems that arise from poor functioning. But we remain constrained by the limits of what AI can do, cut off from what it can’t do, and possibly unaware of its embedded viewpoint.

Poor functioning vs. inherent limitations


These two types of problem are illustrated in two recent AI stories. In one case, chatbots were “hallucinating” the nonsensical claim that the Soviets had sent bears into space. That’s not only untrue; it’s not even plausible. Thus, sooner or later, someone will spot and correct the bots’ output.

Luckily, that story is too ridiculous to cause much trouble. Other auto-generated falsehoods, however, may be dangerous without being outwardly ridiculous. But, dangerous or not, such “hallucinations” may remain in the realm of fixable-in-principle technical glitches.

The second type of problem is that AI is not creative. It does not generate new information; it works with existing information, and frequent reuse tends to degrade that information over time. Thus it ends up “eating its own tail,” producing poorer and poorer quality output, as in the jackrabbits episode. But if AI is not creative, that may be a fundamental limitation. If so, overreliance on AI in science could lead to stagnation, with the added danger that researchers may not be aware that it is happening.

You may also wish to read: Model collapse: AI chatbots are eating their own tails. The problem is fundamental to how they operate. Without new human input, their output starts to decay. Meanwhile, organizations that laid off writers and editors to save money are finding that they can’t just program creativity or common sense into machines.

and

Internet pollution — if you tell a lie long enough… LLMs can generate falsehoods faster than humans can correct them. Later, Copilot and other LLMs will be trained to say no bears have been sent into space, but many thousands of other misstatements will fly under their radar.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.