Mind Matters Natural and Artificial Intelligence News and Analysis

Researchers want rules to prevent “exploitation” of conscious AI


Science writer Mariana Lenharo reports that some researchers are worried that if AIs become conscious, we might neglect them or treat them poorly. As outlined in their report, they want to take steps to prevent that before it happens:

A group of philosophers and computer scientists are arguing that AI welfare should be taken seriously. In a report posted last month on the preprint server arXiv, ahead of peer review, they call for AI companies not only to assess their systems for evidence of consciousness and the capacity to make autonomous decisions, but also to put in place policies for how to treat the systems if these scenarios become reality.

They point out that failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer.

Some think that, at this stage, the idea that there is a need for AI welfare is laughable. Others are sceptical, but say it doesn’t hurt to start planning.

“What should we do if AI becomes conscious? These scientists say it’s time for a plan,” December 10, 2024 The report is open access.

Concerned researchers include the University of Sussex’s Anil Seth, who wrote last year at Nautilus that even though conscious AI is unlikely, we “mustn’t ignore the possibility.”

To what extent is this advertised concern simply a way of keeping AI hype alive?

Lenharo also tells us,

The report contends that AI welfare is at a “transitional moment”. One of its authors, Kyle Fish, was recently hired as an AI-welfare researcher by the AI firm Anthropic, based in San Francisco, California. This is the first such position of its kind designated at a top AI firm, according to authors of the report. Anthropic also helped to fund initial research that led to the report. “There is a shift happening because there are now people at leading AI companies who take AI consciousness and agency and moral significance seriously,” [Jeff] Sebo says.

If they have a vested interest in keeping AI hype alive, this is the best way to go about it. Many ethicists and similar thinkers will be attracted to the cause and promote it, producing conferences, papers, and books, even if AI never gets beyond its proneness to hallucination, and even if no definition or explanation of human consciousness is ever arrived at. But we shall see.

You may also wish to read: Chatbots are running out of training data. Publishers are beginning to fight back against copyright infringement but one analyst believes that “advances might soon come through self-reflection by an AI.”


