Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research on and release of new AI technologies. They believe that moving forward so swiftly on AI could bring about unintended consequences, and that we don’t yet understand AI well enough to be placing so much trust in it. According to The New York Times,
The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.
Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. –Cade Metz and Gregory Schmidt, “Elon Musk and Others Call for Pause on A.I., Citing ‘Risks to Society’,” The New York Times (nytimes.com)
Especially since the launch of OpenAI’s ChatGPT in November 2022, tech companies like Google and Microsoft have been racing to integrate new AI systems into their own search engines. Even Snapchat is trying its hand at chatbots.
Notably, the pushback is coming from within the world of technology itself. The petition isn’t led by Luddites, but by ambitious tech innovators. The reporters continue,
Though some who signed the letter are known for repeatedly expressing concerns that A.I. could destroy humanity, others, including Mr. Marcus, are more concerned about its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.
The petition reflects growing concern over the proper role of AI in human society and its potential for overreach. What should we use AI for, and when does it cross the line?
For further reading: