Mind Matters Natural and Artificial Intelligence News and Analysis
Shaking hands with the future: human and AI collaboration. Man and robot on background of huge data center. Based on Generative AI
Image licensed via Adobe Stock

Marks: The More Complex the AI, the More It Could Go Wrong

Robert J. Marks's new article discusses how AI's growing complexity makes it harder to regulate

Robert J. Marks, director of the Walter Bradley Center for Natural & Artificial Intelligence, has a new article out at Newsmax on the complexity of artificial intelligence and why, no matter how many “band-aids” we put on its problematic outputs, it is impossible to fully regulate a machine of this sophistication. Because AI is not a “slave to the truth,” it constantly needs improvement and correction by its human users. The problem is that some of the damage cannot be avoided until it has already been done. Marks writes,

The more complex a system, the greater the number of ways it can respond and the more ways it can go wrong.

The greater the number of possible responses, the more a design may need to be tested and tuned. AI with a narrow mission is more easily tuned. But as the complexity of a system increases linearly, the number of ways it can respond increases exponentially. GPT-3, the big brother of ChatGPT, has 175 billion moving parts, or tunable parameters. This is enormous complexity.

GPT-4, the next generation, is even more complex.

Chatbots are not slaves to truth. The New York Times accurately observes that chatbots are often “inaccurate, misleading and downright weird.” And reports of chatbots breaking bad are troubling.

-Robert J. Marks, AI Complex, Making You a Caregiver | Newsmax.com

Marks mentions the case of Professor Jonathan Turley, who was falsely accused of sexual harassment by ChatGPT. Turley’s reputation took a blow; Marks notes that while the algorithm can be trained to avoid such incidents in the future, the cost of even one occurrence is already apparent.

Marks concludes his piece by recommending that AI be kept away from kids, given what Snapchat’s “My AI” chatbot has been telling them, and encourages people to always treat ChatGPT’s outputs with some suspicion.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.