An AI-generated image showing a billowing fire near the U.S. Pentagon caused panic this week. False reports of an explosion proliferated across social media, with even a Russian state outlet reporting on the situation, according to TMZ. Nonetheless, the image showed various distortions and was quickly debunked as a hoax by a number of vigilant viewers. Even so, the image looks realistic at first glance and was enough to ignite concern; the incident occurred amid legislative talks on Capitol Hill over AI regulation. Sam Altman, CEO of OpenAI (which was originally funded by Elon Musk), testified before Congress and noted that while he sees great benefits in new AI systems, the possibilities for their abuse are evident.
The TMZ article, linked above, also included a video featuring AI expert Gary Marcus. His two hosts compared him to Oppenheimer, who played a pivotal role in developing the first atomic bomb in the 1940s and consequently warned the world of the horrible power he had unleashed. While Marcus said he couldn't take full credit for the development of new AI technology, he did say that it's a scientist's moral responsibility to look honestly at the consequences of one's inventions and innovations. Marcus cited evidence that these new chatbots, on top of our preexisting world of the internet and social media, can subtly but surely influence the way people think. For Marcus, this has drastic implications not just for knowing what's real and fake on an individual level, but for democracy itself. He is therefore in favor of the government passing regulatory measures for AI. What that will actually look like remains to be seen.
For further reading: