Grok’s Deepfake Capacities Modified Following Pushback
Deepfakes and AI-generated content are only becoming more prevalent on sites like X. Grok, X’s AI program, came under fire when users began creating explicit images and videos of real people. AI-generated deepfakes proliferated until X finally took action to modify the bot’s capabilities. According to X’s Safety account:
We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.
Additionally, image creation and the ability to edit images via the Grok account on the X platform are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable.
According to a 2025 report from Futurism, at least half of the internet’s new content is AI-generated. That growth appears to have leveled off, but it’s a jaw-dropping statistic regardless. If you get online and scroll, chances are at least some of what you see or read wasn’t created by a human but “generated” through prompts. Strange times indeed.
The Grok incident suggests that public consensus still doesn’t tolerate such clear violations of privacy and human dignity. Nonetheless, this problem is unlikely to vanish overnight. AI image generators are only getting better, and judging authenticity is a challenge that isn’t going away. We already struggle to distinguish real images from AI-generated ones. In a few years’ time, what sort of social impact will that have? Will we simply stop caring?
