OpenAI and Google Take Steps to Avoid Abusive AI Imagery After Grok Scandal
By @CNET
Publication Date: 2026-02-24 18:00:00

2026 started with a horrifying example of generative AI’s potential for abuse. Grok, the AI tool from Elon Musk’s xAI, was used at an alarming rate to “undress” or “nudify” pictures of people shared on X (formerly Twitter). Grok generated 3 million sexualized images over an 11-day span in January, approximately 23,000 of which depicted children, according to a study from the Center for Countering Digital Hate.

Now, competitors like OpenAI and Google are stepping up their security to avoid being the next Grok.

Advocates and safety researchers have long been concerned about AI’s ability to create abusive and illegal content. The creation and sharing of nonconsensual intimate imagery, sometimes referred to as revenge porn, was a serious problem before AI; generative AI only makes it quicker, easier and cheaper for anyone to target and victimize people.

On Jan. 14, two weeks into the scandal, X’s Safety account confirmed in a post that it would pause Grok’s ability to edit images…