It comes amid a debate over whether US tech companies are shielded from responsibility for what users post on AI platforms.
Section 230 of the Communications Decency Act of 1996 provides legal immunity to online platforms for user-generated content.
But Prof James Grimmelmann of Cornell University argues this law “only protects sites from liability for third-party content from users, not content the sites themselves produce”.
Grimmelmann said xAI was trying to deflect blame for the imagery on to users, but expressed doubt this argument would hold up in court.
“This isn’t a case where users are making the images themselves and then sharing them on X,” he said.
In this case “xAI itself is making the images. That’s outside of what Section 230 applies to”, he added.
Senator Ron Wyden of Oregon has argued that Section 230, which he co-authored, does not apply to AI-generated images. He said companies should be held fully responsible for such content.
“I’m glad to see states like California step up to investigate Elon Musk’s horrific child sexual abuse material generator,” Wyden told the BBC on Wednesday.
Wyden is one of three Democratic senators who asked Apple and Google to remove X and Grok from their app stores.
The announcement of the probe in California comes as the UK is preparing legislation that would make it illegal to create non-consensual intimate images.
The UK watchdog Ofcom has also launched an investigation into Grok.
If Ofcom determines the platform has broken the law, it can fine the company up to 10% of its worldwide revenue or £18m, whichever is greater.
On Monday, Sir Keir Starmer told Labour MPs that Musk’s social media platform X could lose the “right to self regulate”, adding that “if X cannot control Grok, we will”.
