In a separate post – which showed a screenshot of the first part of the Grok response – Mr Wishart wrote: “This is a shocking allegation which has totally taken me aback [and is] beyond anything I’ve ever encountered in normal political discourse.”
Other posts made by Mr Wishart to the Grok chatbot included the questions “is it true you live in Elon Musk’s house” and “can you draw a picture of Elon as a hobbit”.
This referred to an appearance Musk made on Joe Rogan’s podcast, where he compared British people to hobbits – the fictional characters from JRR Tolkien’s novels – in a discussion about illegal immigration and the grooming gang scandal.
On Wednesday morning, Mr Wishart then shared a post claiming he had received an apology from Grok – posting what appeared to be transcripts of a conversation he had with the chatbot on a separate app.
However, just as the AI can be prompted to use strong language against a user, it can equally be prompted to generate an apology.
“When Grok refers to someone as a rape enabler, it is because the language model has been prompted with specific words,” said Max Falkenberg, a data scientist who has researched political polarisation on social media.
“Grok did not invent” the accusation it levied at Mr Wishart, he added, as “it looks at its training data and it’s trying to predict which words come next”.
In other words, an AI chatbot generates its replies depending on what it is asked and the data used to create it – which in Grok’s case includes posts made on X.
At the time of its launch, owner Elon Musk said Grok would answer “spicy questions that are rejected by most other AI systems”.
It has caused controversy in the two years since its launch, including by publishing posts that praised Hitler and allegedly generating sexually explicit videos of Taylor Swift.
