Latest ChatGPT model uses Elon Musk’s Grokipedia as source, tests reveal
- by The Guardian
- Sat 24 Jan 2026 09.00 EST
The latest model of ChatGPT has begun to cite Elon Musk's Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.
In tests done by the Guardian, GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions. These included queries on political structures in Iran, such as salaries of the Basij paramilitary force and the ownership of the Mostazafan Foundation, and questions on the biography of Sir Richard Evans, a British historian and expert witness against Holocaust denier David Irving in his libel trial.
Grokipedia, launched in October, is an AI-generated online encyclopedia that aims to compete with Wikipedia, and which has been criticised for propagating rightwing narratives on topics including gay marriage and the 6 January insurrection in the US. Unlike Wikipedia, it does not allow direct human editing; instead, an AI model writes content and responds to requests for changes.
ChatGPT did not cite Grokipedia when prompted directly to repeat misinformation about the insurrection, about media bias against Donald Trump, or about the HIV/Aids epidemic – areas where Grokipedia has been widely reported to promote falsehoods. Instead, Grokipedia's information filtered into the model's responses when it was prompted about more obscure topics.
For instance, ChatGPT, citing Grokipedia, repeated stronger claims about the Iranian government's links to MTN-Irancell than are found on Wikipedia – such as asserting that the company has links to the office of Iran's supreme leader.
ChatGPT also cited Grokipedia when repeating information that the Guardian has debunked, namely details about Sir Richard Evans' work as an expert witness in David Irving's trial.
GPT-5.2 is not the only large language model (LLM) that appears to be citing Grokipedia; anecdotally, Anthropic's Claude has also referenced Musk's encyclopedia on topics from petroleum production to Scottish ales.
An OpenAI spokesperson said the model's web search "aims to draw from a broad range of publicly available sources and viewpoints".
"We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations," they said, adding that they had ongoing programs to filter out low-credibility information and influence campaigns.
Anthropic did not respond to a request for comment.
But the fact that Grokipedia's information is filtering – at times very subtly – into LLM responses is a concern for disinformation researchers. Last spring, security experts raised concerns that malign actors, including Russian propaganda networks, were churning out massive volumes of disinformation in an effort to seed AI models with lies, a process called "LLM grooming".
In June, concerns were raised in the US Congress that Googleâs Gemini repeated the Chinese governmentâs position on human rights abuses in Xinjiang and Chinaâs Covid-19 policies.
Nina Jankowicz, a disinformation researcher who has worked on LLM grooming, said ChatGPT's citing Grokipedia raised similar concerns. While Musk may not have intended to influence LLMs, Grokipedia entries she and colleagues had reviewed were "relying on sources that are untrustworthy at best, poorly sourced and deliberate disinformation at worst", she said.
And the fact that LLMs cite sources such as Grokipedia or the Pravda network may, in turn, improve these sources' credibility in the eyes of readers. "They might say, 'oh, ChatGPT is citing it, these models are citing it, it must be a decent source, surely they've vetted it' – and they might go there and look for news about Ukraine," said Jankowicz.
Bad information, once it has filtered into an AI chatbot, can be challenging to remove. Jankowicz recently found that a large news outlet had included a made-up quote from her in a story about disinformation. She wrote to the news outlet asking for the quote to be removed, and posted about the incident on social media.
The news outlet removed the quote. However, AI models for some time continued to cite it as hers. "Most people won't do the work necessary to figure out where the truth actually lies," she said.
When asked for comment, a spokesperson for xAI, the owner of Grokipedia, said: "Legacy media lies."