I tried Grokipedia, the AI-powered 'anti-Wikipedia.' Here's why neither is foolproof
- by ZDNet
- Oct 29, 2025
Also: Get your news from AI? Watch out - it's wrong almost half the time
While Grok was engineered "to maximize truth and objectivity," according to xAI, it's important to remember that it's an LLM, and like all LLMs, it's imperfect.
According to one public leaderboard comparing the hallucination rates of frontier models on a simple document-summarization task, Grok 2 currently scores comparatively high, meaning it's more prone to hallucination than many other leading models. Grok 4, the newest iteration of the model, currently sits in 99th place on that same leaderboard, just ahead of OpenAI's o4-mini-high and just behind Microsoft's Phi-4.
Screenshot: GitHub
xAI also trained Grok in part using public posts and other information taken from X, which xAI has described as "a unique and fundamental advantage" for the chatbot. A study posted to the preprint server arXiv earlier this month, however, found a causal relationship between training data gleaned from "junk" social media content -- think: high-engagement, low-quality posts -- and an inclination toward the digital analogue of "brain rot": a noticeable decline in the trustworthiness of model outputs, and an increase in "dark traits," such as psychopathy.
Content sourcing questions
Then there's the fact that many users have reported incidents in which Grok appeared to consult Musk's social media posts or articles written about the billionaire before responding to certain sensitive prompts. Similarly, Grokipedia appears in some important respects to reflect Musk's personal, political, and ideological opinions.