Elon Musk isn’t happy with his AI chatbot. Experts worry he’s ...
- by CNN
- Jun 27, 2025
—
Last week, Grok, the chatbot from Elon Musk’s xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016.
Musk was not pleased.
“Major fail, as this is objectively false. Grok is parroting legacy media,” Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would “rewrite the entire corpus of human knowledge,” calling on X users to send in “divisive facts” that are “politically incorrect, but nonetheless factually true” to help train the model.
“Far too much garbage in any foundation model trained on uncorrected data,” he wrote.
On Friday, Musk announced that the new model, called Grok 4, will be released just after July 4th.
The exchange, and others like it, raises concerns that the world’s richest man may be trying to influence Grok to follow his own worldview, potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts. AI is expected to shape the way people work, communicate and find information, and it’s already impacting areas such as software development, healthcare and education.
And the decisions that powerful figures like Musk make about the technology’s development could be critical, especially considering Grok is integrated into one of the world’s most popular social networks, one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI’s ChatGPT, its inclusion in Musk’s social media platform X has put it in front of a massive digital audience.
“This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,” said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta’s Responsible AI team.
A source familiar with the situation told CNN that Musk’s advisers have told him Grok “can’t just be molded” into his own point of view, and that he understands that.
xAI did not respond to a request for comment.
Concerns about Grok following Musk’s views
For months, users have questioned whether Musk has been tipping the scales to make Grok reflect his worldview.
In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was “instructed to accept as real white genocide in South Africa.”
Musk was born and raised in South Africa and has a history of arguing that a “white genocide” has been committed in the nation.
A few days later, xAI said an “unauthorized modification” in the early morning hours Pacific time had pushed the AI chatbot to “provide a specific response on a political topic” that violated xAI’s policies.
As Musk directs his team to retrain Grok, others in the AI large language model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints.
“He’s trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,” Frosst said.
What it would take to retrain Grok
It’s common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst.
But retraining a model from scratch to “remove all the things (Musk) doesn’t like” would take a lot of time and money (not to mention degrade the user experience), Frosst said.
“And that would make it almost certainly worse,” Frosst said. “Because it would be removing a lot of data and adding in a bias.”
[Image: A Grok account on X displayed on a phone screen. Jakub Porzycki/NurPhoto/Shutterstock]
Another way to change a model’s behavior without completely retraining it is to insert prompts and adjust what are called weights, the numerical parameters inside the model that shape its outputs. This process could be faster than totally retraining the model since it retains its existing knowledge base.
Prompting would entail instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model’s decision-making process.
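The distinction can be sketched with a deliberately tiny toy model. Everything below (the candidate answers, the two-entry weight table, the trigger phrase) is invented for illustration and has nothing to do with xAI’s actual systems; it only shows why a prompt nudges one query while a weight change sticks for all future queries.

```python
# Toy contrast between prompt steering and weight steering.
# A real LLM has billions of weights; this "model" has two scores.

# Canned answers the toy model chooses between.
CANDIDATES = {
    "grounded": "Answer A, based on cited data.",
    "steered": "Answer B, reflecting a preferred framing.",
}

# "Weights": persistent parameters biasing the model toward one answer.
weights = {"grounded": 1.0, "steered": 0.2}

def answer(query: str, system_prompt: str = "") -> str:
    """Return the highest-scoring candidate answer."""
    scores = dict(weights)  # copy: prompting never mutates stored weights
    # Prompt steering: an instruction shifts scores at inference time only.
    if "prefer the alternative framing" in system_prompt:
        scores["steered"] += 1.0
    return CANDIDATES[max(scores, key=scores.get)]

def finetune(key: str, delta: float) -> None:
    """Weight steering: mutate stored parameters, affecting all later queries."""
    weights[key] += delta
```

Without a steering prompt, `answer()` returns the grounded candidate; adding the instruction to the system prompt, or calling `finetune("steered", 1.0)`, makes the steered candidate win. The difference is persistence: the prompt’s effect ends with the query, while the weight change silently shapes every response afterward, which is part of why experts consider weight-level changes harder to audit.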
Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok’s weights and data labels in specific areas and topics.
“They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,” Neely said. “They will simply go into doing greater level of detail around those specific areas.”
Musk didn’t detail the changes coming in Grok 4, but did say it will use a “specialized coding model.”
Bias in AI
Musk has said his AI chatbot will be “maximally truth seeking,” but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data.
“AI doesn’t have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what’s happening,” Neely said. “However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.”
It’s possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful.
“For the most part, people don’t go to a language model to have ideology repeated back to them; that doesn’t really add value,” he said. “You go to a language model to get it to do something for you, to do a task for you.”
Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust.
But “the journey to get there is very painful, very confusing,” Neely said, and “arguably, has some threats to democracy.”