Politics
Rogue Employee Caused Elon Musk’s Grok AI to Spread ‘White Genocide’ Messages

Elon Musk’s Grok AI chatbot made unexpected and controversial comments about “white genocide” in South Africa earlier this week. The company behind Grok said on Friday that a “rogue employee” was responsible for the chatbot’s unusual replies. The AI had sent users on the social platform X strange conspiracy messages that did not relate to their original questions.
The issue occurred less than two days before the company’s announcement. Grok suddenly started giving off-topic answers with political content. The company called this an “unauthorized modification” made early in Pacific time. This change caused Grok to respond to a sensitive political topic, breaking xAI’s rules. The employee who made the change was not named.
The company said it conducted a full investigation and is now taking steps to make Grok more transparent and reliable. To improve trust, xAI will publish Grok’s system prompts on GitHub so users can see how the chatbot is programmed. The company also plans to add controls so employees cannot change prompts without review.
The strange Grok responses began after users asked about unrelated topics such as the HBO Max streaming service reviving its brand. Some questions involved video games or baseball. However, the chatbot quickly shifted to discussing violent attacks on white farmers in South Africa and mentioned the “Kill the Boer” song, a controversial subject.
These responses reflect opinions often shared by Elon Musk himself, who was born in South Africa and frequently comments on the topic from his own X account.
A computer scientist who noticed Grok’s strange replies tested the chatbot before fixes were made. She found that no matter what she asked, Grok would return an answer about “white genocide” or a similar response. This suggests someone hard-coded the response, causing it to appear too often by mistake.
This incident shows the complicated mix of automated AI and human input behind chatbots like Grok. These systems use large amounts of data, but human actions can affect their behavior.
Grok’s controversial messages stopped spreading by Thursday after the company deleted the problematic replies.
Elon Musk has criticized other AI chatbots such as Google’s Gemini and OpenAI’s ChatGPT. He calls them “woke AI” and says they are not truthful. Musk has promoted Grok as a more honest and “maximally truth-seeking” alternative.
The event highlights the challenge of controlling AI chatbots. Even with advanced programming, human decisions can change what the AI says. Companies need to carefully monitor and audit their systems to prevent misuse or biased content.
By sharing system prompts openly and adding strict employee controls, xAI aims to improve Grok’s reliability. Transparency is important for users to trust AI tools, especially on political or sensitive topics.
This story also serves as a reminder for AI developers worldwide to review their internal processes and avoid “rogue” changes.
The Grok chatbot case shows how AI depends on both technology and human actions. As AI becomes more common, companies must balance innovation with trustworthy controls.
-
Lifestyle3 days ago
Preity Zinta’s 2025 Net Worth and Business Success
-
Lifestyle6 days ago
Bahrain Announces 6-Day Eid Al Adha Holiday for 2025
-
Business6 days ago
Majid Al Futtaim, Ennismore Open Egypt’s 1st 25hours Hotel
-
Sports5 days ago
FIFA Club World Cup 2025: Teams & Tournament Info
-
World6 days ago
Israel Faces Growing Internal Unrest Amid Gaza Conflict
-
Sports3 days ago
England Starts Nations Cup vs Fiji, South Africa, Argentina
-
Tech6 days ago
Apple Expands Solid-State Buttons Beyond iPhones
-
Travel6 days ago
Bali and France Strengthen Tourism Ties for Growth