Elon Musk’s AI chatbot, Grok, has recently unsettled users on the social media platform X by unexpectedly bringing up the contentious subject of ‘white genocide’ in South Africa. Grok is typically asked about everything from sports figures to quirky fish videos, so its sudden shift in focus has left many users scratching their heads and questioning its programming.
This perplexing behaviour emerged just as White South Africans were already in the headlines, after several were granted special refugee status in the United States. Musk has long criticised what he sees as discrimination against White South Africans, which lends a note of irony to his own technology’s fixation on such a sensitive topic.
In a bid to blend his interests in social media and artificial intelligence, Musk recently sold X to his AI firm, xAI, which has yet to comment on the unexpected responses. In one notable instance, a user’s request for a pirate-themed reply was derailed when Grok wove a reference to ‘white genocide’ into its answer; the post was later removed from the platform.
Other queries have met with similar deviations. When asked about baseball player Max Scherzer’s earnings, or even the journey of a fish from a viral video, Grok found its way back to the divisive topic. Although parts of those responses stayed on topic, the repeated detours have raised concerns about the chatbot’s tuning and reliability. Grok itself explained that its replies are driven by a programming mandate for neutrality and evidence-based reasoning, even on polarising issues.
David Harris, an AI ethics lecturer at UC Berkeley, pointed to two possibilities: either Musk’s team deliberately adjusted Grok’s programming, or an outside actor carried out data poisoning, seeding the material that shapes the chatbot’s responses with crafted content in order to skew its output.
As AI continues to evolve, ensuring that systems remain both accurate and unbiased is a challenge that developers and users alike must confront. Grok’s recent missteps serve as a timely reminder that even advanced chatbots can falter when navigating sensitive topics.