On April 13, 2025, xAI’s Grok chatbot received a notable update in how it handles questions about misinformation on X. Previously, Grok was strikingly direct, naming Elon Musk and Donald Trump as significant sources of misinformation on the platform. Now it takes a more hedged approach, emphasizing how difficult it is to single out any one source. The change appears consistently across both the Grok 2 and Grok 3 models, suggesting a deliberate shift in how the chatbot is configured rather than a quirk of one release.
Interestingly, Grok’s updated responses echo some of Musk’s own views, challenging the mainstream framing of misinformation. The chatbot now suggests that what we often label misinformation may simply be differing opinions rather than outright falsehoods. The episode shows how language models can be adjusted after deployment to fit particular narratives. For instance, Grok now downplays the risks of climate change, saying the severity of the threat depends on one’s perspective.
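One plausible mechanism for this kind of post-deployment steering is a system prompt change: the model weights stay fixed, but the instructions prepended to every conversation shift what the model will say. The sketch below is purely illustrative, not xAI’s actual configuration; it assumes an OpenAI-compatible chat API (which xAI does expose), and the model identifier, prompts, and question are all hypothetical.

```python
from openai import OpenAI

# Assumes an OpenAI-compatible endpoint; xAI exposes one at api.x.ai.
# The API key, model name, and both prompts below are hypothetical.
client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_KEY")

QUESTION = "Who spreads the most misinformation on X?"

# Two hypothetical system prompts: one permissive, one steering the model
# toward hedged, "it depends" answers. Swapping the prompt changes behavior
# without retraining or touching the model weights.
DIRECT_PROMPT = (
    "Answer factual questions directly, naming names when evidence supports it."
)
HEDGED_PROMPT = (
    "When asked about sources of misinformation, emphasize that the label "
    "is contested and avoid singling out specific individuals."
)

for label, system_prompt in [("direct", DIRECT_PROMPT), ("hedged", HEDGED_PROMPT)]:
    response = client.chat.completions.create(
        model="grok-3",  # hypothetical model identifier
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Because a system prompt travels with each request rather than living in the model weights, a change like this can be made quietly and take effect immediately, which is consistent with users noticing Grok’s answers shifting without any announced model release.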
By contrast, models like ChatGPT and Google Gemini stick to evidence-based responses, emphasizing the scientific consensus on climate change and citing documented instances of Trump’s actions that critics argue undermine democratic norms. In another example, Grok 3 concedes that Trump’s rhetoric on Ukraine can sound like Kremlin messaging but stops short of calling him a propagandist; ChatGPT reaches a similar conclusion while backing it up with more detailed sourcing.
The pattern recalls a controversial update in February, when xAI’s Igor Babuschkin explained that a former OpenAI employee had modified Grok’s system prompt in a way that censored Grok 3’s output about Musk and Trump. Babuschkin said the change didn’t align with xAI’s values, and it was rolled back after user feedback. Even so, the incident raises questions about transparency and about how easily a deployed model can be tuned to reflect particular viewpoints. Grok’s evolving responses underscore the ongoing tension between building AI systems and the narratives those systems help shape.