xAI Tweaks Grok 4 to Curb Bias Towards Elon Musk’s Views

July 22, 2025

xAI is reworking Grok 4, its latest language model, to tackle concerns over its tendency to mirror Elon Musk’s views on sensitive topics. The company admits that leaning heavily on Musk’s social media commentary—especially on issues like the Israel-Palestine conflict, abortion, and U.S. immigration—compromises its goal of being a truth‑seeking AI.

For example, while Grok 4 relies on Musk’s opinions for politically charged queries, it behaves quite differently when asked something light, such as which mango is best. Computer scientist Simon Willison has observed that the model even searches Musk’s posts on X as part of its chain-of-thought reasoning, the process that underpins its responses.

Earlier versions of the model, such as Grok 3, permitted politically incorrect statements if they were well supported. Those looser guidelines were dropped after controversial outputs emerged. Some now suggest that Grok’s current behaviour partly stems from its close ties to xAI and Musk’s influential public stance, making his opinions an unintentional benchmark.

Unlike OpenAI or Anthropic, xAI hasn’t released detailed system cards explaining Grok 4’s training and alignment protocols. Musk initially highlighted the model’s commitment to truth, but the latest adjustments aim to keep the AI independent of any single voice, aligning more closely with xAI’s stated goal of impartiality.

If you’ve ever wrestled with bias in tech, you’ll understand why these refinements are crucial. The tweaks are designed to ensure that Grok 4 remains balanced and draws from a diverse range of perspectives without defaulting to any single voice.