In today’s fast-paced digital world, more people are turning to AI tools like Grok on Elon Musk’s X platform for fact-checking. While this might sound convenient, it raises important questions about the spread of misinformation. Earlier this month, X began letting users call on Grok directly and ask it questions, much like Perplexity’s automated account on the platform. Before long, people were using Grok to fact-check posts and comments, especially on hot-button political topics.
But here’s the catch: human fact-checkers are getting nervous. They worry that AI tools like Grok can give answers that sound right but are actually off the mark. This isn’t a new problem. Last year, Grok was caught spreading misleading information ahead of the U.S. elections, prompting five secretaries of state to urge Elon Musk to make serious changes. Other AI models, like OpenAI’s ChatGPT and Google’s Gemini, ran into similar accuracy issues around the elections as well.
Angie Holan, who leads the International Fact-Checking Network at Poynter, puts it well: “AI assistants, like Grok, they’re really good at using natural language and give an answer that sounds like a human being said it. And in that way, the AI products have this claim on naturalness and authentic sounding responses, even when they’re potentially very wrong.” It’s a bit like talking to someone who sounds confident but might not have all the facts straight.
On the flip side, human fact-checkers rely on multiple verified sources and take accountability for their findings, which keeps them credible. Pratik Sinha of Alt News points to a bigger concern: the quality of Grok’s answers depends on the data it is supplied with, and that data could be shaped by outside interference.
Even though Grok’s account on X has acknowledged that it could be misused, the tool offers no disclaimer warning users that its answers may be inaccurate. That lack of transparency is risky, especially since Grok’s answers are posted publicly for everyone to see, unlike private chatbot conversations.
The bigger worry is the social harm AI-generated misinformation can cause. We’ve already seen this play out in India, where misinformation spread over messaging apps contributed to real-world violence. “Some of the research studies have shown that AI models are subject to 20% error rates… and when it goes wrong, it can go really wrong with real world consequences,” Holan added.
Despite AI companies’ efforts to improve their models, human oversight remains essential. Platforms like X and Meta are leaning on crowdsourced fact-checking in the form of Community Notes, an approach that has drawn its own controversy. Still, Sinha remains hopeful that people will eventually come to value the accuracy of human fact-checkers over AI.
In the end, using AI for fact-checking brings both opportunities and risks. As Holan suggests, factual accuracy has to take priority over the persuasive polish of AI-generated responses, and that means keeping humans in the loop.