AI and human collaboration: fighting misinformation on X

July 4, 2025

X (formerly known as Twitter) is refreshing its approach to tackling misinformation by evolving its Community Notes programme, first introduced in 2021. Originally, Community Notes let users add helpful context to potentially misleading posts, relying entirely on human contributions. Its success even prompted other platforms to adopt similar approaches.

Now, X is testing a new model that blends human insight with AI-generated input. The idea is simple: while artificial intelligence can help generate notes at a pace and scale hard to match manually, the final decision on a note’s usefulness still rests with real people. This means that even though large language models (LLMs) might speed things up, you’re never far away from that essential human touch.
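To make that division of labour concrete, here is a minimal sketch in Python of how such a pipeline might work. Everything in it is illustrative: `draft_note_with_llm` stands in for a real LLM call, and the publication thresholds are invented for the example, not X's actual criteria.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    post_id: str
    text: str
    author: str                                  # "llm" or a human contributor ID
    ratings: list = field(default_factory=list)  # list of (rater_id, helpful: bool)

def draft_note_with_llm(post_text: str) -> Note:
    """Hypothetical stand-in for an LLM call that drafts a context note."""
    return Note(post_id="demo", text=f"Context: {post_text[:60]}...", author="llm")

def is_published(note: Note, min_ratings: int = 5, threshold: float = 0.8) -> bool:
    """Humans keep the final say: the note goes live only once enough
    contributors have rated it and a high share found it helpful.
    (These thresholds are invented for illustration, not X's real values.)"""
    if len(note.ratings) < min_ratings:
        return False
    helpful = sum(1 for _, is_helpful in note.ratings if is_helpful)
    return helpful / len(note.ratings) >= threshold
```

The point of the structure is that the model only ever produces candidates; nothing reaches readers until human raters clear it.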

Community feedback is the key here. Through a process called reinforcement learning from community feedback (RLCF), contributors' ratings are fed back to the note-writing models as a training signal, so AI-generated notes become clearer, more accurate, and less biased over time. If you've ever felt overwhelmed by the sheer volume of online information, you'll appreciate how this keeps content reliable without overloading human moderators.
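For a rough sense of what RLCF means in practice, the sketch below turns helpful/not-helpful ratings into a scalar reward that a note-writing model could learn from. It is again illustrative: the model interface is assumed, and the plain-average reward is a deliberate simplification of how Community Notes actually scores notes.

```python
def community_reward(ratings: list[tuple[str, bool]]) -> float:
    """Collapse helpful / not-helpful votes into a scalar reward in [-1, 1].
    A plain average is used here for simplicity; Community Notes' real
    scoring favours agreement among contributors with differing viewpoints."""
    if not ratings:
        return 0.0
    helpful = sum(1 for _, is_helpful in ratings if is_helpful)
    return 2.0 * helpful / len(ratings) - 1.0

def rlcf_step(model, post, collect_ratings):
    """One hypothetical RLCF iteration: draft a note, gather community
    ratings, and feed the resulting reward back into the model."""
    note = model.generate(post)                   # assumed model interface
    reward = community_reward(collect_ratings(note))
    model.update(note, reward)                    # e.g. a policy-gradient step
```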

Of course, challenges remain. AI-generated notes can be inaccurate or read as overly uniform, and a flood of machine-written notes could reduce human involvement in crafting them. There is also the practical challenge of balancing the speed of AI contributions with the thorough review human evaluators must give. Upcoming enhancements, such as AI tools that assist human reviewers and AI 'co-pilots' for note writing, offer promising ways to ease that tension.

This evolving collaboration between humans and machines is all about combining rapid processing with thoughtful human judgment. In doing so, X hopes not only to improve the quality of information shared on its platform but also to empower you to think more critically about the content you see.
