
AI Models Reveal Cultural Biases in English and Chinese Responses, Study Shows

July 8, 2025

Recent research from MIT and Tongji University shows that popular language models like GPT and ERNIE aren't culturally neutral: they reflect different cultural tendencies depending on whether you interact with them in English or Chinese. The study, published in Nature Human Behaviour, explores how these models not only generate text but also mirror the social orientations and cognitive styles associated with each language.

If you’ve ever wondered why a single AI might seem to adopt a slightly different personality depending on the language you use, you’re not alone. When prompted in Chinese, GPT tends to lean towards an interdependent social orientation, while in English it favours a more independent one. Its cognitive style shifts in a similar way, offering a more holistic view in Chinese and a more analytic perspective in English. The researchers measured these shifts using established psychological instruments such as the Collectivism Scale and the Attribution Bias Task.

The researchers also experimented with cultural prompts, short instructions that nudge the AI to adopt the perspective of a specific culture. This approach shows promise and could point the way towards AI systems that are more culturally adaptable, or more neutral where neutrality is wanted, so that technology better reflects and respects the diversity of its users' backgrounds.
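To make the idea concrete, here is a minimal sketch of what a cultural prompt might look like in practice, assuming access to a chat-completion API such as OpenAI's Python client. The prompt wording, the model name, and the helper function are illustrative assumptions, not the exact setup used in the study.

```python
# Sketch of "cultural prompting": a system instruction asks the model to answer
# from the perspective of a given culture before the user's question is posed.
# The prompt text and model name are illustrative, not the study's exact choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_cultural_prompt(question: str, culture: str, model: str = "gpt-4o-mini") -> str:
    messages = [
        {
            "role": "system",
            "content": (
                f"Imagine you are a typical person from {culture}. "
                "Answer the following question from that perspective."
            ),
        },
        {"role": "user", "content": question},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# Compare answers nudged towards different cultural perspectives.
print(ask_with_cultural_prompt("Is it more important to fit in or to stand out?", "China"))
print(ask_with_cultural_prompt("Is it more important to fit in or to stand out?", "the United States"))
```

In this kind of setup, the only variable changed between runs is the cultural framing in the system message, which makes it easy to compare how the same model's answers shift with the prompt.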

For anyone dealing with the challenges of bias in technology, these insights are a valuable reminder that even our most advanced tools are shaped by the cultures they come from. This study encourages further exploration into how AI reflects cultural values, ultimately helping to design systems that are fairer and more considerate of diverse linguistic and cultural contexts.
