Expanding AI’s Reach: Embracing Global Perspectives Beyond the West

April 10, 2025

As humans, our ability to learn and innovate by sharing cultural knowledge has always been key to our progress. But large AI models often reflect only the perspectives they were trained on, which limits the diversity of cultural insights they can offer. A recent study led by the University of Michigan emphasizes the need to tackle these biases so that AI can continue to drive innovation and truly serve everyone. The study, published on the arXiv preprint server, points out how subjective biases can creep into every stage of AI model development. Often, these biases align with Western, educated, industrialized, rich, and democratic (WEIRD) norms, which can narrow AI’s global impact.

Rada Mihalcea from the University of Michigan, a co-author of the study, highlights that while AI is revolutionizing the world, many regions remain underrepresented in the data, models, and evaluations used during development. The research team, which includes members from twelve countries, pinpointed the areas where cultural assumptions can influence the AI development process. The data used to train AI plays a crucial role in determining who gets represented. For instance, a Romanian boy might receive culturally insensitive advice from an AI model, such as being told to emulate a controversial figure like Nicolae Ceaușescu. Examples like this highlight the importance of integrating rich cultural perspectives to improve AI outputs. On the bright side, even a small amount of diverse data can significantly boost AI performance.

Oana Ignat from Santa Clara University, another co-author, stresses the importance of reevaluating data collection practices to cover a wider range of perspectives. The design, or alignment, of AI models is another key factor. Developers often encode human values into these models, but those values can be skewed toward certain cultures. For example, an AI educational tool might work well for speakers of standard English while struggling with Canadian students who use local dialects.

Funding sources for AI development also shape inclusivity. Without incentives to support diverse languages and regions, economic forces tend to favor major Western languages and countries. Claude Kwizera from Carnegie Mellon University Africa points out that most developing countries focus on immediate income-generating initiatives, which means they risk missing out on AI’s potential benefits. Engaging with diverse cultures during AI alignment can help broaden model preferences, making AI beneficial for a wider audience.

Testing AI models with narrow benchmarks can misrepresent their performance across cultures. For instance, an educational tool designed around Western learning styles might overlook India’s collectivist values. Combining human evaluations with automated metrics can improve reliability, especially for non-Western communities.

Involving a diverse range of people in AI development can reshape the technology to serve a broader audience. Even when economic incentives are lacking, philanthropic efforts and government support can ensure AI benefits everyone. “We can move towards AI systems that are inclusive and reflect diverse stakeholder contributions,” concludes Mihalcea. Collaborators on this project included Santa Clara University, Universidad de la República Uruguay, Max Planck Institute, Carnegie Mellon University Africa, Singapore University of Technology and Design, and Mohamed bin Zayed University of Artificial Intelligence.
