Evaluating Fairness in Machine Learning: Are Single Models Sufficient?

July 21, 2025

Machine learning now plays a major role in decisions that can affect lives—think job applications or loan approvals. But if you’ve ever wondered whether one model really can cover it all, you’re not alone. A fresh study from experts at the University of California San Diego and the University of Wisconsin–Madison is challenging the idea of relying on a single model when outcomes differ.

Led by Associate Professor Loris D’Antoni at the Jacobs School of Engineering, the research dives into how everyday people view fairness when multiple, highly accurate models offer different conclusions. Presented at the CHI 2025 Conference and available as an arXiv preprint, the study compares the diversity in model outputs to differences in human judgment—even when accuracy is high.

According to D’Antoni, the conventional practice in machine learning may carry a fairness risk. He explained, “We asked lay stakeholders how decisions should be made when multiple models disagree on a single input.” The response was clear: many prefer not to rely solely on one model, and randomizing outcomes isn’t an acceptable fix either.

First author Anna Meyer, a Ph.D. student who will soon begin her role as an assistant professor at Carleton College, noted that these findings stand in contrast with standard practices in machine learning development and with philosophical debates on fairness. The study encourages broader exploration of alternative models and a more active role for human judgment in critical decisions.

Other contributors, including Aws Albarghouthi of the University of Wisconsin–Madison and Yea-Seul Kim of Apple, add depth to the discussion. If you’ve ever wrestled with the balance between automation and fairness, the study offers useful insights into how technology might better serve everyone.

By stepping back and considering multiple angles, the research nudges us to rethink how we build and use machine learning systems—ensuring decisions are as fair and transparent as possible.