Bridging Algorithmic Fairness and Human Perception in AI for Transplant Decisions

This research explores the gap between how fairness is defined in AI systems and how it is understood by the people who interact with those systems, especially in high-stakes situations like kidney transplant decisions. In machine learning, fairness is typically measured with mathematical criteria, such as demographic parity or equalized odds, that aim to treat groups equally with respect to attributes like race or gender. But for patients, doctors, and families, fairness can feel much more personal and emotional. For example, a system might treat all demographic groups equally on paper yet still suggest a lower-priority transplant match for a patient whom clinicians feel deserves more urgent care. This disconnect shows that even if an algorithm is “fair” by technical standards, it may not align with what people experience as fair in real life. Our work aims to bridge that divide by combining data-driven fairness with human-centered understanding.
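
To make the technical side concrete, here is a minimal sketch of one such criterion, demographic parity, which compares how often a model recommends high-priority matches across two groups. The data, group labels, and decisions are all illustrative, not our actual model:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-decision rates between two groups.

    y_pred: binary decisions (1 = high-priority match recommended)
    group:  binary group indicator (an illustrative demographic attribute)
    """
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Illustrative data only: decisions drawn independently of group membership,
# so the gap is near zero -- technically "fair", yet any individual decision
# can still strike patients or clinicians as unfair.
rng = np.random.default_rng(seed=0)
group = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
```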

To better understand the gap between technical and perceived fairness, we developed an approach that captures how people respond to AI predictions. In our setup, non-expert stakeholders are asked whether they agree or disagree with transplant-related decisions made by the system. Aggregating this simple feedback lets us estimate how fairness is experienced across different social groups.
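
As a rough illustration of how this feedback can be aggregated, the sketch below computes per-group agreement rates from a hypothetical survey table; the column names and values are placeholders, not our actual pipeline:

```python
import pandas as pd

# Hypothetical responses: one row per (participant, decision) judgment,
# where agrees = 1 means the stakeholder endorsed the AI's decision.
responses = pd.DataFrame({
    "patient_group": ["A", "A", "A", "B", "B", "B"],
    "agrees":        [1,   1,   0,   0,   1,   0],
})

# Per-group agreement rate as a rough proxy for perceived fairness:
# systematically lower agreement on one group's decisions flags a
# perceived-fairness gap, even if algorithmic metrics look balanced.
perceived_fairness = responses.groupby("patient_group")["agrees"].mean()
print(perceived_fairness)
```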

We tested this approach through a survey experiment with 85 participants on Prolific. By comparing their responses to standard algorithmic fairness metrics, we uncovered a significant mismatch: even when the model met technical fairness standards, many participants still felt the outcomes were unfair. This reinforces the idea that human perceptions of fairness often go beyond what mathematical definitions can capture.  
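
One way to make such a mismatch concrete is to test whether stakeholder agreement rates differ across patient groups even when an algorithmic parity metric looks balanced. The counts below are hypothetical, standing in for the kind of survey judgments described above:

```python
from scipy.stats import fisher_exact

# Hypothetical agree/disagree counts per patient group, standing in for
# the kind of judgments collected in the Prolific survey.
#           agree  disagree
counts = [[130,    45],    # decisions involving group A
          [ 90,    85]]    # decisions involving group B

# If agreement rates differ significantly by group, perceived fairness
# diverges across groups even when an algorithmic parity metric is satisfied.
odds_ratio, p_value = fisher_exact(counts)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```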

To address the disconnect between algorithmic and perceived fairness, we propose two key strategies designed to surface and incorporate stakeholder expertise: 

  • Gamification: Interactive experiments that allow stakeholders to engage with transplant decision scenarios, helping researchers understand how people perceive fairness in different contexts. 
  • Stakeholder Audits: Sessions where patients, clinicians, and others directly evaluate AI-generated outcomes, offering critical insights into what they see as fair or unfair. 

Rather than aiming simply to educate stakeholders, our goal is to learn from their lived experiences and decision-making values. By centering their perspectives, we aim to build transplant AI systems that are more inclusive, trustworthy, and responsive, and that align more closely with real-world expectations of fairness. Moving forward, we plan to refine these strategies through continued engagement and implementation in real clinical settings.
