We had a great time at the Conference on Health, Inference, and Learning (CHIL) this year. CHIL brings together clinicians and researchers from industry and academia who specialize in machine learning, health policy, causality, fairness, and related fields, with the goal of fostering insightful discussion and collaboration around innovative and emerging ideas. Several presentations on algorithmic fairness proved especially helpful for our project.
Dr. Nadendla (Associate Professor) and Mukund Telukunta (PhD Student) presented “Learning Social Fairness Preferences from Non-Expert Stakeholder Opinions in Kidney Placement”.
Abstract: Modern kidney placement incorporates several intelligent recommendation systems which exhibit social discrimination due to biases inherited from training data. Although initial attempts were made in the literature to study biases in kidney placement, these methods replace true outcomes with surgeons’ decisions due to the long delays involved in recording such outcomes reliably. However, the replacement of true outcomes with surgeons’ decisions disregards expert stakeholders’ biases as well as social opinions of other stakeholders who do not possess medical expertise. This paper alleviates the latter concern and designs a novel fairness feedback survey to evaluate an acceptance rate predictor (ARP) that predicts a kidney’s acceptance rate in a given kidney-match pair. The survey is launched on Prolific, a crowdsourcing platform, and public opinions are collected from 85 anonymous crowd participants. Our results show that the public participants deem “accuracy equality” as the preferred notion of fairness across all sensitive features. Moreover, the specific ARP tested in the Prolific survey has been deemed fair by the participants.
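The abstract's preferred fairness notion, "accuracy equality," asks that a predictor be equally accurate for every group defined by a sensitive feature. As a rough illustration only (not the authors' evaluation code), here is a minimal Python sketch of how such a gap could be measured; the function name, toy data, and group labels are all hypothetical.

```python
import numpy as np

def accuracy_equality_gap(y_true, y_pred, sensitive):
    """Largest pairwise difference in accuracy across sensitive groups.

    Accuracy equality approximately holds when this gap is near zero,
    i.e., the predictor is about equally accurate for every group.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)

    # Per-group accuracy of the predictor.
    accuracies = {}
    for group in np.unique(sensitive):
        mask = sensitive == group
        accuracies[group] = float(np.mean(y_true[mask] == y_pred[mask]))

    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Toy example with a hypothetical binary sensitive feature:
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
accs, gap = accuracy_equality_gap(y_true, y_pred, group)
print(accs, gap)  # {'A': 0.75, 'B': 0.75} 0.0
```

In this toy case both groups see 75% accuracy, so the gap is zero; the paper's survey instead asks non-expert participants which fairness notion they prefer and whether the tested acceptance rate predictor satisfies it.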