AI Ethics Symposium Schedule

We are happy to announce the tentative schedule for the AI Ethics Symposium on November 22nd, 2024. We will start the morning with presentations from our three plenary speakers: Alison L. Antes of Washington University School of Medicine, Benjamin Collins of Vanderbilt University Medical Center, and S. Matthew Liao of NYU School of Global Public Health.

Later in the afternoon, we will have parallel sessions for those whose abstracts were approved. Please take a look at the schedule for the afternoon panels below. We can’t wait to see you there.

For more information, check out our page: Symposium Information

If you have not yet registered, please register here: Registration Form

  • Registration closes on November 1st, 2024.

Afternoon panel presentation sessions:

Enhancing SimUNet for OPO Use

As our project continues, we’re excited to share the latest developments on SimUNet, which we will use in field experiments to test our AI models with OPO professionals.

This year, we’ve focused on understanding the OPO context and identified core features needed for SimUNet to serve as an effective research tool. SimUNet’s original interface was designed around transplant center clinicians evaluating and responding to kidney offers. The upcoming updates will introduce a more flexible architecture that will allow researchers to create more customized study designs and to conduct experiments in an OPO context.

We’ve already made significant progress in modernizing SimUNet’s design and adding important administrative tools to help manage studies. Currently, the UNOS Software Engineering team is updating SimUNet to include more robust researcher features and to add the OPO study designs.

Our Next Steps:

Our immediate focus is on developing detailed OPO scenarios. These scenarios will simulate the decision-making points of OPO staff, allowing us to see how the tools we are building can assist them in their critical roles. We are also looking to identify the key events where the AI tools have the most potential to introduce efficiencies.

We plan to conduct a focus group and testing with OPO professionals to refine the tool. Once the SimUNet OPO prototype is ready, we will continue refining and testing it during the first half of 2025.

Why This Matters:

The enhancements we’re making to SimUNet will allow us to empirically evaluate whether the tools we are developing give OPO staff more intuitive ways to manage organ donation processes. By incorporating our AI models, we aim to streamline their workflows, improving both efficiency and outcomes in organ transplantation. Our goal is to start the field experiments in September 2025.

We are excited about the potential impact of this project and look forward to sharing more updates as we move closer to our study start date in September 2025.

Stay tuned for further developments as we continue to refine and test the improvements to SimUNet.

Contributor: Brendon Cummiskey (UNOS)

Upcoming AI Ethics Symposium

Please join us at the AI Ethics Symposium: Bridging Disparities in Health Care Using Artificial Intelligence on Friday, November 22, 2024 from 8:30 am to 5:00 pm.   

Location: Il Monastero, Saint Louis University, St. Louis, MO

Registration: Free! Please register via: https://forms.gle/kB6tytzJYwKDdJC68  

Physicians, researchers, and scholars, including graduate and professional students, are invited to submit proposals on topics that pertain to practical and ethical opportunities and challenges related to AI’s capacity to address inequality and promote fairness in health care, particularly with respect to organ transplantation. In addition, proposals for panels representing public stakeholders, such as health policymakers and patient populations, would be especially welcome.  

To learn more, please see https://sites.mst.edu/aifortransplant/symposium/

We look forward to seeing you!

Translating Human Subjects Research Across Domains

AI systems are widely used in healthcare for applications ranging from medical image analysis to cancer detection. Human-subject studies are essential for understanding the effects of AI in specific tasks or domains. For us, these experiments are crucial for assessing the impact of Explainable AI (XAI) on factors like task performance, user trust, and user understanding. Conducting these experiments enhances our understanding of human-AI interaction, ultimately shaping AI systems to be more effective and suitable for specific domains, users, and tasks. However, to develop theory, we need to conduct a large number of experiments to iterate on designs.

Unfortunately, in the healthcare domain, it is difficult to conduct a large number of human-subject experiments. Experts, such as doctors and nurses, are busy and don’t have time to participate in multiple studies. Therefore, we find it useful to conduct more general experiments in a non-healthcare context (i.e., an analogy domain) where we can use an online participant pool. Once we find designs that work well in general, we will be able to test them in the kidney transplant context via a SimUNet field trial.

Identifying an Appropriate Analogy Domain

In the kidney transplant process, the proposed AI system will be utilized by surgeons and coordinators with expertise in matching donors and recipients. Therefore, the analogy domain should enable people to leverage their knowledge to complement the AI, rather than blindly accepting its predictions. Expertise can be assessed through self-reported measures, career-related information, and objective knowledge tests.

The analogy domain must also have tabular data, like the dataset used in the kidney transplant process, which contains elements such as donor age, gender, and serum creatinine levels. The outcome measure should be predictable, not random, allowing users to make informed decisions. Domains like fantasy football or sports betting are unsuitable due to their high randomness.

Finally, the domain should involve subjective truth – a decision based on personal expertise and risk preferences, where two experts could genuinely disagree about the appropriate path forward. In the kidney transplant process, surgeons’ decisions to accept or decline a donor kidney are used as the ground truth in the AI’s training. This is different from an “objective truth,” such as the health outcomes a patient realizes in the future. In many cases, such as when a transplant is not conducted, we don’t know what that objective truth would have been.

Based on these characteristics, we have developed an experimental task in the real estate domain. The decision-making process in real estate involves the assessment of multiple attributes and expert knowledge. This similarity makes it a useful domain for studying how AI explanations affect user decision-making. Many people are familiar with the process of buying a house, so they understand the decisions that go into it. In real estate, there is data on both the house and the buyer, which must be matched, much as donors and recipients are. This allows us to test different XAI formats that can eventually be translated to the kidney transplant context.
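To make the analogy concrete, here is a small sketch pairing the kind of tabular attributes involved in a kidney offer with loosely analogous attributes in a real estate decision. The specific fields and values are illustrative assumptions, not the actual study variables.

```python
# Illustrative only: hypothetical attributes showing how a kidney offer and a
# real estate decision both involve tabular "item" and "decision-maker" data
# that must be matched. These are not the actual study variables.

kidney_offer = {
    "donor_age": 52,               # donor characteristics
    "serum_creatinine": 1.4,
    "blood_type": "O",
    "candidate_age": 61,           # recipient characteristics
    "candidate_years_on_waitlist": 3.2,
}

real_estate_listing = {
    "house_age": 35,               # property characteristics
    "asking_price": 285_000,
    "neighborhood_rating": 7,
    "buyer_budget": 300_000,       # buyer characteristics
    "buyer_household_size": 4,
}

# In both domains, an expert weighs item attributes against the
# decision-maker's needs and risk preferences before accepting or passing.
```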

Contributors: Harishankar V. Subramanian (S&T), Casey Canfield (S&T)

OPO Perspectives on AI Adoption

Between December 2023 and May 2024, we interviewed 10 OPO leaders from across the country. One observation has been the existence of conflicting visions for AI adoption.

Some leaders believe that there needs to be a coordinated, top-down approach to AI adoption to ensure fairness and equity. This would likely be led by government regulators. However, this is likely to be a slower process.

Others see a need for a more bottom-up approach to facilitate experimentation and innovation. This can support the identification of best practices, but not all OPOs are equally able to participate. OPOs that are bigger or have more resources will be better able to benefit from AI.

The bottom line is that the exact impact of AI on OPO practices is unknown. We are hoping to generate evidence on how much AI can help as part of our planned field experiments using UNOS’s SimUNet. There are pros and cons to both top-down and bottom-up approaches to AI adoption. We are currently working on drafting a paper to summarize our findings and encourage more dialogue on this subject. In the meantime, we want to know what you think!

Developing Deep Learning Models for OPOs and Transplant Centers

So far, we have created two deep learning models that are focused on the OPO perspective:

  1. Deceased Donor Kidney Assessment – which evaluates the likelihood that a donated kidney will be transplanted based on up to 18 characteristics (including biopsy information, if available). This can help OPOs determine whether a kidney is hard-to-place based on historical behavior. You can play with this model here: https://ddoa.mst.hekademeia.org/#/
  2. Final Acceptance – which improves the estimate of the likelihood of transplant by incorporating recipient characteristics. This can be used to make an estimate for each recipient on the Match Run. For hard-to-place kidneys, this can help determine where to go for expedited placement. OPOs could save time by not sending offers that are very unlikely to be accepted.

Both models are trained on the OPTN Deceased Donor Dataset. We are planning to test the impact of these models using UNOS’s SimUNet, which is a research platform that is currently being expanded to include the OPO perspective as part of this project. To date, SimUNet studies have only focused on transplant surgeon decision-making.
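As a rough illustration of how the two predictors relate, the sketch below pairs a donor-only network (in the spirit of the Deceased Donor Kidney Assessment model) with a second network that appends recipient features (in the spirit of Final Acceptance). The layer sizes, feature counts, and inputs are placeholder assumptions, not the production models or their trained weights.

```python
# Hypothetical sketch only: stand-in networks illustrating the relationship
# between a donor-only predictor and one that adds recipient features.
# Layer sizes, feature names, and inputs are assumptions, not the real models.
import torch
import torch.nn as nn

N_DONOR_FEATURES = 18      # e.g., donor age, labs, biopsy info (placeholder)
N_RECIPIENT_FEATURES = 10  # e.g., candidate age, time on waitlist (placeholder)

# Donor-only model: probability that a donated kidney is transplanted at all.
donor_model = nn.Sequential(
    nn.Linear(N_DONOR_FEATURES, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

# Donor + recipient model: probability a specific candidate on the Match Run
# accepts this kidney, refining the donor-only estimate.
final_acceptance_model = nn.Sequential(
    nn.Linear(N_DONOR_FEATURES + N_RECIPIENT_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

donor_x = torch.randn(1, N_DONOR_FEATURES)          # one fabricated donor record
pair_x = torch.cat([donor_x, torch.randn(1, N_RECIPIENT_FEATURES)], dim=1)

print("P(kidney transplanted):", donor_model(donor_x).item())
print("P(this candidate accepts):", final_acceptance_model(pair_x).item())
```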

To develop models that support transplant surgeons, we believe a more tailored approach is needed. Professor Cihan Dagli and Rachel Dzieran are working on a new model called Transplant Surgeon Fuzzy Associative Memory (TSFAM). The intent is for the model to use deep learning network interfaces to capture individualized transplant surgeon practices and assessments through fuzzy associative memory. Fuzzy logic accounts for imperfect data and ambiguity, which is more consistent with how humans make decisions. We are identifying the decision rules used by an individual transplant surgeon and then tailoring the AI-based decision-making model to support that surgeon’s decision-making. Case studies are currently being reviewed to build the structure for collecting individualized transplant surgeon policies. The primary goal of this work is to support transplant surgeons by using their own policies when assessing deceased donor organs. Dr. Dagli has two new PhD students joining in Fall 2024 to continue developing this new model.
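Since TSFAM is still under development, the sketch below only illustrates the general idea behind fuzzy rules: attributes receive graded memberships rather than hard cutoffs, and a surgeon-style rule combines them into a soft acceptance score. The variables, membership functions, and rule are hypothetical, not TSFAM itself.

```python
# Generic fuzzy-rule illustration (not TSFAM): graded memberships instead of
# hard cutoffs, combined by a hypothetical surgeon-style rule.
def membership_high_kdpi(kdpi):
    """Degree to which a KDPI (0-100) counts as 'high'; ramps up from 60 to 85."""
    return min(max((kdpi - 60) / 25, 0.0), 1.0)

def membership_long_cit(cold_ischemia_hours):
    """Degree to which cold ischemia time counts as 'long'; ramps up from 12h to 24h."""
    return min(max((cold_ischemia_hours - 12) / 12, 0.0), 1.0)

def acceptance_score(kdpi, cold_ischemia_hours):
    """Hypothetical rule: 'IF KDPI is high AND cold ischemia is long THEN decline'.
    AND is the minimum of the memberships; acceptance is the complement."""
    decline_strength = min(membership_high_kdpi(kdpi),
                           membership_long_cit(cold_ischemia_hours))
    return 1.0 - decline_strength

print(acceptance_score(kdpi=85, cold_ischemia_hours=26))  # low score -> lean decline
print(acceptance_score(kdpi=40, cold_ischemia_hours=8))   # high score -> lean accept
```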

Contributors: Rachel Dzieran (S&T), Cihan Dagli (S&T)

CHIL 2024

We had a great time at the Conference on Health, Inference, and Learning (CHIL) this year. CHIL aims to bring together clinicians and researchers from both industry and academia who specialize in machine learning, health policy, causality, fairness, and other related fields. The conference fosters insightful discussion of innovative and emerging ideas, collaboration, and dialogue. Several presentations on algorithmic fairness were particularly helpful for our project.

Dr. Nadendla (Associate Professor) and Mukund Telukunta (PhD Student) presented “Learning Social Fairness Preferences from Non-Expert Stakeholder Opinions in Kidney Placement”.

Mukund Telukunta and Dr. Nadendla at CHIL.

Abstract: Modern kidney placement incorporates several intelligent recommendation systems which exhibit social discrimination due to biases inherited from training data. Although initial attempts were made in the literature to study biases in kidney placement, these methods replace true outcomes with surgeons’ decisions due to the long delays involved in recording such outcomes reliably. However, the replacement of true outcomes with surgeons’ decisions disregards expert stakeholders’ biases as well as social opinions of other stakeholders who do not possess medical expertise. This paper alleviates the latter concern and designs a novel fairness feedback survey to evaluate an acceptance rate predictor (ARP) that predicts a kidney’s acceptance rate in a given kidney-match pair. The survey is launched on Prolific, a crowdsourcing platform, and public opinions are collected from 85 anonymous crowd participants. Our results show that the public participants deem “accuracy equality” as the preferred notion of fairness across all sensitive features. Moreover, the specific ARP tested in the Prolific survey has been deemed fair by the participants.
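For readers unfamiliar with the term, “accuracy equality” asks that a predictor be roughly equally accurate across groups defined by a sensitive feature. The toy check below is our own simplified illustration of that notion, not the survey methodology or the ARP evaluated in the paper; the labels and groups are made up.

```python
# Toy illustration of "accuracy equality": compare predictive accuracy across
# groups defined by a sensitive feature. All data here is fabricated.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(yt == yp)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = accuracy_by_group(y_true, y_pred, groups)
gap = max(acc.values()) - min(acc.values())
print(acc, "gap:", round(gap, 2))  # smaller gap -> closer to accuracy equality
```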

AOPO Annual Meeting 2024

Our first time attending the Association of Organ Procurement Organizations (AOPO) annual meeting in San Antonio, TX was a success. Casey Canfield presented a poster on “Increasing Kidney Utilization Using Artificial Intelligence Decision Support” and talked with many OPOs about our proposal for integrating AI into their workflow. We look forward to continuing to work with the OPO community!

Find Us at AOPO 2024!

The Association of Organ Procurement Organizations 41st annual meeting will be held in San Antonio, Texas from June 24th – 26th. The AOPO Annual Meeting brings organ and tissue procurement professionals together to share ideas, create connections, and educate the donation and transplant community.

Casey Canfield will have a poster titled “Increasing Kidney Utilization Using Artificial Intelligence Decision Support for Accelerated Placement” on display in the foyer outside the Ballroom. She will be presenting it on Wednesday (6/26) from 10:30-11am. Be sure to stop by to ask her any questions and see how you can get involved!

“Fairness” in Kidney Allocation

How does fairness work in organ transplantation? 

The current organ allocation process is based on:

  • First, relevant medical criteria – such as the severity of the potential recipient’s condition and blood type – as well as the geographic proximity of the donor and recipient.
  • Second, potential recipients are placed on a list in which, once those criteria are taken into account, fairness is considered on a first-come, first-served basis: those who have been on the waitlist the longest are prioritized to receive a matched organ.

The issue at hand is that potential recipients matched with higher-risk kidneys often decline such kidneys because they are likely to receive a more desirable kidney in the near future. This forces the offer further down the waitlist, which is a time-consuming process, and “time is tissue.” This project’s goal is to introduce an AI-assisted decision-making tool that operationalizes fairness in the algorithm and provides a “first and best”-served model: one that reflects the same level of fairness as standard organ allocation but offers a more precise way of matching less desirable kidneys with recipients who are likely to accept them. In short: who is most likely to accept, and to succeed with, this kidney?
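To illustrate just the intuition (the real match run applies many more criteria), the sketch below contrasts ordering candidates purely by waiting time with ordering a hard-to-place offer by a model’s predicted acceptance probability. The candidates and probabilities are fabricated.

```python
# Simplified, fabricated illustration of "first come, first served" versus a
# "first and best" ordering for a hard-to-place kidney offer. The real match
# run applies many more criteria; this only shows the reordering idea.
candidates = [
    {"id": "C1", "years_waiting": 6.1, "p_accept": 0.05},
    {"id": "C2", "years_waiting": 4.3, "p_accept": 0.62},
    {"id": "C3", "years_waiting": 2.8, "p_accept": 0.40},
]

by_waiting_time = sorted(candidates, key=lambda c: -c["years_waiting"])
by_predicted_acceptance = sorted(candidates, key=lambda c: -c["p_accept"])

print("First come, first served:", [c["id"] for c in by_waiting_time])
print("First and best (hard-to-place offer):",
      [c["id"] for c in by_predicted_acceptance])
```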

Why AI? 

Algorithms can help simplify complex systems. However, algorithms use historical data to make estimates, which can introduce bias from previous decisions and changing policies. This is where the fairness of algorithms comes into play. We want to evaluate the models, not the current process, to ensure the AI’s generalizations are actually helpful. Our AI tool aims to be a fair decision assistant. Surgeons and their patients have the final say and the power to override or ignore the algorithm’s prediction. Algorithms can’t see external factors, such as whether the recipient has a support system, whether the recipient is comfortable with the risk factors of a particular offer, and other factors important for a successful transplant. Surgeons and transplant professionals have access to this information. With this AI, surgeons can see the recommendation, weigh these additional factors, and make the final transplant decision in collaboration with their patient.

Project Overview:

This project aims to bring significant improvements to how kidneys are offered for transplant while examining and addressing biases in AI systems used to support decision-making. It involves developing fairer AI systems that balance the needs of transplant candidates, transplant centers, and OPOs. The main goals are to:

  1. Understand different stakeholder preferences to identify biases in the AI system.
  2. Combine these preferences to determine a fair approach.
  3. Enhance the AI system’s fairness based on the combined preferences.

In simpler terms, the project looks at how biases in AI can affect decisions about kidney transplants and works to create a fairer system that considers the needs and views of everyone involved.

Contributors: Jason Eberl (SLU), Michael Miller (SLU), Venkata Sriram Siddhardh Nadendla (S&T), Mukund Telukunta (S&T)