AI Ethics Symposium Schedule

We are happy to announce the tentative schedule for the AI Ethics Symposium on November 22nd, 2024. We will start the morning with presentations from our three plenary speakers: Alison L. Antes of Washington University School of Medicine, Benjamin Collins of Vanderbilt University Medical Center, and S. Matthew Liao of NYU School of Global Public Health.

Later in the afternoon, we will have parallel sessions for those whose abstracts were approved. Please take a look at the schedule for the afternoon panels below. We can’t wait to see you there.

For more information, check out our page: Symposium Information

If you have not yet registered, please register here: Registration Form

  • Registration closes on November 1st, 2024.

Afternoon panel presentation sessions:

Enhancing SimUNet for OPO Use

As we continue work on this project, we’re excited to share the latest developments on SimUNet, which we will use in field experiments to test our AI models with OPO professionals.

This year, we’ve focused on understanding the OPO context and identifying the core features SimUNet needs to serve as an effective research tool. SimUNet’s original interface was designed around transplant center clinicians evaluating and responding to kidney offers. The upcoming updates will introduce a more flexible architecture that allows researchers to create customized study designs and conduct experiments in an OPO context.

We’ve already made significant progress in modernizing SimUNet’s design and adding administrative tools to help manage studies. Currently, the UNOS Software Engineering team is updating SimUNet to include more robust researcher features and to add the OPO study designs.

Our Next Steps:

Our immediate focus is on developing detailed OPO scenarios. These scenarios will simulate the decision-making points of OPO staff, allowing us to see how the tools we are building can assist them in their critical roles. We are also looking to identify key events where the AI tools have the most potential to introduce efficiencies.

We plan to conduct focus groups and testing with OPO professionals to refine the tool. Once the SimUNet OPO prototype is ready, we will continue refining and testing it during the first half of 2025.

Why This Matters:

The enhancements we’re making to SimUNet will allow us to empirically test whether the tools we are developing give OPO staff a more intuitive way to manage organ donation processes. By incorporating our AI models, we aim to streamline their workflows, improving both efficiency and outcomes in organ transplantation. Our goal is to start the field experiments in September 2025.

We are excited about the potential impact of this project and look forward to sharing more updates as we move closer to the study start date.

Stay tuned for further developments as we continue to refine and test the improvements to SimUNet.

Contributor: Brendon Cummiskey (UNOS)

Upcoming AI Ethics Symposium

Please join us at the AI Ethics Symposium: Bridging Disparities in Health Care Using Artificial Intelligence on Friday, November 22, 2024 from 8:30 am to 5:00 pm.   

Location: Il Monastero, Saint Louis University, St. Louis, MO

Registration: Free! Please register via: https://forms.gle/kB6tytzJYwKDdJC68  

Physicians, researchers, and scholars, including graduate and professional students, are invited to submit proposals on topics that pertain to practical and ethical opportunities and challenges related to AI’s capacity to address inequality and promote fairness in health care, particularly with respect to organ transplantation. In addition, proposals for panels representing public stakeholders, such as health policymakers and patient populations, would be especially welcome.  

To learn more, please see https://sites.mst.edu/aifortransplant/symposium/

We look forward to seeing you!

Translating Human Subjects Research Across Domains

AI systems are widely used in healthcare for applications ranging from medical image scan analysis to cancer detection. Human-subject studies are essential for understanding the effects of AI in specific tasks or domains. For us, these experiments are crucial for assessing the impact of Explainable AI (XAI) on factors like task performance, user trust, and user understanding. Conducting these experiments enhances our understanding of human-AI interaction, ultimately shaping AI systems to be more effective and suitable for specific domains, users, and tasks. However, to develop theory, we need to conduct a large number of experiments to iterate on designs.

Unfortunately, in the healthcare domain, it is difficult to conduct a large number of human-subject experiments. Experts, such as doctors and nurses, are busy and don’t have time to participate in multiple studies. Therefore, we find it useful to conduct more general experiments in a non-healthcare context (i.e., an analogy domain) where we can use an online participant pool. Once we find designs that work well in general, we will be able to test them in the kidney transplant context via a SimUNet field trial.

Identifying an Appropriate Analogy Domain

In the kidney transplant process, the proposed AI system will be utilized by surgeons and coordinators with expertise in matching donors and recipients. Therefore, the analogy domain should enable people to leverage their knowledge to complement the AI, rather than blindly accepting its predictions. Expertise can be assessed through self-reported measures, career-related information, and objective knowledge tests.

The analogy domain must also have tabulated information, like the dataset used in the kidney transplant process, which contains elements such as donor age, gender, and serum creatinine levels. The outcome measure should be predictable, not random, allowing users to make informed decisions. Domains like fantasy football or sports betting are unsuitable due to high randomness.

Finally, the domain should involve subjective truth – a decision based on personal expertise and risk preferences, where two experts could genuinely disagree about the appropriate path forward. In the kidney transplant process, surgeons’ decisions to accept or decline a donor kidney are used as the ground truth in the AI’s training. This is different from an “objective truth,” such as the health outcomes that a patient realizes in the future. In many cases, such as when a transplant is not conducted, we don’t know what that objective truth would have been.

Based on these characteristics, we have developed an experimental task in the real estate domain. The decision-making process in real estate involves the assessment of multiple attributes and expert knowledge. This similarity makes it a useful domain for studying how AI explanations affect user decision-making. Many people have familiarity with the process of buying a house, so they understand the decisions that go into it. For real estate, there is data on the house as well as the buyer, which have to go through a matching process. This allows us to test different XAI formats that can eventually be translated to the kidney transplant context.
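To make the analogy concrete, below is a minimal sketch of what a single trial in such a real estate task could look like. The attribute names, values, and the feature-attribution format are illustrative assumptions, not the actual study materials or interface.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    # Tabular house attributes, analogous to donor characteristics
    price: int
    square_feet: int
    year_built: int
    school_rating: int  # 1-10

@dataclass
class BuyerProfile:
    # Buyer attributes, analogous to recipient characteristics
    budget: int
    min_square_feet: int
    prefers_new_build: bool

@dataclass
class Trial:
    """One decision point shown to a study participant."""
    listing: Listing
    buyer: BuyerProfile
    ai_recommendation: str      # e.g., "good match" / "poor match"
    feature_attributions: dict  # one possible XAI format under test

# A single illustrative trial (all values are hypothetical)
trial = Trial(
    listing=Listing(price=310_000, square_feet=1_850, year_built=1998, school_rating=7),
    buyer=BuyerProfile(budget=325_000, min_square_feet=1_700, prefers_new_build=False),
    ai_recommendation="good match",
    feature_attributions={"price": 0.35, "square_feet": 0.20, "year_built": -0.05},
)
print(trial.ai_recommendation)
```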

Contributors: Harishankar V. Subramanian (S&T), Casey Canfield (S&T)

OPO Perspectives on AI Adoption

Between December 2023 and May 2024, we interviewed 10 OPO leaders from across the country. One recurring observation has been the existence of conflicting visions for AI adoption.

Some leaders believe there needs to be a coordinated, top-down approach to AI adoption to ensure fairness and equity, likely led by government regulators. However, this would be a slower process.

Others see a need for a more bottom-up approach to facilitate experimentation and innovation. This can support the identification of best practices, but not all OPOs are equally able to participate. OPOs that are bigger or have more resources will be better able to benefit from AI.

The bottom line is that the exact impact of AI on OPO practices is unknown, and there are pros and cons to both top-down and bottom-up approaches to adoption. We are hoping to generate evidence on how much AI can help as part of our planned field experiments using UNOS’s SimUNet. We are currently drafting a paper to summarize our findings and encourage more dialogue on this subject. In the meantime, we want to know what you think!

Developing Deep Learning Models for OPOs and Transplant Centers

So far, we have created two deep learning models that are focused on the OPO perspective:

  1. Deceased Donor Kidney Assessment – which evaluates the likelihood that a donated kidney will be transplanted based on up to 18 characteristics (including biopsy information, if available). This can help OPOs determine whether a kidney is hard-to-place based on historical behavior. You can play with this model here: https://ddoa.mst.hekademeia.org/#/
  2. Final Acceptance – which improves the estimate of the likelihood of transplant by incorporating recipient characteristics. This can be used to make an estimate for each recipient on the Match Run. For hard-to-place kidneys, this can help determine where to go for expedited placement, and OPOs could save time by not sending offers that are very unlikely to be accepted (see the sketch below).
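As an illustration of how the Final Acceptance estimates might be used, here is a minimal sketch that sorts a match run by predicted acceptance probability so an OPO could prioritize offers during expedited placement. The candidate IDs, probabilities, and the 0.05 cutoff are made-up values for illustration, not outputs of the actual model.

```python
# Hypothetical predicted acceptance probabilities for candidates on a match run
# (illustrative values only, not real model output).
match_run = [
    {"candidate_id": "R-101", "predicted_acceptance": 0.02},
    {"candidate_id": "R-102", "predicted_acceptance": 0.18},
    {"candidate_id": "R-103", "predicted_acceptance": 0.41},
    {"candidate_id": "R-104", "predicted_acceptance": 0.04},
]

# Skip offers that are very unlikely to be accepted (the cutoff is an assumption)
LIKELY_ENOUGH = 0.05
prioritized = sorted(
    (c for c in match_run if c["predicted_acceptance"] >= LIKELY_ENOUGH),
    key=lambda c: c["predicted_acceptance"],
    reverse=True,
)

for candidate in prioritized:
    print(candidate["candidate_id"], candidate["predicted_acceptance"])
```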

Both models are trained on the OPTN Deceased Donor Dataset. We are planning to test the impact of these models using UNOS’s SimUNet, which is a research platform that is currently being expanded to include the OPO perspective as part of this project. To date, SimUNet studies have focused only on transplant surgeon decision-making.

To develop models that support transplant surgeons, we believe a more tailored approach is needed. Professor Cihan Dagli and Rachel Dzieran are working on a new model called Transplant Surgeon Fuzzy Associative Memory (TSFAM). The intent is for the model to combine deep learning with fuzzy associative memory to capture individualized transplant surgeon practices and assessments. Fuzzy logic accounts for imperfect data and ambiguity, which is more consistent with how humans make decisions. We are identifying the decision rules used by an individual transplant surgeon and then tailoring the AI-based decision-making model to support that surgeon’s decision-making. Case studies are currently being reviewed to build the structure for collecting individualized transplant surgeon policies. The primary goal of this work is to support transplant surgeons by using their own policies when assessing deceased donor organs. Dr. Dagli has two new PhD students joining in Fall 2024 to continue developing this model.
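To give a flavor of the fuzzy logic idea only (this is not the TSFAM architecture), the toy sketch below encodes one hypothetical surgeon decision rule with triangular membership functions, taking the fuzzy AND as the minimum of the membership degrees. The variables, breakpoints, and the rule itself are assumptions for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative membership functions (breakpoints are assumptions, not TSFAM parameters)
def donor_age_is_high(age):
    return tri(age, 50, 70, 90)

def creatinine_is_elevated(creatinine):
    return tri(creatinine, 1.2, 2.5, 4.0)

def rule_decline_strength(age, creatinine):
    """Toy rule: IF donor age is high AND creatinine is elevated THEN lean toward decline."""
    return min(donor_age_is_high(age), creatinine_is_elevated(creatinine))

# Partial truth in [0, 1] rather than a hard yes/no
print(rule_decline_strength(age=68, creatinine=2.1))
```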

Contributors: Rachel Dzieran (S&T), Cihan Dagli (S&T)

“Fairness” in Kidney Allocation

How does fairness work in organ transplantation? 

At present, the organ allocation process is based on:

  • First, relevant medical criteria – such as the severity of the potential recipient’s condition and blood type – as well as the geographic proximity of the donor and recipient.
  • Second, once those criteria are taken into account, fairness is operationalized on a first-come, first-served basis: potential recipients who have been on the waitlist the longest are prioritized to receive a matched organ (see the simplified sketch below).
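The sketch below is a highly simplified rendering of that ordering logic: screen on medical criteria and proximity, then rank the remaining candidates by time on the waitlist. It is illustrative only and omits the many additional factors in the real allocation policy; the field names and values are assumptions.

```python
# Simplified allocation ordering: filter on medical criteria and proximity,
# then prioritize by longest time on the waitlist (first come, first served).
candidates = [
    {"id": "C1", "blood_type_compatible": True,  "within_distance": True,  "days_waiting": 820},
    {"id": "C2", "blood_type_compatible": True,  "within_distance": True,  "days_waiting": 1450},
    {"id": "C3", "blood_type_compatible": False, "within_distance": True,  "days_waiting": 2000},
]

eligible = [c for c in candidates if c["blood_type_compatible"] and c["within_distance"]]
offer_order = sorted(eligible, key=lambda c: c["days_waiting"], reverse=True)

print([c["id"] for c in offer_order])  # longest-waiting eligible candidate first
```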

The issue at hand is that potential recipients matched with higher-risk kidneys often decline them, since they are likely to receive a more desirable kidney in the near future. Each decline requires moving further down the waitlist, which is time-consuming, and “time is tissue.” This project’s goal is to introduce an AI-assisted decision-making tool that operationalizes fairness in the algorithm and provides a “first and best”-served model: one that reflects the same level of fairness as standard organ allocation but offers a more precise way of matching less desirable kidneys with recipients who are likely to accept them. In short: who is most likely to accept, and to succeed with, this kidney?

Why AI? 

Algorithms can help simplify complex systems. However, algorithms use historical data to make estimates, which can introduce bias from previous decisions and changing policies. This is where the fairness of algorithms comes into play. We want to evaluate the models, not the current process, to ensure the AI’s generalizations are actually helpful. Our AI tool aims to be a fair decision assistant. Surgeons and their patients have the final say and the power to override or ignore the algorithm’s prediction. Algorithms cannot see external factors, such as a recipient’s support system, their comfort with the risk factors of a particular offer, and other considerations important for a successful transplant; surgeons and transplant professionals can. With this AI, surgeons can see the recommendation, weigh additional factors, and make the final transplant decision in collaboration with their patient.

Project Overview:

This project aims to bring significant improvements to how kidneys are offered for transplant while examining and addressing biases in AI systems used to support decision-making. It involves developing fairer AI systems that balance the needs of transplant candidates, transplant centers, and OPOs. The main goals are to:

  1. Understand different stakeholder preferences to identify biases in the AI system.
  2. Combine these preferences to determine a fair approach (a toy aggregation sketch follows this list).
  3. Enhance the AI system’s fairness based on the combined preferences.
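As a cartoon of goal 2, the sketch below combines hypothetical fairness ratings from different stakeholder groups into a single score with an equal-weighted average. The group names, ratings, and weighting scheme are assumptions for illustration, not the project’s actual aggregation method.

```python
# Hypothetical fairness ratings (0-1) of a candidate AI policy from each
# stakeholder group; the values and the equal weighting are illustrative only.
ratings = {
    "transplant_candidates": 0.72,
    "transplant_centers": 0.64,
    "opos": 0.81,
}

weights = {group: 1 / len(ratings) for group in ratings}  # equal-weighting assumption
aggregate_fairness = sum(weights[g] * r for g, r in ratings.items())
print(f"Aggregate fairness score: {aggregate_fairness:.2f}")
```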

In simpler terms, the project looks at how biases in AI can affect decisions about kidney transplants and works to create a fairer system that considers the needs and views of everyone involved.

Contributors: Jason Eberl (SLU), Michael Miller (SLU), Venkata Sriram Siddhardh Nadendla (S&T), Mukund Telukunta (S&T)

Panel at Artificial Intelligence and You Symposium

The Artificial Intelligence and You Symposium, hosted by the Center for Science, Technology, and Society, was held on April 26th at the Innovation Forum at Missouri S&T. Our team organized a session on measuring and aggregating preferences in AI for kidney transplant healthcare, which included three short presentations followed by a panel discussion led by Casey Canfield and joined by Cihan Dagli, Daniel Shank, and Sid Nadendla.

Harishankar Subramanian presenting on explainable AI interfaces.

Harishankar Subramanian spoke on integrating stakeholder input into the design of explainable interfaces. Explainable AI (XAI) helps users understand a system’s processes and logic, appropriately calibrate their trust in the system, and manage performance effectively by understanding why the system is operating the way it is. One of the main findings was that the appropriate interface for an AI will vary based on user expertise and decision-making process. Future work will investigate user understanding of AI and users’ ability to judge when to rely on AI recommendations.

Mukund Telukunta presenting on bias in the kidney placement process.

Mukund Telukunta spoke on measuring social fairness preferences from non-expert opinions in kidney transplantation. There is a history of racial bias in the organ transplant process, which needs to be considered in the development of AI tools. Survey participants were shown information, including the AI’s recommendation as well as the surgeon’s decision for 10 potential recipients, and asked to evaluate the fairness of the outcomes. Participants deemed the AI decision support system fair. Future work will consist of gathering expert opinions, such as those of OPO staff, transplant surgeons, and patients.

Amaneh Babaee presenting on embedding preferences in adaptable AI decision support.

Amaneh Babaee spoke on her research identifying organizational and individual factors in AI adoption for kidney transplant, specifically in organ procurement organizations (OPOs). Interviews with OPOs suggest that AI will be useful if it can measurably speed up the allocation process. In addition, the implications of OPO size vary: a large OPO has more financial resources to adopt AI but a slower decision-making process, while a small OPO has fewer financial resources but can quickly decide whether or not AI is suitable for it. She is also deploying a survey to measure perceptions of AI. While both studies are still ongoing, it seems that AI has the potential to improve transplant outcomes, but regulatory hurdles may hinder its integration into existing operations.

Panel Discussion led by Casey Canfield (NP)
Left to Right: Dr. Shank, Dr. Nadendla, Dr. Dagli, and Student Researchers Harishankar Subramanian, Mukund Telukunta and Amaneh Babaee.

In the panel discussion, one concern was that the AI could make incorrect recommendations, leading to negative outcomes. It is unlikely that this process would be fully automated; there will always need to be a human in the loop. Transplant centers and OPOs will still need to rely on their expertise to fill in the gaps, since the AI does not have as much information as they do about a particular case.

How Do You Feel About Adopting AI in Kidney Placement?

We want to know what transplant stakeholders think about adding AI to the kidney placement process. Please let us know if you would like to participate in a survey or interview! Email Elham (abt8f@umsystem.edu) for more information. 

We are proposing that OfferAI, a hypothetical tool based on the one we are developing, could help people at transplant centers and organ procurement organizations (OPOs) (see diagram below). For a transplant center, AI could be used to accept or deny kidney offers faster. For OPOs, AI could be used to decide if the kidney is hard-to-place and help decide when to use a rescue pathway (i.e., accelerated or expedited placement).  

As part of our research, we are currently interviewing OPOs to get a better understanding of the potential benefits and drawbacks of implementing AI into their organizations. Interview data will remain confidential and will be solely used for research purposes.  

We are also seeking participants for a survey about AI adoption. We are comparing how attitudes, perceived risks, assurance and trust in AI, interpersonal influence, and government influence affect interest in AI adoption across transplant centers, OPOs, patients, and the public. This will help us identify potential barriers and opportunities for AI in the kidney transplant placement process.  

Right now, we are recruiting people who work at transplant centers and OPOs.

Research Update

We recently hosted a webinar to provide an update on our research progress:

  • We have launched a prototype of OfferAI for deceased donor kidney assessment. Please try it out and let us know what you think.
  • We have drafted a review paper about how to design explainable AI to improve human-AI team performance.
  • We are preparing to launch 2 surveys related to adoption and fairness to understand how different stakeholders perceive OfferAI and believe it should be implemented.

Review the recording and check out the slides!