The Biden Administration’s Commitments for the Development and Use of AI in Healthcare

Leading Healthcare Entities Join Google and OpenAI in Signing Voluntary Commitments

Following Google, OpenAI, and 13 other AI companies, leading healthcare entities have agreed to sign the Biden-Harris Administration’s voluntary commitments for the safe, secure, and trustworthy development and use of artificial intelligence. This agreement reflects a series of actions aimed at harnessing the immense benefits of large-scale AI models in healthcare environments while ensuring patient privacy and mitigating risks.

Improving Care Coordination, Enhancing Patient Experiences, and Reducing Clinician Burnout

A total of 28 healthcare organizations, including CVS Health, Stanford Health, Boston Children’s Hospital, UC San Diego Health, UC Davis Health, and WellSpan Health, have signed these commitments. With the aim of addressing the skepticism surrounding AI systems in healthcare, these organizations seek to develop AI solutions that will deliver more coordinated care, improved patient experiences, and reduced clinician burnout.

“We believe that AI is a once-in-a-generation opportunity to accelerate improvements to the healthcare system, as noted in the Biden Administration’s call to action for frontier models to work towards early cancer detection and prevention,” the organizations noted in their commitment document.

Aligning with FAVES AI Principles and Establishing Trust and Transparency

To build trust among downstream users, the healthcare organizations have committed to ensuring their AI projects align with the fair, appropriate, valid, effective, and safe (FAVES) AI principles outlined by the U.S. Department of Health and Human Services (HHS). This alignment is intended to help mitigate bias and other known risks, ensuring that AI solutions deliver accurate and reliable results in real-world use cases.

“We will establish policies and implement controls for applications of frontier models, including how data are acquired, managed, and used. Our governance practices shall include maintaining a list of all applications using frontier models and setting an effective framework for risk management and governance, with defined roles and responsibilities for approving the use of frontier models and AI applications,” the companies wrote.

Continued R&D and Mitigating Problems with Open-Source Technology

In addition to current implementations, the healthcare organizations have committed to ongoing research and development of health-centric AI innovation with appropriate safeguards in place. They will use non-production environments, test data, and internally facing applications to prototype new AI applications while maintaining privacy compliance, and will continuously monitor deployments for fair and accurate responses across use cases, aided by human-in-the-loop evaluation and AI observability tools. The organizations also recognize the need to address the challenges associated with open-source AI technology.

Training the Workforce for Safe and Effective Development and Use of AI Applications

Finally, the organizations will prioritize workforce training on the safe and effective development and use of applications powered by advanced AI models, ensuring that their teams are equipped with the knowledge and skills needed to handle frontier models responsibly.
