To realize the benefits and avoid the risks associated with health artificial intelligence (AI), it is imperative to strategically prioritize key actions that are most likely to ensure that incentives and supports are intentionally designed and properly executed, and that progress is both effective and responsive to a changing environment. This will require significant effort from, and coordination among, all stakeholder groups, as well as much work by multiple parties beyond the targeted efforts outlined below. The actions listed are not intended to be comprehensive; rather, they constitute the “vital few” highest priorities for advancing the Code Principles and Commitments while building the capability to respond rapidly to new tools, technologies, opportunities, and concerns.
In identifying priority actions for translating the Code Commitments into real-world application, all contributors to this work synthesized available information and identified gaps and opportunities. Two additional constructs were considered. The first was the identification of priority actions that are foundational to, supportive of, or capable of catalyzing additional needed action, and therefore likely to create a cascading effect that speeds the national collective effort to promote safe, effective, and trustworthy health AI. The second was the application, as appropriate, of behavioral economics, which offers strategies such as incentives, defaults, and framing to make preferred decisions and actions easier, more rewarding, or less costly (Siegel et al., 2021). Below, priority actions are outlined for each commitment for consideration in applying the Code Commitments in real-world settings.
To ensure that human health, agency, and connection remain the primary focus of health AI, it is essential to identify, transparently characterize, and promote the societal and cultural goals of those whom health AI is intended to serve, as an accountable mechanism to protect and advance human health and connection.
- Development of standards and other governance structures to assess the alignment of health AI development and use with societal and cultural goals for health AI
- Incentives and structures for independent evaluation, certification to the AI Code Commitments, and public, transparent reporting on certification status
To ensure equitable distribution of the benefits and risks of health AI, it will be critical to place equity and fairness at the center of all health AI development and use and to ensure their prioritization throughout the AI lifecycle.
- Standardized metrics to assess and report bias in data, AI output, and AI use in the interest of equitable distribution of benefit and risk
- Incentives and support for low-resourced organizations and communities to ensure equitable access to the benefits of AI
To ensure that key stakeholders are viewed as partners with agency at every stage of the lifecycle, it is important to identify, engage, and, most importantly, integrate all relevant stakeholder input in conceptualization, design, development, implementation, and surveillance throughout the health AI lifecycle.
- Participation by all key stakeholders across the health AI lifecycle
- Local governance bodies that include all stakeholders in the AI lifecycle
- Common understanding/education of all impacted parties
Consistent with the priorities laid out in the National Academy of Medicine Plan for Health Workforce Well-Being (NAM, 2022c), it is imperative to create a shared sense of purpose and potential for the health care workforce. Top priorities include workforce education and investment in research.
- Positive work and learning environments and culture (NAM, 2022c)
- Measurement, assessment, strategies, and research (NAM, 2022c)
Effective monitoring and sharing of AI’s performance and impact on health and safety will require stakeholders to integrate and align risk management and quality measurement and assurance frameworks across the health AI lifecycle. Careful consideration is needed to assess technical rigor, use-case utility, and trustworthiness (including equity and fairness) when conducting performance monitoring.
- Standardized quality and safety metrics to assess the impact of the use of health AI on health outcomes
Innovation and discovery are needed to drive continuous improvements to health, and shared learning and ongoing systems feedback are the foundation of the Learning Health System.
- A well-supported national health AI research agenda
- Participation in shared learning across all stakeholders
- Innovation as a core investment
While there is clearly much work to be done across stakeholders to advance responsible health AI, prioritizing actions that target the highest points of leverage, some of which are already in motion, will allow us to reap the benefits and avoid the pitfalls of health AI most expeditiously, safely, and effectively. Applying the concepts of behavioral economics, it will be important to make it easy and rewarding to abide by the shared vision, values, goals, and expectations described in the nationally aligned AI Code Principles and Code Commitments. Established standards, incentives, and transparent performance metrics will be key. Table 7-1 summarizes priority actions to operationalize the AICC Code Commitments.
TABLE 7-1 | Summary of Priority Actions to Operationalize the AICC Code Commitments

| Commitment | Action | Involved Parties |
|---|---|---|
| Advance Humanity | Development of standards and other governance structures to assess the alignment of health AI development and use with societal and cultural goals; incentives and structures for independent evaluation, certification to the AI Code Commitments, and public, transparent reporting on certification status | |
| Ensure Equity | Standardized metrics to assess and report bias in data, AI output, and AI use; incentives and support for low-resourced organizations and communities to ensure equitable access to the benefits of AI | |
| Engage Impacted Individuals | Participation by all key stakeholders across the health AI lifecycle; local governance bodies that include all stakeholders in the AI lifecycle; common understanding/education of all impacted parties | |
| Improve Workforce Well-Being | Positive work and learning environments and culture; measurement, assessment, strategies, and research | |
| Monitor Performance | Standardized quality and safety metrics to assess the impact of the use of health AI on health outcomes | |
| Innovate and Learn | A well-supported national health AI research agenda; participation in shared learning across all stakeholders; innovation as a core investment | |
NOTE: ASTP ONC = Assistant Secretary for Technology Policy, Office of the National Coordinator for Health Information Technology; FDA = U.S. Food and Drug Administration; HRSA = Health Resources and Services Administration; NIH = National Institutes of Health.