National Academy of Medicine drafts code of conduct for AI in healthcare

The National Academy of Medicine has issued a landscape review, along with draft code principles and commitments, saying that an accurate, safe, reliable and ethical AI transformation in healthcare and biomedical science is achievable.

Based on the Leadership Consortium’s Learning Health System Core Principles, an initiative the academy has spearheaded since 2006, the organization said its new draft framework promotes responsible behavior in AI development, use and ongoing assessment. 

Its core tenets require inclusive collaboration, ongoing safety assessment, efficiency and environmental protection.

WHY IT MATTERS

The full commentary, which contains a landscape review and “Draft Code of Conduct Framework: Code Principles and Code Commitments,” was developed through the academy’s AI Code of Conduct initiative, under a steering committee of expert stakeholders, according to an announcement.

The code principles and the proposed code commitments “reflect simple guideposts to guide and gauge behavior in a complex system and provide a starting point for real-time decision-making and detailed implementation plans to promote the responsible use of AI,” the National Academy of Medicine said.

The academy’s Artificial Intelligence Code of Conduct initiative, launched in January 2023, engaged many stakeholders – listed in the acknowledgments – in co-creating the new draft framework.

“The promise of AI technologies to transform health and healthcare is tremendous, but there is concern that their improper use can have harmful effects,” Victor Dzau, academy president, said in a statement.

“There is a pressing need for establishing principles, guidelines and safeguards for the use of AI in healthcare,” he added.

Beginning with an extensive review of the existing literature on AI guidelines, frameworks and principles – some 60 publications – the editors identified three areas of inconsistency: inclusive collaboration, ongoing safety assessment, and efficiency or environmental protection.

“These issues are of particular importance as they highlight the need for clear, intentional action between and among various stakeholders comprising the interstitium, or connective tissue that unify a system in pursuit of a shared vision,” they wrote.

Their commentary also identifies additional risks of using AI in healthcare, including misdiagnosis, overuse of resources, privacy breaches and workforce displacement or “inattention based on over-reliance on AI.”

The framework’s 10 code principles and six code commitments are intended to ensure that AI best practices maximize human health while minimizing potential risks, the academy said, noting they serve as “basic guideposts” to support organizational improvement at scale.

“Health and healthcare organizations that orient their visions and activities to these 10 principles will help advance the system-wide alignment, performance and continuous improvement so important in the face of today’s challenges and opportunities,” the academy said.

“This new framework puts us on the path to safe, effective and ethical use of AI, as its transformational potential is put to use in health and medicine,” Michael McGinnis, National Academy of Medicine executive officer, added.

Peter Lee, president of Microsoft Research and an academy steering committee member, noted that the academy invites public comment (through May 1) to refine the framework and accelerate AI integration in healthcare. 

“Such advancements are pivotal in surmounting the barriers we face in U.S. healthcare today, ensuring a healthier tomorrow for all,” Lee said.

In addition to gathering stakeholder input, the academy said it would convene critical contributors into workgroups and test the framework in case studies. It will also consult individuals, patient advocates, health systems, product development partners and key stakeholders – including government agencies – before releasing a final version of the code of conduct framework for AI in healthcare.

THE LARGER TREND

Last year, the Coalition for Health AI developed a blueprint for AI in healthcare that took a patient-centric approach to addressing barriers to trust and other challenges; that work helped inform the academy’s AI Code of Conduct.

The blueprint was built on the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” Dr. Brian Anderson, a CHAI cofounder who was then chief digital health physician at MITRE and is now CHAI’s chief executive officer, said in the blueprint’s announcement.

While most healthcare leaders agree that trust is a chief driver of improving healthcare delivery and patient outcomes with AI, how health systems should put ethical AI into practice remains a terrain littered with unanswered questions.

“We don’t have yet a scalable plan as a nation in terms of how we’re going to support critical access hospitals or [federally qualified health centers] or health systems that are less resourced, that don’t have the ability to stand up these governance committees or these very fancy dashboards that are going to be monitoring for model drift and performance,” Anderson told Healthcare IT News last month.

ON THE RECORD

“The new draft code of conduct framework is an important step toward creating a path forward to safely reap the benefits of improved health outcomes and medical breakthroughs possible through responsible use of AI,” Dzau said in the National Academy of Medicine’s announcement.

Andrea Fox is senior editor of Healthcare IT News.

Email: afox@himss.org


Healthcare IT News is a HIMSS Media publication.