AI In Healthcare Resource Center - McDermott Will & Emery

AI IN HEALTHCARE RESOURCE CENTER

Artificial Intelligence (AI) is not new to healthcare, but emerging generative AI tools present a host of novel opportunities and challenges for the healthcare sector. For years, our global, cross-practice team and health law leaders have guided innovative companies to use AI and machine learning (ML) in the delivery and support of patient care and the provision of cutting-edge tools in the health and life sciences industries. We continue to closely monitor the state of AI in healthcare to help you maximize the value of data and AI with an eye toward compliance in this rapidly evolving regulatory landscape.

We’ve curated links to key AI-related resources from legislative and executive bodies, government agencies and industry stakeholders around the world in one convenient place so you can stay current on these important issues and actively shape the AI policy landscape. This resource center will be updated with the latest developments, as well as insights and analyses from our team.

Subscribe now to receive updates, and please get in touch with us to discuss how your organization is developing or deploying AI/ML solutions.

OPPORTUNITIES FOR PUBLIC COMMENT

  • Request for Information: Due June 7, 2023 | White House, Office of Science and Technology Policy (OSTP) Request for Information: National Priorities for Artificial Intelligence
    • Summary: The Biden-Harris Administration is developing a National AI Strategy to facilitate a cohesive and comprehensive approach to AI-related risks and opportunities. To inform this strategy, OSTP seeks public comment on a variety of topics, including questions relating to protecting rights, safety, and national security; advancing equity and strengthening civil rights; bolstering democracy and civic participation; promoting economic growth and jobs; and innovating in public services.
  • Request for Comment: Due June 12, 2023 | National Telecommunications and Information Administration (NTIA) AI Accountability Policy
    • Summary: On April 11, 2023, NTIA issued a request for comment on an AI accountability ecosystem. Specifically, NTIA is seeking feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.
  • Proposed Rule Comment Opportunity: Due June 20, 2023 | Office of the National Coordinator for Health Information Technology (ONC) Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Proposed Rule
    • Summary: On April 11, 2023, ONC released a proposed rule that includes proposals to “promote greater trust in the predictive decision support interventions (DSIs) used in healthcare to…enable users to determine whether predictive DSI is fair, appropriate, valid, effective and safe.” The proposed transparency, documentation and risk management requirements impact developers that participate in the ONC Health IT Certification Program and those that create predictive DSIs that are enabled by or interface with certified health IT modules.
  • Comment Opportunity: Due July 3, 2023 | Food and Drug Administration (FDA) Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions
    • Summary: On April 3, 2023, FDA issued draft guidance with recommendations on the information to be included in a Predetermined Change Control Plan to be submitted in marketing submissions for AI and ML-enabled devices. The purpose of a Predetermined Change Control Plan is to account for certain planned or expected device modifications that would otherwise normally require a premarket approval supplement, new de novo submission or new 510(k) under applicable regulations. See our summary of the draft guidance here.
  • Comment Opportunity: Due August 9, 2023 | FDA CDER/CBER/CDRH Paper on Using Artificial Intelligence and Machine Learning (AI/ML) in the Development of Drugs and Biologics (FR Notice)
    • Summary: On May 10, 2023, FDA’s Center for Drug Evaluation and Research (CDER), in collaboration with the Center for Biologics Evaluation and Research (CBER) and the Center for Devices and Radiological Health (CDRH), released this discussion paper to facilitate discussion with stakeholders on the use of AI/ML in drug development to help inform potential future rulemaking. The paper describes current and potential uses of AI/ML in drug discovery, clinical and non-clinical research, post-market surveillance, and advanced pharmaceutical manufacturing, and raises several questions for stakeholder input.
  • Comment Opportunity: No Deadline Provided | White House, President’s Council of Advisors on Science and Technology | Working Group on Generative AI Invites Public Input
    • Summary: The President’s Council of Advisors on Science and Technology launched a working group on generative AI to help assess key opportunities and risks, and provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly and safely as possible. The working group, which will hold its next public meeting on Friday, May 19, 2023, invites submissions from the public on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. The call for submissions outlines five specific questions for which the working group is seeking responses.

GOVERNMENT RESOURCES

United States

US DEPARTMENT OF HEALTH AND HUMAN SERVICES
  • HHS Chief AI Officer Website
    • Summary: The US Department of Health and Human Services (HHS) Office of the Chief Artificial Intelligence Officer (OCAIO) aims to facilitate effective collaboration on AI efforts across HHS agencies and offices. The site outlines HHS’s AI strategy, highlights AI accomplishments and priorities at HHS, and provides an inventory of HHS AI use cases. The site also provides a collection of AI-focused laws, regulations, executive orders and memoranda driving HHS’s AI efforts.
  • September 2021 | HHS Trustworthy AI Playbook
    • Summary: Published by the HHS OCAIO in September 2021, the Trustworthy AI Playbook includes HHS-specific guidance on major trustworthy AI concepts and how HHS leaders can confidently develop, use and deploy AI solutions. The playbook also lists the current statutory authorities that HHS believes it can use to regulate AI in healthcare (see Appendix III in the linked playbook). OCAIO also released an executive summary of the playbook. Of particular interest is the below graphic showing HHS’s directive to staff on how to regulate AI.
  • HHS Regulatory Considerations
FOOD AND DRUG ADMINISTRATION
  • Digital Health Center of Excellence
    • Summary: The Digital Health Center of Excellence (DHCoE) was created by FDA to empower stakeholders to advance healthcare by fostering responsible and high-quality digital health innovation. DHCoE is responsible for aligning and coordinating digital health work across the FDA. This site provides links to a variety of resources, draft guidance and events related to AI/ML.
CENTERS FOR MEDICARE AND MEDICAID SERVICES
  • Artificial Intelligence at CMS
    • Summary: This site offers a starting point for stakeholders interested in any aspect of AI at CMS. It provides links to foundational governance documents on AI, noting that those who wish to engage in AI-related activities, whether as a CMS employee, partner or vendor, should be aware of federal policies regarding the application of AI. The site also provides details on AI programs and initiatives at CMS.
OFFICE OF THE NATIONAL COORDINATOR FOR HEALTH INFORMATION TECHNOLOGY
  • Blog Series: Artificial Intelligence & Machine Learning
    • Summary: This blog series explores current and potential uses of AI, predictive models and machine learning algorithms in healthcare, and the role that ONC can play in shaping their development and use. Topics covered include increasing the transparency and trustworthiness of AI in healthcare, the risks and rewards of machine learning, and the risks these technologies pose to patients.
FEDERAL TRADE COMMISSION
  • February 27, 2023 Business Blog | Keep Your AI Claims in Check
    • Summary: FTC offers insight into its thought process when evaluating whether AI marketing claims are deceptive.
  • June 16, 2022 Report to Congress | Combatting Online Harms Through Innovation
    • Summary: On June 16, 2022, the FTC submitted its report to Congress required by the 2021 Appropriations Act discussing whether and how AI could be used to identify, remove or take other appropriate action to address online harms. The report details seven areas within the FTC’s jurisdiction in which AI could be useful in combatting online harms and provides recommendations on reasonable policies, practices and procedures for such uses and for any legislation that may advance them.
  • April 19, 2021 Business Blog | Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI
    • Summary: FTC highlights key pillars in managing consumer protection risks associated with AI and algorithms, including transparency, explainability, fairness and accountability.
  • April 8, 2020 Business Blog | Using Artificial Intelligence and Algorithms
    • Summary: FTC offers tips for the truthful, fair and equitable development and use of AI and identifies Section 5 of the FTC Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act as statutory authorities for FTC enforcement activity.
NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION
  • NTIA.gov
    • Summary: This site serves as a starting point for stakeholders interested in NTIA policymaking activities, which are principally focused on helping to develop policies necessary to verify that AI systems work as they claim.
NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY
  • NIST.gov
    • Summary: This site serves as a landing page for stakeholders interested in NIST’s research and standards development activities regarding AI.
  • Trustworthy & Responsible AI Resource Center
    • Summary: This platform supports and operationalizes the NIST AI Risk Management Framework (AI RMF) and accompanying playbook. It provides a repository of foundational content, technical documents, and AI toolkits, including standards, measurement methods and metrics, and data sets. Over time, it will provide an interactive space that enables stakeholders to share AI RMF case studies and profiles, educational materials and technical guidance related to AI risk management.
  • January 26, 2023 | NIST AI Risk Management Framework
    • Summary: On January 26, 2023, NIST released its “Artificial Intelligence Risk Management Framework (AI RMF).” The framework is designed to equip stakeholders with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment and use of AI systems over time. The AI RMF is intended to be a living document, to be reviewed frequently and updated as necessary. The framework has an accompanying playbook that provides suggested actions for achieving the outcomes laid out in the framework.

HEALTHCARE-SPECIFIC POLICY AND REGULATORY INITIATIVES

FOOD AND DRUG ADMINISTRATION
  • March 1, 2023 | CDER Paper/RFI on Artificial Intelligence in Drug Manufacturing (FR Notice)
    • Summary: On March 1, 2023, CDER issued a discussion paper soliciting public comments on areas for consideration and policy development associated with the application of AI to pharmaceutical manufacturing. The discussion paper includes a series of questions to stimulate feedback from the public, including CDER and CBER stakeholders. The comment period closed on May 1, 2023.
  • September 28, 2022 | Final Guidance: “Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff”
    • Summary: This final guidance provides clarity on FDA’s oversight and regulatory approach regarding clinical decision support (CDS) software intended for healthcare professionals and the types of CDS functions that do not meet the definition of a device as amended by the 21st Century Cures Act. See our summary of the draft guidance here.
OFFICE OF THE NATIONAL COORDINATOR FOR HEALTH INFORMATION TECHNOLOGY
  • April 11, 2023 | Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Proposed Rule
    • Summary: This proposed rule includes proposals to “promote greater trust in the predictive decision support interventions (DSIs) used in healthcare to…enable users to determine whether predictive DSI is fair, appropriate, valid, effective and safe.” The proposed transparency, documentation and risk management requirements impact developers that participate in the ONC Health IT Certification Program and those that create predictive DSIs that are enabled by or interface with certified Health IT Modules. The proposed rule was published in the Federal Register on April 18, 2023. Comments are due June 20, 2023.
  • January 2022 | AI Showcase: Seizing the Opportunities and Managing the Risks of Use of AI in Health IT
    • Summary: This showcase spotlights how federal agencies and industry partners are championing the design, development and deployment of responsible, trustworthy AI in health IT. Speakers included representatives from numerous federal agencies, Congress, the American Medical Association, academic health and research centers, and industry. The agenda, presentation slides and event recording are available to the public.
WORLD HEALTH ORGANIZATION
  • May 16, 2023 | WHO Calls for Safe and Ethical AI for Health
    • Summary: The WHO released a statement calling for caution to be exercised in using AI-generated large language model tools such as ChatGPT for health-related purposes. The WHO expressed concern that the caution that would normally be exercised for any new technology is not being exercised consistently with large language model tools, including adherence to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation, and enumerated several areas of concern with the use of such technology. The WHO proposed that these concerns be addressed, and that clear evidence of benefit be demonstrated, before such tools are used widely in routine healthcare and medicine, whether by individuals, care providers or health system administrators and policymakers.
  • June 28, 2021 | Report: Ethics and Governance of Artificial Intelligence for Health
    • Summary: The report identifies the ethical challenges and risks with the use of AI in healthcare, as well as six consensus principles to ensure AI works to the public benefit of all countries. The report is the product of input from experts in ethics, digital technology, law, human rights and health ministries. The WHO report includes a set of recommendations to governments and developers for oversight of AI in the delivery of healthcare, seeking to hold all stakeholders – in the public and private sectors – accountable and responsive to the healthcare workers who will rely on AI and the communities and individuals whose health will be affected by its use.

INDUSTRY AGNOSTIC POLICY INITIATIVES

United States

  • April 25, 2023, DOJ Civil Rights Division, CFPB, FTC and EEOC Joint Statement on Enforcement
    • Summary: The Civil Rights Division of the United States Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC) and the US Equal Employment Opportunity Commission (EEOC) released their “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” which reiterates each agency’s commitment to applying existing legal authorities to the use of automated systems and innovative new technologies.
  • October 2, 2022, White House Office of Science and Technology Policy | Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People
    • Summary: This document establishes five principles and associated practices to support the development of policies and procedures to protect civil rights and promote democratic values in the design, use and deployment of AI systems.

LEGISLATIVE ACTIVITY

United States

  • May 16, 2023, US Congress | Senate Judiciary Subcommittee on Privacy, Technology, and the Law Hearing on Oversight of AI
    • Summary: The Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing entitled “Oversight of A.I.: Rules for Artificial Intelligence.” The witnesses were Sam Altman, CEO of OpenAI; Christina Montgomery, Chief Privacy and Trust Officer at IBM; and Gary Marcus, Professor Emeritus at New York University. The hearing focused on the rapid expansion of AI, especially following the release of ChatGPT (developed by OpenAI). Senators on both sides of the aisle expressed concerns about the lack of regulatory oversight of the development and deployment of AI across various sectors, including financial services and healthcare. The witnesses agreed that regulation is needed, although they did not necessarily agree on what approach should be taken. Lawmakers compared the dramatic increase in the use of ChatGPT (and comparable services) to the proliferation of social media and expressed concerns about delaying regulation until the technology has already caused significant harm.
  • April 13, 2023, US Congress | Senate Majority Leader Chuck Schumer (D-NY) Launches Framework to Regulate AI
    • Summary: Senator Schumer announced a collaboration with stakeholders to develop a legislative framework for regulating AI. This effort is expected to involve multiple congressional committees and will focus on four key areas of concern: “Who,” “Where,” “How” and “Protect.” The first three areas aim to inform users, provide the government with the necessary data to regulate AI technology and minimize potential harm. The final area, Protect, is dedicated to aligning these systems with American values and ensuring that AI developers fulfill their promise of creating a better world.
  • National Conference of State Legislatures | 2023 AI Legislation Tracker
    • Summary: This resource tracks 2023 state legislation related to general AI issues, providing information and summaries of each bill. The National Conference of State Legislatures also has resources tracking state legislation related to AI from prior years, beginning in 2019.

INDUSTRY COMMENTARY

  • April 4, 2023, Coalition for Health AI | Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare
    • Summary: The Coalition for Health AI, an alliance of major health systems and tech companies, has released a blueprint to build trust in artificial intelligence’s use in healthcare. The blueprint focuses on framing risks, measuring impacts, allocating risk resources and strong governance.
  • March 9, 2023, US Chamber of Commerce | Artificial Intelligence Commission Report
    • Summary: The US Chamber of Commerce’s Artificial Intelligence Commission on Competitiveness, Inclusion, and Innovation released a comprehensive report on the promise of artificial intelligence, while calling for a risk-based regulatory framework that will allow for its responsible and ethical deployment. Notably, the Chamber’s release makes clear that this approach is a preferred alternative to the White House Office of Science and Technology Policy’s recent Blueprint for an AI Bill of Rights. The report outlines several major findings from the Commission’s year-long deep dive on AI.
  • February 1, 2019, Connected Health, an initiative of ACT: The App Association | Policy Principles for Artificial Intelligence in Health
    • Summary: The Connected Health Initiative’s Health AI Task Force released recommended principles to guide policymakers in taking action on AI. The principles cover areas such as research, quality assurance and oversight, access and affordability, bias, education, collaboration and interoperability, workforce issues, ethics, and privacy and security.