Last year, both Congress and the Biden-Harris administration noticeably increased their attention to Artificial Intelligence (AI). Key congressional committees explored AI's implications for health care & life sciences, and the U.S. Department of Health and Human Services (HHS) and its agencies advanced their own initiatives, including as part of the federal government's efforts to implement President Biden's AI executive order (EO).

On Capitol Hill, much of the AI push thus far has been led by the Senate, where in December the "Group of Four," composed of Majority Leader Chuck Schumer (D-NY) and Sens. Martin Heinrich (D-NM), Todd Young (R-IN) and Mike Rounds (R-SD), concluded a months-long series of nine AI Insight Forums on various topics. Notably, the fourth forum, which followed the release of President Biden's AI EO, focused on AI's impact on high-risk applications, such as health care, and explored how developers can mitigate potential harms such as algorithmic bias. Outside of these forums, the Senate Health, Education, Labor, and Pensions (HELP) Committee recently held a hearing to examine AI in health care, at which witnesses called on Congress to prioritize establishing strong governance over the highest-potential dual-use risks of AI and biosecurity. Also of note, in a letter to the HHS Secretary, the Senate Finance Committee recently pressed HHS on its use of AI-enhanced tools, including how its programs approach coverage, reimbursement and regulation of such tools, and requested one or more staff briefings on a range of AI-related questions.

In the House, the Energy and Commerce Committee (E&C) recently convened a series of subcommittee hearings, including a Health Subcommittee hearing during which lawmakers on both sides of the aisle recognized that AI would help physicians avoid burnout, navigate workforce shortages and increase the efficiency of the health care system. Ranking Member Frank Pallone (D-NJ) echoed calls for information and privacy protections, especially around medical data. Following the Subcommittee series, the panel held a Full Committee hearing featuring testimony from Dr. Micky Tripathi, the National Coordinator for Health Information Technology, whom HHS Secretary Xavier Becerra has tasked with helping coordinate HHS's AI efforts, including under the AI EO. Dr. Tripathi discussed HHS's mandate under the EO to develop a strategic plan for the responsible deployment of AI, outlining specific plans to explore the need for additional statutory authority, develop an AI safety program, accelerate AI-related grant and contract allocations and launch multiple AI-related challenge grants, among other actions. Also of note, on January 3, Congressman Greg Murphy, M.D., Co-Chair of the Doctors Caucus, in a letter to Food and Drug Administration (FDA) Commissioner Dr. Robert Califf, probed the agency on how it is preparing for a rapid increase in requests for AI and machine-learning devices and the related workload implications.

In addition to the release of the long-awaited AI EO, the White House on December 14, 2023, announced that it had secured voluntary commitments from 28 provider and payer organizations, expanding upon earlier commitments it received from 15 AI companies to develop models responsibly. Under these commitments, the organizations have pledged to inform users of certain AI-generated content, adhere to a risk management framework for applications powered by foundation models, and investigate and develop uses of AI responsibly.

Moreover, the HHS Office of the National Coordinator for Health Information Technology (ONC) has taken its own AI-focused actions in recent weeks, including by finalizing its Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule. The final rule implements provisions of the 21st Century Cures Act and issues updated standards, implementation specifications and certification criteria for the ONC Health IT Certification Program. In particular, the rule establishes risk management and transparency requirements for AI and other predictive algorithms supplied by certified health IT, allowing users to access a broad set of information about the clinical and non-clinical algorithms they use to support their decision making and to assess such algorithms for fairness, appropriateness, validity, effectiveness and safety. These requirements go into effect December 31, 2024. ONC will host information sessions to explain the final rule, beginning on January 4, 2024.

The Department has also begun to issue broad-stroke principles to guide its work on AI. On December 14, 2023, HHS unveiled its multiyear data strategy highlighting the need to responsibly leverage AI and mitigate the risks to privacy and transparency posed by such new technologies. HHS has outlined four specific objectives: (1) establish policies on AI in health and human services, including through the AI EO's mandate to create an AI Task Force; (2) advance quality and safety of AI in health applications, including by developing a framework for the quality assurance of AI-enabled technologies; (3) leverage HHS funding to advance responsible use of AI in health, including through collaboration with private-sector organizations to improve readiness and address bias; and (4) deploy a full range of AI capabilities across HHS, including by establishing a set of guidelines for responsible AI use and implementing mandatory HHS staff training.

On the heels of the release of the multiyear data strategy, the Agency for Healthcare Research and Quality (AHRQ) on December 15, 2023, released five principles to guide AI developers and health care organizations in reducing potential bias in AI: (1) promote health and health care equity during health care algorithm life cycle phases; (2) ensure transparency and explainability in health care algorithms and their use; (3) engage patients and communities; (4) identify health care algorithmic fairness issues and tradeoffs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms.

The FDA is working to stand up a Digital Health Advisory Committee, which will provide guidance on the development, regulation and implementation of digital health technologies (DHTs) like AI. The Committee will be composed of nine voting members, as well as industry and consumer representatives, selected from a pool of digital health experts. Topics within the Committee's scope include the benefits, risks and clinical outcomes associated with the use of DHTs; the Committee will also advise on the use of DHTs in clinical trials and post-market studies within FDA's jurisdiction.

The focus on AI in health care & life sciences picked up pace in 2023, and this momentum will carry over into 2024 as the Biden-Harris administration and Congress press forward on multiple AI fronts. Stakeholders in health care & life sciences will continue to watch closely to see how evolving AI policy takes further shape in the new year.
