This digest covers key virtual and digital health regulatory and public policy developments during July 2023 from the United States, United Kingdom, and European Union.

In this issue, you will find the following:

U.S. News

  • FDA Regulatory Updates
  • Healthcare Fraud and Abuse Updates
  • Corporate Transactions Updates
  • Policy Updates

EU and UK News

  • Regulatory Updates
  • Privacy and Cybersecurity Updates

August Featured Content:

U.S.: Preview of a Packed Legislative Agenda Following Congressional Recess
Congress remains in recess throughout the month of August. The Senate is expected to return on September 5 and the House on September 12. When Congress returns, the top legislative priority will be passing the twelve fiscal year (FY) 2024 appropriations bills to prevent a federal government shutdown when the current fiscal year ends on September 30, 2023. While the Senate Appropriations Committee reported all twelve spending bills before the end of July, the House has introduced appropriations packages at topline spending levels far lower than the Senate's, and the two chambers will need to agree on a path forward before the end of September or face a government shutdown.

EU/UK: EMA Publishes Draft Reflection Paper on Use of AI in Medicine
The European Medicines Agency (EMA) published a draft reflection paper on the use and application of artificial intelligence (AI) and machine learning (ML) at different stages of the medicinal product lifecycle. The EMA advises that developers employing AI or ML should perform a risk analysis and seek early regulatory support, and that the use of AI should comply with existing rules on data requirements as applicable to the particular function that the AI is undertaking. It is clear that any data generated by AI or ML will be closely scrutinized by the EMA, and a risk-based approach should be taken depending on the AI functionality and the use for which the data is generated. Read our blog for more information.


U.S. News

FDA Regulatory Updates

FDA Issues Warning Letter Regarding Violative "Wellness" Claims for Software Device. As reported in prior issues of our digest, FDA has in recent months issued a string of enforcement letters to sponsors of digital health devices (e.g., Vitang Technology LLC, iRhythm Technologies Inc.). On June 21, the agency issued yet another Warning Letter, this time to Zyto Technologies Inc. (Zyto). Zyto markets the ZYTO Hand Cradle Galvanic Skin Response device (the ZYTO) and associated proprietary software, with the cradle cleared for measurement of galvanic skin response. FDA took issue with "journey to wellness" claims about use of the product to identify certain "stressors" and "balancers" that FDA considered disease- or treatment-specific. FDA viewed the claims as going beyond the scope of the 510(k) clearance the sponsor initially obtained for the cradle, as well as exceeding the applicable limitations of exemption after the classification regulation was later changed to 510(k)-exempt. Also noteworthy is FDA's stance that, because the software and cradle must be used together (as reflected in the labeling), the software is considered a component of the device (the cradle) and thus shares its classification. As is often the case before issuance of a device Warning Letter, FDA had communicated its regulatory status concerns to Zyto as far back as 2015. In addition to the regulatory status assertions, the Warning Letter also cites a litany of quality system violations.

FDA Denies Petition to Recall Opioid Clinical Decision-Making Software. As we previously reported, in April 2023, the Center for U.S. Policy (CUSP) submitted a Citizen Petition requesting that FDA deem Bamboo Health's NarxCare software a misbranded device and take appropriate enforcement action. NarxCare is a clinical decision support (CDS) tool marketed to help clinicians evaluate controlled substance data from state prescription drug monitoring program databases and other sources to make prescribing decisions. CUSP's petition raised concerns about aspects of the NarxCare software that CUSP views as exceeding the scope of the 21st Century Cures Act exemption for non-device CDS software functions, including the generation of predictive risk scores (e.g., risk of addiction or overdose) based on complex algorithms.

In a July 21, 2023 response, FDA denied CUSP's petition on the basis that "[r]equests for the agency to initiate enforcement actions and related regulatory activity are not within the scope of FDA's citizen petition procedures." FDA cited to section 10.30(k) of the Citizen Petition regulations, which provides that "[t]his section does not apply to the referral of a matter to a United States attorney for the initiation of court enforcement action and related correspondence, or to requests, suggestions, and recommendations made informally in routine correspondence received by FDA." FDA explained that it interprets the regulation as covering not only situations where enforcement action is requested, but also situations where related regulatory activities, such as investigations prior to the issuance of a Warning Letter or a recall, are required to determine whether subsequent enforcement actions may be taken.

Despite denying the CUSP petition, FDA explained that it will evaluate this matter to determine whether follow-up action is appropriate, noting enforcement action decisions are made on a case-by-case basis.

FDA Denies Petition to Issue Device Software Regulations. In July, FDA also issued a response to another device software regulation-related Citizen Petition, albeit one that was filed a decade ago in 2013. The petition requested that FDA issue regulations governing the safety and reliability of software in medical devices. FDA denied the petition, stating that FDA's existing regulations and guidance help ensure the safety and effectiveness of software device functions, including management of change activity and testing. FDA cited to design control, verification and validation, corrective and preventive actions, and other requirements in the Part 820 Quality System regulation that apply to software device changes. The agency also highlighted its recently issued guidance on the content of premarket submissions for device software functions and other guidance that addresses the management of device software change activity.

Notably, FDA's response confirms that "design control requirements apply to all classes of devices automated with computer software." This is based on 21 C.F.R. § 820.30, which specifies that "devices automated with computer software" are subject to design controls, a requirement that companies marketing digital health tools under Class I, GMP-exempt regulations at times overlook.

FDA Authorizes Marketing of Novel Diabetes Cognitive Behavioral Therapy Device. On July 7, 2023, FDA granted marketing authorization through the de novo pathway for Better Therapeutics' AspyreRx, a prescription digital therapeutic device indicated to provide cognitive behavioral therapy to adults with type 2 diabetes. Although the FDA decision summary has not yet been posted, the sponsor stated in a press release that "AspyreRx is backed by robust data demonstrating clinically meaningful and sustained reduction in HbA1c when used up to 180 days." AspyreRx is expected to launch commercially in Q4 2023.

FDA Announces Expansion of the Total Product Life Cycle Advisory Program Pilot (TAP Pilot). On July 31, 2023, FDA announced that as of October 1, 2023, the TAP Pilot will expand to include the Office of Neurological and Physical Medicine Devices. The TAP Pilot is a voluntary program that aims to encourage development of, and increase patient access to, safe, effective, high-quality medical devices by improving communication between the FDA and medical device sponsors. As of July 31, 2023, FDA has enrolled five devices in the TAP Pilot and is still accepting requests for FY 2023 for devices in the Office of Health Technology 2: Office of Cardiovascular Devices. For more information on the TAP Pilot, please see the November 2022 issue of Arnold & Porter's Virtual and Digital Health Digest.

Healthcare Fraud and Abuse Updates

DOJ and State AGs Continue to Target Fraudulent DME, Genetic Testing Schemes. On July 17, 2023, two men pleaded guilty to submitting false claims for unnecessary medical services. Daniel Carver owned and managed call centers used to run deceptive telemarketing campaigns that targeted and solicited Medicare beneficiaries for unnecessary genetic testing and durable medical equipment (DME). Louis Carver worked for these call centers and acted as a straw owner for a laboratory that submitted false genetic testing claims. The Carvers and their co-conspirators paid bribes and kickbacks to telemedicine companies to receive completed doctors' orders, sold doctors' orders to laboratories and DME companies in exchange for kickbacks, and forged doctors' and patients' signatures on orders. Between January 2020 and July 2021, the scheme resulted in the submission of over US$67 million in false claims to Medicare. Daniel Carver and Louis Carver face maximum penalties of 25 years and 10 years in prison, respectively.

In a similar matter, an owner of telemedicine companies pleaded guilty to his role in a US$44 million telemedicine fraud scheme also involving medically unnecessary DME and genetic testing. Specifically, between January 2018 and August 2021, David Santana — the owner of Conclave Media (Conclave) and Nationwide Health Advocates (Nationwide) — used his companies to enter into business relationships with telemarketing companies that generated leads by targeting Medicare beneficiaries. Telemarketers then allegedly paid Conclave and Nationwide on a per-order basis to generate orders for DME and genetic testing. Santana allegedly conspired with medical staffing companies to find nurses and doctors willing to review and sign prepopulated orders, usually without the beneficiaries ever being contacted. According to the charging documents, Santana knew that the suppliers and laboratories would use the signed orders to submit false and medically unnecessary claims to Medicare.

The Department of Justice's (DOJ) continued crackdown on medically unnecessary testing follows months of warnings and advisories from the Department of Health and Human Services Office of Inspector General on program integrity risks associated with telehealth services. See further coverage in our May 2023 issue of Arnold & Porter's Virtual and Digital Health Digest. We anticipate continued DOJ action in telehealth, especially as the Centers for Medicare & Medicaid Services eyes finalizing its proposed 2024 physician fee schedule (PFS), which includes several changes to telehealth reimbursement, including increased scrutiny of submissions to add services to the Medicare Telehealth Services List. See our coverage of the 2024 PFS Proposed Rule in the July 2023 issue of Arnold & Porter's Virtual and Digital Health Digest.

Corporate Transactions Updates

Digital Health and Adoption of AI. Hospitals continue to pursue partnership opportunities that leverage AI capabilities. On August 2, 2023, Duke Health announced it will use its partnership with Microsoft to explore and develop new ways of applying Microsoft's generative AI and cloud technologies (called Nuance) to its research and operations. Duke Health will use Nuance to optimize clinic schedules and attempt to predict which patients are most likely to be no-shows for their appointments.

Tech giant Amazon recently joined the race to pioneer healthcare AI technology. On July 26, 2023, Amazon Web Services announced the launch of its new AWS HealthScribe tool, designed to generate clinical documentation. AWS HealthScribe uses speech recognition and generative AI to draft clinical notes from conversations between clinician and patient during in-person and telehealth visits, citing every line of generated text to its source in the conversation, and allows clinicians to review the notes for accuracy before uploading them to the electronic health record. Of note, AWS HealthScribe is HIPAA-eligible and is already being used by 3M Health Information Systems, Babylon, and ScribeEMR.

While there is no shortage of headlines, generative AI still has a long way to go before clinicians consider it the norm. As of August 7, only 6% of health system executives reported even having a generative AI strategy for their organization.

A Snapshot of the First Half of 2023 for Digital Health Startups. During the first half of 2023, U.S. digital health startups raised a notably low US$6.1 billion. If funding continues at a similar rate for the remainder of the year, digital health startups will have their lowest funding year in four years.

Further, digital health startups that previously raised early-stage rounds have opted for unlabeled funding deals to avoid valuation shortfalls or disclosing weak rounds of investment. Unlabeled funding deals, in which a capital raise is not assigned a Series A or similar label, comprised a staggering 41% of digital health funding deals during the first quarter of 2023, the highest proportion of unlabeled raises since Rock Health began tracking the market in 2011.

Other digital health companies have been unable to raise capital at all and are either limping along or selling. At the end of 2022, 69% of all digital health ventures had raised funding in the preceding 18 months; by the end of Q1 2023, only 35% had. This suggests that over half of the digital health companies in North America need to obtain additional financing quickly to avoid joining the numerous digital health companies that have filed for bankruptcy this year, including Pear Therapeutics, SimpleHealth, The Pill Club, and Quil.

Policy Updates

White House Ramps Up Efforts to Regulate AI

  • White House Holds Briefings on AI. On July 21, 2023, the White House published a fact sheet titled "Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI." President Biden hosted seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — to discuss the commitments these companies made to improve the transparent development of AI technology, including: (1) ensuring products are safe before introducing them to the public; (2) building secure and safe data systems; and (3) earning the public's trust. The White House coordinated with the following countries on these voluntary commitments: Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the United Arab Emirates, and the United Kingdom (UK). The administration intends for these commitments to "support and complement" Japan's AI-oriented G-7 Hiroshima Process, the UK's Summit on AI Safety, and India's Chair of the Global Partnership on AI.
  • President Biden Moves to Restrict Certain AI Investments in China. On August 9, President Joe Biden signed an executive order on "Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern," which directs the U.S. Department of the Treasury to restrict and regulate investments in "countries of concern" related to three sectors: semiconductors and microelectronics; quantum information technologies; and AI. The same day, the White House announced a US$20 million "AI Cyber Challenge" led by the Defense Advanced Research Projects Agency to incentivize the development of software that uses AI to address cybersecurity vulnerabilities.

Congress Remains Interested in Oversight of AI

  • Congress Remains Interested in Legislating AI. On August 4, 2023, the Congressional Research Service published a report titled "Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress." The report provides an overview of various AI issues being considered by Congress, including generative AI such as ChatGPT. According to the report, the 117th Congress introduced a total of 235 bills related to AI, six of which were enacted into law. As of June 2023, the 118th Congress has introduced 94 AI-related bills, none of which have been enacted into law. The bills cover a range of topics in the national security, education, and healthcare spaces, including federal oversight of AI, AI training for federal employees, and requirements for the private sector to disclose the use of AI or prohibit the use of AI in certain situations. The report includes a subsection on healthcare issues and cites to a September 2022 Government Accountability Office report detailing the various ML technologies assisting with diagnostic processes for patients diagnosed with cancer, diabetic retinopathy, Alzheimer's, heart disease, and COVID-19. The report notes that there continues to be "slow progress" in the adoption of AI technologies within the U.S. healthcare system.
  • Senator Warner Warns Hospitals of Preliminary Use of Google's AI Models. On August 8, 2023, Sen. Mark Warner (D-VA) sent a letter to Google expressing concern following reports that Google began providing Med-PaLM 2, a large language model designed to answer medical questions, to U.S. hospitals for testing purposes. In his letter, Sen. Warner acknowledges that AI "holds tremendous potential to improve patient care and health outcomes," but cautions that any "premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors." In 2019, Sen. Warner raised similar concerns about Google "skirting health privacy laws through secretive partnerships with leading hospital systems."

Recent Federal Agency Efforts to Address Digital Healthcare Antitrust Concerns

  • FTC Withdraws Health-Related Antitrust Policy Statements. On July 14, the Federal Trade Commission (FTC) announced the withdrawal of two policy statements on federal antitrust enforcement in healthcare markets, in an effort to further promote fair competition: (1) Statements of Antitrust Enforcement Policy in Health Care (August 1996) and (2) Statement of Antitrust Enforcement Policy Regarding Accountable Care Organizations Participating in the Medicare Shared Savings Program (October 2011). FTC's action comes in part as a response to the Department of Justice's rescission of the same two policy statements in February 2023. The withdrawals come at a time of heightened scrutiny and oversight of health data sharing at FTC and DOJ, with U.S. Principal Deputy Assistant Attorney General Doha Mekki commenting earlier this year that unregulated data sharing among industry partners can lead to issues such as price-fixing.
  • FTC Looks to Block Major Health Data Acquisition. On July 17, 2023, the FTC issued an administrative complaint seeking to block IQVIA, one of the largest healthcare data providers, from acquiring advertising company Propel Media (PMI), which uses technology such as artificial intelligence to develop consumer-facing advertisements within the U.S. health market. The FTC voted 3-0 in favor of blocking IQVIA's proposed acquisition of PMI, which the FTC claimed would "eliminate head-to-head competition," thereby "driving up prices and reducing quality and choice."

EU and UK News

Regulatory Updates

EMA Reflection Paper on the Use of AI in the Medicinal Product Lifecycle. On July 19, 2023, the European Medicines Agency (EMA) published a draft reflection paper on the use and application of AI and ML at different stages of the medicinal product lifecycle. The reflection paper is part of the joint Big Data Steering Group initiative of the Heads of Medicines Agencies (HMA) and the EMA to develop the European Medicines Regulatory Network's capability in data-driven regulation.

The EMA recognizes the positive impact that AI and ML systems may have on drug development and regulatory processes — for example, reducing the use of animal models during preclinical development; assisting the selection of patients for clinical trials; supporting data recording, analysis, and review for marketing authorization procedures; and managing adverse events in the post-authorization phase. However, the EMA also highlights various challenges with AI, such as understanding the design and inherent biases of the algorithms, determining what happens if there is a technical failure, and establishing compliance with ethical principles. The EMA advises that developers employing AI or ML at any stage of the lifecycle should perform a risk analysis and seek early regulatory support if the AI or ML system is assessed to have a potential impact on the benefit-risk balance.

Why is this important: The paper reflects the EMA's early experience with, and considerations on, the use of AI and gives a sense of how the EMA expects applicants and holders of marketing authorizations to use AI and ML tools. The EMA has made clear that the use of AI should comply with existing rules on data requirements as applicable to the particular function that the AI is undertaking. It is clear that any data generated by AI or ML will be closely scrutinized by the EMA, and a risk-based approach should be taken depending on the AI functionality and the use for which the data is generated. The paper is open for comment until December 31, 2023 and will be discussed at a joint HMA-EMA workshop on AI on November 20-21, 2023. Read our blog for more information.

UK House of Lords Report on AI. On July 18, 2023, the UK's House of Lords (HoL) published a report titled "Artificial Intelligence: Development, risks and regulation." The HoL report discusses the potential benefits and risks of AI, the UK government's proposed approach to the regulation of AI (set out in its white paper and discussed in our April 2023 issue of Arnold & Porter's Virtual and Digital Health Digest), and how this compares to the regulatory approaches being taken by the EU and the U.S. It also reflects on two recent appeals from stakeholders for rapid regulatory adaptation: an open letter in March 2023 from key figures in AI, science, and technology calling for an immediate pause on the creation of powerful AI systems, and a joint report from Sir Tony Blair and Lord Hague of Richmond on how to develop and use safe AI in the UK. The HoL report concludes with three recommendations:

  1. The UK should initially diverge from the EU's "one-size-fits-all" approach in the form of the EU AI Act (see our Advisory for more details), but ensure that the UK's regulatory system enables UK companies to voluntarily align with EU regulation.
  2. The UK should initially align with the U.S. with the possibility of diverging later on as expertise increases.
  3. The UK should establish an AI regulator in tandem with the national AI laboratory named Sentinel.

Ethical Framework for Developers of AI. On July 19, 2023, the World Ethical Data Foundation (WEDF) published an open framework of suggested questions to assist stakeholders with the development of transparent, responsible, and ethical AI. The framework proposes three sets of questions (Me, We, and It) for each of the three core steps of AI building (Training, Building, and Testing):

  • "Me" questions are those each individual developer should consider prior to and throughout the development process.
  • "We" questions are questions the development group should consider, including whether the development group is diverse enough to address bias.
  • "It" questions encourage developers to consider the impact the AI system may have on the world.

Interested parties can suggest improvements to the framework in order to refine it for the whole community. The WEDF hopes that the framework will ensure a consistent approach to ethical AI development.

Label2Enable Survey on the Value Propositions for Health App Assessments. Label2Enable is an EU-funded project promoting the adoption of CEN-ISO/TS 82304-2, the international standard titled "Health software — Part 2: Health and wellness apps — Quality and reliability." The core content of the technical specification is a health app quality assessment framework, which Label2Enable intends to communicate as a score providing an easy overview of the assessment outcome. The accompanying assessment report aims to provide the level of detail healthcare professionals need to recommend a health app and to give insurers a basis for decision-making on reimbursement. The survey on the standard is open until the end of August 2023 and will gather views on the value propositions for health app assessment. The results will support decision-making on a suitable business model for the adoption of CEN-ISO/TS 82304-2. The project is currently seeking responses from app developers, health providers, and health authorities.

New British Standard for Assessing AI Within Healthcare. On July 31, 2023, the British Standards Institution published a new standard (BS 30440) titled "Validation framework for the use of AI within healthcare." The standard is intended to serve as a framework for assessing AI systems used within healthcare and presents a set of auditable clauses, enabling conformity audits that lead to certification of these AI systems. Healthcare organizations could mandate BS 30440 certification as a requirement in their procurement processes to ensure that the systems they procure have met a known standard.

Privacy and Cybersecurity Updates

Report on Cybersecurity Threats to the EU Health Sector. On July 5, 2023, the European Union Agency for Cybersecurity (ENISA) published a report evaluating the threats that cyberattacks pose to the EU health sector. ENISA analyzed cyber incidents that occurred from January 2021 to March 2023 across the EU and in certain neighboring countries (the UK, Norway, and Switzerland), affecting a range of stakeholders within the healthcare sector, including healthcare providers, authorities, research facilities, and pharmaceutical companies. It found that ransomware accounted for 54% of the cybersecurity threats, a trend that is likely to continue given that only 27% of surveyed organizations had a dedicated ransomware defense program. During these ransomware attacks, cyber criminals would often illegally access confidential healthcare data, threaten to disclose it, and extort the owner or holder for its return. Other threats included denial-of-service attacks, threats to disclose data, malware, and supply chain attacks. Vulnerabilities in medical devices were identified as a key concern, with the top future threat being targeted attacks on individual health data collected from wearables and medical equipment. The report concludes by advising health organizations to apply "cyber hygiene practices," such as maintaining offline encrypted backups of critical data, adopting cyber incident response plans, and providing more training for healthcare professionals.


* Heba Jalil, a Trainee Solicitor in Arnold & Porter's London office, contributed to this digest.
