The Situation: Artificial intelligence is being used in innovative ways in the health care industry to drive down costs and improve clinical outcomes.

The Issue: The health care industry, including the provision of health care services using technology and artificial intelligence, is heavily regulated in the United States at both the federal and state levels.

Looking Ahead: Technology companies involved in the health care industry should consider these federal and state requirements when expanding operations in the United States.

Artificial intelligence ("AI") is broadly defined as the use of advanced computer science to develop machines that can learn, reason, communicate, and make human-like decisions. The use of AI in the health care industry is exploding as stakeholders continue to look for ways to lower costs, improve quality, and increase access to health care. In the administrative context, AI is improving hospital operations by streamlining tasks such as scheduling hospital procedures, identifying and filling workforce gaps, detecting high-risk patients, and assisting with insurance verification and other types of reimbursement procedures.

In the clinical context, AI is being used by physicians and other providers to assist in diagnosing and treating patients. For example, robotic systems used during surgery can offer physicians more precision and control. AI technologies capable of quickly mining medical records and images can provide preliminary diagnoses or help develop personalized treatment plans. Finally, applications offering symptom checkers, personalized medication reminders, and access to virtual providers enable individual patients to take control of their health and see a doctor without leaving the house.

As the use of AI in the clinical space increases and evolves, legal and regulatory risk can escalate, particularly as these technologies draw growing scrutiny and regulators apply traditional regulatory principles not yet attuned to AI. AI companies should be especially thoughtful about using AI to assist with activities traditionally performed by human beings, such as gathering clinical information, diagnosing, and recommending treatment. These clinical activities are heavily regulated in the United States by both federal and state law, creating a complex and, at times, confusing tangle of statutes and regulations. The compliance landscape is further complicated by the fact that legal principles and guidance often develop more slowly than technology's capabilities advance, leaving gaps that create ambiguity and make it difficult for even the most well-meaning companies to comply.

In particular, AI companies should consider the following issues as they expand business operations in the United States:

  • Laws Governing the Practice of Medicine. The practice of medicine is regulated at the state level. AI companies need to consider how state-specific requirements governing licensure, the practice of medicine (including telemedicine), fee-splitting, and other topics may affect operations and utilization of AI products.
  • Data Privacy and Ownership. Access to data is crucial for AI. However, health care data is heavily protected, and medical records are typically owned by the treating providers. AI companies should consider how to utilize this essential data while remaining compliant with federal and state laws governing the protection of patient information and data ownership.
  • FDA. The Food and Drug Administration ("FDA") is responsible for regulating medical devices in the United States. AI companies developing digital health products should recognize how recent regulatory changes may affect them and that FDA is engaging industry to further refine its oversight approach. See the recent Jones Day Commentary, "FDA's Evolving Regulation of Artificial Intelligence in Digital Health Products."
  • Product/Professional Liability and Malpractice. While laws limit who can practice medicine, they rarely address explicitly the increased use of AI in clinical operations. This gap has created significant ambiguity around product and professional liability. For example, who is responsible for a bad outcome during a surgery that involves the use of a robot or an AI process? Given these uncertainties, AI companies should weigh these risks as they develop, manufacture, and market new technology and train customers on its use.

Three Key Takeaways

  1. AI companies should consider federal and state laws and regulations that govern the practice of medicine when developing and utilizing AI in the clinical space.
  2. In particular, AI companies should be aware of potential licensing rules, data privacy and ownership protections, and FDA oversight that might apply to their U.S. operations.
  3. Finally, AI companies should consider the risks associated with using AI in the health care space, including the potential for product and professional liability claims.
