The integration of AI tools like ChatGPT and GitHub Copilot into corporate structures can revolutionize efficiency and productivity, especially in software development and content generation.

However, this integration poses challenges, particularly for protecting intellectual property (IP) rights. AI tools process and generate vast data, some sensitive or proprietary. The inadvertent public disclosure of such information through AI interactions could compromise patent rights or trade secrets — a concern in stringent patent law jurisdictions like the United States.

The FDA has recognized the growing importance of software-based medical devices, especially those incorporating AI. The agency's Digital Health Innovation Action Plan underlines the transformative impact of digital technology in healthcare, including mobile medical apps, fitness trackers and clinical decision-support software.

The FDA anticipates continued market growth of software-based medical devices and new digital health technology. To address this, the agency has defined two categories — software in a medical device (SiMD) and software as a medical device (SaMD).

SiMD refers to software embedded in a medical device, while SaMD represents software intended for one or more medical purposes, functioning independently of hardware medical devices. The FDA uses these definitions to guide future regulatory frameworks and ensure the safety and effectiveness of these innovative medical technologies.

AI Risks in the Medical Device Realm

Adopting AI tools in medical devices necessitates a cautious approach due to the inherent risks. The safety and efficacy of the devices themselves is a key challenge, but other risks arise in the IP space. These include the loss of patent or trade secret rights, unintended data-sharing, copyright authorship and patent inventorship questions, inaccurate or faulty outputs, and biases in AI systems.

For example, a public disclosure of inventive concepts or proprietary information, whether intentional or accidental, can have severe implications for patentability and trade secret protection, particularly in competitive industries. Additionally, AI-based software incorporated into a medical device raises concerns about unintentional data-sharing that could expose sensitive information. Such a risk can arise when proprietary data used to train an AI model for one medical device is unintentionally incorporated into a competitor's (or other third party's) device.

Copyright and patent inventorship issues can emerge as AI plays a role in creative and inventive processes, creating legal ambiguities around the ownership of AI-generated content. And inaccuracy in AI outputs, or "AI hallucinations," can lead to incorrect decisions, affecting high-stakes industries such as healthcare.

In addition, inherent biases in AI systems, stemming from training data or developer subjectivity, can perpetuate stereotypes and lead to discrimination or unfair practices. Addressing these risks requires robust policies and practices to ensure the safe and ethical use of AI in corporate environments.

Crafting a Robust AI Policy

To mitigate AI-related risks, medical device companies should consider developing comprehensive AI policies. A formal AI policy could outline acceptable uses of AI tools, designate responsible personnel and include regular training on IP protection. Because AI tools operate by processing and generating information, preventing public disclosure of sensitive information is crucial. Negotiating clear terms with AI providers can help control data usage and reduce the risk of IP infringement.

To prevent disputes over copyright and inventorship, companies should ensure distinct outputs from AI tools and maintain clear logs of inputs and outputs. Categorizing data based on exposure risk allows targeted and effective AI tool usage, which can balance benefits with protection needs. And regular audits of AI outputs for accuracy and bias can be useful to maintain AI tools' reliability and ethical standing.
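The record-keeping practice described above need not be elaborate. As a purely illustrative sketch (not drawn from the article; all names and categories are hypothetical), an interaction log could timestamp each AI tool use, tag it with a data-sensitivity category, and store hashes of the prompt and output rather than the raw text, so the log itself does not become a secondary repository of sensitive material:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical exposure-risk categories; a real policy would define its own.
SENSITIVITY_LEVELS = {"public", "internal", "confidential", "trade_secret"}

def log_interaction(log, tool, prompt, output, sensitivity):
    """Append one timestamped record of an AI tool interaction to `log`."""
    if sensitivity not in SENSITIVITY_LEVELS:
        raise ValueError(f"unknown sensitivity level: {sensitivity}")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "sensitivity": sensitivity,
        # Store content hashes, not the content itself, so the audit
        # trail can prove what was submitted without disclosing it.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    log.append(record)
    return record

audit_log = []
log_interaction(
    audit_log,
    tool="chat-assistant",
    prompt="Summarize the draft device specification",
    output="(generated summary)",
    sensitivity="confidential",
)
```

A log in this shape supports later audits for accuracy, bias, and inventorship questions, since it shows who used which tool, when, and on what category of data.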

As AI technologies increasingly permeate the medical device world, developing a comprehensive AI policy will likely become a necessity, not just a strategic advantage.

Originally published by FDAnews.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.