Healthcare systems across the globe are under enormous pressure. An ageing population brings a number of challenges, not least that people are living longer, often with one or more chronic conditions. However, artificial intelligence (AI), machine learning (ML) and "big data" have the potential to revolutionise healthcare. Developments in these technologies may help to streamline medical diagnosis and identify which therapies a patient will respond to, allowing targeted therapies to be adopted efficiently.

Although the use of these technologies will likely have positive impacts on patients via, for example, earlier diagnoses of serious illnesses and faster drug development, companies developing and commercialising these technologies may have to deal with additional legal and ethical challenges.  In this article, we take a look at how companies could navigate these challenges.

Data Protection, Privacy & Security

Machine learning diagnostics are currently being developed for use in a variety of fields including oncology, pathology and rare diseases. Examples of ML diagnostics include algorithms that recognise cancerous tissue, image recognition that identifies cancerous cells in microscope images, and facial recognition that detects phenotypes linked to genetic diseases. ML and AI are also used in personalised medicine to identify subjects with particular biomarker signatures. This approach relies on the analysis of large datasets to identify biomarkers that are indicative of certain diseases or of a response to therapy.

It is clear that these data-driven approaches rely on access to and analysis of large amounts of our personal data. In recent years, there has been a huge accumulation of personal, health-related data due to a shift towards digital health monitoring. Patient data is now kept in digital records in hospitals, which means the data stored in those records can be harvested and analysed more easily to identify trends or patterns that may aid medical diagnoses. Patients may also wear or use medical devices that monitor their health and transmit data to another party, such as a healthcare professional, for analysis. But are patients fully aware of where their data is being sent, how it is being stored and who may be accessing it?

Furthermore, many of us, knowingly or unknowingly, also collect huge amounts of data about our own health and lifestyle and provide it to third parties. Wearable technologies, such as the Apple Watch, Fitbit and Whoop, boast arrays of health-monitoring tools including blood oxygen monitoring, respiratory rate tracking and even electrocardiography. Many companies that were traditionally thought of as technology companies have moved into the healthcare space. Some of these companies have partnered with healthcare institutions, such as the NHS, to access and analyse patient data in the development of diagnostics and personalised medicines.

However, in the UK at least, providing patient data to a third party for analysis may breach the Data Protection Act 2018, unless patients are properly informed about how their data may be processed or shared with others for processing. Thus, any company aiming to use patient or personal data to develop an AI tool must ensure that it can access the data it needs while protecting the privacy of those whose data is being used.
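By way of illustration only, the short sketch below shows one technical measure that is sometimes used to support this balance: pseudonymising patient records before they are shared for analysis, so that direct identifiers are replaced with keyed hashes and quasi-identifiers are coarsened. The field names and values are hypothetical, and pseudonymised data may still constitute personal data, so this is a sketch of a technique rather than a compliance solution.

```python
# Illustrative sketch only: pseudonymising patient records before sharing.
# Field names ("nhs_number", "age", etc.) and values are hypothetical examples.
# Note: pseudonymised data can still be personal data under UK data protection law.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # retained by the data controller

def pseudonymise_record(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and coarsen quasi-identifiers."""
    pseudo_id = hmac.new(SECRET_KEY, record["nhs_number"].encode(), hashlib.sha256).hexdigest()
    return {
        "pseudo_id": pseudo_id,                  # stable, but not reversible without the key
        "age_band": (record["age"] // 10) * 10,  # exact age coarsened to a 10-year band
        "diagnosis_code": record["diagnosis_code"],
        # name, postcode, date of birth etc. are deliberately not copied across
    }

if __name__ == "__main__":
    patient = {"nhs_number": "9434765919", "age": 67, "diagnosis_code": "C50.9",
               "name": "Jane Doe", "postcode": "SW1A 1AA"}
    print(pseudonymise_record(patient))
```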

Ethics, Transparency & Explainable AI

One of the barriers to mass adoption of AI in the healthcare industry is that the 'black box' nature of AI means that people, clinicians and patients alike, may not trust AI tools because they do not understand what the AI is doing. AI tools already exist in the medical profession, as mentioned above, but a clinician may not fully understand the diagnosis or treatment recommendations output by an AI tool and, in turn, may not be able to explain to a patient how those recommendations were reached. If an AI recommendation turns out to be incorrect or results in harm to a patient, is the clinician liable for medical malpractice? Both clinicians and patients need to be assured that the AI is there to assist existing decision-making processes, and that it does not replace or override the clinician's judgement.

AI tools need to be trained on real data in order to learn and make predictions, and, generally speaking, the more good-quality training data an AI is given, the better it will perform. However, it is well known that "garbage in, garbage out" applies, so it is important that the training data is reliable and valid. AI tool developers should be sufficiently transparent about what kind of data was used to train the AI and whether there were any problems or shortcomings (e.g. data bias); this transparency increases the likelihood of the AI tool being adopted in practice.

AI-based health apps or chatbots could also contain biases as a result of the data on which they have been trained. For example, biases may exist in training datasets simply because the datasets are not representative of all people in a population, and/or because data scientists and AI systems may choose to analyse some parts of the datasets and ignore others. This means that, where phenotype and genotype information is involved, a biased AI could lead to false diagnoses or to treatments that are ineffective for some subpopulations. These biases may be, at least partly, resolved by collecting more data, particularly from minority populations, and/or by specifying the populations for which the AI should not be used.
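As a purely illustrative sketch (the group labels, counts and reference proportions below are hypothetical), one simple first step is to compare the make-up of a training dataset against the population the tool is intended to serve and to flag any under-represented groups:

```python
# Illustrative sketch: flag subgroups that are under-represented in a training set
# relative to the population the AI tool is intended to serve.
# The group labels, counts and reference proportions are hypothetical.
from collections import Counter

training_labels = (["group_a"] * 800) + (["group_b"] * 150) + (["group_c"] * 50)
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}  # assumed reference

counts = Counter(training_labels)
total = sum(counts.values())

for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    ratio = observed / expected
    status = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population -> {status}")
```

A check of this kind does not remove bias by itself, but it does make gaps in the training data visible, which in turn supports the kind of transparency discussed above.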

Some of the issues surrounding the training of AI may be revealed during the process to obtain a patent for your AI tool (see below).  This is because when preparing a patent application, we will want to understand as much about the AI training process as possible, as the method for training an AI to perform a particular function is often patentable itself.  Increasingly, we are also required to specify the details of the training dataset, such as how it was collected, how large the training dataset is, whether it is publicly accessible, how patient permission has been obtained and how patient privacy has been maintained, what the dataset shows/contains, and so on.  The patent process may help to identify any biases and/or may help you to make the AI more explainable.

How to Protect Pharmaceutical and Biotech Innovation

Companies that operate in the pharmaceutical and biotech sectors are familiar with using patents to protect their end products: the new drug products or therapies they have developed after years of research. Many companies now use AI, ML and big data tools as part of their research, and may even employ computer scientists and data scientists to develop bespoke versions of these tools. Often, however, companies still focus on protecting the end product and neglect the software-based tools that have been used to aid the research. These software tools can provide a company with competitive advantages (e.g. faster drug discovery and a faster route to market), so companies should consider protecting the tools that give them this edge.

There are a number of ways these tools could be protected. For example, the functionality of the software tools could be patented. This has the advantage of enabling the company to stop third parties from using the patented tools. It also potentially provides another revenue stream: the company could commercialise the software tool itself and offer it to others working in the field for a licence fee.

Some academics or researchers may not wish to obtain patents for their software innovations, on the basis that they are proponents of the open source movement. It is worth noting, however, that obtaining a patent does not prevent a company from distributing software under an open source licence. If you patent something, you can decide whether to grant free licences under the patent or to seek compensation (licence fees); a patent gives you more options than simply giving away your code.

If patent protection does not appear possible, copyright could be used to protect the software code, and the company could either keep the code secret or license out the source code to others working in the field. Companies may also simply keep their software tools secret (as know-how or trade secrets), but this can be more difficult to do in practice.

If your business depends on technology based on AI, ML or big data, we can help you to secure the intellectual property (IP) protection you need, so that you can maintain your competitive edge.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.