The global AI market was valued at $95.60 billion in 2021 and is predicted to reach $1.85 trillion by 2030, registering a compound annual growth rate of 32.9 per cent. Alongside this growth and the proliferation of AI use cases across industries, governments have quickly set their sights on providers of the technology. Laws that specifically target AI system providers in the EU, UK and the US currently appear only in draft form or as non-binding guidance, or have otherwise not yet come into effect. However, the evolution of such draft laws and guidance into binding legislative instruments – and the coming into effect of the pending regulations – will soon alter this landscape forever.

AI system providers in the EU, UK and the US should start considering and implementing the likely basic requirements of future laws now. Such action will limit the scope of any re-engineering needed to achieve compliance with new laws once they come into force. It will also allow AI system providers to avoid cutting legal corners, and taking on unnecessary risk, in a rush to achieve compliance by applicable deadlines.

To assist in this exercise, this Proskauer briefing sets out ten steps that AI system providers can take towards compliance with the likely basic requirements of future laws. These steps are intended as useful preliminary actions rather than an exhaustive route to full compliance. The steps are followed by snapshots of certain key laws and guidance items, including their respective scopes and known or anticipated dates of effect.

Step 1/ Cataloguing and Assessment

Catalogue your AI systems and conduct detailed assessments of each AI system throughout its life cycle to identify its potential risks and actual impacts (including any individual, social, economic, environmental and ethical impacts).

In particular:

  1. during the initial design and development of your AI system, conduct impact assessments to identify and assess the potential impacts of the AI system;
  2. following the deployment of your AI system, conduct impact evaluations to assess the actual impacts of the AI system and identify any unintended consequences; and
  3. throughout the life cycle of your AI system, conduct risk assessments to identify and assess the potential risks (including technical, operational and security risks) associated with your AI system.

Pay particular attention to categories of persons or groups (including marginalised persons and vulnerable groups) that may be affected by your AI system.

Be sure to include in your organisation's risk register any risk items that you identify.

Step 2/ Risk and Impact Management

Implement, on an ongoing basis, technical measures (addressing the AI system itself) and organisational measures (addressing, e.g., governance) to manage and mitigate the impacts and risks that you have identified in Step 1.

For example, you might mitigate the risk of AI system 'drift' from a technical perspective by retraining or fine-tuning your AI system on new data, using real-time online learning and/or implementing ensemble methods (e.g., bagging, boosting or stacking).
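By way of illustration only, the following minimal Python sketch shows one of the ensemble methods mentioned above, bagging, using scikit-learn; the generated dataset is a placeholder standing in for newly gathered, representative data and is not drawn from any particular provider's system.

```python
# Minimal sketch: bagging as one drift-mitigation technique (illustrative only).
# Assumes scikit-learn is installed; the generated data stands in for freshly
# collected, representative data used to retrain the system.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Placeholder dataset representing newly gathered production data.
X_new, y_new = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_new, y_new, random_state=0)

# Bagging trains many base models (decision trees by default) on bootstrap
# samples and aggregates their votes, damping variance in the retrained system.
model = BaggingClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy after retraining: {model.score(X_test, y_test):.3f}")
```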

Similarly, you might mitigate explainability and transparency risks from an organisational perspective by developing improved documentation procedures and creating easily understandable customer policies.

Establish reporting lines and an accountability framework that identifies the responsibilities of management and other staff in relation to AI system compliance, including which persons or teams are responsible for Steps 1–10.

Step 3/ Record-keeping and Traceability

In respect of each of your AI systems, document its key characteristics on an ongoing basis.

Key characteristics include each system's purpose; design and development process (including any substantial modifications); design specification and architecture; data sourcing and management; training and fine-tuning methodologies; testing protocols; capabilities and decisions; risk and impact assessments and evaluations; risk and impact mitigating measures; quality controls; and faults, failures and malfunctions.

Establish a quality management system to record how your AI system complies with applicable laws and ensure your AI system is traceable (e.g., by using a version control system to track changes to training data, or a data lineage tool to track data flows through the system).

Automate your record-keeping process where possible.
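As an illustration of such automation, the following minimal Python sketch (standard library only; the file names and metadata fields are assumptions made for the example) appends a content hash and timestamp for each training-data revision to a simple log, so that the exact data behind any given model release can later be traced.

```python
# Minimal sketch: automated, append-only record of training-data versions
# (illustrative; the file names and metadata fields are assumptions).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_dataset_version(dataset_path: str, log_path: str = "data_versions.jsonl") -> str:
    """Hash a dataset file and append a traceability record to a JSONL log."""
    digest = hashlib.sha256(Path(dataset_path).read_bytes()).hexdigest()
    entry = {
        "dataset": dataset_path,
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

# Usage: call on each new dataset revision before training or fine-tuning, e.g.:
# record_dataset_version("training_data_v2.csv")
```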

Step 4/ Bias and Training Data Qualities

Execute strategies to ensure your AI system does not discriminate against individuals or create unfair commercial outcomes.

Ensure your AI systems are trained and fine-tuned on high-quality, relevant datasets that have been checked for errors, are representative and reflect diversity factors such as age, gender and ethnicity.

Consider using technical processes to reveal the traits in datasets that most heavily influence decisions and outputs, and to highlight and remove sources of bias. For example, pre-process datasets to maintain as much accuracy as possible while reducing or removing any relationship between outcomes and protected characteristics. Alternatively, rebalance imbalanced datasets by adding or removing data about under- or overrepresented subsets of the population.
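As a purely illustrative example of the rebalancing approach, the short Python sketch below (using pandas; the column names and data are hypothetical) upsamples underrepresented groups so that each group contributes equally to training.

```python
# Minimal sketch: rebalancing an imbalanced dataset by resampling
# (illustrative; the 'group' column and toy data are assumptions).
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Upsample every group to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).reset_index(drop=True)

# Example: a toy dataset where one group is heavily underrepresented.
df = pd.DataFrame({"group": ["a"] * 90 + ["b"] * 10, "label": [0, 1] * 50})
balanced = rebalance_by_group(df, "group")
print(balanced["group"].value_counts())  # both groups now appear 90 times
```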

Step 5/ Quality, Robustness, Accuracy and Cybersecurity

Implement an ongoing management system to maintain the quality and performance of your AI system.

This management system should:

  1. ensure the legal compliance of your AI system;
  2. ensure, to the extent possible, that your AI system can withstand unexpected events without failing or producing incorrect results. Ensure that it is robust to faults and inconsistencies by using redundancy solutions, e.g., backups;
  3. ensure that your AI system is accurate in accordance with the generally acknowledged state of the art. This may involve using accuracy metrics and including achieved levels of accuracy in your instructions documentation. Implement processes to assess and adjust the accuracy of outputs, including hyperparameter tuning (a minimal sketch follows this list);
  4. ensure that your AI system is protected from unauthorised access and disruption. Identify and safeguard against AI-specific security incidents, including leakage of data, model inversions, data poisoning and prompt injections. Consider subscribing to security advisories to receive alerts of vulnerabilities and ensure patching processes are in place where components are externally maintained;
  5. introduce real-time monitoring techniques to flag and trigger the remediation of incidents relating to items 1–4 above; and
  6. allow end users to report inaccurate, biased or otherwise problematic outputs.
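On the accuracy measures in item 3 above, the following minimal Python sketch (using scikit-learn; the model, parameter grid and generated data are illustrative assumptions) shows how accuracy metrics and hyperparameter tuning might be combined to assess and adjust output quality.

```python
# Minimal sketch: measuring accuracy and tuning hyperparameters
# (illustrative only; the model, grid and data are assumptions).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data standing in for the system's real training inputs.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grid search cross-validates each candidate hyperparameter setting and
# retains the one with the best validation accuracy.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="accuracy",
    cv=5,
)
search.fit(X_train, y_train)

# Report a documented accuracy level on held-out data, as suggested above.
test_accuracy = accuracy_score(y_test, search.predict(X_test))
print(f"Best C: {search.best_params_['C']}, held-out accuracy: {test_accuracy:.3f}")
```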



The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.