Introduction

The launch of ChatGPT in late November 2022 marked the start of a new phase in the adoption of artificial intelligence (AI). Described as the fastest-growing consumer application in history, ChatGPT made the capabilities of foundation models tangible to a non-technical audience. Combined with its general-purpose nature, this opened up a seemingly endless vista of potential AI use cases for businesses.

In response, many organisations are prioritising the exploration of potential AI applications for commercial and corporate function processes. There has also been rapid growth in AI services and delivery models, from end-to-end applications for specific tasks to cloud providers offering foundation models within organisation-specific tenants for a range of tailored deployments. At the same time, the large AI developers are continually releasing new versions of their models and new business models for their delivery.

For business leaders, the speed of AI adoption combined with the pace of new developments makes it a fraught area in which to define business strategy and make decisions. This challenge is compounded by a pace of change that is, by regulatory standards, almost as rapid: legislative proposals, enforcement, guidance and standards are all evolving quickly. However, the focus on the possibilities of AI, and the fear of missing out that drives use-case ideas, can distract from strategic considerations for AI adoption, particularly for managing risks.

How can organisations avoid being left behind in exploring the potential benefits of AI while still sensibly addressing the myriad factors relevant to managing potential risks? This is the challenge faced by the burgeoning field of AI governance. This article highlights some of the key areas of concern and offers a view on the growing consensus around pragmatic responses, with a particular focus on programmatic approaches to managing data and privacy risks across the AI lifecycle.

An Evolving Risk Landscape – Defining A Response

Until a few years ago, much of the discussion on AI risk focused on issues related to the development and operation of AI models. This was largely driven by the discipline of AI ethics with a principle-based approach addressing, in particular, issues of fairness, bias, explainability, robustness and human centricity in the context of how AI systems arrive at decisions or outputs. These principles continue to underpin both regulatory initiatives (see, for example, the influence of the OECD AI Principles on legislative developments in the EU and the US) and the policies created by more mature organisations.

However, the shift in the ecosystem described above – the growing integration of AI into business processes, the increasing complexity of AI supply chains and evolving regulatory responses – means a wider lens is often needed on potential risks, with a more holistic approach to data and privacy issues arising from AI use that goes beyond model assessments.

To help achieve this more holistic view of AI risk exposure and define governance responses, there are three broad areas that may be helpful for companies to consider: the overall company AI strategy and the approach for its adoption; the regulatory landscape and its application to company activities and AI use; and expectations for responsible AI use from external stakeholders and internal values.

1. Company AI Strategy And Approaches

Understanding overarching AI strategies and the approach for deployment of AI is a key component of identifying risks and priorities for remediation and governance focus areas.

One of the first areas to consider will be the mix between "buy" and "build" in the adoption of AI.

For companies relying on a "buy" approach, procuring existing AI services or models, the focus will be on third-party risk management and risks arising at the point of deployment from AI use. Factors to consider include: assurance on the provider's expected testing, controls and compliance in relation to the development of the AI system; the nature of relationships with suppliers and the flow of data between parties; the rights and obligations attached to deployments of models and data processing; how to govern the deployment of AI systems; and business resilience given reliance on third-party suppliers.

For the "build" approach, where companies develop their own AI models and systems, organisations will need to take primary responsibility for assurance on the development of the AI system. Additional considerations in this scenario will be compliant collection and use of training data, and testing throughout development cycles in areas such as robustness, security, privacy and discrimination. There will also be questions on managing risks arising from downstream use of models and handling third-party rights in relation to data used for models such as privacy requests from individuals.

Linked to the buy-versus-build approach are considerations of the infrastructure that will support AI systems. How and where will AI systems be hosted? What data sources or assets will be connected to them? What will be the interfaces for interaction? Will hardware be involved?

Another area to consider in relation to the overall AI strategy relevant to risk is the approach to data use. This can be broken out into consideration of data relevant at different stages of the AI system lifecycle: data sources for training (the data used to develop and train the model behind the AI system), data sources for inputs (the data that causes the AI system to produce a result), and data arising from outputs (the results the AI system produces).

This can be combined with consideration of the broad buckets of data that will or will not be available for processing via AI systems across different stages of the lifecycle: master data on key business products and structures; transactional data on purchases, sales, logistics and resourcing; unstructured data in non-defined formats across an organisation; and third-party data collected and managed by providers without a direct relationship with the end customer. Also relevant are the categories of individuals that will be affected and their locations, depending on where AI systems will be deployed.
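To make the combination of lifecycle stages and data buckets concrete, here is a minimal, illustrative sketch in Python. The stage and category names follow the descriptions above, but the approval matrix itself is entirely hypothetical: a real organisation's positions would flow from its AI strategy and risk appetite, not from this code.

```python
from enum import Enum

class LifecycleStage(Enum):
    """Stages of the AI system lifecycle at which data is processed."""
    TRAINING = "training"   # data used to develop and train the model
    INPUT = "input"         # data that causes the system to produce a result
    OUTPUT = "output"       # data arising from the system's results

class DataCategory(Enum):
    """Broad buckets of enterprise data (illustrative, not exhaustive)."""
    MASTER = "master"                 # key business products and structures
    TRANSACTIONAL = "transactional"   # purchases, sales, logistics, resourcing
    UNSTRUCTURED = "unstructured"     # non-defined formats across the organisation
    THIRD_PARTY = "third_party"       # provider-collected, no direct customer link

# Hypothetical approval matrix: which buckets may be processed at which stage.
APPROVED: dict[DataCategory, set[LifecycleStage]] = {
    DataCategory.MASTER: {LifecycleStage.INPUT},
    DataCategory.TRANSACTIONAL: {LifecycleStage.INPUT, LifecycleStage.OUTPUT},
    DataCategory.UNSTRUCTURED: set(),  # e.g. blocked pending review
    DataCategory.THIRD_PARTY: {LifecycleStage.TRAINING},
}

def is_permitted(category: DataCategory, stage: LifecycleStage) -> bool:
    """Check whether a data bucket is approved for a given lifecycle stage."""
    return stage in APPROVED.get(category, set())
```

Even a simple matrix like this can make gaps visible, for example a data bucket with no approved stage that business teams nonetheless plan to use.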

Clearly, these represent high-level positions, and the realities of AI adoption will involve many nuances, mixed approaches and additional complexities, such as the use of open-source models. However, gaining a broad understanding of the organisation's desired approach and a common view of benefits and risks at a strategic level can help both to level-set expectations and to create a shared sense of purpose between commercial use and risk management.

Consideration of the overall AI strategy can also help position AI governance programmes for success by demonstrating, in language understandable to the decision-makers responsible for these strategies, how such programmes can enable company goals while addressing key areas of risk, aligning interests across the organisation. This can allow teams responsible for AI governance to show how responses also bring clear organisational benefits, such as improving data accuracy for better decision-making, accelerating widespread adoption via effective guardrails, and streamlining contracting with business customers and partners.

2. Regulatory And Legal Developments

The next broad area to consider is the compliance obligations and risks (both hard and soft) arising from law, regulatory guidance and standards. As with AI strategies, a systematic approach can help identify priorities and focus areas, drawing on the understanding of company AI strategies and deployment activities discussed above, the existing application of rules to company activities, and horizon scanning of new and emerging regulation.

Business leaders will need to consider these issues both from a general perspective and within the context of the geographies, sectors and industries they operate across. This section sets out some recent developments across existing data rules, emerging litigation and AI-specific legislation.

a. Existing Data Rules

Where personal data is involved, or where an AI model will be informing decisions related to individuals, existing privacy and data protection rules are likely to apply. However, while many privacy concepts are well established, and data protection authorities have issued guidance on data protection and AI use, there are many areas of uncertainty that organisations will need to navigate and continue to track.

For instance, in January 2024, the ICO launched a consultation on how aspects of data protection law should apply to the development (and use) of generative AI models.1 The consultation raises issues on what constitutes a legitimate interest for using data to train generative AI, as well as questions on expectations for compliance with the data accuracy principle, data subject rights and the purpose limitation principle in the context of generative AI development and deployment.2

There is also an evolving view of the rules on automated decision-making under Article 22 of the EU General Data Protection Regulation (GDPR), which gives individuals protections against decisions with legal or similarly significant effects based solely on automated processing (apart from in certain limited situations). The Court of Justice of the European Union (CJEU)'s December 2023 ruling in the SCHUFA case suggests that credit reference agencies engage in automated individual decision-making when determining credit repayment probability scores that significantly influence lenders in deciding whether to "establish, implement or terminate" contracts with an individual.3 The ruling establishes that the obligation to comply with Article 22 of the GDPR falls on the credit agency as well as the lender, with potentially far-reaching implications for organisations that carry out, as well as rely on, automated processing activities.

Further guidance on the regulation of AI technology has also been issued by the US Federal Trade Commission (FTC), particularly as it relates to unfair and deceptive practices involving discrimination, bias or false claims related to AI.4 Most recently, the FTC has issued guidance warning companies against changing privacy notices or terms of service to allow exploitation of customers' data for AI (or other purposes) without appropriate notice to individuals.

b. Emerging Litigation

Unsurprisingly, we are already seeing a rise in AI-related litigation while regulators race to put an adequate regulatory regime in place. Big tech companies leading the way in AI development have faced class action lawsuits in the US for alleged violation of data privacy laws in the use of personal data for training models, as well as a number of intellectual property cases in relation to rights in data used to train AI systems. While some actions alleging the harvesting of personal data have been dismissed, disputes will continue to develop around the intersection of privacy laws, intellectual property and the development (and training) of AI models.

c. AI-Specific Legislation

There are already some AI-specific regulations globally. In 2021, New York City enacted legislation on Automated Employment Decision Tools (AEDTs), which took effect in July 2023 and requires employers and employment agencies using AEDTs to complete bias audits on an annual basis and to make the use of AEDTs clear to employees and applicants.5

Meanwhile, in the state of Colorado, Senate Bill 21-169, signed into law in 2021, prohibits insurance companies from using External Consumer Data and Information Sources (ECDIS), and algorithms or predictive models based on ECDIS, in ways that might constitute 'unfair discrimination' against customers based on protected characteristics (including race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression). Under the rules, ECDIS include credit scores, purchasing patterns, home ownership, social media activity, education, court records and other personal information sources.6

Added to these are a number of proposed pieces of legislation to regulate AI, the most anticipated of which is the EU AI Act. Expected to come into force later in 2024 (with obligations taking effect in phases), the Act will be the first comprehensive law globally regulating the use and development of AI technologies.7

Similar to the GDPR for privacy, the Act takes a risk-based approach and aims to set the global standard for AI regulation. The most onerous obligations, short of the outright prohibition of certain practices, will fall on AI uses considered 'high risk', which initial EU studies suggest would apply to 5-15% of AI systems. There are also specific requirements for providers of general-purpose AI systems. While there are similarities with the GDPR, there are also areas of tension, including differing definitions for the roles of parties ('provider and deployer' versus 'controller and processor'), and it remains to be seen how the two sets of regulation will work together in practice.

In the US, the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (October 2023) establishes the US regulatory approach to AI and sets out considerations across a wide range of areas, including mitigating the potential threats to personal data posed by AI while acknowledging the need to provide training data to advance AI systems.8 It calls on Congress to pass bipartisan privacy legislation to protect the privacy and civil liberties of American citizens. It also requires that federal agencies encourage the private sector to deploy AI safely.

Other developments include draft Automated Decision-Making Technology rules proposed by the California Privacy Protection Agency in November 2023, which would require any organisation using automated decision-making technology to provide consumers with notice and opt-out options, as well as instructions on how to exercise these rights.9

3. Stakeholder Expectations And Company Values

A third important area in defining AI governance responses is consideration of the expectations of relevant stakeholders and alignment with company values and ethical positioning. Understanding the expectations of customers, shareholders, business partners, counterparties, employees and other groups key to a business's success can play an important part in defining the focus areas and priorities for AI governance programmes. The relative importance and emphasis of each stakeholder group will vary by business, geography and sector.

This may play a role both in the nature and level of issues and risks that are addressed and in how governance and accountability are demonstrated. For example, consumer-facing organisations may place emphasis on explainability at a less technical level than business-to-business organisations. And where consumer-facing organisations may want to focus on external communication of their AI governance efforts, business-to-business organisations may instead want to ensure they have collateral in place to respond to customer risk assessments, or to demonstrate adherence to industry standards to meet procurement requirements.

Similarly, AI governance responses should take account of company values and desired positioning on responsible AI use. ESG programmes, sustainability reporting, internal or public commitments to values, and codes of conduct and other internal policies setting out an organisation's guiding principles can all provide a further framework from which to define, and drive adherence to, company AI governance stances.

Considering the fast-moving nature of developments, regulatory uncertainty, and the combination of emerging risks and AI's ability to amplify effects via speed and scale, being able to tie controls and standards to wider conceptions of risk and governance objectives can be helpful to anticipate and avoid future issues that may otherwise be missed.

Developing A Response

For companies implementing AI strategies and AI systems, privacy and wider data risk will be essential points of consideration. Adopting a programmatic approach and mapping out implications for data use and privacy from overarching organisation AI approaches can help to develop a pragmatic, effective and sustainable response proportionate to the organisation's planned AI activities, risk exposure and risk appetite.

In order to achieve this, there is a need both for privacy and data risk domain experts to have awareness of, and input into, wider organisation AI strategies – from decisions on model development, deployment and infrastructure to training and skills development – and for a strategic approach to be taken to AI governance itself.

Defining a target state for AI governance, together with a view of desired maturity over time, will help with this approach. Identifying the principles and trends underlying regulatory developments, and using these to guide overarching responses, can also help organisations anticipate future requirements and 'bake in' responsible AI culture and controls, supporting compliant and streamlined AI integration on an ongoing basis.

While taking a long-term view, organisations should also identify quick wins that will immediately improve risk posture in areas such as:

  1. Configuring gateways and checkpoints in existing processes to trigger reviews and advisory support for AI systems (see the sketch after this list).
  2. Implementing high-level policies, guidelines and decision frameworks covering organisation AI use.
  3. Adding AI-specific modules to existing assessment processes – vendor risk assessments, privacy impact assessments, etc.
  4. Recruiting volunteers to support risk management. With the current interest in AI adoption, it may well be a good time to harness that enthusiasm by signing up 'AI governance champions' across the business to cultivate expertise, accountability and ownership.
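
As a concrete illustration of the first quick win, below is a minimal, hedged sketch in Python of a rule-based checkpoint that routes proposed AI use cases to the right reviews. The attribute names and trigger rules are illustrative assumptions only; real gateways would reflect the organisation's own policies, regulatory exposure and risk appetite.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Minimal description of a proposed AI use case (illustrative fields)."""
    name: str
    uses_personal_data: bool
    automated_decisions: bool   # decisions with legal or similarly significant effects
    third_party_model: bool     # relies on an externally procured model

def review_triggers(use_case: AIUseCase) -> list[str]:
    """Return the reviews a use case should pass through before approval.

    These rules are hypothetical; an organisation would define its own.
    """
    triggers = []
    if use_case.uses_personal_data:
        triggers.append("privacy impact assessment")
    if use_case.automated_decisions:
        triggers.append("automated decision-making review (e.g. GDPR Art. 22)")
    if use_case.third_party_model:
        triggers.append("vendor risk assessment")
    return triggers

# Example: a chatbot using customer data with a procured model is routed
# to privacy and vendor reviews before deployment.
case = AIUseCase("support chatbot", True, False, True)
print(review_triggers(case))  # -> ['privacy impact assessment', 'vendor risk assessment']
```

The value of even a simple gateway like this is that reviews are triggered automatically at an existing process step, rather than depending on each project team remembering to ask.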

Organisations should also consider peers' activities and look to move with the market. As AI governance practices mature, there appears to be a growing consensus on the following enterprise approaches and considerations:

  1. Carrying out risk assessments at a use-case level, rather than assessing the overall technology.
  2. Developing an inventory of AI use across the enterprise to understand activities and risk exposure (a minimal sketch follows this list).
  3. Implementing an overarching policy framework and a multi-functional governance model to bring in the varied domain expertise necessary to address the myriad risks from AI use.
  4. Looking to take advantage of, and adapt, existing frameworks. For example, there appears to be a growing consensus on the use of the National Institute of Standards and Technology's (NIST) AI Risk Management Framework as a starting point for AI governance programmes, particularly in the US.10
  5. Treating the translation of AI governance principles and compliance obligations into technical requirements – often a pain point for organisations – as a collaborative process, giving technical teams clarity on desired outcomes, as they are the people best placed to identify the appropriate measures.
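
To illustrate the inventory point (item 2), here is a minimal, hedged sketch of an enterprise AI-use register in Python. Every field name, risk tier and example entry is an assumption for illustration; a real inventory would be shaped by the organisation's governance model and any applicable regulation (for example, the EU AI Act's risk categories).

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One row in a hypothetical enterprise AI-use inventory."""
    system: str
    business_owner: str
    purpose: str
    risk_tier: str                  # e.g. "high", "limited", "minimal"
    personal_data: bool
    assessments_done: list[str] = field(default_factory=list)

# Illustrative entries only.
inventory: list[InventoryEntry] = [
    InventoryEntry("cv-screening", "HR", "shortlist applicants",
                   "high", True, ["bias audit"]),
    InventoryEntry("meeting-notes", "Ops", "summarise meetings",
                   "minimal", True, []),
]

def outstanding_high_risk(entries: list[InventoryEntry]) -> list[str]:
    """Flag high-risk systems still missing a privacy impact assessment."""
    return [e.system for e in entries
            if e.risk_tier == "high"
            and "privacy impact assessment" not in e.assessments_done]

print(outstanding_high_risk(inventory))  # -> ['cv-screening']
```

Beyond recording what exists, an inventory structured this way can be queried for reporting and prioritisation, which is where much of its governance value lies.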

The complexity of developing and implementing an effective AI governance programme against the backdrop of changing technology, business needs and regulatory developments can seem daunting. The most important step, however, is to start. Adopting an iterative approach that allows continual feedback and learning from activities can instil confidence and trust in the technology whilst managing privacy and data risks. Change is here and organisations must adapt accordingly – those that proactively manage this transition will be best placed to thrive in the future.

Footnotes

1. https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-series-on-generative-ai-and-data-protection/

2. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/data-protection-principles/a-guide-to-the-data-protection-principles/

3. https://curia.europa.eu/juris/document/document_print.jsf?mode=req&pageIndex=0&docid=280426&part=1&doclang=EN&text=&dir=&occ=first&cid=5088474

4. https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

5. https://www.nyc.gov/assets/dca/downloads/pdf/about/DCWP-AEDT-FAQ.pdf

6. https://doi.colorado.gov/for-consumers/sb21-169-protecting-consumers-from-unfair-discrimination-in-insurance-practices

7. https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683

8. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

9. https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf

10. https://www.nist.gov/itl/ai-risk-management-framework

Originally published 11 March 2024.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.