Spend the next six minutes with us learning about best practices from the Autorité des marchés financiers (AMF) and the Ontario Securities Commission (OSC) for the use of Artificial Intelligence (AI) by asset managers.

Risks and opportunities currently raised by AI

Asset managers, regulators and governments continue to assess the risks and opportunities presented by AI in the capital markets. Many asset managers are actively evaluating whether and how to integrate AI into their business, compliance and investment processes. This analysis is taking place against the backdrop of breakneck paced technological developments, a frequently changing regulatory environment and complex markets.

Fun with ChatGPT

To have a little bit of fun, we decided to ask ChatGPT to tell us what the biggest risks are to asset managers when using AI – the response isn't half bad. ChatGPT has flagged many of the same concerns that prompted both the AMF and the OSC to issue publications on the use of AI in the financial sector.

The biggest risk to asset managers of using AI is the potential for unintended consequences due to the complex and rapidly evolving nature of AI technology.1

ChatGPT also cites the following risks of overreliance on AI:

  • Model risk
  • Data quality and bias
  • Regulatory compliance
  • Operational risk
  • Ethical and reputational risk
  • Human capital and skills gap

To address these risks, asset managers must adopt a comprehensive risk management framework that encompasses model validation, data governance, regulatory compliance, operational resilience, ethical considerations and talent management. This framework should prioritize transparency, accountability, and responsible AI use to mitigate the potential downsides of AI adoption while maximizing its benefits for investment decision-making and client outcomes.

Summary of best practices

Overall, there are common themes emerging for AI best practices in the asset management industry, namely:

  • the need to have quality data;
  • thoughtful design of AI;
  • ethical frameworks;
  • transparency;
  • protection of privacy;
  • AI-specific governance processes and an in-house understanding of AI, which will enable justification and oversight.

These best practices are generally in line with the framework proposed by the Government of Canada in the Artificial Intelligence and Data Act (AIDA), which is currently undergoing review in the House of Commons as part of Bill C-27. BLG published an article for clients on AIDA in October 2023: Bill C-27: Upcoming amendments to the legislation on the protection of privacy and artificial intelligence.

Overview of AI publications: AMF and OSC

AMF Paper

In this bulletin, we distill what our clients can learn from recent regulatory publications and reports on AI, with a focus on the AMF Issues and Discussion Paper on Best Practices for the Responsible Use of AI in the Financial Sector (the AMF Paper). The AMF Paper outlines Québec's expectations regarding the development and implementation of AI systems by financial sector stakeholders in the province. These stakeholders include financial institutions, investment fund managers, dealers and advisers, firms, independent partnerships and independent representatives, credit assessment agents, and other companies and professionals subject to the laws and regulations administered by the AMF (defined as "Financial Players" in the AMF Paper, but referred to herein as "firms" for simplicity).

The AMF Paper is open for public comment through a consultation process that will close on June 14, 2024. As a result, the AMF Paper does not currently impose any new requirements on firms, but it is likely to be finalized, with some modifications, as guidance and best practices in the near future.

AI Report

This bulletin also references findings from the OSC and Ernst & Young LLP report published in late 2023: AI in Capital Markets: Exploring Use Cases in Ontario (the AI Report). Taken together, we believe the AMF Paper and the AI Report set out instructive expectations for the use of AI across the Canadian financial markets.

The AI Report notes that investment banks, fund managers and financial services firms are adopting AI solutions for three main purposes:

  1. improving efficiency;
  2. generating revenue; and
  3. managing risks.

The AI Report notes that, as of October 2023, firms were primarily using AI to improve their existing products and services rather than to create new ones. Firms also tended to prioritize lower-risk AI systems, such as trading and surveillance, over those requiring greater human supervision, such as asset price forecasting.

Additionally, the AI Report outlines specific challenges that AI solutions present to capital markets participants, many of which clearly overlap with the areas of concern addressed in the AMF Paper. These include:

  • The explainability of AI systems
  • The difficulty in handling diverse data sources
  • AI-specific governance frameworks
  • Ethical considerations
  • Costs
  • Market stability
  • Operating models and culture resulting from technological disruption.

Discussion of the AMF Paper

Given the commonality of risks and opportunities arising from the use of AI raised by both the AMF Paper and the AI Report, we urge clients to read the following carefully and to interrogate whether:

  1. the recommendations and discussions in the AMF Paper can and should represent best practices; and
  2. your firm can or will be able to adopt such best practices to support your use of AI.

We anticipate that these proposed best practices, when finalized, will form the guiding principles for the use of AI for the Canadian Securities Administrators (CSA) as a whole.

The AMF recognizes that AI systems have the potential to foster significant benefits for consumers and businesses alike. The AMF Paper defines an AI system as "a technological system that, autonomously or partly autonomously, processes data related to human activity through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions." Against this backdrop, the AMF is proposing 30 best practices applicable to all AI systems implemented by firms.

In the near term, we anticipate that the CSA will continue to monitor and communicate on the ways in which AI is used in financial services and that firms and their service providers can play a key role in establishing industry-wide best practices that will promote the responsible use of AI.

The AMF Paper covers the following broad topics regarding the use of AI. For ease of reference, we have grouped the 30 specific recommendations under each heading.

Consumer protection

  1. Using AI in consumers' best interests
  2. Respecting consumers' privacy
  3. Increasing consumer autonomy
  4. Treating consumers fairly
  5. Managing conflicts of interest in consumers' best interests
  6. Consulting consumers required to use AI

Firms should ensure that the use of AI systems is in the consumers' best interests and is consistent with applicable privacy laws. The AMF emphasizes that firms should clearly inform consumers if they are interacting with AI systems that present themselves as humans and should obtain consumers' consent to the collection and use of their data. Any request for consent should describe the purpose of the AI systems, the nature of the data collected, and any potential intrusion into consumers' private lives, such as monitoring of behaviour, in plain language.

AI systems should be trained with data that is representative of the target population and does not reflect, for example, biases from discriminatory practices. Firms should identify vulnerable groups of consumers and assess any potential impacts on such groups during the design phase of the AI system.

Transparency for consumers and the public

  1. Disclosing information about the AI design and use framework
  2. Disclosing information on the use of AI in products and services
  3. Explaining outcomes relating to a consumer
  4. Providing consumers with communication channels, assistance and compensation mechanisms

Consumers should have access to the information they need to assess the benefits and risks associated with the use of AI when procuring a financial product or service, particularly at the point of making a product or service decision. Consumers should also be able to obtain an explanation of the process and main factors that led to the outcomes or decisions produced by the AI systems. They should be able to have such outcomes or decisions reviewed by a competent person who is able to explain the functioning of the AI system, and to present their arguments challenging the outcome or decision. Particular attention should be paid to vulnerable consumers.

Appropriateness of AI systems

  1. Justifying each case of AI use
  2. Prioritizing the simplest, most easily explainable treatment

Firms should ensure that the use of AI systems is appropriate and justifiable in the circumstances, taking into consideration the target population and desired outcomes. Firms should ensure that the benefits of using AI outweigh the foreseeable risks and harm and that any outcomes obtained using AI systems are generally better than those obtained without using it or by using simpler, more easily explainable technologies. It is unclear from the AMF Paper whether a lower cost of obtaining investment services or advice will be considered to be a "generally better" outcome that supports the use of AI.
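As a hypothetical illustration of "prioritizing the simplest, most easily explainable treatment", the Python sketch below compares a simple, explainable model against a more complex one and retains the simpler model unless the complex one is materially better. The data, models and materiality threshold are our assumptions; the AMF Paper does not prescribe any particular test.

```python
# Minimal sketch: prefer the simpler, more explainable model unless a more
# complex one delivers a materially better outcome. The data, models and
# threshold are hypothetical illustrations, not AMF-prescribed tests.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
complex_ = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

auc_simple = roc_auc_score(y_val, simple.predict_proba(X_val)[:, 1])
auc_complex = roc_auc_score(y_val, complex_.predict_proba(X_val)[:, 1])

# Hypothetical materiality margin: the complex model must be "generally
# better" by a documented amount to justify its reduced explainability.
MATERIALITY = 0.01
chosen = "complex" if auc_complex - auc_simple > MATERIALITY else "simple"
print(f"simple AUC={auc_simple:.3f}, complex AUC={auc_complex:.3f} -> use {chosen}")
```

The key design choice is documenting the margin itself: a firm that can point to a recorded materiality threshold is better placed to justify why a more complex, less explainable system was (or was not) deployed.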

Responsibilities when using AI

  1. Being accountable for the actions and decisions of AI
  2. Making employees and officers accountable with respect to the use of AI
  3. Implementing human control proportional to AI risks

Firms retain responsibility and oversight, despite the use of AI, and should not attempt to attribute responsibility to AI systems. To this end, firms should remain responsible for all outcomes and harms caused by an AI system, including systems acquired from third parties, and their employees and officers should be made aware that they remain accountable for their actions and decisions. Firms should establish human controls, including reviews of AI system decisions that adversely affect a consumer's ability to obtain a financial product or service or any other decision that has a high impact on the consumer's financial well-being. This will necessitate adequately trained personnel with the appropriate skillset to review and oversee the complex decision-making of these AI systems.
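By way of illustration only, the short Python sketch below shows one way a firm might operationalize risk-proportional human control, routing adverse or high-impact automated decisions to a human reviewer. The field names, threshold and review rule are our hypothetical assumptions, not prescriptions from the AMF Paper.

```python
# Minimal sketch of risk-proportional human control: automated outcomes that
# adversely affect a consumer, or that carry a high impact score, are queued
# for review by an accountable employee. Field names and the threshold are
# hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AIDecision:
    consumer_id: str
    outcome: str          # e.g. "approve" or "deny"
    impact_score: float   # hypothetical 0-1 impact on financial well-being

HIGH_IMPACT_THRESHOLD = 0.7  # assumed policy parameter

def requires_human_review(decision: AIDecision) -> bool:
    """Flag decisions that must be reviewed by a qualified person."""
    adverse = decision.outcome == "deny"
    high_impact = decision.impact_score >= HIGH_IMPACT_THRESHOLD
    return adverse or high_impact

queue = [d for d in (
    AIDecision("c-001", "approve", 0.2),
    AIDecision("c-002", "deny", 0.4),
    AIDecision("c-003", "approve", 0.9),
) if requires_human_review(d)]
print([d.consumer_id for d in queue])  # c-002 (adverse) and c-003 (high impact)
```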

AI design and use

  1. Overseeing AI design and use
  2. Establishing a code of ethics for the design and use of AI
  3. Creating an environment favourable to transparency and disclosure
  4. Establishing a consistent approach to AI design, deployment and monitoring
  5. Facilitating the creation of diversified work teams
  6. Conducting due diligence on third-party AI
  7. Using AI in a manner enabling the achievement of sustainable development objectives

The AMF notes the need for appropriate governance frameworks, clarity with respect to roles and responsibilities, and a clear ethical framework. Firms should establish ethical codes for the use of AI that are explained in plain language to the employees to whom they apply and that establish penalties for non-compliance. In addition to ensuring that firms are properly resourced with respect to AI (these teams should be diverse in terms of gender, culture and age, as well as professional skills), the firm's officers should have clearly defined roles.

In connection with firms' obligations to oversee their third-party service providers, there are specific diligence considerations when retaining third-party AI and/or overseeing the provision of AI services. Contractual arrangements should contain clear audit rights, service levels and mechanisms for redress. The AMF also notes the importance of allowing concerns related to AI to be raised anonymously and without fear of reprisal, to bolster transparency and understanding. Firms wanting to learn more about AI governance may wish to review Decoding Tomorrow: BLG Primer on AI governance.

AI-associated risks

  1. Assessing the risks associated with the use of AI
  2. Ensuring AI security
  3. Governing the data used by AI
  4. Managing the risks associated with AI models
  5. Performing an impact analysis and testing on AI
  6. Monitoring AI performance on an ongoing basis
  7. Regularly auditing AI
  8. Training employees and users on AI

Firms should adopt frameworks to mitigate the risks associated with AI systems and appropriately monitor their performance. One example of such a risk management framework is the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, discussed in BLG's NIST AI governance and risk management framework insight.

Firms should establish measures that seek to ensure the integrity of the data used as inputs or to train AI systems, including processes for assessing data representativeness and quality and for determining whether discriminatory biases are present. Particular attention should be given to risks arising from interconnections between AI systems when, for example, the outcomes of one AI system are used as inputs for another AI system. Firms should also ensure that employees and officers designing and working with AI systems have the necessary technical skills and sufficient knowledge of the ethical risks to perform their tasks properly.
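To make the data-governance recommendations more concrete, here is a minimal Python sketch of two automated checks a firm might run: a representativeness comparison against a target population, and a crude outcome-rate screen for potential bias. The column names, sample data and tolerances are hypothetical assumptions; neither the AMF Paper nor the NIST framework prescribes these specific tests.

```python
# Minimal sketch of two data-governance checks: (1) compare the training
# sample's group mix against the target population, and (2) compare positive
# outcome rates across groups (a crude disparate-impact screen). Column
# names, data and tolerances are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "A", "B", "A"],
    "label": [1, 0, 1, 0, 0, 1, 1, 1],
})
target_population = {"A": 0.5, "B": 0.5}  # assumed external benchmark

# 1. Representativeness: flag groups whose training share drifts from the
#    target population by more than a documented tolerance.
shares = train["group"].value_counts(normalize=True)
for group, expected in target_population.items():
    if abs(shares.get(group, 0.0) - expected) > 0.10:  # assumed tolerance
        print(f"representativeness warning: {group} share={shares.get(group, 0.0):.2f}")

# 2. Outcome-rate parity: flag large gaps in positive label rates by group.
rates = train.groupby("group")["label"].mean()
if rates.max() - rates.min() > 0.20:  # assumed tolerance
    print(f"bias warning: positive-rate gap {rates.max() - rates.min():.2f}")
```

In practice, such screens are a starting point for the multidisciplinary review the AMF contemplates, not a substitute for it; a flagged gap still requires human judgment about whether it reflects a discriminatory practice.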

An impact analysis should be conducted during the design phase of an AI system to evaluate various factors, including the appropriateness of using AI in the circumstances, whether any incursions into consumers' private lives are justified, and the nature and outcomes of any potential conflicts of interest. Impact analyses should be conducted by a diversified multidisciplinary team that is competent (particularly in the area in which the AI systems will be used) and trained in AI-related ethical issues.

This will require firms to adopt an interdisciplinary approach to AI adoption and necessitate employees or advisers with the skillset to render a nuanced analysis of the AI system. Firms may also require additional support to understand, manage, control and disclose any material conflicts of interest that may arise from the use of AI systems, as these may not be as readily apparent as, say, investment-related conflicts of interest.

Firms should ensure that appropriate systems are in place to monitor deployed AI systems to detect deviations from normal operation (such as material model drift or a degradation in input data quality), discriminatory or inequitable outcomes, inappropriate use, or use for harmful purposes. Circuit-breaking features should be implemented, and periodically tested, so that the use of an AI system can be interrupted if its performance deteriorates beyond a given threshold. Lastly, AI systems should be audited in a manner consistent with their impact on consumers and the risks associated with their use.
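As an illustration, the sketch below pairs a population stability index (PSI) screen for input-data drift with a circuit breaker that suspends the system when a live performance metric falls below a documented floor. The metric, thresholds and PSI approach are our assumptions about common industry practice, not requirements set out in the AMF Paper.

```python
# Minimal sketch of ongoing monitoring with a circuit breaker: a population
# stability index (PSI) screen for input-data drift, and a hard stop when
# live performance falls below a documented floor. Thresholds are
# hypothetical policy parameters.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature distribution at deployment
live = rng.normal(0.5, 1.2, 10_000)       # drifted live inputs

PSI_LIMIT = 0.2          # common rule of thumb for material drift
ACCURACY_FLOOR = 0.70    # assumed performance threshold
live_accuracy = 0.66     # stand-in for a measured live metric

if psi(baseline, live) > PSI_LIMIT:
    print("drift alert: input distribution has shifted materially")
if live_accuracy < ACCURACY_FLOOR:
    print("circuit breaker: suspending AI system pending human review")
```

Periodically testing the breaker itself, as the AMF suggests, matters as much as setting the thresholds: a circuit breaker that has never been exercised may fail precisely when performance degrades.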

Conclusion

BLG is a trusted leader in navigating the rapidly evolving AI landscape and can assist you in understanding what it means for your organization in terms of regulations, diligence and adoption, supervision, integration and ongoing business operations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.