Senior managers need to be confident in their approaches to their firms' use of AI in order to manage regulatory expectations and deliver good customer outcomes


AI is transforming the global economy. Firms' leaders and senior managers are grappling with how this will impact financial services. The opportunities are significant: generative AI is estimated to increase banks' productivity by between 2.8% and 4.7% of annual revenues (a US$200–340 billion increase)1. At the same time, customers and employees are concerned about how this opaque technology will impact them. As discussed in our Global Bank Review 2023, uncertainty continues on where laws and regulation of AI will ultimately land in different countries.

Firms cannot stand still. Competitive pressure is driving them to explore, test and deploy AI in more areas of their business. The need to act quickly is also leading to the use of third-party products to source AI capabilities and infrastructure, including AI-as-a-service and off-the-shelf AI models. Firms' senior managers must ensure the corresponding governance and risk management is robust. The technology may be cutting edge, but the risks are familiar and signposted in numerous publications by regulators.

"AI is a game-changer...one that senior managers in financial services really need to get ahead of."

Jon Ford
London

As regulators focus on the outcomes, senior managers build AI literacy

In a recent speech on AI, the Chair of the US Securities and Exchange Commission (SEC) remarked: "We at the SEC are technology neutral...we focus on the outcomes, rather than the tool itself." Indeed, the message from the UK financial services regulators in their recent round-up of feedback on AI and machine learning is that the industry highly values technologically agnostic regulation. There are sound reasons for such an approach, but it demands from firms (and their senior managers) a technologically literate application of governance, oversight and risk management. Consistent with observations made by the International Organization of Securities Commissions (IOSCO) in 2021, this does not necessarily require technical expertise from senior management overseeing AI control functions, but sufficient technical understanding given their ultimate responsibility and accountability for their firm's use of AI.

"AI changes everything. For financial services firms and their senior managers, there are big opportunities...and big risks."

Simone Hui
Hong Kong

In the EU, the European Parliament is proposing new wording in the draft AI Act to require providers and deployers of AI systems to ensure a sufficient level of AI literacy among staff and others dealing with the operation and use of AI systems on their behalf. In Hong Kong, regulators have yet to require senior management themselves to have sufficient technical expertise but have stressed that AI governance committees are expected to include members with sufficient technical skills to advise senior management.

Regulators will expect senior managers to have access to the necessary experience to subject their firms' proposed uses of AI to meaningful oversight and monitoring. In addition, there will be an increased expectation on senior managers to have sufficient understanding of AI models and their data inputs to enable them to evaluate and interrogate model results and guard against bias, discrimination, and other poor customer outcomes.

"Senior managers in financial services cannot afford to think of themselves as 'technology neutral'; they must engage with AI."

Michelle Virgiany
Jakarta

This is apparent from the UK Prudential Regulation Authority's (PRA's) model risk management (MRM) principles for banks, which will come into force in May 2024. These principles have been developed with AI models in mind. Governance, one of the five principles of MRM, includes an expectation that the board provide challenge to the outputs of the most material models, including AI models. This will require them to understand:

  • the capabilities and limitations of the models;
  • the model operating boundaries under which model performance is expected to be acceptable;
  • the potential impact of poor model performance; and
  • the mitigants in place should model performance deteriorate.

In addition, firms should identify a relevant senior manager (or managers) to assume overall responsibility for the MRM framework and its implementation, execution and maintenance. Similar senior management responsibility is being consulted on in aspects of the US SEC's proposed new conflicts of interest rules for the use of AI by broker-dealers and investment advisers.

In Singapore, the focus is on public-private collaboration to develop toolkits to assist firms in complying with the principles of fairness, ethics, accountability and transparency when assessing or developing governance frameworks for the use of AI. With the support of the Monetary Authority of Singapore (MAS), an industry-led whitepaper will be published in early 2024 which will cover the responsible use of generative AI from a banking perspective.

In conclusion

In countries where AI-specific regulations or guidelines are still forthcoming, senior managers should be mindful of existing laws and regulations which may apply to the use of AI generally. When designing their business, operations and products, they should also anticipate upcoming regulation by looking at how other jurisdictions, such as the UK and EU, have started governing AI.

Senior managers of global firms will also be expected to draw upon their experience in navigating cross-border regulatory frameworks as the global AI regulatory landscape continues to evolve.

Footnote

1. The economic potential of generative AI: The next productivity frontier – McKinsey & Co, June 2023

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.