In an era of rapid development and deployment of artificial intelligence technologies, we stand witness to new, reactive regulation of key elements of these technologies, such as the California Consumer Privacy Act, or CCPA, effective this month. This comes in parallel with assertions by many corporate leaders that the proper focus of corporate purpose is the interests of employees, customers, suppliers, shareholders and society in general, rather than sole primacy of current shareholder interests.1

Companies' increasingly pervasive use of AI technologies2 as part of a product or service offering, or as a means to optimize operations, has correspondingly increased AI's importance to corporate strategic planning and governance.3

AI applications have also created new risks that adversely affect companies' reputations and relationships with their workforces, ranging from litigation attacking the biased outputs of AI algorithms,4 to a crowd-trained chatbot that turned discriminatory,5 to protests against a newly constituted AI ethics board that was consequently disbanded.6

Recent shareholder proposals have called upon boards to ensure proper AI governance; one such proposal at Google's parent, Alphabet Inc., sought board-level oversight of AI technology through a "societal risk oversight committee."7

These developments have implications for the board oversight required for corporate activities involving AI, particularly given quickly evolving regulatory schemes, such as the CCPA, that affect the collection and use of the data on which these technologies often rely.

Yet, studies show that information technology expertise continues to be a vastly underrepresented boardroom skill,8 and some companies still have no defined process for managing technology risk.

Within this landscape, this article discusses why AI governance matters and which AI-related considerations are likely to be important for the effective discharge of fiduciary duties by a board of directors.

Why It Matters

Directors of Delaware corporations are required by state corporate law to fulfill duties of loyalty and care, which include the duty to exercise oversight over corporate risks. Judicial deference to board decisions, when challenged, is predicated on the board acting in good faith, informing itself of the material facts and considerations relevant to the matter, and exercising its judgment on that basis.

Recent cases in Delaware reinforce the need for the board to actively engage in oversight, understand key risks, and establish and monitor a compliance program designed to produce information for reporting. Indeed, devoting board attention to, and reporting and discussing, the specific legal, regulatory, financial compliance and other considerations raised by AI may be critical.

When material, summaries of compliance reports should be provided to boards "on a consistent and mandatory basis," establishing a "board-level system of mandatory reporting."9

In addition, the current focus on stakeholder corporate governance as a means of best serving corporate viability and long-term success would involve accounting for the interests of relevant stakeholders, including consumers, employees and the public, as well as shareholders.10

In light of these multifaceted expectations of boards, directors face additional pressure where employees or consumers oppose uses of AI technologies, as in one instance where employees of a large technology company demanded cancellation of a contract to supply the U.S. Army with augmented reality headsets for soldier training.

Board members should be prepared for AI technologies to affect their considerations not solely in respect of shareholders, but more broadly in light of various stakeholder interests.

What to Address

As a starting point, oversight should involve a strategy-level discussion to develop an initial understanding of certain AI-relevant subjects. For example, the use and development of AI technology can be affected by various areas of law, including product liability, advertising, unfair competition and sector-specific regulation, such as healthcare and fintech rules.

Thereafter, significant changes or developments should be reported periodically to, and reviewed by, the board, either directly or through an appropriate committee, subject to controls and processes that facilitate effective management. In such a process, boards would consider the following:

To see the full article, click here.

Footnotes

1 See, e.g., World Economic Forum, "Davos Manifesto 2020: The Universal Purpose of a Company in the Fourth Industrial Revolution," Dec. 2019, available at http://www.wlrk.com/docs/weforumorgDavosManifesto2020TheUniversalPurposeofaCompanyintheFourthIndustrialRevolution.pdf; "Business Roundtable Redefines the Purpose of a Corporation to Promote 'An Economy That Serves All Americans,'" Aug. 19, 2019, available at https://www.businessroundtable.org/business-roundtable-redefines-the-purpose-of-a-corporation-to-promote-an-economy-that-serves-all-americans; Eduardo Gallardo, "On an Expansive Definition of Shareholder Value in the Boardroom," The CLS Blue Sky Blog, Oct. 22, 2019, available at https://www.gibsondunn.com/wp-content/uploads/2019/10/Gallardo-On-an-Expansive-Definition-of-Shareholder-Value-in-the-Boardroom-CLS-Blue-Sky-Blog-10-22-2019.pdf.

2 Per the PwC Governance Insights Center, AI is an umbrella term for "smart" technologies that are aware of, and can learn from, their environments. Robotic process automation (RPA), machine learning, natural language processing and neural networks all incorporate AI into their operations.

3 Indeed, references to AI in SEC risk factor disclosures for public companies have grown exponentially — from almost none in 2016 to more than 80 in 2019 — reflecting AI's broadening material impact on business risks and even entire business models.

4 Thomas Beardsworth and Nishant Kumar, "Who to Sue When a Robot Loses Your Fortune," Bloomberg, May 5, 2019, available at https://www.bloomberg.com/news/articles/2019-05-06/who-to-sue-when-a-robot-loses-your-fortune.

5 Daniel Victor, "Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk," NY Times, March 24, 2016, available at https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html.

6 Jillian D'Onfro, "Google Scraps Its AI Ethics Board Less Than Two Weeks After Launch In The Wake Of Employee Protest," Forbes, Apr. 4, 2019, available at https://www.forbes.com/sites/jilliandonfro/2019/04/04/google-cancels-its-ai-ethics-board-less-than-two-weeks-after-launch-in-the-wake-of-employee-protest/#40f183776e28.

7 "Shareholders Quiz Google on AI Risks," Financial News, June 18, 2019, available at https://www.fnlondon.com/articles/shareholders-quiz-google-on-ai-risks-20190618?mod=hp_LATEST.

8 Amanda Gerut, "CIOs: Boards Don't Get IT," Agenda, Aug. 13, 2018.

9 E.g., Marchand v. Barnhill, 212 A.3d 805, 813 (Del. 2019).

10 Business Roundtable Statement on the Purpose of a Corporation, August 2019.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.