
A critical component of a successful employer-employee relationship is the employer's fair and equitable treatment of employees, often embodied in its employee engagement, retention, and compensation practices.  When it comes to compensation, U.S. employers must comply with federal and applicable state equal pay laws that prohibit discriminatory pay practices, as well as a myriad of state and local laws banning inquiries into, or the use of, prior salary history in setting pay.  Yet compensation bias and discrimination still exist and continue to be the subject of government investigations, audits, and litigation.

With the growing use of artificial intelligence (AI) tools in every aspect of our lives, it is not surprising that companies are increasingly deploying AI and machine learning algorithms in various human resources functions, including compensation management and decision-making, as well as in the prediction of future employee performance.  Many argue that AI-based compensation tools can be used to close pay gaps and end discriminatory compensation practices.  The question remains, however, whether these tools can provide employers with a reliable method of setting objective, fair, and precise compensation structures for their workforce, or whether they could intentionally (or unintentionally) cause an employer to violate applicable law and perpetuate bias in compensation structures.

Consider the following scenario:

A professional services firm has a large percentage of its employees working remotely; consequently, supervisors do not believe they have a good understanding of the contributions each employee makes.  The firm therefore collects metrics intended to reflect the value of each employee's contribution to the company.  Because many relevant measures are subjective and hard to quantify, the firm uses proxy metrics such as time spent in training, the type of training, and "electronic presence," a measurement of the time each employee spends actively working at the computer.  The firm uses an algorithm that is trained on all the data collected for all workers, and that algorithm makes a compensation recommendation based on its analysis of the collected data points and its predictions of future performance.

The employee's supervisor makes the ultimate compensation decision, informed by the algorithm's recommendation as well as qualitative and subjective assessments of the employee's contributions (such as quality of work product, timely completion of projects, responsiveness, enhancement of skills, and innovation).  The firm also collects market data reflecting, for each function at the company, the going market rate of pay as well as trends in compensation for certain skill sets, and it uses the algorithm to assist supervisors in making salary recommendations for new hires and for purposes of promotions.  The supervisors are increasingly relying on the algorithm, especially when they do not have time to review each employee's or candidate's file completely.  The firm believes the system is fair but has not done any specific testing to identify particular biases.  The question arises: to what legal risks, if any, is the employer exposed?
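To make the scenario concrete, below is a minimal, purely hypothetical sketch of such a recommendation pipeline, assuming a simple linear model and two invented proxy metrics (training hours and electronic presence); it does not depict any particular vendor's tool.  The comments flag the key point: if the historical pay used for training embedded bias, the model will learn and reproduce it.

```python
# Hypothetical sketch only: a compensation-recommendation model trained
# on invented proxy metrics.  No real tool, vendor, or data is implied.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Proxy metrics per employee: hours of training completed (0-80) and
# "electronic presence" (hours/week actively working at the computer).
X = rng.uniform(low=[0.0, 20.0], high=[80.0, 60.0], size=(200, 2))

# Historical compensation in $1,000s.  If past pay embedded bias, the
# model learns and reproduces it -- the core risk discussed here.
y = 60 + 0.3 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 5, size=200)

model = LinearRegression().fit(X, y)

# The tool recommends pay; the supervisor makes the final decision.
candidate = np.array([[40.0, 45.0]])  # 40 training hrs, 45 hrs/week present
print(f"Algorithm's recommendation: ${model.predict(candidate)[0] * 1000:,.0f}")
```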

In this scenario, the AI-based compensation tool is being used for multiple purposes, from setting pay for new hires to informing promotion decisions to assessing remote workers' performance.  Despite these well-intended uses, there are potential legal risks depending on the nature and source of the data used to train the algorithm.

Datasets used to train the algorithms may consist of an employer's existing internal data, data from external sources, or a combination of both.  Employers should evaluate the quality of the initial data collected, and monitor any evolution of the data, for discriminatory factors.  These datasets may include employee data for particular positions requiring a certain educational, skill, or experience level, with varying compensation levels.  The appropriate grouping of employees performing the same job functions or job types is also critical to the assessment.  AI tools should be carefully calibrated to compare the proper categories of employees.  Any errors in these datasets could skew the results produced by the AI tool in a manner that adversely affects the employer's compensation practices.
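As a sketch of what that kind of data review might look like in practice, the snippet below audits a hypothetical HR dataset (all column names and figures are invented) for pay gaps across classes of employees within comparable job groups before the data is used for training.

```python
# Hypothetical audit sketch: check for pay gaps within comparable job
# groups before training on the data.  Columns and figures are invented.
import pandas as pd

df = pd.DataFrame({
    "job_group": ["analyst"] * 4 + ["engineer"] * 4,
    "class":     ["A", "A", "B", "B"] * 2,   # e.g., a protected class
    "base_pay":  [82000, 85000, 78000, 76000,
                  118000, 121000, 110000, 108000],
})

# Within each job group, compare median pay across classes.  Large,
# unexplained gaps in the training data will be learned by the model.
gaps = (
    df.groupby(["job_group", "class"])["base_pay"]
      .median()
      .unstack("class")
      .assign(ratio=lambda t: t["B"] / t["A"])
)
print(gaps)
```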

In general, with these datasets, we must consider whether use of the AI tool can lead to compensation decisions that have a disparate impact on employees.  Further, could use of the AI tool violate applicable laws that prohibit inquiries into prior salary histories?  And should the results from these AI tools be used by employers to make compensation decisions without additional input from supervisors?
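The disparate impact question, at least, can be screened for quantitatively.  One common heuristic, borrowed from the EEOC's "four-fifths" rule for selection procedures and applied here to the tool's promotion recommendations purely as an illustration with invented counts, flags any class whose selection rate falls below 80% of the most-favored group's rate:

```python
# Hypothetical screen using the EEOC "four-fifths" rule of thumb:
# a class whose selection rate is under 80% of the highest group's
# rate is flagged for review.  All counts are invented.
recommended = {"A": 45, "B": 18}   # employees the tool flagged for promotion
considered  = {"A": 100, "B": 60}  # employees eligible in each class

rates = {g: recommended[g] / considered[g] for g in considered}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"class {group}: rate={rate:.0%}, impact ratio={ratio:.2f} -> {flag}")
```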

These are among the questions that we'll explore in our upcoming Labor & Employment-Recruiting and Compensation workshop.

To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green's virtual briefing, Bias in Artificial Intelligence: Legal Risks and Solutions, on March 23 from 1:00 – 4:00 p.m. (ET).

