Human resources leaders at retail companies, and many in the organizational process world, believe that new artificial intelligence (AI) will revolutionize how HR functions. At least for the near future, they are only partly right. In the HR context, AI typically refers to algorithms that process employee data to make or inform decisions about employees. Simply put, the belief in the HR world is that "cognitive computing" will transform HR's decision-making and improve the retail employee's experience.

Not so fast. AI certainly is promising in our world, but let us look at the law. A retail corporation is always responsible for the decisions it makes regarding employees, and that responsibility sometimes turns into legal liability. For example, corporate decisions that are more adverse to legally protected groups such as women, minorities, veterans or people with disabilities create legal compliance issues. Compliance problems lead to lawsuits, legal expenses, branding concerns and decreased productivity.

Yes, humans can be biased, even unwittingly. In fact, the Human Resources Professionals Association (HRPA) found that even employers who strive to be inclusive may subconsciously favor people like themselves (unconscious bias). Additionally, Harvard's Implicit Association Test (IAT) demonstrates that humans hold language biases as well. In the HR world, the rationale for AI is that these biases find their way into job descriptions, resume screening and thus the hiring process. So the well-intended thinking is: use algorithms designed to find and eliminate the bias patterns. By the same rationale, it is believed that AI could also present hiring managers with candidates who would otherwise have been screened out by the human tendency to favor candidates with similar traits, competencies or use of language.

AI, HR and the Law

As stated, reading articles in Forbes, HR magazines, business journals and the like makes clear that the writers believe AI is going to revolutionize HR. Notwithstanding, changes in the law and in legal requirements are not controlled by technological advancements. In fact, the maxim natura non facit saltum ita nec lex (i.e., nature does not make a leap, thus neither does the law) stands for the principle that the law and legal responsibilities, while not static, should not change quickly. Therefore, from the legal compliance and enforcement perspective, those magazines, journals and HR experts are either misguided or are not referring to the near future regarding decisions on hiring, firing, layoffs, pay, promotions, benefits and other terms and conditions of employment. Simply put, if the tools a retail company uses create disparities of 2 percent or greater, the Office of Federal Contract Compliance Programs (OFCCP), the Department of Labor (DOL), the Department of Justice (DOJ) and the Equal Employment Opportunity Commission (EEOC) do not care whether the disparities were created by a human or an algorithm. Intent is irrelevant. Disparate impact is all that matters.
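To make the results-oriented standard concrete, a compliance team might screen hiring outcomes with a simple selection-rate comparison. The sketch below is hypothetical: the 2 percent threshold is taken from this article, interpreted here as a two-percentage-point gap in selection rates, and the function names and example counts are invented for illustration. It is not a statement of any agency's actual statistical test, which may differ.

```python
# Hypothetical disparity screen: compare selection rates between a
# protected group and a reference group. The 2 percent threshold is
# the figure cited in the article; real enforcement analyses vary.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

def flag_disparity(ref_selected: int, ref_applicants: int,
                   grp_selected: int, grp_applicants: int,
                   threshold: float = 0.02) -> bool:
    """Return True if the protected group's selection rate trails the
    reference group's rate by the threshold (2 points) or more."""
    ref_rate = selection_rate(ref_selected, ref_applicants)
    grp_rate = selection_rate(grp_selected, grp_applicants)
    return (ref_rate - grp_rate) >= threshold

# Example: 60 of 200 reference applicants hired (30%) versus
# 50 of 200 protected-group applicants hired (25%): a 5-point gap.
print(flag_disparity(60, 200, 50, 200))  # True: gap exceeds 2 points
```

Note that the check looks only at outcomes, not at how the selections were made, mirroring the point that intent is irrelevant and disparate impact is all that matters.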

AI is constantly learning, so it can learn and mirror human bias.

Last week, Amazon scrapped its internal AI recruiting tool because the tool was biased against women. The program actually penalized applications that contained the word "women's." The AI favored men because it learned from a tech field dominated by men, so signals that indicated a female applicant, such as a girls' school, a women's college or a women's sports team, downgraded the application. Amazon quickly said the program was never used in an official capacity. Interestingly, in the STEM world, some argue that AI's biases prove the biases were determined neutrally and are thus accurate and fair. However, this is a dangerous doubling-down approach that will not impress government enforcers or private litigants.

Moreover, there is a "Catch-22" here: leaving decisions about hiring, terms and conditions solely to AI that causes disparities can be argued to be negligent, but failing to use technology to improve diversity in your workforce and to decrease pay gaps can also be used against you. In other words, compliance is results-oriented. From an enforcement stance, the outcome is all that matters. The HR technology landscape continues to be disrupted by AI, but HR must balance cognitive tech advancements with legal compliance requirements. Without question, AI has administrative uses in terms of speed, e.g., automating business processes, reducing administrative load and helping run an internal audit for pay equity. However, if AI creates a disparity, a corporation's human capital must review and rectify it. Government enforcement agencies will not be lenient because a retail corporation's disparity was created by AI rather than by a person.
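As one concrete example of the internal pay-equity audit mentioned above, a first pass could simply surface raw average-pay gaps by group within each job title for human review. This is a minimal hypothetical sketch: the record layout, job titles and figures are invented, and a real audit would control for legitimate factors such as tenure, level and location before drawing any conclusion.

```python
# Minimal hypothetical pay-equity screen: average pay by group within
# each job title. Real audits regress pay on legitimate factors
# (tenure, level, location); this only surfaces raw gaps so a human
# can review and, if warranted, rectify them.

from collections import defaultdict

def raw_pay_averages(records):
    """records: iterable of (job_title, group, pay).
    Returns {job_title: {group: average_pay}} for human review."""
    totals = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))
    for title, group, pay in records:
        cell = totals[title][group]
        cell[0] += pay   # running sum of pay
        cell[1] += 1     # running headcount
    return {title: {grp: s / n for grp, (s, n) in groups.items()}
            for title, groups in totals.items()}

# Invented sample data for illustration only.
sample = [
    ("cashier", "men", 31000), ("cashier", "men", 33000),
    ("cashier", "women", 30000), ("cashier", "women", 30500),
]
averages = raw_pay_averages(sample)
print(averages["cashier"])  # men average 32000.0, women average 30250.0
```

The deliberate design choice here is that the function flags nothing on its own: consistent with the point above, the output goes to the corporation's human capital for review rather than letting the tool be the final decision-maker.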

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.