Use of AI in Hiring: How Employers Can Stay Out of Trouble

An increasing number of employers – particularly large ones – are using artificial intelligence in the hiring process. AI has a tremendous ability to screen large numbers of applicants almost instantaneously, and sometimes does a better job than humans at applying criteria objectively and without discrimination.

However, nobody is perfect, and that includes AI.

AI algorithms are often set up to select applicants with traits associated with employees who have performed well for the employer in the past. That sounds like a good idea at first, but algorithms that look at “who’s been good in the past” may still skew heavily toward white male applicants. Such algorithms have also been known to “select out” women who majored in predominantly “female” subjects in college, as well as older applicants whose applications show that they have been in the workforce for a long time, and they have sometimes discriminated against individuals with disabilities.

These AI flaws, which may be rectified in the future, can cause problems for employers trying to screen applicants quickly but on a non-discriminatory basis.

In addition to the above vulnerabilities, AI is still not quite ready to handle reasonable accommodations.

“Promising practices”

The U.S. Equal Employment Opportunity Commission recently issued some helpful guidance on employers’ use of artificial intelligence and algorithms under the Americans with Disabilities Act. The guidance purports to relate to all aspects of employment, but its primary focus is hiring.

The following are what the EEOC considers to be “promising practices” when using AI in the hiring process:

  • Employers may need to offer reasonable accommodations to applicants with disabilities so that they can even apply for a job in the first place. Thus, the EEOC recommends that employers train hiring personnel (that is, the real people who are involved in the hiring process) “to recognize and process requests for reasonable accommodation as quickly as possible” in connection with the applicant screening process.
  • Employers should also train hiring personnel to look for other ways of assessing job applicants if the standard tools screen them out because of disabilities.
  • If requests for reasonable accommodation go to the AI vendor rather than to the employer, the vendor should refer the requests to the employer immediately.
  • Employers should use AI tools “that have been designed to be accessible to individuals with as many different kinds of disabilities as possible.”
  • Employers should let all applicants know that reasonable accommodations are available for those who have a legitimate need (and give them the necessary human contact information to allow them to make requests).
  • Employers should “[describe], in plain language and in accessible formats, the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating.” To avoid making it too easy for applicants to “game” their answers, employers should keep this description general. For example, “We administer a personality test by computer that has multiple choice questions. We use the answers to determine whether the applicant will be compatible with our corporate philosophy and with co-workers and customers. If you need a reasonable accommodation in connection with the format of the test or the assessment of your test results, please click here.”
  • Employers should “[ensure] that the algorithmic decision-making tools only measure abilities or qualifications that are truly necessary for the job.”
  • Employers should “[ensure] that necessary abilities or qualifications are measured directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications.” (Emphasis is the EEOC’s.)
  • Employers should ensure that their AI tools do not make improper pre-offer medical inquiries.
  • Employers should be open to considering reasonable accommodations even if the request is made during, or after the applicant fails, the screening. (The employer still has the right to request appropriate medical documentation before granting such requests.)

These same principles should generally apply when applicants need reasonable accommodations for their religious beliefs or practices, or for pregnancy and pregnancy-related conditions.

Employers beware!

It is important to realize that employers can be liable for violations of the anti-discrimination laws in connection with the use of AI in hiring, even if the screening is performed by the vendor rather than the employer, and even if the employer is unaware that discrimination occurred.

Employers should also realize that they won’t necessarily be in the clear just because they use AI that is “bias-free” or “validated.” A “bias-free” tool may not help much in situations requiring reasonable accommodation. And “validated” means only that the tool has been found to assess for characteristics that are needed for performance of the job – a validated tool can still unlawfully screen out an applicant with a disability who could perform the job with a reasonable accommodation.

Robin E. Shea is a partner at Constangy, Brooks, Smith & Prophete. She has 30 years of experience in employment litigation under Title VII, the Age Discrimination in Employment Act, the Americans with Disabilities Act, the Genetic Information Nondiscrimination Act, the Equal Pay Act, and the Family and Medical Leave Act; in class and collective actions under the Fair Labor Standards Act and state wage-hour laws; and in labor relations. She provides preventive advice to employers and conducts training for human resources professionals, management, and employees on a wide variety of topics. Shea may be reached at rshea@constangy.com.