Preventing Employment Discrimination in the Age of AI: EEOC Offers Guidance


By: Francine Foner, Esq. and Ty Hyderally, Esq.

Can an employer’s Artificial Intelligence (“AI”) screening tools result in a claim of unlawful discrimination?

The EEOC believes so, as reflected in its recent publication of a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (the “Guidelines”). The Guidelines, released on May 18, 2023, are “focused on preventing discrimination against job seekers and workers.” As the EEOC further explains, the “publication is part of the EEOC’s ongoing effort to help ensure that the use of new technologies complies with federal EEO law by educating employers, employees, and other stakeholders about the application of these laws to the use of software and automated systems in employment decisions.”

Title VII of the Civil Rights Act of 1964 (“Title VII”) prohibits discrimination in employment based upon race, color, religion, sex, or national origin. Title VII prohibits both intentional “disparate treatment” of employees and employment practices that have a “disparate impact” or “adverse impact” on employees based on their race, color, religion, sex, or national origin. According to the Guidelines, different types of AI used to evaluate candidates in the employment process could unintentionally result in “disparate impact” discrimination against protected classes under Title VII. Such AI tools include: “resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test.” The Guidelines use the term “algorithmic decision-making tool” broadly to refer to all of these kinds of systems.

The Guidelines

The Guidelines explain that employers can assess whether a particular algorithmic decision-making tool has an adverse impact on a particular protected group by checking whether its use results in a “selection rate” for individuals in that group that is “substantially” less than the selection rate for individuals in another group. If so, use of the tool will generally violate Title VII unless the employer can show that the difference is “job related and consistent with business necessity.” The “selection rate” is “calculated by dividing the number of persons hired, promoted, or otherwise selected from the group by the total number of candidates in that group.”
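To make the arithmetic concrete, here is a minimal Python sketch of the selection-rate calculation the Guidelines describe; the function name and the sample figures are hypothetical, not drawn from the Guidelines.

```python
def selection_rate(selected: int, total_candidates: int) -> float:
    """Selection rate: persons hired, promoted, or otherwise selected
    from a group, divided by the total number of candidates in that group."""
    return selected / total_candidates

# Hypothetical example: 48 of 120 applicants in a group were hired.
rate = selection_rate(48, 120)
print(f"Selection rate: {rate:.0%}")  # Selection rate: 40%
```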

To determine whether a selection rate is substantially different for one group, the Guidelines apply the “four-fifths” rule, a general rule of thumb for determining whether the selection rate for one group is “substantially” different from that of another. Under that rule, one rate is substantially different from another if the ratio between the two is less than four-fifths (or 80%). The Guidelines illustrate with an example in which “the selection rate for Black applicants was 30% and the selection rate for White applicants was 60%. The ratio of the two rates is thus 30/60 (or 50%). Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule says that the selection rate for Black applicants is substantially different than the selection rate for White applicants in this example, which could be evidence of discrimination against Black applicants.”
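The four-fifths comparison reduces to a single ratio. The sketch below, in Python for illustration only, applies the rule of thumb to the EEOC’s own example figures; the function name is ours, not the EEOC’s.

```python
def is_substantially_different(rate_a: float, rate_b: float) -> bool:
    """Apply the four-fifths rule of thumb: two selection rates are
    'substantially different' if the ratio of the lower rate to the
    higher rate falls below 4/5 (80%)."""
    return min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8

# EEOC example: 30% selection rate for Black applicants,
# 60% for White applicants -> ratio of 0.50, which is below 0.80.
print(is_substantially_different(0.30, 0.60))  # True
```

As the Guidelines note, a ratio below 80% is a rule of thumb rather than a legal conclusion; it “could be evidence of discrimination” warranting closer scrutiny.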

The Guidelines also clarify that an employer remains responsible for an algorithmic decision-making tool that has a disparate impact upon a protected class even if the tool was developed by an outside vendor. The Guidelines further encourage employers to proactively assess, on an ongoing basis, any algorithmic decision-making tool in use to determine whether it complies with Title VII’s prohibition against disparate impact discrimination. The Guidelines are part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, “which works to ensure that software—including AI—used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces.”

The EEOC’s ongoing monitoring of the potential for discrimination arising from the use of AI in the employment process, together with its publication of Guidelines defining when the use of AI can constitute disparate impact discrimination, is welcome news for employees.


We speak Spanish at our firm. This blog is for informational purposes only. It does not constitute legal advice, and may not reasonably be relied upon as such. If you face a legal issue, you should consult a qualified attorney for independent legal advice with regard to your particular set of facts. This blog may constitute attorney advertising. This blog is not intended to communicate with anyone in a state or other jurisdiction where such a blog may fail to comply with all laws and ethical rules of that state or jurisdiction.
