Legal Alerts

Employer Alert: EEOC Issues Guidance on the Use of AI in Recruitment

Featured: Doris T. Bobadilla, Wendell Hall

On May 18, 2023, the EEOC released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” designed to aid employers and technology developers as they design and adopt new technologies, including those that incorporate artificial intelligence, for use in employee recruitment and promotion. The technical assistance is focused on preventing discrimination against job seekers and workers.

EEOC Chair Charlotte A. Burrows noted, “As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality.”

Among the questions addressed by the technical assistance is the following:

  • Is an employer responsible under Title VII for its use of algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor?

In many cases, yes. For example, if an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor. In addition, employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf. This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf.

Therefore, employers that are deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool may want to ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII. If the vendor states that the tool should be expected to result in a substantially lower selection rate for individuals of a particular race, color, religion, sex, or national origin, then the employer should consider whether use of the tool is job related and consistent with business necessity and whether there are alternatives that may meet the employer’s needs and have less of a disparate impact. Further, if the vendor is incorrect in its own assessment and the tool does result in either disparate impact discrimination or disparate treatment discrimination, the employer could still be liable.

The EEOC encourages employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately negative effect on a basis prohibited under Title VII or treat protected groups differently.

The EEOC emphasized that the technical assistance is not a new policy but instead is a document that applies established Title VII principles and previously issued guidance.  It is designed to clarify existing requirements in consideration of the expanding use of software, algorithms, artificial intelligence and algorithmic decision-making tools.

In addition to taking the new technical assistance into consideration when using AI, we recommend that employers:

  • Ensure that AI systems are transparent and that they are not used to discriminate against individuals who are part of a protected class.
  • If the tool is provided by a third party, ask the vendor what steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII.
  • Obtain consent from applicants and employees before collecting data about them.
  • More generally, use AI systems in a way that respects individuals’ privacy.

The bottom line is that under existing federal civil rights laws (as well as the municipal laws discussed in our previous post, Employment Trends: How AI is Changing the Way We Hire), employers can be held liable for discrimination that results from the use of AI in the recruitment process, and are also potentially liable for any resulting privacy violations. Accordingly, employers need to proactively assess the impact of the tools they are using.

Disclaimer: This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Galloway and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions.

Doris Bobadilla

dbobadilla@gallowaylawfirm.com

Wendell Hall

whall@gallowaylawfirm.com
