Margie Faulk
Whether employers realize it or not, Artificial Intelligence ("AI") is currently used in most
workplaces. Although AI can be tremendously beneficial in the right circumstances, it can also
create significant liability for employers who do not leverage it appropriately.
AI is the use of machines to perform tasks traditionally performed by the human brain. It can
take many forms. For instance, generative AI, like ChatGPT, can create documents or
presentations from scratch. Algorithmic or decision-making AI uses algorithms to screen
candidates, and video and voice recognition software can rate a candidate's cultural fit with your
organization. Conversational AI, or chatbots, can manage initial complaint intake or
employee requests for information. Digital assistants can manage calendars, edit and grammar-check
documents, and create transcripts or outlines of recorded meetings. This list goes on and
on.
Employers face two compounding problems. First, front-line HR managers and procurement staff
who routinely source AI hiring tools often do not understand the risks. Second, AI vendors
typically will not disclose their testing methods and will demand that their customers provide
contractual indemnification and bear all risk for any alleged adverse impact of the tools.
Employers can't rely on a vendor's assurances that its AI tool complies with Title VII of the Civil
Rights Act of 1964. If the tool results in an adverse discriminatory impact, the employer may be
held liable, the U.S. Equal Employment Opportunity Commission (EEOC) clarified in new
technical assistance issued on May 18, 2023. The guidance explains how Title VII applies to
automated systems that incorporate artificial intelligence in a range of HR-related uses.
The EEOC puts the burden of compliance squarely on employers. "[I]f an employer administers a
selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor," the agency
states in the guidance.
States are also reviewing the risks of AI, and some already have laws in place governing the use
of artificial intelligence in the workplace. This will affect employers operating in multiple
states, especially those with remote employees.
Current cases include one against Workday Inc., a maker of AI applicant-screening software,
which is in the middle of a class action lawsuit over alleged hiring discrimination. The
lawsuit, filed in February 2023, alleges that Workday engaged in illegal age, disability, and race
discrimination by selling its customers applicant-screening tools that use biased AI
algorithms.
Other pending court cases will reveal the extent of the risk employers are taking. Employers
should prepare now by adopting policies that protect the company, its consumers, and its employees.
Margie Faulk
PHR, SHRM-CP, HR Compliance Advisor/Speaker