As part of the requirements of Executive Order 14110, which directed U.S. government agencies, including the Department of Labor, to publish guidance for federal contractors on the use of AI in employment decisions as it relates to nondiscrimination, the OFCCP released its guidance along with a landing page on the use of AI systems. Although various pundits disagree as to the legality of the guidance, the guidance itself states nothing that would be unexpected in the new AI world.
In the federal contractor world, the OFCCP updated its scheduling letter in 2023, adding item 21, which requests, among other things, information about any artificial intelligence, machine learning, automated systems, or other technology-based selection procedures. OFCCP has defined AI very broadly, from ranking systems embedded in applicant tracking systems based on keyword searches to generative AI embedded in newer HR systems.
This new guidance includes a brief list of FAQs that address some important questions for employers and provides what it calls “promising practices” for the use of AI. Specifically, the OFCCP states:
“While some federal contractors may use AI systems to increase productivity and efficiency in their employment decision-making, the use of AI systems also has the potential to perpetuate unlawful bias and automate unlawful discrimination, among other harmful outcomes. OFCCP answers questions and shares promising practices below to clarify federal contractors’ legal obligations, promote EEO, and mitigate the potentially harmful impacts of AI in employment decisions.”
Whether or not an employer is a federal contractor, the potential for AI-driven discrimination is real. The EEOC had previously issued guidance on AI with respect to disability and continues to provide more information on AI-related discrimination. Last year, the EEOC settled a case in which AI was screening out applicants based on age (the iTutorGroup case).
The first three FAQs describe AI in the employment context:
1. What is Artificial Intelligence, or AI?
As set forth in 15 U.S.C. § 9401(3), AI is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
2. What is an algorithm?
Generally, an “algorithm” is a set of instructions that can be followed by a computer to accomplish some end. Human resources (HR) software and applications use algorithms to allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees. Software or applications that include algorithmic decision-making tools are used at various stages of employment, including hiring, performance evaluation, promotion, and termination.
3. What are “automated systems” and AI in the employment context?
In the employment context, the term “automated systems” broadly describes software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. The White House Blueprint for an AI Bill of Rights includes examples of automated systems “such as workplace algorithms that inform all aspects of the terms and conditions of employment including, but not limited to, pay or promotion, hiring or termination algorithms, virtual or augmented reality workplace training programs, and electronic workplace surveillance and management systems.” For example, an automated system may help a federal contractor’s HR professional sift through hundreds or thousands of resumes, identifying applicants that meet basic requirements for a job. A federal contractor could also use AI to determine which criteria to use when making employment decisions – for instance, in the previous example, to define the parameters by which the resumes are filtered and reviewed.
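To make the resume-sifting example concrete, here is a minimal, hypothetical sketch of the kind of keyword-based filter an applicant tracking system might apply. The keywords and experience threshold are invented for illustration, not drawn from the guidance:

```python
# Hypothetical keyword-based resume filter of the kind an applicant
# tracking system might apply. All criteria here are invented examples.

REQUIRED_KEYWORDS = {"project management", "sql"}  # assumed job requirements
MIN_YEARS_EXPERIENCE = 3                           # assumed threshold

def meets_basic_requirements(resume_text: str, years_experience: int) -> bool:
    """Return True if the resume contains every required keyword and the
    candidate meets the minimum experience threshold."""
    text = resume_text.lower()
    has_keywords = all(kw in text for kw in REQUIRED_KEYWORDS)
    return has_keywords and years_experience >= MIN_YEARS_EXPERIENCE

resumes = [
    ("Led project management for SQL data migrations.", 5),
    ("Experienced barista and team lead.", 4),
]
shortlist = [text for text, yrs in resumes if meets_basic_requirements(text, yrs)]
```

Even a simple rules-based filter like this falls within OFCCP's broad definition of a technology-based selection procedure, and under the recordkeeping obligations discussed below, the substantive criteria it applies would need to be documented.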
The OFCCP then acknowledges the benefits of AI but sets out specific requirements for federal contractors when using AI:
- Maintain records and ensure confidentiality of records consistent with all OFCCP-enforced regulatory requirements. For example, contractors must keep records of resume searches, both from searches of external websites and internal resume databases, that include the substantive search criteria used.
- Cooperate with OFCCP by providing the necessary, requested information on their AI systems.
- Make reasonable accommodation to the known physical or mental limitations of an otherwise qualified applicant or employee with a disability as defined in OFCCP’s regulations, unless the federal contractor can demonstrate that the accommodation would impose an undue hardship on the operation of its business.
- An accommodation is any change in the work environment or in the way things are customarily done that enables an individual with a disability to enjoy equal employment opportunities. The contractor must make available the same level of benefits and privileges of employment to a qualified applicant or employee with a disability that are available to the average similarly situated employee without a disability.
- The reasonable accommodation obligation extends to the contractor’s use of automated systems, including but not limited to, electronic or online job application systems.
OFCCP then points out that the use of AI will likely be an “assessment” that must meet the Uniform Guidelines on Employee Selection Procedures (UGESP). Specifically, OFCCP states that federal contractors must:
- Understand and clearly articulate the business needs that motivate the use of the AI system.
- Analyze job-relatedness of the selection procedure.
- Obtain results of any assessment of system bias, debiasing efforts, and/or any study of system fairness.
- Conduct routine independent assessments for bias and/or inequitable results.
- Explore potentially less discriminatory alternative selection procedures.
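Under UGESP, the usual screening metric for adverse impact is the “four-fifths rule”: a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal sketch of that calculation, using invented applicant counts for illustration:

```python
# Illustrative adverse-impact check under the UGESP four-fifths rule.
# The applicant and selection counts below are invented for the example.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Flag each group whose selection rate is below 80% of the highest
    group's rate, the conventional UGESP adverse-impact threshold."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}
flags = four_fifths_check(rates)
# group_b's ratio is 0.30 / 0.48 ≈ 0.625, below 0.8, so it is flagged
# for possible adverse impact and would warrant further review.
```

A four-fifths ratio below 0.8 is a screening signal rather than a legal conclusion; contractors would still need the job-relatedness analysis and less-discriminatory-alternative review described above.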
OFCCP has made it clear that the federal contractor is responsible for the use of AI in its HR systems, although a case pending in the 9th Circuit Court of Appeals may extend liability to the vendor. In that case, an African American applicant is suing Workday, alleging that its product screened him out of jobs. As most know, vendors will not provide access to their algorithms and refuse to indemnify the customer if liability arises. OFCCP has confirmed that ultimate responsibility rests with the contractor: “Employers cannot escape liability for the adverse impact of discriminatory screenings conducted by a third party.”
OFCCP then identifies promising practices for contractors:
- Transparency and notice to employees and applicants. OFCCP advises federal contractors to provide clear notice about their use of AI, what data is being collected, and how employment decisions are being made.
- Stakeholder engagement. OFCCP advises federal contractors to consult with employees and/or their representatives in the design and deployment of AI systems.
- Fairness assessments and ongoing monitoring. OFCCP advises federal contractors to analyze AI tools for discriminatory impact before use, at regular intervals during use, and after use. Where issues are identified, OFCCP advises employers to take steps to reduce adverse impact or consider alternative practices. While some of this advice may sound obvious to those experienced in AI risk management, OFCCP’s inclusion of these concepts emphasizes their importance. Likewise, OFCCP also advises employers to ensure meaningful human oversight of AI-based decisions. While OFCCP does not provide greater clarity regarding how it might be defining “meaningful human oversight,” the inclusion of this concept in OFCCP’s “Promising Practices” invites contractors and employers to examine and consider how human oversight mechanisms may or may not be present in their existing AI processes.
- Documentation and recordkeeping. OFCCP advises federal contractors to retain information about the data used to develop AI systems, the basis for decision-making, and the results of any fairness assessments or monitoring.
- Vendor management. OFCCP advises federal contractors to carefully vet AI vendors. Among other things, OFCCP’s “Promising Practices” discuss the desirability of verifying the source and quality of the data being collected and analyzed by the AI system, the vendor's data protection and privacy policies, and critical information about the vendor's algorithmic decision-making tool, such as the screening criteria, the predictive nature of the system, and any differences between the data used for training and validation and the actual candidate pool. OFCCP also discusses the desirability of having access to the results of any assessment of system bias, debiasing efforts, and/or any study of system fairness conducted by the vendor and cautions that relying on vendor assurances alone is not enough – emphasizing the importance of being able to verify key information about the vendor's practices.
- Accessibility for individuals with disabilities. OFCCP advises federal contractors to ensure AI systems are accessible, allow for reasonable accommodation requests, and accurately measure job-related skills regardless of disability.
OFCCP, like a number of state and local jurisdictions, recommends that contractors provide notice to applicants, employees, and their representatives, informing people who will be subject to a process involving AI how their data will be captured, how it will be used, and how the AI system may have contributed to their hiring or rejection.
Although OFCCP's guidance applies to federal contractors, other employers can use it as a basis for developing their own compliance programs for AI in HR systems. In addition, the DOL has released its own set of guidelines. HR should ask vendors whether AI is embedded in their products, what it does, and how its results can be measured.
Source: Seyfarth