Disparate impact is a difficult theory under which to prove discrimination. Generally, it targets a practice or policy that is neutral on its face yet has a significantly adverse impact on an affected group. The disparate impact analysis was first articulated in Griggs v. Duke Power Co., 401 U.S. 424 (1971), which held that Title VII “proscribes not only overt discrimination but also practices that are fair in form, but discriminatory in operation.” In 1991, Congress amended Title VII to add Section 703(k), which codified how “an unlawful employment practice based on disparate impact” could be established.
In Griggs, Duke Power required a high school diploma for promotion. The company later added two tests that, in theory, would allow employees without a diploma to qualify for promotion. Once those tests were implemented, however, White employees proved roughly ten times more likely to pass than Black employees. The Supreme Court ruled that because the company’s employment requirements bore no relationship to an applicant’s ability to perform the job, they discriminated against Black employees regardless of the company’s intent.
In Teamsters v. United States, 431 U.S. 324 (1977), the Supreme Court endorsed the use of statistical evidence to show that an employer’s practices had a discriminatory effect, although it ultimately held that the seniority system, established through collective bargaining agreements between the employer and the union, was protected as a bona fide seniority system under Section 703(h) even though it perpetuated the effects of prior discrimination.
Given these issues, especially those surrounding testing, the EEOC in 1978 adopted the Uniform Guidelines on Employee Selection Procedures, or “UGESP,” under Title VII. See 29 C.F.R. Part 1607. UGESP gives employers uniform guidance on how to determine whether their tests and selection procedures are lawful under Title VII’s disparate impact theory. It outlines three ways employers can show that their employment tests and other selection criteria are job-related and consistent with business necessity. These methods of demonstrating job-relatedness are called “test validation,” and UGESP provides detailed guidance on each.
However, the Supreme Court later questioned whether statistical evidence alone can prove discrimination. Although Ricci v. DeStefano, 557 U.S. 557 (2009), was a disparate treatment case, Justice Scalia wrote in his concurring opinion: “I join the Court’s opinion in full, but write separately to observe that its resolution of this dispute merely postpones the evil day on which the Court will have to confront the question: Whether, or to what extent, are the disparate-impact provisions of Title VII of the Civil Rights Act of 1964 consistent with the Constitution’s guarantee of equal protection?” Justice Scalia was no proponent of disparate impact theory and openly questioned its constitutionality.
Fast forward to today: on April 23, 2025, President Trump issued an Executive Order entitled “Restoring Equality of Opportunity and Meritocracy.” The EO states: “It is the policy of the United States to eliminate the use of disparate-impact liability in all contexts to the maximum degree possible.” It directs all federal agencies, including the EEOC and the Department of Justice, to deprioritize enforcement and litigation of disparate impact claims. Further, UGESP was due for renewal at the end of March and has not been renewed as of this writing.
Although the federal government may be pulling back from disparate impact, the states may not be. If UGESP is not renewed, plaintiffs’ attorneys will have a field day establishing the new rules to play by. The disparate impact approach nonetheless remains a valuable tool for employers to examine their tests, policies, and practices more closely, offering a way to proactively identify and address potential areas of discrimination. To carry out this type of analysis, employers need to collect demographic data, such as race and gender, for both employees and applicants.
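As an illustration of what such an analysis might look like, UGESP’s well-known “four-fifths rule” (29 C.F.R. § 1607.4(D)) compares each group’s selection rate to the rate of the most-selected group; a ratio below 80% is generally regarded as evidence of adverse impact. The Python sketch below uses entirely hypothetical group names and applicant/hire counts.

```python
# Minimal sketch of UGESP's four-fifths rule (29 C.F.R. 1607.4(D)).
# All group names and counts below are hypothetical.

def selection_rates(applicants, hires):
    """Selection rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def adverse_impact(applicants, hires, threshold=0.8):
    """Return groups whose selection rate is below `threshold`
    (four-fifths) of the highest group's rate, with their ratios."""
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

applicants = {"Group A": 100, "Group B": 80}
hires = {"Group A": 50, "Group B": 20}

# Group A rate = 0.50, Group B rate = 0.25; ratio 0.25/0.50 = 0.5 < 0.8
print(adverse_impact(applicants, hires))  # {'Group B': 0.5}
```

A real analysis would, of course, use actual applicant-flow data and consider statistical significance as well, which is one reason to conduct it under attorney-client privilege.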
Be wary of vendor claims as well. Some sales representatives from testing companies may claim their tests have “100% validation,” but that is misleading: in reality, well-designed tests typically show a validity correlation between 0.2 and 0.4. Any off-the-shelf test should also be validated specifically for the roles it is being used to assess, a process known as validity transportability. In short, proceed with caution, and be sure to conduct any analysis under attorney-client privilege.
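For context on what a validity correlation of 0.2 to 0.4 actually measures: criterion-related validation typically correlates applicants’ test scores with a later measure of job performance. The sketch below computes a Pearson correlation coefficient from entirely made-up scores and performance ratings, chosen so the result lands in the typical range.

```python
# Hypothetical criterion-related validity check: correlate test scores
# with later job-performance ratings. All data below is made up.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_scores = [62, 75, 58, 90, 70, 84, 66, 79]   # hypothetical test scores
performance = [3.4, 3.0, 3.2, 3.6, 2.9, 3.5, 3.3, 3.1]  # hypothetical ratings

r = pearson_r(test_scores, performance)
print(round(r, 2))  # 0.37 for this made-up data, within the typical 0.2-0.4 band
```

A correlation this modest is normal: job performance depends on far more than any single test, which is why “100% validation” claims should raise a red flag.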
Sources: Seyfarth Shaw (Apr. 24, 2025); EEOC.