New York City’s AI Law is all Bark and No Bite - American Society of Employers - Anthony Kaylin


Last July, New York City (NYC) implemented Local Law 144, its artificial intelligence law for NYC employers. The law requires employers to audit automated employment decision tools for bias and to notify candidates when those tools are used in hiring.

Specifically, NYC’s law defines “automated employment decision tool,” or AEDT, as “any computational process, derived from machine learning, statistical modeling, data analytics or artificial intelligence, that issues simplified output, including a score, classification or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” 

The law further clarifies the phrase “substantially assist or replace discretionary decision making” to mean tools that:

  • Rely solely on a simplified output, such as a score, tag, classification or ranking, with no other factors considered.
  • Use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set.
  • Use a simplified output to overrule conclusions derived from other factors including human decision making.

A separate segment of the law adds two criteria, defining machine learning, statistical modeling, data analytics, and AI tools as “mathematical, computer-based techniques” that generate a prediction (an expected outcome for an observation) or a classification (an assignment of an observation to a particular group). Employers are not prohibited from using AEDTs, but they must audit their use of the tools for bias against any protected category.
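
What an audit actually computes is fairly simple. Under the city’s implementing rules, a bias audit centers on selection rates and impact ratios calculated per demographic category. The sketch below shows that arithmetic in Python; the category labels and applicant counts are hypothetical, used only for illustration:

    # Minimal sketch of the selection-rate / impact-ratio arithmetic behind a
    # Local Law 144-style bias audit. All categories and counts are hypothetical.

    # category -> (applicants, selected)
    outcomes = {
        "Category A": (500, 150),
        "Category B": (400, 100),
        "Category C": (300, 60),
    }

    # Selection rate = selected / applicants, computed per category.
    rates = {cat: sel / n for cat, (n, sel) in outcomes.items()}

    # Impact ratio = a category's selection rate divided by the selection rate
    # of the most-selected category (1.00 means parity with the top group).
    top_rate = max(rates.values())
    for cat, rate in rates.items():
        print(f"{cat}: selection rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")

Notably, the law requires employers to publish figures like these but sets no threshold at which a tool fails the audit, one more reason critics call it toothless.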

This rule was touted as a model for AI regulation and enforcement, but reality has proved otherwise. The law is simply ineffective.

Because the law effectively lets employers decide whether to comply, and because it offers no safe harbor to employers that do comply with the bias audit posting requirement, experts say the law has no bite. Employers can opt out simply by keeping a human in the decision-making chain wherever the tool is used. A Cornell University study published in January found that most New York City employers have opted out of complying: of the 391 employers surveyed, just 18 posted the required audit reports to their websites, and a mere 13 followed the directive to publish notices alerting applicants that an automated employment decision tool was being used to evaluate them.

The New York City Department of Consumer and Worker Protection is the regulatory authority for the law, but it lacks the power to investigate employers’ implementation on its own initiative. Enforcement instead depends on complaints from applicants, who may not even know whether AI is being used in the hiring process.

Further, the law’s transparency requirement could leave employers vulnerable to lawsuits if a posted audit shows adverse impact. Employers are therefore ignoring the law while staying within its terms. Meanwhile, New York, Connecticut, and California are advancing bills that would regulate AI in the workplace. California’s A.B. 2390, for example, would bar employers from using AI tools that lead to discrimination, require them to alert workers when an AI tool is being used on them, and create a private cause of action for individuals to sue over violations.

At the federal level, employers must still do their due diligence. Dan Kuang, a nationally known expert on AI, states: “The single most powerful ‘check’ to AI-Bias is the Uniform Guidelines.  Contrary to common misconception, the Guidelines are NOT outdated or irrelevant; in the AI space, the Guidelines have never been more relevant.  The Guidelines forces a fundamental but very uncomfortable question: ‘What's the value proposition relative to adverse impact from biased algorithms?’”

Older AI tools are having a difficult time with compliance. As Kuang points out, “They cannot demonstrate ‘value’ but there's tons of adverse impact.”

Brian Marentette of Biddle expresses a similar sentiment: “While NYC Local Law 144 was a step in the right direction, it’s obviously woefully lacking the kind of rigor that needs to go into evaluating employment tools for disparate impact or bias. The Uniform Guidelines on Employee Selection Procedures are given great deference by the courts and continue to be the standards to which employee selection tools are held in disparate treatment or impact claims under Title VII. The Guidelines and caselaw over the last 50+ years make it pretty clear how practices, procedures, tests, or other decision-making tools in HR should be examined for disparate impact, validity, and potential employment discrimination. You see very little of the same standards in NYC Local Law 144. Employers who elect to do a bias audit may have a false sense of security that their AI tool is ‘safe.’ The tool’s outcomes may look fine per NYC LL144, but at a Title VII level could be very concerning.”
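
The gap Marentette describes can be made concrete. The Uniform Guidelines’ four-fifths rule treats a selection rate below 80% of the highest group’s rate as evidence of adverse impact, and Title VII disparate impact analysis also weighs statistical significance, commonly framed as a difference of roughly two standard deviations. The Python sketch below runs both checks on hypothetical numbers deliberately chosen so the four-fifths rule passes while the disparity is still statistically significant; the figures are illustrative only:

    import math

    # Hypothetical outcomes: (applicants, selected) for two groups, chosen so
    # the four-fifths rule passes while the disparity remains significant.
    n_a, x_a = 2000, 600   # selection rate 0.30
    n_b, x_b = 2000, 500   # selection rate 0.25

    rate_a, rate_b = x_a / n_a, x_b / n_b

    # Four-fifths rule (Uniform Guidelines, 29 CFR 1607.4(D)): adverse impact
    # is generally indicated if the lower rate is under 80% of the higher.
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(f"Impact ratio {impact_ratio:.2f}: "
          f"{'flag' if impact_ratio < 0.8 else 'pass'} under the 4/5 rule")

    # Two-proportion z-test, the kind of significance check weighed in
    # Title VII disparate impact analysis (roughly two standard deviations).
    p = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    print(f"z = {z:.2f}: {'significant' if abs(z) >= 2 else 'not significant'}")

Here the impact ratio of 0.83 passes the four-fifths rule, yet the disparity is significant at well over two standard deviations (z ≈ 3.5), exactly the kind of result that could look “fine” in a posted audit but draw scrutiny under Title VII.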

With the prospect of inconsistent state and local regulations, employers should recognize that AI in its current, immature form is problematic. Large language models (LLMs) used in any hiring process are also problematic: they cater to the educated speaker and do not accurately handle slang, idioms, dialectal speech patterns, or speech impediments and the like. Employers who want to implement AI in hiring therefore need to review their tools specifically for compliance concerns. As a note, ASE is a resource for these reviews. For more information, contact Tony Kaylin at [email protected].


Source: Law360, 3/1/24
