New Jersey's Attorney General and Division on Civil Rights Start 2025 With Guidance on AI Use in Hiring

Date: 1 May 2025
US Labor, Employment, and Workplace Safety Alert

On 9 January 2025, New Jersey Attorney General Matthew J. Platkin and the New Jersey Division on Civil Rights (DCR) announced the launch of a Civil Rights and Technology Initiative (the Initiative) “to address the risks of discrimination and bias-based harassment stemming from the use of artificial intelligence (AI) and other advanced technologies.” As part of the Initiative, the DCR issued guidance on how the New Jersey Law Against Discrimination (LAD) applies to discrimination resulting from the use of AI (the Guidance). The Guidance addresses the use of AI in several contexts but is particularly relevant for employers who use AI to help screen applicants and assess employee performance.

Overview

Algorithmic Discrimination

The Guidance explains that New Jersey’s long-standing LAD applies to “algorithmic discrimination,” meaning discrimination resulting from an employer’s use of AI or other automated decision-making tools, in the same way it applies to other discriminatory conduct. Indeed, even if the employer did not develop the AI tool and is unaware of the tool’s algorithmic discrimination, the employer can still be liable under the LAD for discrimination that results from its use of the tool. Therefore, employers must carefully consider how they use AI to avoid potential liability for algorithmic discrimination.

Disparate Treatment and Disparate Impact Discrimination

The Guidance gives several examples of algorithmic discrimination. It notes that AI tools can engage in disparate treatment discrimination if they are designed or used to treat members of a protected class differently. Relatedly, an entity could be liable for disparate treatment discrimination if it selectively uses AI to assess only members of a particular class, such as using AI to screen only Black applicants but not applicants of other races. Moreover, even if an AI tool is not used selectively and does not directly consider a protected characteristic, it may impermissibly “make recommendations based on a close proxy for a protected characteristic,” such as race or sex.
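By way of illustration only, the sketch below shows one way an employer’s audit team might check whether an input feature closely tracks a protected characteristic. The field names, data, and 0.8 threshold are hypothetical assumptions for this sketch and do not come from the Guidance.

    # Hypothetical sketch: flag screening features that closely track a
    # protected characteristic (a "close proxy"). Field names and the 0.8
    # threshold are illustrative assumptions, not part of the Guidance.
    from collections import Counter, defaultdict

    def proxy_report(records, feature, protected_attr, dominance=0.8):
        """For each value of `feature`, measure how concentrated it is in a
        single protected group; heavily concentrated values suggest the
        feature may act as a proxy and warrant closer review."""
        by_value = defaultdict(Counter)
        for r in records:
            by_value[r[feature]][r[protected_attr]] += 1
        flagged = {}
        for value, counts in by_value.items():
            group, n = counts.most_common(1)[0]
            share = n / sum(counts.values())
            if share >= dominance:
                flagged[value] = (group, round(share, 2))
        return flagged

    # Made-up applicant data for illustration:
    applicants = [
        {"zip_code": "07102", "race": "Black"},
        {"zip_code": "07102", "race": "Black"},
        {"zip_code": "07102", "race": "White"},
        {"zip_code": "07928", "race": "White"},
        {"zip_code": "07928", "race": "White"},
    ]
    print(proxy_report(applicants, "zip_code", "race"))
    # {'07928': ('White', 1.0)} -- this ZIP code perfectly tracks one group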

AI tools can also engage in disparate impact discrimination in violation of the LAD if their facially nondiscriminatory criteria have a disproportionate negative effect on members of a protected class. The Guidance gives the example of a company using an AI tool to assess contract bids where the tool disproportionately screens out bids from women-owned businesses.
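The Guidance does not prescribe a particular statistical test, but disparate impact is commonly measured with the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for one group that is less than 80% of the highest group’s rate is generally regarded as evidence of adverse impact. Below is a minimal sketch of that check, using made-up numbers loosely modeled on the Guidance’s contract-bid example.

    # Hypothetical sketch of a four-fifths (80%) rule check on an AI tool's
    # screening outcomes. Group labels and counts are made up for illustration.
    def adverse_impact(outcomes, threshold=0.8):
        """outcomes maps group -> (selected, total). Returns each group's
        impact ratio relative to the highest-selecting group and whether the
        ratio falls below the threshold, indicating potential disparate
        impact that warrants review."""
        rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
        best = max(rates.values())
        return {g: (round(r / best, 2), r / best < threshold)
                for g, r in rates.items()}

    print(adverse_impact({
        "women-owned": (18, 60),  # 30% of bids screened in
        "other":       (45, 90),  # 50% of bids screened in
    }))
    # {'women-owned': (0.6, True), 'other': (1.0, False)} -- 0.6 < 0.8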

Reasonable Accommodations

The Guidance also cautions that an employer’s use of AI tools may violate the LAD if they “preclude or impede the provision of reasonable accommodations.” For example, when used in hiring, AI “may disproportionately exclude applicants who could perform the job with a reasonable accommodation.” And if an employer uses AI to track its employees’ productivity and break time, it may “disproportionately flag for discipline employees who are allowed additional break time to accommodate a disability.” 

Liability

Notably, the Guidance takes a broad view of who can be held liable for algorithmic discrimination. Consistent with other AI-related guidance and laws, the Guidance makes clear that employers cannot shift liability under the LAD to their AI vendors or external developers. This is the case even if the employer does not understand the tool’s inner workings.

Best Practices

To decrease the risk of liability under the LAD, employers should take steps to ensure that the AI tools they use to make or inform employment decisions do not introduce algorithmic bias or otherwise violate the LAD. These steps include:

  1. Creating an AI governance group, composed of a cross-section of the organization (such as members of the legal, human resources, privacy, communications, and IT departments), that is responsible for overseeing the implementation of AI tools.
  2. Implementing AI-related policies and procedures.
  3. Conducting training on the AI tools and algorithmic bias and only allowing employees who have completed the training to use the AI tools.
  4. Thoroughly vetting AI vendors and tools.
  5. Securing appropriate contract provisions from AI vendors providing that (A) the vendor’s tools comply with all applicable laws, including, without limitation, all labor and employment laws; (B) the vendor will provide the employer any and all information reasonably requested to understand the algorithms behind the tool and how the tool complies with applicable laws, ensuring the tool is not a “black box”; (C) where possible, a third party acceptable to the employer will audit the tool annually for such compliance, with costs shared in an agreed manner; and (D) the vendor will fully indemnify the employer for any breach of these provisions, backed by required liability insurance policies.
  6. Swiftly addressing any issues identified in the audits or tests.
  7. Reviewing any employment practices liability insurance or other applicable insurance policies to see if coverage is available.
  8. Ensuring there is still a human element to any decision-making involving an AI tool.

Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of concerns related to emerging issues in labor, employment, and workplace safety law and are well-positioned to provide guidance and assistance to clients on AI developments.

New Jersey Division on Civil Rights, Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination (Jan. 2025), https://www.nj.gov/oag/newsreleases25/2025-0108_DCR-Guidance-on-Algorithmic-Discrimination.pdf

This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.
