AI in Recruiting and Employment Decision-Making: New California AI Regulations Strike a Balance Between Efficiency and Algorithmic Accountability

Date: 31 October 2025
US Labor, Employment, and Workplace Safety Alert

Introduction

The use of artificial intelligence (AI) in employment decision-making is no longer a theoretical, future-tense possibility. It is here and is reshaping how employers find, assess, and promote talent. As employers’ use of AI has increased, so has the development of AI regulation at the state and local level, including in California. As discussed in K&L Gates’ 29 May 2025 alert, California took a number of steps in 2025 to regulate the development and use of AI in employment to ensure that California employers’ use of AI tools is free of discrimination and bias.1 This alert takes a closer look at one of those recently implemented regulatory actions. On 1 October 2025, the California Civil Rights Council’s (CRC) “Employment Regulations Regarding Automated-Decision Systems” (CRC Regulations), adopted in March 2025, took effect under the Fair Employment and Housing Act (FEHA). Now, every California employer covered by the FEHA must practice algorithmic accountability when using Automated Decision Systems (ADS) and AI in employment decisions.2

The intent of the CRC Regulations is clear: innovation must serve fairness and equity, not undermine them. An AI tool’s efficiency, while powerful, cannot replace human oversight, judgment, and analysis. Under the CRC Regulations, human participation is required not only to understand how the tool affects a candidate’s or employee’s opportunities but also to determine when and how to intervene in an ADS’s use. 

Defining Automated Decision System

Under the CRC Regulations, an ADS is defined as:

“A computational process that makes a decision or facilitates human decision making regarding an employment benefit…derived from [or] using artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques.”3 

The CRC Regulations’ definition of an “Artificial Intelligence System” is similarly broad—any “machine-based system that infers, from the input it receives, how to generate outputs,” whether those outputs are predictions, recommendations, or decisions.4 

In practice, that scope captures most of the AI-based technology now shaping employment decisions, such as:

  • Resume filters that rank or score candidates;
  • Online assessments measuring aptitude, personality, or “fit”;
  • Algorithms targeting specific audiences for job postings;
  • Video-interview analytics evaluating tone, word choice, or expression; and
  • Predictive tools drawing on third-party data.

If a tool influences an employment outcome, directly or indirectly, it likely qualifies as an ADS under the CRC Regulations.

Key Compliance Duties and Risks

The CRC Regulations establish a framework that blends civil rights principles with technical oversight. Employers must now take the following steps when implementing ADS and Artificial Intelligence Systems:

Prevent Discrimination (Direct and Indirect)

It is unlawful to use an ADS or selection criteria that create a disparate impact on a protected class under the FEHA. The liability analysis does not stop at the question of intent; impact alone can create exposure. 

Conduct Bias Testing and Audits

ADS tools must undergo anti-bias testing or independent audits that are timely, repeatable, and transparent. A single validation at launch is not enough and will not, on its own, demonstrate reasonable anti-bias measures. Fairness checks must be integrated into regular, systematized maintenance practices.
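To make the idea of a repeatable fairness check concrete, the illustrative Python sketch below computes per-group selection rates from ADS screening outcomes and flags any group whose rate falls below four-fifths of the highest group’s rate, a common rule of thumb in US disparate-impact analysis. The CRC Regulations do not prescribe this or any other specific statistical test, and the data, group labels, and threshold here are hypothetical.

```python
# Illustrative only: a simple adverse-impact-ratio check on ADS outcomes.
# The four-fifths (0.8) threshold is an EEOC rule of thumb, not a CRC mandate.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute per-group selection rates from (group, selected) pairs."""
    totals: Counter = Counter()
    chosen: Counter = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical resume-screening outcomes: (group label, passed screen?).
    outcomes = ([("A", True)] * 40 + [("A", False)] * 60
                + [("B", True)] * 25 + [("B", False)] * 75)
    rates = selection_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```

Run on a regular cadence and retained along with its results, a check of this kind is the sort of repeatable, transparent testing the CRC Regulations contemplate.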

Provide Notice and Transparency

Applicants and employees must receive pre-use and post-use notices explaining when and how ADS tools are used, what rights they have to opt out, and how to appeal or request human review.

Assert an Affirmative Defense Through Good-Faith Efforts

Employers facing claims under the FEHA may defend themselves by showing reasonable, well-documented anti-bias measures, including but not limited to audits, corrective actions, and continuous oversight. But that defense is only as strong as the evidence supporting it.

Assume Responsibility for Vendors and Agents

Employers cannot outsource accountability. Bias introduced by a vendor or third-party platform remains the employer’s legal and ethical burden.

Retain Records for Four Years 

The FEHA now requires retention of ADS-related documentation for at least four years. This retention requirement includes but is not limited to data inputs, outputs, decision criteria, audit results, and correspondence.
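As an illustration only, the Python sketch below shows one way to structure an ADS decision record so that inputs, outputs, criteria, and the reviewing human are captured together and held through the retention window. The field names and storage approach are hypothetical assumptions, not requirements of the CRC Regulations.

```python
# Illustrative only: a hypothetical ADS decision record with a retention check.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=4 * 365)  # at least four years under the FEHA

@dataclass
class ADSDecisionRecord:
    candidate_id: str
    tool_name: str          # which ADS produced the output
    inputs: dict            # data supplied to the tool
    output: str             # e.g., a score, rank, or recommendation
    decision_criteria: str  # criteria the tool applied
    human_reviewer: str     # who reviewed or ratified the outcome
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def eligible_for_deletion(self, now: datetime) -> bool:
        """True only once the four-year retention window has elapsed."""
        created = datetime.fromisoformat(self.created_at)
        return now - created > RETENTION_PERIOD

# Usage: serialize each record so an audit can trace inputs to outcomes.
record = ADSDecisionRecord(
    candidate_id="c-1001",
    tool_name="resume-screener-v2",
    inputs={"resume_hash": "abc123", "role": "analyst"},
    output="rank=7/120",
    decision_criteria="keyword and experience weighting",
    human_reviewer="hr-reviewer-14",
)
print(json.dumps(asdict(record), indent=2))
print(record.eligible_for_deletion(datetime.now(timezone.utc)))  # False today
```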

Through these requirements, the CRC makes it clear that, while automation in decision-making is not prohibited, employers must be responsible stewards when implementing such tools.

Practicing Algorithmic Accountability

At its crux, the CRC Regulations’ framework reflects a push toward algorithmic accountability. Algorithmic accountability requires that technology partner with human judgment. Employers cannot claim ignorance of how an algorithm operates or what data an AI tool uses. To the contrary, under the CRC Regulations, an employer that uses AI without understanding its foundations and logic exposes itself to claims of negligence and potential liability.

The CRC highlights the importance of retaining human input in decision-making despite the use of AI tools. At a minimum, employers must incorporate a human element at some point in the lifecycle of an employment decision to avoid running afoul of the CRC Regulations. Accountability means transparency in process, traceability in data, and intervention when fairness is jeopardized. It means partnering with AI and leveraging its strengths without surrendering ethical, legal, and managerial responsibilities. 
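The hypothetical Python sketch below illustrates that principle in miniature: the ADS output is treated as a recommendation that a named human reviewer must ratify or override before any outcome becomes final. The workflow and names are illustrative assumptions, not a design mandated by the CRC Regulations.

```python
# Illustrative only: an ADS output facilitates, but never finalizes, a decision.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ADSRecommendation:
    candidate_id: str
    recommendation: str  # e.g., "advance" or "reject"
    rationale: str       # explanation retained for the record

@dataclass
class FinalDecision:
    candidate_id: str
    outcome: str
    reviewer: str
    reviewer_notes: str

def finalize(rec: ADSRecommendation, reviewer: str,
             override: Optional[str] = None, notes: str = "") -> FinalDecision:
    """A human reviewer must ratify or override every ADS recommendation."""
    if not reviewer:
        raise ValueError("a human reviewer is required before finalizing")
    outcome = override if override is not None else rec.recommendation
    return FinalDecision(rec.candidate_id, outcome, reviewer, notes)

# Usage: the reviewer overrides a rejection after examining the full file.
rec = ADSRecommendation("c-2042", "reject", "low keyword-match score")
decision = finalize(rec, reviewer="hr-reviewer-14", override="advance",
                    notes="score penalized a nontraditional resume format")
print(decision)
```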

Best Practices

To comply with the CRC Regulations, facilitate a culture of algorithmic accountability, and reduce risk, employers should consider the following practices:

Invest in Education and Awareness

Empower Human Resources and leadership teams with a foundational understanding of ADS: its potential, its blind spots, and the social dynamics it can amplify. Oversight begins with literacy.

Engage Independent Auditors

External bias audits and model validations provide both credibility and objectivity. They also strengthen an employer’s affirmative defense by demonstrating due diligence.

Adopt Continuous Review and Monitoring

Bias is not a static risk; it can shift as data, users, and markets evolve. Regular audits, outcome monitoring, and feedback loops should become part of day-to-day governance. Consult with outside counsel to build an appropriate cadence for audit-related protocols.

Institutionalize Documentation

Establish systems that capture, retain, and preserve ADS-related records, including but not limited to inputs, model parameters, audit logs, and decisions. These records must be maintained for at least the required four years. 

Preserve Human Oversight

Employers should design decision flows that invite human touchpoints: review, challenge, correction, and intervention.

The Bottom Line: Partner with AI, Do Not Defer to It

Ignorance of the law has never been a defense. Now, neither is efficiency. The CRC Regulations make clear that progress in automation must be matched by equal progress in accountability, and that automation must not replace human oversight.

Our Labor, Employment, and Workplace Safety practice group lawyers regularly counsel clients on a wide variety of topics related to emerging issues in labor, employment, and workplace safety law, and they are well-positioned to provide guidance and assistance to clients on AI developments.

1 See K&L Gates Legal Alert, 2025 Year-To-Date Review of AI and Employment Law in California, 29 May 2025, https://www.klgates.com/2025-Review-of-AI-and-Employment-Law-in-California-5-29-2025.

2 Employer is defined as “[a]ny person or individual engaged in any business or enterprise regularly employing five or more individuals, including individuals performing any service under any appointment, contract of hire or apprenticeship, express or implied, oral or written… Employees located inside and outside of California are counted in determining whether employers are covered under the Act. However, employees located outside of California are not themselves covered by the protections of the [California Fair Employment and Housing Act] if the allegedly unlawful conduct did not occur in California or the allegedly unlawful conduct was not ratified by decision makers or participants in unlawful conduct located in California.” Cal. Code Regs. tit. 2, § 11008.1(e).

3 Cal. Code Regs. tit. 2, § 11008.1(a).

4 Cal. Code Regs. tit. 2, § 11008.1(c).

This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.
