AI in Hiring: 4 Employment Laws Every HR Team Should Know

By Brandis Anderson

AI is everywhere in the workplace—from resume screening to performance reviews. But as these tools become more common, so do the rules that govern them. In the past few years, a wave of new state and local laws has landed, and they’re reshaping how employers can use AI in hiring and employment decisions. If you’re in HR, Compliance, or People Ops, here’s what you need to know to keep your company ahead of the curve (and out of hot water).

1. California Civil Rights Council AI Employment Regulations

Effective Date: October 1, 2025

What is it?

California’s new regulations update the state’s Fair Employment and Housing Act (FEHA) to directly address how artificial intelligence (AI) and automated decision-making systems (ADS) are used in employment. The rules are designed to prevent discrimination by making employers responsible for the impact of any AI or algorithmic tools used in hiring, promotion, or other job decisions—even if those tools come from a third-party vendor. The regulations reinforce that “the AI did it” is not a defense if bias occurs.

Who’s Covered?

Any employer in California using AI or automated tools for employment decisions, including those using third-party vendors.

What’s Required?

Here’s what employers need to know about the key requirements under the law:

  • Anti-Discrimination: It’s unlawful to use AI or ADS that discriminates against applicants or employees based on protected characteristics (disability, race, gender, etc.), even if the bias is unintentional.
  • Third-Party Liability: Employers are responsible for the actions of vendors and agents who use AI on their behalf.
  • Recordkeeping: Employers must keep detailed records of how AI tools are used and the data they process for at least four years.
  • Notice & Transparency: Employers using automated decision-making technology (ADMT) must provide clear notice to employees and applicants about:
    • The purpose of the technology
    • How it works
    • The right to opt out (where applicable)
    • How to access their data
    • Anti-retaliation rights

Note: Employers have until January 1, 2027, to comply with these notice requirements.

What Counts as “AI” or “ADS”?

  • Application screening tools (such as resume ranking)
  • Performance evaluation analytics
  • Individual productivity monitoring software
  • Any system that influences employment decisions (hiring, promotion, discipline, scheduling, compensation, or termination)

Recommended Action:

  • Audit your current and planned AI tools for bias (see the sketch after this list)
  • Update contracts with vendors to clarify compliance responsibilities
  • Train HR and hiring managers on the new rules and documentation requirements
  • Document all compliance efforts and keep records for at least four years
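
For the bias audit in the first item, a common starting point is the EEOC's "four-fifths" rule of thumb: a group whose selection rate is less than 80% of the highest group's rate may indicate adverse impact. Here's a minimal sketch in Python, assuming you can export per-group screening outcomes from your AI tool; the group labels and counts are hypothetical, and this is a screening heuristic, not a legal test.

```python
# Minimal adverse-impact check using the EEOC "four-fifths" rule of thumb.
# Input: hypothetical per-group (selected, total) counts exported from a
# screening tool.

def adverse_impact(outcomes: dict[str, tuple[int, int]],
                   threshold: float = 0.8) -> dict[str, float]:
    """Return groups whose selection rate falls below `threshold` times
    the highest group's rate, mapped to their impact ratio."""
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items() if r / top < threshold}

# Hypothetical screening outcomes: (selected, total applicants) per group
screening = {"Group A": (45, 100), "Group B": (28, 100), "Group C": (40, 90)}
print(adverse_impact(screening))  # -> {'Group B': 0.62}
```

A ratio below 0.8 is not automatically a violation, but it is a widely used signal that a tool deserves closer scrutiny.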

2. Colorado Artificial Intelligence Act (SB 24-205)

Effective Date: June 30, 2026 (delayed from February 2026)

What is it?

The Colorado Artificial Intelligence Act is a comprehensive state law that sets new standards for how “high-risk” AI systems can be used in employment and other critical areas. A “high-risk” AI system is defined as any AI that makes or substantially influences consequential decisions in employment, such as hiring, firing, promotion, or compensation.

The law aims to prevent algorithmic discrimination by requiring both the developers who build these systems and the deployers who use them to take proactive steps to manage risk, ensure transparency, and protect individuals from bias.

Who’s Covered?

Employers and vendors operating in Colorado who develop or use “high-risk” AI for employment decisions. Small employers (fewer than 50 full-time employees) may be exempt from certain requirements.

What’s Required?

Here are the key requirements:

  • Vendors/Developers:
    • Must disclose foreseeable uses and risks of their AI
    • Maintain public disclosures, including the types of data used and instructions for operating the system responsibly
    • Notify the attorney general and deployers within 90 days if discrimination is discovered
  • Employers/Deployers:
    • Implement a risk management policy and program
    • Conduct annual impact assessments and update after any major AI system changes
    • Make public disclosures about AI use and mitigation plans
    • Notify individuals when high-risk AI is used for consequential decisions (e.g., hiring, firing, promotion)
    • If an adverse decision is made, provide documentation and an opportunity to correct errors or appeal

Recommended Action:

  • Start building a risk management plan now, modeled on recognized frameworks like the NIST AI Risk Management Framework
  • Prepare to document and disclose how you use AI in employment decisions
  • Set up internal processes for reporting and responding to AI-driven discrimination
  • Inventory all high-risk AI systems and document their use (a sample inventory record follows this list)
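
To make the inventory item concrete, here is one illustrative way to structure an AI system record in Python. The field names are assumptions for this sketch, not terms defined in SB 24-205; adapt them to whatever your risk management framework requires.

```python
# Illustrative inventory record for a high-risk AI system. Field names are
# assumptions for this sketch, not terms defined in SB 24-205.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str                     # e.g., "resume ranking model v3"
    vendor: str                   # developer or supplier of the system
    decision_types: list[str]     # hiring, promotion, compensation, ...
    data_categories: list[str]    # resume text, assessment scores, ...
    last_impact_assessment: date  # the law expects annual assessments
    mitigations: list[str] = field(default_factory=list)

    def assessment_overdue(self, today: date) -> bool:
        """Flag records whose last impact assessment is over a year old."""
        return (today - self.last_impact_assessment).days > 365

# Hypothetical entry; in practice you'd maintain one record per system.
inventory = [
    AISystemRecord(
        name="Resume screener",
        vendor="ExampleVendor Inc.",
        decision_types=["hiring"],
        data_categories=["resume text", "assessment scores"],
        last_impact_assessment=date(2025, 1, 15),
    )
]
print([r.name for r in inventory if r.assessment_overdue(date.today())])
```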

3. Illinois HB 3773 – AI Employment Discrimination Law

Effective Date: January 1, 2026

What is it?

Illinois HB 3773 amends the state’s Human Rights Act to specifically address the risks of discrimination from AI in employment. The law prohibits employers from using AI in ways that could result in bias against protected groups and requires transparency when AI is used in hiring, promotion, or other job decisions. It also bans the use of ZIP codes as a proxy for protected characteristics.

Who’s Covered?

All employers in Illinois using AI for recruitment, hiring, promotion, or other employment decisions. At this stage, the law does not define or limit what counts as “use of” AI, so even minor or indirect uses may be covered until further regulatory guidance is issued.

What’s Required?

Here are the key requirements:

  • Anti-Discrimination: Employers cannot use AI in ways that result in discrimination against any protected class (intentional or unintentional).
  • No “Digital Redlining”: Employers cannot use ZIP codes as a proxy for protected characteristics.
  • Notice: Employers must notify employees and candidates when AI is used in employment decisions, including the specific purposes.
  • Complaints: Individuals can file complaints with the Illinois Department of Human Rights.

Recommended Action:

  • Review your AI tools for potential bias, especially in screening criteria.
  • Update your candidate and employee communications to include required notices.
  • Avoid using location data (like ZIP codes) as a filter in hiring algorithms (see the sketch below).
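
One simple engineering safeguard, sketched below, is to strip geographic fields from candidate records before anything reaches the scoring model, so ZIP codes can never act as a proxy. This assumes your pipeline passes candidate data as dictionaries; the field names are hypothetical.

```python
# Sketch: drop geographic fields from candidate records before scoring,
# so ZIP codes and similar location data never enter the model.
# Field names are hypothetical; adjust to your own schema.
GEOGRAPHIC_FIELDS = {"zip_code", "postal_code", "city", "county",
                     "latitude", "longitude"}

def strip_geography(candidate: dict) -> dict:
    """Return a copy of the candidate record without geographic fields."""
    return {k: v for k, v in candidate.items() if k not in GEOGRAPHIC_FIELDS}

candidate = {"zip_code": "60601", "years_experience": 7, "skills": ["sql"]}
print(strip_geography(candidate))  # {'years_experience': 7, 'skills': ['sql']}
```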

4. New York City Local Law 144 (Automated Employment Decision Tools Law)

Effective Date: July 5, 2023

What is it?

NYC Local Law 144 was the first local law in the U.S. to regulate the use of Automated Employment Decision Tools (AEDTs) in hiring and promotion. An AEDT is defined as any computational process—using machine learning, statistical modeling, data analytics, or AI—that issues a simplified output (like a score, classification, or recommendation) and is used to substantially assist or replace discretionary decision-making for employment decisions. This includes tools that screen, rank, or recommend candidates for jobs or promotions in NYC. 

The law is all about transparency and fairness: it requires employers using AI-driven tools for NYC roles to conduct annual independent bias audits, publicly post audit results, and notify candidates and employees about how these tools are used and what data is considered.

Who’s Covered?

Any employer or employment agency using AI-driven tools to make employment decisions about candidates or employees in New York City, even if the company is based elsewhere. The compliance obligations fall on the employer, so if you use a vendor-provided AEDT, you will need the vendor to coordinate the required audit and share enough data for you to comply.

What’s Required?

Here are the key requirements:

  • Annual Bias Audits: AEDTs must undergo an independent bias audit each year that calculates selection rates and impact ratios for sex and race/ethnicity categories, both individually and intersectionally (see the sketch after this list). A summary of the results must be posted publicly on the employer’s website.
  • Advance Notice: Candidates and employees must be notified at least 10 business days before being assessed by an AEDT. Notices must include:
    • That an AEDT will be used
    • The job qualifications and characteristics assessed
    • How to request an alternative selection process or accommodation (the law does not require the employer to actually provide one)
  • Data Transparency: Employers must disclose the type and source of data used by the AEDT and their data retention policy upon request.
  • Penalties: Non-compliance can result in civil penalties of $500 for a first violation and up to $1,500 for each subsequent violation, with each day of non-compliant use counting as a separate violation.
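
The “impact ratio” at the heart of the audit is simple arithmetic: each category's selection rate divided by the most selected category's rate. The sketch below applies that math across intersectional sex and race/ethnicity categories; the labels and counts are hypothetical, and an actual audit must be performed by an independent auditor under the DCWP rules.

```python
# Impact-ratio math used in Local Law 144 bias audits: each category's
# selection rate divided by the most selected category's selection rate.
# Categories and counts are hypothetical; a real audit must be conducted
# by an independent auditor per the DCWP rules.

def impact_ratios(
    counts: dict[tuple[str, str], tuple[int, int]]
) -> dict[tuple[str, str], float]:
    rates = {cat: sel / total for cat, (sel, total) in counts.items()}
    top = max(rates.values())
    return {cat: round(rate / top, 2) for cat, rate in rates.items()}

# (selected, total) per intersectional sex x race/ethnicity category
counts = {
    ("Female", "Hispanic or Latino"): (12, 40),
    ("Female", "White"): (30, 80),
    ("Male", "Hispanic or Latino"): (15, 45),
    ("Male", "White"): (35, 70),
}
for category, ratio in impact_ratios(counts).items():
    print(category, ratio)
```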

Recommended Action:

  • Inventory all AI tools used in hiring and promotion for NYC roles.
  • Schedule and document annual bias audits.
  • Prepare clear, accessible notices for candidates and employees.
  • Post a summary of your most recent audit results publicly and keep it current.

The Bottom Line

Navigating these new AI employment laws can feel overwhelming, but a few focused steps will help your team move forward with confidence. Here’s what you can do right now to set your organization up for success:

  1. Inventory and Audit Your AI Tools: Identify all AI tools used in employment decisions—know what's in use, where, and how. Assess for bias and disparate impact.
  2. Update Policies and Contracts: Clarify compliance responsibilities with vendors. Implement required notices and disclosures.
  3. Train Your Team: Educate HR, recruiters, and managers on new requirements and how to spot potential bias. Document all training and compliance efforts.
  4. Document Everything: Keep records of AI use, audits, notices, and impact assessments. Prepare for regulatory inquiries and potential litigation.
  5. Communicate Clearly: Make sure candidates and employees understand when and how AI factors into decisions that affect them.
  6. Stay Proactive: Monitor legal developments in all jurisdictions where you operate. Regularly review and update your compliance program.

How Ethena Can Help

New laws in California, Colorado, Illinois, and New York make one thing clear: HR teams can’t afford to ignore the compliance risks of AI in hiring. That’s why Ethena built dedicated training on AI in the Workplace, including AI in hiring.

Want to learn more about how our training can help you navigate the new AI landscape? Let’s talk.


Disclaimer: None of the content in this article constitutes legal advice, nor does it contain every detail or requirement of the applicable laws. It is provided solely for informational purposes and is not intended to be relied upon as a standalone resource. If you have questions about these laws or their implications for your organization, please consult your legal counsel.
