Authors: Christine Zebrowski & Lisa Zolidis
Corporations on both sides of the Atlantic are increasingly utilizing a broad range of artificial intelligence tools to streamline hiring processes, including résumé-screening algorithms and conversational chatbots for interview scheduling. While these tools can efficiently process large volumes of applications and identify qualified candidates, corporations have found that the benefits of speed and efficiency are accompanied by a complex array of legal obligations, particularly regarding privacy, fairness, and transparency.
For in-house counsel and HR professionals, this evolving landscape requires careful navigation. In the U.S., employers must address a growing number of state laws and federal guidance from the EEOC. In the EU, both the General Data Protection Regulation (“GDPR”) and the new Artificial Intelligence Act ("EU AI Act"), which initially came into force on August 1, 2024, include heightened compliance standards, with significant penalties for non-compliance. Portions of the EU AI Act that will impose strict requirements on deployers of high-risk AI systems used in the hiring and employment process do not fully come into force until August 2, 2026, so now is an opportune time for companies to ensure they build effective, compliant controls.
This article synthesizes key lessons that can be drawn from the experiences of early adopters seeking to use these tools in both the U.S. and EU, and provides a practical framework for responsible AI deployment in global recruitment operations that will enable employers to realize the benefits of AI while mitigating regulatory risks.
1. Why AI Governance Matters in Recruiting – Key Pillars on the Regulatory Landscape
Corporate AI recruitment systems used for candidate sourcing, shortlisting, performance prediction, conducting interviews or employment decision-making can significantly impact individuals' careers and livelihoods. If not properly designed and monitored, these tools may perpetuate or exacerbate bias, sometimes in subtle or unintended ways. For instance, AI models trained on historical data that reflect prior hiring biases may inadvertently favor certain groups, even when explicit identifiers such as names or pronouns are removed.
Using AI in the recruitment process may offer efficiencies and opportunities to remove unconscious human bias. These tools are not a panacea, however, and have sometimes been found to introduce overt distortions and unintended discrimination into the process. News reports and recent books on the subject have highlighted AI blunders: AI systems trained on data about a company's current employees gave extra points to applicants who mentioned “baseball” among their skills and downgraded applicants who mentioned “softball”; applicants who submitted two resumes that differed only in dates reflecting their likely age were selected for an interview only on the resume indicating a younger age; and in one startling case, an AI application-filtering tool gave extra points to resumes that included the name “Thomas.”
AI tools that purport to draw conclusions about an applicant’s fitness, reliability and emotional intelligence from video analysis of facial expressions, tonality and choice of words have also drawn fire, prompting legislation to regulate or bar these practices. The Illinois Artificial Intelligence Video Interview Act, for example, requires consent from candidates for AI analysis of video footage. Maryland’s 2020 Facial Recognition Law requires that employers obtain consent from applicants by having them sign a waiver for the use of facial recognition services to create a facial template during their interview. At least one provider of this kind of analytical AI tool, HireVue, pulled its facial analysis feature in 2021 following a 2019 complaint to the FTC filed by the Electronic Privacy Information Center.
New regulations in a number of U.S. states specifically bar employers from using AI tools that discriminate against job applicants based on protected characteristics. For example, new California Civil Rights Council regulations that went into effect on October 1, 2025, explicitly prohibit California employers from using AI or other automated decision systems (ADS) if those systems harm applicants or employees based on protected characteristics, such as gender, race, or disability. 2 Cal. Code Regs. § 11008, et seq. The California regulations make clear that unintentional discrimination may still impose liability on employers if the use of the ADS creates a disparate impact on applicants or employees based on a protected characteristic. Anti-bias testing and proactive corrective measures may mitigate the risks and be used as an affirmative defense. 2 Cal. Code Regs. § 11009(f).
Illinois, Colorado and other states have enacted similar laws. In Illinois, amendments to Article 5, Section 2 of the Illinois Human Rights Act (the "IHRA") go into effect on January 1, 2026, prohibiting employers from using AI that subjects employees to discrimination on the basis of a protected class. In Colorado, the Colorado Artificial Intelligence Act (CAIA), which goes into effect on February 1, 2026, will require employers with more than 50 employees who deploy "high-risk AI systems"—including those used in hiring—to use reasonable care to prevent algorithmic discrimination against applicants or employees, and to conduct impact assessments annually and within 90 days of any material change to the AI system to ensure protections against discrimination are in place.
How a corporation sets up, assesses and manages controls for AI recruiting tools will determine not only whether qualified candidates come to human recruiters’ attention but also whether use of the AI tool itself creates risk that the corporation may run afoul of U.S. and EU limitations on automated decision-making and the creation of profiles. Regulatory authorities in both the U.S. and the EU increasingly classify recruitment-related AI as "high risk" because it may conflict with personal data privacy law and may potentially harm individual rights. This designation subjects organizations to mandatory audits, impact assessments, and record-keeping obligations. Non-compliance with the EU AI Act, for example, can result in fines of up to €35 million or 7% of annual global turnover. Beyond regulatory penalties, a lack of transparency or perceived unfairness in AI-driven processes can undermine employer reputation and hinder talent acquisition. Accordingly, trust, transparency and fairness are essential business considerations.
In the EU, both the GDPR (Article 22 and Recital 71) and the EU AI Act (Article 6) contain limitations that apply to the use of AI in recruiting personnel. Both also apply broadly, including to certain companies not based or established within the EU if they or their AI systems process personal data or otherwise impact individuals who reside in the EU.
Article 22 of the GDPR provides that individuals have the right not to be subjected to solely automated decision-making that has a significant impact on them—like being rejected for a job based solely on an automated tool’s conclusions about the individual. This right also protects individuals against the creation of a profile about them “from any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyze or predict aspects concerning the data subject’s performance at work . . . [or] reliability or behavior . . . where it produces legal effects concerning him or her or similarly significantly affects him or her.” GDPR Recital 71. To avoid creating an unlawful automated decision-making process with AI recruitment tools, employers must build in safeguards, such as human intervention and mechanisms that honor the right of job candidates to be informed about how a decision was made.
The EU AI Act specifically identifies AI systems used in recruitment among the “high risk” categories that are allowed but require strict regulation because “such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation,” and when “used to monitor the performance and behavior of such persons may also undermine their fundamental rights to data protection and privacy.” EU AI Act Annex III(4)(a) and Recital 57. High-risk AI systems are subject to heightened quality, transparency, human oversight, record-keeping, audit and safety obligations, and in some cases will require a "Fundamental Rights Impact Assessment" before deployment. EU AI Act Articles 26-27.
In the United States, state laws in Connecticut, Minnesota, Colorado and elsewhere have expanded limitations on profiling and require that impact assessments be conducted and documented before AI is used to make an automated decision deemed “high risk.” As in the EU, U.S. laws classify “high risk” decisions as those that produce legal or similarly significant effects upon an individual data subject. Hiring decisions are frequently explicitly included in this category. Local jurisdictions also sometimes impose similar requirements. In New York City, for example, an employer cannot use an automated employment decision tool unless a bias audit has been completed and the results posted publicly. Additionally, the EEOC has made clear that if an employer’s AI tool screens out candidates in protected classes, the employer remains responsible under Title VII—even if the tool was purchased from a vendor. This responsibility cannot be shifted to an AI machine-learning tool or its algorithm.
2. Core Obligations and Practical Deployment
The developing regulatory landscape requires that employers take care to be transparent with candidates about the use of AI in conjunction with human evaluators in the hiring process, keep data clean and representative, monitor systems for bias, keep detailed logs, and make sure humans are always involved in key decisions. How does a corporation remain a good steward of prospective employee personal information while accelerating the hiring process with the efficiencies that AI tools can bring?
Providing AI training for all personnel involved in the hiring process is an essential first step. Article 4 of the EU AI Act requires that providers and deployers of AI tools take training or other awareness measures to give their staff an adequate level of AI literacy. While not all U.S. state AI regulations explicitly require a similar baseline of AI training, providing such training is a wise practice for any company using AI tools in the hiring process. Training should be required of all staff and personnel whose roles use or are informed by the use of AI systems. Training content should cover system capabilities and limitations, how to identify bias, and appropriate escalation procedures, and should be tailored to each role, including recruiters, HR business partners, and IT personnel.
The experiences of early adopters and regulatory guidance provide a number of additional practical protective measures for employers using AI in the hiring process. These include:
- Conduct Impact Assessments before incorporating AI tools into your hiring process. In many cases, both a Privacy Impact Assessment (or DPIA under the GDPR) and an AI Impact Assessment should be performed and documented together to ensure compliance with relevant U.S. and EU regulations. These assessments must include a detailed analysis of the tools, how they are going to be used, where their data outputs will be stored, how they and the data they use are secured, and how any flagged risks will be mitigated.
- Conduct bias audits of all AI tools (and all of their features) that you plan to use for HR recruitment to ensure the tools do not skew results to remove older candidates, women or others in protected classes.
- Provide clear notice to candidates that AI tools assist in your hiring process and for what tasks, and how and where human review occurs.
- Ensure that human intervention is part of the process. Human review serves as a safeguard to identify errors or contextual factors that the AI may overlook or misinterpret depending on the scoring or key-word criteria the AI tool is using.
- Keep records and conduct oversight over how the tool is operating, including monitoring and regularly reviewing sample sets of the AI tool’s findings and conclusions to ensure it is operating as expected and keeping detailed logs of how decisions are made, including which data points are used and the resulting outcomes.
- Implement data governance controls to both map your company’s AI footprint for any AI tools that influence hiring decisions and quickly address any unexpected or inappropriate AI tool outputs that may result in unfair or unreasonable assessment of candidates. Be especially careful with AI tools that purport to measure “soft” impressions, such as body language, facial analysis or speech tonalities, as examples in all of these areas have been found to be rife with problematic results.
- Conduct thorough vetting of AI developers, tools and initiatives. Maintain an active cross-functional AI governance committee including HR, legal, data protection, corporate risk and IT to set policies, approve new tools, monitor compliance, and keep leadership informed. Contractually require vendors to provide comprehensive technical documentation, risk management records, and evidence of bias testing. Include robust remedies for incomplete documentation or non-compliance.
- Align and coordinate with EU Works Councils and unions, as appropriate for your company, before deploying new AI tools. Seek agreement on the use of AI tools by sharing the results of risk assessments and addressing concerns about privacy or job security.
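For readers who want a concrete sense of what a bias audit measures, the sketch below illustrates one common metric: the "impact ratio" (each group's selection rate divided by the highest group's selection rate), which underlies the EEOC's four-fifths rule of thumb and the impact-ratio calculations used in NYC bias audits. This is an illustration only, not a compliance tool; the group labels and sample data are invented, and none of the laws discussed above prescribe this exact code.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Compute each group's impact ratio from (group, selected) records.

    outcomes: iterable of (group, selected) tuples, selected is a bool.
    Returns a dict mapping group -> selection rate / best selection rate.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented sample data: group_a selected at 40%, group_b at 25%.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
ratios = impact_ratios(sample)

# Flag groups whose ratio falls below the four-fifths (0.8) threshold.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b's ratio is 0.25 / 0.40 = 0.625, so it is flagged
```

A real audit, such as one conducted under the New York City ordinance, would be performed by an independent auditor across sex, race and ethnicity categories and would address small sample sizes and intersectional groups; a calculation like this is only the arithmetic core.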
Looking Ahead
Proactive compliance not only mitigates legal risk but also supports the development of fairer and more effective hiring practices. Recent regulatory changes offer an opportunity to integrate ethical AI principles into talent strategies, enhance candidate experiences, and reinforce company reputations as responsible, forward-thinking employers.


