Managing People with Machines: The Hidden Risks of AI in HR Decisions
A recent ResumeBuilder.com survey of 1,342 full-time, manager-level employees revealed that managers' use of artificial intelligence (AI) tools to make personnel decisions is more extensive than many employers may realize. Roughly 60% of respondents said they relied on AI to make decisions about direct reports, including consequential decisions such as raises, promotions, layoffs, and terminations. Notably, 20% admitted to letting AI make such decisions without human input either "all of the time" or "often." Many respondents also appeared to be using AI tools for these purposes with few, if any, established limits or guidance: 24% reported receiving "no training at all" on how to ethically use AI in managing people, and among those who did receive training, a majority described it as "informal." Most respondents also reported primarily using general-purpose chatbots rather than tools specifically designed for managing employees.
Untrained and Unchecked: The Risks of AI in Personnel Decisions
While some employers will no doubt benefit from integrating AI tools into people management, it is critically important for employers to understand the risks associated with this type of use. If not deployed with care, AI tools may create unintended disparate impacts that implicate equal employment opportunity laws (such as Title VII, the Americans with Disabilities Act, the Age Discrimination in Employment Act, and similar state laws), subject the employer to onerous bias audit requirements, or potentially violate other employment laws (such as state laws regarding the use of lie detector tests).
Employers would be well advised to limit the use of AI in people management to clearly defined, approved uses. To mitigate these risks, potential use cases should be carefully vetted (including with legal counsel), and employees should be trained on the appropriate use of AI tools. Employers should also establish clear policies and procedures governing the use of AI for HR functions.
FAQs
Q: Can AI legally make personnel decisions?
A: Yes, but with important caveats. AI can be used to assist with, or even automate, personnel decisions such as hiring, promotions, or terminations. However, employers remain legally responsible for the outcomes of those decisions. If an AI system produces discriminatory outcomes, whether intentional or not, the employer may be held liable under federal laws such as Title VII, the ADA, and the ADEA. Some jurisdictions, including New York City and Illinois, have enacted laws requiring bias audits or transparency when AI is used in employment decisions.
Q: What laws apply to AI in HR?
A: Several legal frameworks govern the use of AI in HR:
- Federal Anti-Discrimination Laws: Title VII, ADA, and ADEA prohibit employment practices that result in disparate impact or intentional discrimination—even if caused by AI.
- State & Local Laws: Jurisdictions like New York City, Illinois, and Colorado require bias audits, disclosures, or consent when AI is used in hiring or other HR functions.
- Other Employment Laws: The Employee Polygraph Protection Act (EPPA) and wage-and-hour laws may also be implicated if AI tools are used to assess employee honesty or monitor productivity.
Q: How can employers reduce bias when using AI?
A: Employers can take several proactive steps to mitigate bias in AI-driven HR processes:
- Conduct Bias Audits: Regularly test AI tools for disparate impact on protected groups (a minimal example of the core calculation appears after this list).
- Use Anonymized Data: Remove identifying information (e.g., names, gender) from resumes and profiles to reduce unconscious bias.
- Train Employees: Ensure HR teams understand how AI works and how to use it ethically.
- Vet Vendors Carefully: Choose AI providers who prioritize fairness and transparency in their algorithms.
- Standardize Processes: Use AI to apply consistent criteria across all candidates or employees, minimizing subjective decision-making.
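To make the bias-audit step concrete: the most common statistical screen for disparate impact is the EEOC's "four-fifths rule," under which a selection rate for any group that is less than 80% of the rate for the highest-rate group is generally treated as evidence of adverse impact. The Python sketch below computes selection rates and flags groups under that threshold; the group labels and numbers are purely hypothetical illustration data, not results from any real audit.

```python
# Minimal sketch of a four-fifths (80%) rule check for adverse impact.
# Group labels and counts are hypothetical, for illustration only.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate < 0.8 * top_rate for group, rate in rates.items()}

if __name__ == "__main__":
    # (selected, applicants) per group -- hypothetical numbers
    outcomes = {
        "Group A": (48, 100),  # 48% selection rate
        "Group B": (30, 100),  # 30% selection rate
        "Group C": (45, 90),   # 50% selection rate
    }
    for group, flagged in four_fifths_check(outcomes).items():
        status = "POTENTIAL ADVERSE IMPACT" if flagged else "within 80% threshold"
        print(f"{group}: {status}")
```

A real bias audit goes well beyond this arithmetic (statistical significance testing, intersectional group analysis, and review with counsel), but the impact-ratio comparison shown here is the basic calculation regulators and auditors start from.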