Ryne Simeone
March 18, 2026
What Mature Organizations Are Doing Differently in Hiring, Performance, and People Strategy
AI is already influencing who gets hired, how performance is evaluated, and how workforce decisions are made.
The question is no longer whether your organization is using AI, but whether your leadership team is governing it.
Across industries, AI adoption in HR is accelerating. According to SHRM, 51% of organizations using AI in HR apply it in recruiting, 66% use it to draft job descriptions, and 44% use it for resume screening (SHRM, 2025). At the same time, workforce adoption is outpacing formal training, with 74% of full-time workers reporting AI use and only 33% receiving structured training.
Governance is not keeping pace.
Federal agencies have issued joint statements making clear that automated systems remain subject to existing civil rights laws (EEOC et al., 2023). NYC Local Law 144 requires bias audits and candidate notice for certain automated employment decision tools. Globally, the EU AI Act establishes strict obligations for “high-risk” AI systems, including employment-related uses (European Union, 2024).
State-level regulation is increasing as well. Illinois, for example, recently amended the Illinois Human Rights Act (HB 3773) to prohibit discrimination resulting from algorithmic decision-making and AI-assisted employment tools, and it again places accountability on the employer, not the software vendor.
AI risk is no longer a technical issue. It is a leadership responsibility.
Ethical AI Is a Leadership System — Not a Software Feature
Purchasing an AI tool does not transfer accountability.
Under existing employment laws, the employer remains responsible for discriminatory outcomes, even when decisions are influenced by third-party algorithms (EEOC, 2023).
Research consistently shows that organizations recognize ethical concerns surrounding AI but vary widely in how effectively they implement governance structures (Technology in Society, 2024). Awareness is a starting point, but it is not governance (Harvard Professional Development, 2024).
Ethics cannot be reduced to written principles or vendor marketing claims. They must be embedded into systems, oversight, training, and executive accountability.
AI Risk in HR: Hiring, Performance, and Compliance Exposure
AI does not create entirely new categories of risk. It amplifies existing ones.
Hiring & Screening: AI-powered recruiting tools can increase efficiency but may replicate historical bias if not properly monitored. Employers remain responsible for adverse impact (EEOC, 2023). NYC requires independent bias audits for certain AI hiring tools (NYC DCWP, 2023).
Performance & Workforce Analytics: AI-driven monitoring introduces surveillance concerns, disability accommodation conflicts, and over-automation risk. The DOJ warns that poorly designed AI tools may create disability discrimination exposure (U.S. DOJ, 2023).
Privacy & Employee Trust: AI systems often rely on expanded data collection. Data privacy and transparency concerns are central to ethical AI deployment (Harvard Professional Development, 2024). The EU AI Act reinforces documentation and oversight requirements (European Union, 2024).
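To make "monitoring for adverse impact" concrete, here is a minimal sketch of the EEOC's four-fifths rule heuristic applied to an AI screening tool's pass-through rates. The group labels and counts are hypothetical, and a ratio below 0.8 is a common flag for further review, not a legal determination; this is illustrative, not legal guidance.

```python
# Illustrative sketch (not legal guidance): a four-fifths rule check
# on hypothetical selection rates from an AI resume-screening tool.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants advanced by the tool."""
    return selected / applicants

def four_fifths_check(rates):
    """Compare each group's selection rate to the highest group's rate.
    A ratio below 0.8 is a common (non-dispositive) adverse-impact flag."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical screening outcomes
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}

ratios = four_fifths_check(rates)
for group, ratio in ratios.items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

NYC Local Law 144's bias audits use a similar impact-ratio comparison; the point for leadership is that this arithmetic must be run regularly, on real outcome data, not assumed to be handled by the vendor.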
Let’s Look at an Example: Mobley v. Workday
In 2024, the court allowed disparate impact claims tied to algorithmic applicant screening to proceed. In 2025, it granted preliminary collective certification on the ADEA claims, finding that the plaintiff plausibly showed the proposed collective members were subject to a common, unified policy with alleged disparate impact. That summer, the court held that the preliminary collective included applicants whose applications were scored, ranked, or screened using Workday’s “HiredScore” AI feature.
Cases like this are likely to increase, and the resulting compliance violations to have broader impact: bias embedded in an algorithm or its design can affect thousands of applicants at once. As Mobley v. Workday shows, many of these cases will have the potential to be certified for collective or class treatment.
The speed and volume at which AI tools operate can turn an individual compliance risk into an enterprise-wide one, quickly.
Executive Ethical AI Governance Framework
- Define Decision Boundaries: AI should inform decisions, not replace accountable human judgment.
- Require Proof Before Scaling: Conduct validation studies, independent bias audits, and pilot testing before enterprise rollout (EEOC, 2023; NYC DCWP, 2023).
- Build Transparency by Design: Maintain documentation, define intended use, and provide appropriate notice.
- Establish Formal Governance Structures: Implement cross-functional oversight committees and structured review workflows (MIT Sloan Management Review, 2024).
- Ensure Human Recourse: Provide meaningful review and correction pathways (European Union, 2024).
- Invest in Ethical AI Literacy: Educate executive teams and HR leaders to oversee systems responsibly (MIT Sloan Management Review, 2024).
Ethical AI Maturity: Where Does Your Organization Stand?
MIT Sloan identifies five stages of AI ethics maturity: Evangelism, Policies, Documentation, Review, and Action (MIT Sloan Management Review, 2024).
Mature organizations embed documentation and review before full implementation. AI action should follow governance, not precede it.
Common Executive Mistakes in AI Governance
- Assuming the vendor assumes the risk — accountability remains with the employer (EEOC et al., 2023).
- Treating written guidelines as sufficient — policy without process does not reduce exposure.
- Scaling without validation and structured oversight.
- Failing to clearly define decision authority.
Ethical AI as a Strategic Advantage
AI will continue to reshape hiring and workforce strategy. Responsible AI governance is rapidly becoming both a regulatory and cultural expectation.
Ethical AI leadership is not about slowing innovation. It is about ensuring innovation strengthens trust, compliance resilience, and long-term performance.
Organizations that treat AI governance as a leadership responsibility, not a software feature, will be better positioned as regulatory frameworks evolve.
