The Automation of Inequality
When organizations began adopting artificial intelligence in their hiring processes, the promise was seductive: algorithms would evaluate candidates purely on merit, stripping away the subjective human biases that have long plagued recruitment. No more resume screening influenced by the sound of a name. No more interview evaluations colored by unconscious affinity bias. Just clean, objective, data-driven decisions.
The reality has been almost exactly the opposite.
In 2018, Amazon scrapped an experimental AI recruiting tool after discovering it systematically penalized resumes containing the word "women's" — as in "women's chess club captain" or "women's studies." The system had been trained on a decade of resumes submitted to the company, most of them from men, and it faithfully replicated the patterns of that historically male-dominated workforce. More recent research and independent audits have found that widely used AI hiring tools can disproportionately screen out candidates with disabilities, non-native English speakers, and people from lower socioeconomic backgrounds.
This is not a technology problem. It is a leadership problem. And it demands exactly the kind of systematic, accountable, human-centered framework that I outline in the Big Six Formula.
How AI Bias Works — And Why It Is So Dangerous
To understand why AI amplifies bias, you need to grasp one fundamental truth: AI learns from historical data, and historical data reflects historical biases. If your organization has historically hired, promoted, and retained a disproportionately homogeneous workforce, the AI will learn that homogeneity is the pattern to replicate. It does not question whether that pattern is just or effective; it simply optimizes for it.
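You can watch this happen in miniature. The sketch below trains a simple logistic-regression screener on entirely synthetic data I invented for illustration: historical decisions in which reviewers penalized a gendered resume keyword. The fitted model absorbs that penalty as a strongly negative weight, no questions asked.

```python
# A minimal sketch of how a model trained on biased historical data
# reproduces that bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Features: a genuine skill score, and a flag for a gendered resume
# keyword (e.g. "women's chess club") irrelevant to job performance.
skill = rng.normal(0, 1, n)
gendered_keyword = rng.integers(0, 2, n)

# Historical decisions: hiring tracked skill, but past reviewers also
# penalized resumes containing the keyword. That penalty is the bias.
logit = 1.5 * skill - 2.0 * gendered_keyword
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([skill, gendered_keyword])
model = LogisticRegression().fit(X, hired)

# The learned keyword weight is strongly negative: the model has
# faithfully absorbed the historical penalty rather than questioned it.
print(f"skill weight:   {model.coef_[0][0]:+.2f}")
print(f"keyword weight: {model.coef_[0][1]:+.2f}")
```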
The danger is compounded by the veneer of objectivity that AI provides. When a human hiring manager makes a biased decision, there is at least the possibility that someone will notice and challenge it. When an algorithm makes the same biased decision, it arrives wrapped in the authority of data science, making it far harder to question. "The algorithm said so" has become the 21st century's most powerful shield against accountability.
The Scale Problem
A biased human hiring manager might review fifty resumes a day. A biased AI system processes thousands. The efficiency that makes AI attractive also means that its biases operate at unprecedented scale, affecting far more candidates than any individual human ever could. One flawed algorithm can systematically exclude qualified diverse candidates across an entire enterprise, and no one may even realize it is happening.
The Proxy Problem
Even when organizations remove explicitly protected attributes like race and gender from their algorithms, AI is remarkably adept at finding proxies: seemingly neutral data points that correlate with protected characteristics. Zip codes serve as proxies for race and socioeconomic status. Graduation years serve as proxies for age. Specific university names serve as proxies for class. The algorithm does not need to "know" a candidate's race to discriminate; it finds the back door.
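Here is a minimal sketch of that back door, again on synthetic data with assumed correlations. The protected attribute is never shown to the model, but a stand-in for zip code that is roughly 90% correlated with it carries the historical penalty through anyway.

```python
# A minimal sketch of the proxy problem, on synthetic data: the
# protected attribute is withheld from training, yet a correlated
# feature (a hypothetical stand-in for zip code) reconstructs it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)  # protected attribute, never given to the model
zip_code = (group ^ (rng.random(n) < 0.1)).astype(int)  # ~90% correlated proxy
skill = rng.normal(0, 1, n)

# Biased historical outcomes: group membership directly lowered hiring odds.
logit = skill - 1.5 * group
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train WITHOUT the protected attribute: only skill and the proxy.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, hired)

# The model routes the historical penalty through the proxy instead.
print(f"zip-code weight: {model.coef_[0][1]:+.2f}")

# Selection rates by protected group stay unequal despite "blindness".
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g} selection rate: {preds[group == g].mean():.2f}")
```

In this toy setup the zip-code weight comes out strongly negative, and the two groups' selection rates diverge even though the model never sees the protected attribute.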
The Feedback Loop Problem
Perhaps most insidiously, biased AI creates a self-reinforcing feedback loop. If the algorithm screens out diverse candidates, the resulting workforce becomes even more homogeneous. That homogeneous workforce generates the data on which the next iteration of the algorithm is trained, making it even more biased. Left unchecked, AI does not just replicate bias; it compounds it over time.
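A toy calculation shows how fast the compounding can run. The sketch below assumes, purely for illustration, that each retrained screener selects applicants from a group at a rate proportional to the square of that group's share of the current workforce; under that stylized dynamic, a 30% minority share collapses to almost nothing within a few retraining cycles.

```python
# A toy, deterministic model of the feedback loop. Assumption (stylized,
# not empirical): each retrained screener selects applicants from group
# g at a rate proportional to the SQUARE of g's current workforce share,
# so existing skew is amplified at every retraining round.
share_minority = 0.30  # minority share of the starting workforce
print(f"round 0: minority share = {share_minority:.1%}")

for round_ in range(1, 6):
    s_min, s_maj = share_minority, 1 - share_minority
    # Balanced applicant pool; selection rates follow squared shares.
    selected_min = 0.5 * s_min ** 2
    selected_maj = 0.5 * s_maj ** 2
    share_minority = selected_min / (selected_min + selected_maj)
    print(f"round {round_}: minority share = {share_minority:.1%}")
```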
The Big Six Framework for AI Accountability
The Big Six Formula, which I developed to provide a comprehensive framework for organizational D&I strategy, maps directly onto the challenge of AI bias in hiring. Each of the six dimensions addresses a critical aspect of the problem:
1. Leadership Commitment: Owning the Algorithm
The first dimension of the Big Six is leadership commitment — and it begins with a simple principle: if you deploy an AI system, you own its outcomes. Too many executives treat AI hiring tools as black boxes, delegating responsibility to the technology itself or to the vendors who sell it. This is an abdication of leadership.
CEOs and CHROs must personally ensure that every AI system used in their hiring process has been audited for bias, that audit results are transparent, and that clear accountability exists for corrective action when bias is discovered. This is not a technology function — it is a leadership function.
2. Workforce Diversity: Auditing Outcomes, Not Just Inputs
The second dimension focuses on workforce diversity, and in the AI context this means rigorously tracking hiring outcomes by demographic group at every stage of the process. If your AI screening tool is advancing 80% of male applicants but only 60% of female applicants to the interview stage, that is a selection-rate ratio of 0.75, below the four-fifths (80%) threshold that U.S. regulators use as a rule of thumb for adverse impact. You have a problem regardless of what the algorithm's designers intended.
Outcome auditing should be conducted quarterly, not annually, because AI biases can emerge quickly as training data evolves. And the audits should be conducted by independent parties, not by the same teams that built or purchased the tools.
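What does such an audit actually compute? A minimal sketch, using hypothetical counts that match the 80%/60% example above: selection rates by group at the screening stage, plus the impact ratio compared against the four-fifths threshold mentioned earlier. The record format and counts are assumptions, not a real audit pipeline.

```python
# A minimal sketch of an outcome audit: stage selection rates by group,
# flagging impact ratios below the four-fifths (80%) rule of thumb.
# The counts below are hypothetical.
from collections import defaultdict

# (group, advanced_to_interview) records from one quarter of screening.
records = ([("men", True)] * 800 + [("men", False)] * 200
           + [("women", True)] * 600 + [("women", False)] * 400)

totals, advanced = defaultdict(int), defaultdict(int)
for group, passed in records:
    totals[group] += 1
    advanced[group] += passed

rates = {g: advanced[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best  # impact ratio vs. the most-advanced group
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group:6s} rate={rate:.0%} impact_ratio={ratio:.2f} {flag}")
```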
3. Inclusive Culture: Bringing Diverse Voices Into AI Design
The teams that design, implement, and oversee AI hiring tools must themselves be diverse. Homogeneous AI teams produce homogeneous algorithms — not because they intend to, but because they lack the lived experience to anticipate how their tools will affect people different from themselves. Including diverse perspectives in the design process is not a nice-to-have; it is an essential safeguard against blind spots.
4. Supplier Diversity: Holding Vendors Accountable
Most organizations purchase AI hiring tools from third-party vendors. The Big Six principle of supplier diversity extends to demanding that those vendors demonstrate their tools have been tested for bias across demographic groups, that they provide transparent documentation of their algorithms' decision-making processes, and that they commit to ongoing bias monitoring and correction.
5. Community Engagement: Listening to Those Affected
The communities most affected by AI hiring bias — communities of color, people with disabilities, older workers, non-traditional candidates — must have a voice in how these tools are designed and deployed. Organizations should establish advisory panels that include representatives from affected communities and civil rights organizations, ensuring that the human impact of AI is never reduced to a statistical abstraction.
6. Accountability and Measurement: What Gets Measured Gets Fixed
The final dimension of the Big Six — accountability and measurement — is where the rubber meets the road. Organizations must establish clear metrics, regular reporting cadences, and genuine consequences for AI systems that produce biased outcomes. This includes publishing annual AI fairness reports, tying executive compensation to equitable hiring outcomes, and maintaining a willingness to decommission tools that cannot be corrected.
Practical Steps You Can Take Today
If you are a leader grappling with AI in your hiring process, here are immediate actions drawn from the Big Six framework:
Conduct a Bias Audit: Commission an independent audit of every AI tool in your hiring pipeline. Demand disaggregated results by race, gender, age, and disability status. If the vendor cannot provide this, that tells you something important.
Establish a Human Override: No candidate should be permanently excluded from consideration by an algorithm alone. Build a human review process for candidates flagged by AI, ensuring that qualified diverse candidates are not lost to algorithmic bias; a minimal routing sketch follows this list.
Diversify Your AI Teams: Ensure the teams building, buying, and overseeing your AI hiring tools reflect the diversity of the candidate pool. Homogeneous oversight produces homogeneous outcomes.
Demand Transparency: If your AI vendor cannot explain, in plain language, how their algorithm makes decisions, find a vendor who can. "Proprietary algorithm" should never be a shield against accountability.
Invest in Training: Equip your HR leaders and hiring managers to understand AI bias — not at a technical level, but at a conceptual level that allows them to ask the right questions and interpret the right data. My Diversity & Inclusion course provides this kind of practical education in the context of the Big Six framework.
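On the human-override point, here is a minimal routing sketch of the policy, with a hypothetical Candidate type, score scale, and threshold: a high AI score can fast-track a candidate, but a low score routes the file to a human reviewer instead of triggering automatic rejection.

```python
# A minimal sketch of a human-override gate: the AI score can fast-track
# a candidate but can never reject outright. The Candidate class, score
# scale, and 0.75 threshold are all hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # screening model's score in [0, 1]

def route(candidate: Candidate) -> str:
    """Return the next step for a candidate; rejection is never automatic."""
    if candidate.ai_score >= 0.75:
        return "advance to interview"
    # Everything below the threshold goes to a person, not to the bin.
    return "human review"

for c in [Candidate("A. Ada", 0.91), Candidate("B. Ben", 0.42)]:
    print(c.name, "->", route(c))
```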
The Stakes Are Too High for Complacency
AI in hiring is not going away. Its efficiency and scalability make it an increasingly essential tool for organizations managing large volumes of candidates. But efficiency without equity is not progress — it is the automation of injustice at industrial scale.
The Big Six Formula was designed for exactly this kind of challenge: a complex, multi-dimensional problem that requires leadership commitment, systematic action, and relentless accountability. The organizations that apply this framework to their AI hiring practices will not only build more diverse and equitable workforces — they will build better ones. Because the candidates your biased algorithms are excluding today may be the innovators, leaders, and difference-makers your organization desperately needs tomorrow.
The algorithm is only as fair as the leaders who govern it. Lead accordingly.
From the Book
Diversity & Inclusion: The Big Six Formula for Success
This article draws on concepts explored in depth in this book by D.A. Abrams.