
Bias by algorithm: Can AI make you liable for discrimination?

As artificial intelligence (AI) becomes increasingly embedded in recruitment and HR processes, Australian employers face a growing compliance blind spot. What happens when an algorithm discriminates?

In an era where efficiency often trumps human judgment, AI-driven hiring tools, used to screen CVs, rank candidates, or analyse video interviews, are quietly reshaping how workplaces assess talent. While these tools promise neutrality and objectivity, their use may give rise to unlawful discrimination, particularly where they encode bias or produce discriminatory impacts. For employers, the legal risks are real.

AI in recruitment - what’s happening now?

Many Australian companies are already using or considering tools that leverage AI to:

  • Automatically sift through thousands of CVs;
  • Conduct personality assessments through video interviews;
  • Rank candidates based on past hiring data; and/or
  • Flag applicants using predictive analytics based on traits linked to “success”.

While marketed as removing human bias, these AI systems are trained on existing datasets, which are often drawn from past hiring decisions. That means they may reflect and amplify historic biases, even if they appear neutral on the surface.

Liability risks

Under Australian anti-discrimination laws, including the Equal Opportunity Act 2010 (Vic), Anti-Discrimination Act 1977 (NSW), and federally, the Sex Discrimination Act 1984 (Cth) and Racial Discrimination Act 1975 (Cth), it is unlawful to treat a person less favourably on the basis of protected attributes like race, sex, age, disability, or family responsibilities.

These laws also apply to recruitment decisions. Importantly:

  • Discriminatory intent is not required: If an algorithm disproportionately excludes, say, older candidates or women returning from parental leave, an employer may still be exposed, even if the bias was unintentional.
  • Employers remain responsible: Even if a third-party vendor provides the AI tool, responsibility for ensuring that anti-discrimination laws are not breached remains with the employer making the recruitment decision.

This means a claim could arise where:

  • an applicant is unfairly screened out due to a characteristic protected under anti-discrimination law; and
  • the employer cannot demonstrate that the decision was based on legitimate, non-discriminatory criteria.

AI and the risk of reinforcing historical bias

One of the most significant concerns with algorithmic decision-making, particularly in hiring and dismissals, is its potential to embed and perpetuate past patterns of bias. Where these systems rely on historical workforce data, they may reflect inequities that already existed, reinforcing rather than remedying them.

This becomes especially problematic when it comes to termination decisions. If an employee is dismissed based on a recommendation generated by an opaque algorithm, one whose internal logic can’t be clearly understood or explained, then the employee’s ability to contest the decision is undermined. Similarly, decision-makers may be unable to articulate a defensible, lawful rationale for the outcome.

Such opacity may also conflict with the principles of procedural fairness central to Australia’s workplace laws. Section 381 of the Fair Work Act 2009 (Cth), for example, enshrines the principle of “a fair go all round”. If neither the employer nor the employee can understand how the algorithm arrived at its conclusion, that principle is placed at serious risk.

Indirect discrimination and algorithmic bias

A key legal concept here is indirect discrimination, which occurs when a condition, requirement or practice, although applied equally to everyone, disadvantages, or is likely to disadvantage, people with a particular protected attribute, such as age, sex, race, or disability. For example, a recruitment algorithm trained on past hires may systematically favour candidates who match a certain profile, say, men under 35 with full-time, uninterrupted work histories, effectively excluding older applicants, women, and people with caring responsibilities.
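To make the mechanics concrete, the minimal Python sketch below (all data and names invented for illustration) applies a facially neutral screening rule that never inspects sex or any other protected attribute, yet produces sharply different pass rates between groups - the pattern indirect discrimination is concerned with.

    # Hypothetical sketch: a facially neutral screen with unequal impact.
    # Each record: (candidate_id, sex, years_of_uninterrupted_full_time_work)
    candidates = [
        ("A1", "M", 10), ("A2", "M", 8), ("A3", "M", 12), ("A4", "M", 7),
        ("B1", "F", 9),  ("B2", "F", 6), ("B3", "F", 2),  ("B4", "F", 4),
    ]

    # Neutral-looking requirement: five or more years of uninterrupted
    # full-time work. Career breaks (e.g. parental leave) fail the test,
    # even though the rule never mentions sex.
    def passes_screen(years_uninterrupted):
        return years_uninterrupted >= 5

    for group in ("M", "F"):
        pool = [c for c in candidates if c[1] == group]
        passed = sum(passes_screen(c[2]) for c in pool)
        print(f"{group}: {passed}/{len(pool)} pass ({passed / len(pool):.0%})")
    # Output: M: 4/4 pass (100%); F: 2/4 pass (50%). Same rule, unequal impact.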

To defend such a claim, the employer must demonstrate that use of the recruitment system is reasonable in all the circumstances. That assessment takes into account the nature and extent of any disadvantage caused, the feasibility of mitigating that disadvantage, and whether the disadvantage is proportionate to the outcome sought. An employer who cannot make that showing may be exposed to a claim of unlawful indirect discrimination.

A cautionary tale: Amazon’s scrapped AI recruiter

A high-profile example of algorithmic bias emerged in 2018 when Amazon abandoned its in-house AI recruiting tool after discovering it was systematically disadvantaging women. The tool, trained on a decade of resumes, mostly from male applicants, began penalising CVs that included words like “women’s” (as in “women’s chess club captain”) and prioritising male-coded language. Although Amazon never deployed the tool in live hiring decisions, the incident highlights a critical risk - when AI is trained on biased historical data, it can learn to replicate and reinforce gender stereotypes. The bias could not be reliably engineered out, prompting Amazon to shut the tool down entirely.

AI recruitment under scrutiny in Australia

Recent Australian research led by Dr Natalie Sheard has highlighted the discriminatory risks posed by AI-driven recruitment systems. The study found that these tools, which are used to screen resumes or assess video interviews, can systematically disadvantage certain groups due to biased training data and inaccessible design features. As Dr Sheard explains, the systems pose particular risks for “already disadvantaged groups in the labour market - women, jobseekers with disability or [from] non-English-speaking backgrounds, [and] older candidates”.

While Australia hasn’t yet seen legal action over AI hiring discrimination, a significant warning emerged when the Merit Protection Commissioner overturned 11 promotion decisions at Services Australia during the 2021-2022 financial year. In that case, candidates were filtered solely through a sequence of AI tools including psychometric tests, questionnaires, and self-recorded video responses, with no human oversight. 

The Commissioner found that this process caused meritorious candidates to be unfairly excluded and warned that many commercial AI hiring tools remain untested and cannot be guaranteed to be completely unbiased.

Echoing these concerns, Dr Sheard noted that some groups have called for a complete ban on such systems, especially while Australia lacks proper legal safeguards. While employers often argue that AI tools improve efficiency, she cautioned this must be balanced against “the risks of harming marginalised groups”.

In February 2025, the House Standing Committee on Employment, Education and Training recommended banning AI from making final recruitment decisions without human oversight.

The federal government is also considering broader regulation, including whether to introduce an AI Act similar to the EU model.

Dr Sheard argues that Australia’s anti-discrimination laws need urgent review to ensure they remain “fit for purpose” in the face of emerging technologies, particularly to address issues “around liability” in the use of AI hiring systems.

Five practical steps to manage the risk

Employers embracing AI in hiring should act now to future-proof their compliance and minimise liability. 

Recommendations include:

  1. Conduct an impact assessment: Understand what data your system uses, how it works, and where bias might creep in.
  2. Demand transparency from vendors: Ask what safeguards are built in and whether the system has been independently audited for bias.
  3. Retain human oversight: AI should assist, not replace, human judgment. Final hiring (or frankly, firing) decisions should always involve human review.
  4. Monitor outcomes: Regularly audit recruitment outcomes to check for disproportionate exclusion of protected groups (a minimal audit sketch follows this list).
  5. Train your HR and recruitment personnel: Ensure those involved in recruitment understand the risks of automated decision-making.
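For step 4, the sketch below shows one way such an outcome audit might be run in Python. It borrows the US EEOC “four-fifths rule” purely as an illustrative review trigger - Australian law sets no fixed numeric threshold - and all group labels and figures are hypothetical.

    # Minimal audit sketch (hypothetical data): compare selection rates
    # across groups and flag marked disparities for human review.
    from collections import Counter

    def selection_rates(outcomes):
        """outcomes: iterable of (group, was_selected) -> rate per group."""
        applied, selected = Counter(), Counter()
        for group, was_selected in outcomes:
            applied[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / applied[g] for g in applied}

    # Hypothetical screening results exported from an applicant-tracking system.
    outcomes = ([("under_40", True)] * 60 + [("under_40", False)] * 40
                + [("40_plus", True)] * 25 + [("40_plus", False)] * 75)

    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / benchmark
        # A ratio below 0.8 is the four-fifths trigger; here it simply
        # flags the group for closer human and legal review.
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
    # Output: under_40 60% (ratio 1.00, ok); 40_plus 25% (ratio 0.42, REVIEW).

A flagged disparity is a prompt for investigation, not proof of discrimination in itself.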

Final thoughts

AI promises efficiency and consistency in recruitment, but it is not always legally or ethically neutral. As the legal landscape evolves, employers cannot simply "blame the algorithm". 

The obligation to comply with anti-discrimination laws remains firmly with the employer.

In the race to adopt new technologies, businesses must ensure that fairness, accountability and legal compliance keep pace.

If this has raised any queries or concerns please contact us at info@ablawyers.com.au or call 1300 565 846.

 
