Don’t Trust GPT to Do Your Legal Work: Lessons from Recent Fair Work Commission Cases

September 25, 2025

Artificial intelligence tools like GPT are powerful at generating text, summarising information, and helping with research. But when it comes to legal work, relying on GPT, or any large language model, as a substitute for a qualified lawyer is fraught with danger. In recent years, cases brought before the Fair Work Commission (and related tribunals) have exposed exactly how AI can go wrong: by inventing legal authorities, missing deadlines, misunderstanding the law, and producing claims that get thrown out, or worse.

Here’s why clients should be extremely cautious:


1. AI hallucinates — it fabricates cases, legal authorities, or statutes that don’t exist

One of the well-documented risks of generative AI is the phenomenon known as “hallucination” — where the system confidently presents false or invented information. In legal contexts, this might manifest as invented case law, citations to fictitious judgments, or statutes that never existed.

A well-known cautionary example comes from the United States: in the widely reported Mata v Avianca matter (2023), a lawyer submitted court filings citing six fake cases that ChatGPT had generated. The judge described the circumstance as "unprecedented".

This shows that AI can produce convincing but entirely made-up legal authorities. Without a human legal professional carefully verifying every citation, relying on AI can be disastrous.

In Australia, tribunals like the Fair Work Commission require precise citation and adherence to binding authority. If your submission refers to a non-existent case, it not only undermines credibility but may lead to dismissal.


2. Recent FWC cases demonstrate the real consequences of AI-based missteps

A recent and illustrative case is that of Branden Deysel, who lodged an unfair dismissal claim before the Fair Work Commission after resigning — and admitted to using ChatGPT to draft the application.
Deputy President Tony Slevin rejected the claim, calling it "hopeless" and noting that the AI-generated advice was "baseless". The Commission found the application failed to meet the statutory requirements and substantive thresholds under the Fair Work Act 2009 (Cth).

The Deputy President remarked that following the AI's suggestions led to a defective claim that wasted both tribunal and respondent resources.
In other words: AI drafted a flawed claim, the litigant had no legal advice to spot the errors, and the case was thrown out.

While that is a relatively extreme example, it is a high-profile warning: if a self-represented litigant's straightforward claim can fail from leaning too hard on AI, imagine what could happen in more complex matters (e.g. enterprise agreements, general protections and adverse action claims).


3. Legal work often involves nuance, strategy, and argument — beyond “pattern matching”

AI models are very good at mimicking form, summarising patterns, and producing plausible prose. But many legal questions turn on fine distinctions: statutory interpretation, conflicting precedents, policy reasons, equitable discretion, or contextual fact analysis.

Tribunals and courts don't accept rote regurgitation of authority; they assess whether a line of reasoning is persuasive, whether it respects the binding hierarchy of precedent, whether the factual context is analogous, and whether contrary cases have been addressed.

Even in the FWC context, recent cases on fixed-term contracts and jurisdictional limits show how fact-sensitive the legal tests can be. For example, in a 2024 decision, a prospective applicant (Ms Louise McCue) filed an unfair dismissal claim, but the Commission first had to hear a jurisdictional objection: was the employee "dismissed" within the meaning of the Act, or did the employment simply end when a non-ongoing contract expired?

These kinds of issues cannot reliably be resolved by an AI model that is not attuned to the specifics of your employment relationship, its timing, the contract terms, or the broader statutory framework.


4. The cost of error is high — reputational, monetary, procedural

When a legal claim based on AI is dismissed, you may:

  • Lose the chance to have a valid claim heard (statutory time limits, such as the 21-day deadline for unfair dismissal applications, may have already expired)
  • Be ordered to pay costs (in Australia, the Commission or courts may penalise frivolous, vexatious, or poorly grounded claims)
  • Undermine your credibility before tribunals or courts
  • Give your opponent grounds to strike out or challenge the application

Especially in industrial relations and employment law, the rules are tight, the precedents are many, and tribunals are unsympathetic to sloppy or speculative arguments.


Conclusion

Using GPT or a similar AI tool as a replacement for qualified legal advice, especially before the Fair Work Commission, is perilous. The risks are not hypothetical: there are already real-world examples of AI-reliant claims that foundered, were dismissed, and wasted time and money for all parties.

If you have an employment dispute, adverse action issue, unfair dismissal claim, or industrial relations matter, rely on legal practitioners who know the law, the Commission, and how to weave all the strands into a strong case.