Employees using AI – catching employers unaware

February 27, 2026

Employees are increasingly turning to AI tools like ChatGPT, Claude, and legal-specific bots to assert their workplace rights and launch claims against employers. The trend is making it dramatically easier for workers to draft unfair dismissal applications, general protections claims, and even bullying complaints, producing polished, professional-looking documents in minutes.

However, while AI lowers the barrier to entry, it doesn't guarantee accuracy or success. Polished claims often founder on misapplied law, overlooked procedural hurdles, or fabricated case references, leaving employees worse off: rejected filings, adverse costs orders, and eroded credibility.

The Rise of AI-Powered Employee Advocacy

Generative AI has opened up access to legal drafting. An employee facing redundancy can now prompt an AI with basic facts ("I was made redundant after 2 years, offered a role 20km away that I rejected") and receive a Form F8 unfair dismissal application citing the Fair Work Act 2009 (FW Act), complete with section numbers, timelines, and arguments about "genuine redundancy" under s 389. No lawyer fees, no legalese struggles: just copy-paste into the Fair Work Commission (FWC) portal. Platforms like AskPerplexity or custom GPTs even tailor arguments to Australian law, pulling in concepts like s 120 redundancy pay variations or s 394 unfair dismissal remedy applications.

This empowers non-lawyers. Recent FWC data (as of early 2026) shows a 25% uptick in self-represented applications, many suspiciously uniform in structure. Employees use AI to draft witness statements and demand letters, framing grievances as statutory violations. It’s a game-changer for casuals, short-service workers, or those in regional areas, who previously avoided claims due to complexity.

Pitfalls: Misapplication of Law Despite Slick Drafting

AI excels at form but stumbles on nuance. A perfectly formatted claim might assert unfair dismissal under s 385 against a small business employer that complied with the Small Business Fair Dismissal Code (s 388), or ignore the eligibility requirements in s 382. Or it could misread s 120, for instance by claiming a rejected redeployment automatically triggers full redundancy pay, overlooking the objective "acceptable employment" test from cases like Australian Commercial Catering v Powell FWCFB 5467.

In practice, AI hallucinates thresholds: recommending general protections dismissal claims (s 365) for pure performance dismissals involving no adverse action, or anti-bullying applications (s 789FD) without the required "risk to health and safety." Employees file confidently, only for FWC members to dismiss at the first directions hearing: "Applicant cites s 387(a) valid reason, but misconstrues serious misconduct per reg 1.07." The drafting looks professional, but the substance crumbles under scrutiny.

Fabricated Citations: The Hallucination Trap

Worse are AI-generated case references, which are often entirely invented. Tools confidently cite "FWC 123, Smith v XYZ Pty Ltd" for propositions like "any location change voids redeployment," when no such decision exists. FWC registries now flag these: in 2025, Deputy President World rejected a claim citing three phantom Full Bench authorities, warning of costs. AI draws on training data up to 2023-2024, then fabricates "recent" cases to fill gaps, or mangles real ones (e.g., confusing the misconduct findings in Westerberg v Volando FWC 420 with unrelated facts).

Commission members and judges spot this quickly; an AustLII search exposes the fakes. Employees suffer: claims struck out, credibility shot, and potential costs orders (sometimes $10,000 or more). AI disclaimers ("not legal advice") offer no protection once a claim is filed.

Real-World Examples from Recent Cases

  • In a 2026 Perth matter echoing Civmec v Rizvi FWC 599, an AI-drafted s 120 opposition cited "binding precedent" that short-notice offers invalidate redeployments: pure fiction. The FWC varied the redundancy pay to nil anyway.
  • Self-represented unfair dismissal applications surge with identical AI phrasing: "harsh, unjust, unreasonable per s 387(h)" boilerplate that skips the threshold question of whether the applicant is protected from unfair dismissal at all (s 382: high income cap, minimum employment period).

Implications for Employers

For employers, AI means more employee claims, but the merits of those claims are mixed. Employers need to triage quickly, and should consider seeking legal advice to verify both the asserted legal rights and the cited authorities.

Bottom line: AI makes claiming easier, but law demands precision. A glossy legal claim wins no brownie points if the law’s misapplied.