When AI “Makes Up the Law”
Words by Varnell Clay
AI tools can now draft letters, policies and even legal submissions in seconds. But what happens when those confident-sounding documents rely on cases that don’t exist?
That’s exactly what has emerged in some New Zealand Courts. In LMN v STC (No 2) [2025] NZEmpC 46, a self-represented litigant cited a “decision”, likely generated by an AI system, that the Court couldn’t locate anywhere. Similarly, in Wikeley v Kea Investments Ltd [2024] NZCA 609, the Court of Appeal noted that parts of the submissions appeared to rely on non-existent authorities.
Both Courts warned that AI outputs must be verified before being filed. These cases show how quickly convenience can turn into legal and reputational risk.
Why “hallucinations” matter in employment disputes
“Hallucination” is a term for when an AI system generates information that sounds right but isn’t – such as a fabricated case citation, a false quote, or an incorrect statutory reference.
As employment lawyers, we understand those errors can have serious consequences. A made-up case in a disciplinary letter or Authority filing wastes time, misleads the other side, and can lead to adverse costs or sanctions. For lawyers, submitting hallucinated material could prompt professional scrutiny or disciplinary action.
Equally important is good faith. Misleading submissions or unverified “law” could fall short of the statutory duty to act in good faith under section 4 of the Employment Relations Act 2000. For example, an employer relying on an AI-generated disciplinary letter that misstates legal authority or facts may be challenged on that basis. The Employment Relations Authority has explicitly warned both lawyers and parties that they remain responsible for verifying any AI-assisted outputs.
Employment disputes often hinge on credibility and fairness. When AI “makes up the law,” it does not just weaken your argument; it undermines trust in the entire process.
The problem in employment law
Employment law depends on accuracy, fairness and context. Those principles don’t sit comfortably with auto-generated text that can misstate the law or skip procedural steps.
For employers, an AI-drafted warning or termination letter that applies the wrong legal test could invalidate an otherwise fair process. For employees, a grievance built on incorrect law could damage credibility and delay resolution.
AI can sound persuasive, but it doesn’t understand precedent, fairness or local nuance. The result? Documents that look professional but do not stand up under scrutiny.
Why human oversight still matters
At Govett Quilliam, we’re seeing more clients bring us AI-generated employment documents. Almost always, what looks minor – a wrong section reference or a misapplied legal test – turns out to be crucial.
That is not because the technology is bad. It is because employment law is human and relational. AI cannot interpret good faith, reasonableness or fairness the way a practitioner can.
A short review by a specialist lawyer can prevent costly mistakes. The cost of that oversight is modest compared to the fallout from a flawed disciplinary process or invalid dismissal.
Practical takeaways
If you’re using AI for HR or employment documents:
- Treat AI as a starting point, never the final word.
- Verify every case citation and statutory reference independently before relying on it.
- Have a lawyer or HR specialist review the legal content before you send or file anything.
The Courts have made it clear: “The AI made me do it” will not be an excuse.
Generative AI is changing how we work, but it cannot replace professional judgment. In employment law, where fairness and accuracy are everything, relying on unverified AI output is a risk no one should take.
Before you send that AI-drafted warning, policy or grievance, ask: “Has this been checked by someone who knows New Zealand employment law?”
If not, that’s where we come in. If you’re an employer or employee needing legal advice, get in touch.