AI in Litigation – Revolutionary or Risky?

By Eve Panna
10 June 2025
It’s inevitable: the use of artificial intelligence (‘AI’) in the legal profession is becoming increasingly common. Whether it be for legal research or drafting basic legal documents, AI is now used as a tool to assist firms at every stage of litigation and disputes.
However, AI is very much like fire—a good servant but a bad master.
So, before you jump on the AI bandwagon and allow Mr AI to prepare your own letters of demand or Writs, it is critical to understand the risks associated with this.
- AI Accuracy and Hallucinations
Whilst AI appears to know what it is talking about, it often does not.
When an AI system lacks sufficient training data, access to relevant information or a clear understanding of legal principles or legislation, it will often provide answers that are either partially correct or totally inaccurate. This is called an AI “hallucination”.
Unfortunately, there have already been some instances where lawyers have unwisely relied on these hallucinations in court.
One of the most notable cases overseas was a New York case called Mata v. Avianca Inc.[1] In this case, a legal representative referred to numerous relevant and favourable authorities in his submissions, all of which would have been useful had they actually existed. This certainly was not well received by the judge or the client. We have also seen reliance on hallucinated authorities closer to home, in the Australian case of Handa v Mallick[2], and in Canada in Zhang v Chen[3].
As amusing as this may seem at first glance, the reality is that the use of AI in legal practice carries significant risks, not only for individual matters but also for the integrity of the legal profession.
- Confidentiality and Privilege Risks
Another key risk associated with AI is the potential breach of confidentiality and legal privilege. When using AI tools such as ChatGPT, it is easy to inadvertently reveal sensitive information about yourself and others.
For instance, if AI is used to redraft an email containing your confidential details, that information may be exposed to a global third-party platform. Likewise, privileged communications, such as those between you and your lawyers or negotiating parties, could be unintentionally disclosed. This may ultimately impact the entire outcome of a dispute.
Accordingly, it is important to always remember that AI is not to be trusted with confidential or privileged information, as enticing as a quick “copy and paste” job may seem.
- No Exercise of Legal Reasoning or Judgement
Although AI is a useful tool, it cannot replace a qualified legal practitioner who has completed a Bachelor of Laws and obtained practical experience—at least not yet.
AI systems are not the genius scholars that they appear to be; they simply analyse patterns and references from vast datasets. They cannot exercise genuine legal reasoning or form any independent judgement or thought, and consequently cannot formulate their own opinions or analysis. This limitation is problematic if an AI tool is used in circumstances where human discretion and interpretation are required, such as judicial reasoning.
- Ethical Issues
All Victorian lawyers are bound by specific rules of professional conduct, including the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (‘the Rules’).
These Rules include:
3.1 – Paramount duty to the Court and the administration of justice.
4.1.1 – Duty to act in the best interests of a client in any matter in which the solicitor represents the client.
4.1.2 – Duty to be honest and courteous in all dealings in the course of legal practice.
4.1.3 – Duty to deliver legal services competently, diligently and as promptly as reasonably possible.
4.1.4 – Duty to avoid any compromise to their integrity and professional independence.
4.1.5 – Duty to comply with these Rules and the law.
Although these Rules don’t specifically address the use of AI, the principles underpinning them clearly extend to any technology that lawyers use in their practice. Accordingly, as a legal practitioner, it is important to always consider these duties when using AI tools.
To ensure that legal practitioners abide by these Rules, many Courts have published guidelines for practitioners utilising AI. For example, the Supreme Court of Victoria has published ‘Guidelines for Litigants – Responsible Use of Artificial Intelligence in Litigation’.
In conclusion, whilst the legal world is embracing AI technology, it is important for practitioners to:
- Treat AI as a tool rather than a substitute for real lawyers;
- Always scrutinise the information that AI presents; and
- Always consider whether the use of AI is going to have legal or ethical repercussions.
If your business needs an AI policy or strategy, please get in contact with Eve Panna; we would be happy to assist.
[1] 22-cv-1461 (PKC) (S.D.N.Y. Jun. 22, 2023)
[2] [2024] FedCFamC2F 957
[3] 2024 BCSC 285