The Supreme Court of New South Wales has introduced Practice Note SC Gen 23 (“Practice Note”), which came into effect on 3 February 2025, setting out strict guidelines for the use of Generative Artificial Intelligence (“AI”) in legal proceedings. This move comes as courts grapple with the challenges and opportunities AI presents in legal practice, particularly in the drafting of affidavits and other court documents. As a legal practitioner, I see this as an important step in maintaining the integrity of court proceedings while acknowledging the growing role AI plays in the profession.

While AI can assist in legal research and drafting, its limitations—such as the risk of fabricated case citations being relied upon without verification, or the misrepresentation of evidence—make regulation essential.

The purpose of this article is to explain the limitations of AI and the steps lawyers should take when using AI-generated information to draft or prepare legal documents, particularly in litigation.

The integration of AI into legal practice is rapidly transforming the way lawyers handle data, presenting both opportunities and challenges. One of the most crucial concerns is the potential breach of confidentiality. Lawyers are bound by strict ethical obligations to maintain client confidentiality, a principle enshrined in legal codes of conduct across jurisdictions.

When lawyers use AI tools that analyse or store client information, especially cloud-based solutions, there is a heightened risk of unauthorised access or data breaches. Unlike traditional manual processes, where human oversight provides an additional layer of security, AI systems might lack the nuanced judgment of a human operator, leading to inadvertent sharing or leakage of sensitive information.

Courts are increasingly scrutinising the use of AI in legal practices due to concerns about fairness, transparency, accountability, and the protection of fundamental legal principles.  The integration of AI into the legal field, while offering significant advantages in terms of efficiency, data processing, and decision-making support, raises several critical issues that Courts are beginning to address more actively.

Key Rules Under Practice Note SC Gen 23

The Practice Note strictly prohibits the use of AI in preparing affidavits and witness statements because these documents must authentically reflect a deponent’s own recollection and version of events. A solicitor risks breaching their duties if they:

  • use AI to rephrase or “polish” a witness’s testimony;
  • allow AI-generated content to influence a witness’s recollection; or
  • fail to disclose AI involvement in drafting sworn statements.

The Practice Note sets out specific guidelines for the use of AI in legal proceedings, which include but are not limited to the following:

Mandatory Disclosure of AI Use: Any affidavit or witness statement must include a declaration that AI was not used in its preparation.

Written Submissions: Lawyers may use AI to draft written submissions, summaries, or skeleton arguments; however, they must exercise caution by independently verifying all case citations and legal references.

Expert Reports – Prior Court Approval Required: If an expert intends to use AI in drafting reports, prior approval from the court is required. There is an obligation on experts to disclose the extent of AI assistance, including what prompts were used and what sections were AI-generated. This ensures expert opinions remain genuinely their own, rather than AI-generated interpretations of data.

Data Privacy and Confidentiality Obligations: Lawyers must not input confidential, privileged, or sensitive data into AI platforms unless they are certain that:

  • The data will remain secure.
  • It won’t be used to train the AI model.
  • It won’t breach court suppression orders or client confidentiality.

Many AI platforms process and store data externally, raising risks around data security and unauthorised disclosure.

What This Means for Legal Practitioners

These rules reinforce the responsibilities that legal practitioners already have under professional conduct rules and the Uniform Law:

Due Diligence: Lawyers must personally verify the accuracy of all AI-generated legal references.

Ethical Compliance: The use of AI doesn’t relieve a solicitor of their professional obligations to the court.

Training & Awareness: Firms will need to educate staff on how AI should (and should not) be used in legal practice.

As a practitioner who has seen firsthand how AI can be both a useful tool and a potential liability, I believe its application in the legal field must be carefully controlled to avoid jeopardising the integrity of our justice system and breaching our obligations under the professional conduct rules.

Final Thoughts

Practice Note SC Gen 23 provides much needed clarity on how AI should be used in litigation. While AI can enhance efficiency, its unregulated use in evidence-based documents poses real risks.

Lawyers must ensure that any AI tools they use comply with strict confidentiality standards. It’s essential to evaluate whether the AI system protects sensitive client data and aligns with privacy laws. Unauthorised data access or breaches can lead to legal consequences, loss of client trust, and professional sanctions.

While AI can enhance efficiency, lawyers must recognise that it is not infallible. AI should be used as a support tool rather than a replacement for human judgment. Lawyers must critically assess AI-generated recommendations and always apply their expertise to ensure ethical and legally sound decisions are made.

The legal landscape surrounding AI is evolving rapidly. Lawyers should stay updated on relevant laws, regulations, and ethical guidelines for the use of AI in the legal field. This includes understanding jurisdiction-specific regulations (in particular the Practice Note) and participating in discussions about best practices in the legal tech space.

Lawyers must understand who is accountable when AI tools are used in legal decision-making or analysis. If an AI tool makes an error, it’s important to clarify whether responsibility lies with the lawyer, the AI developer, or another party. Clear contractual agreements with AI providers can help define liability and accountability in case of an error.

Before using AI tools that process client data, lawyers should be transparent with their clients about the use of AI and obtain informed consent.  Clients should understand how their data will be handled, what the AI system will be used for, and the potential risks involved.

Lawyers must ensure that the use of AI does not compromise their ethical obligations, such as duties to act in the best interests of clients, uphold justice, and avoid conflicts of interest. AI should not undermine the integrity of legal processes or decision-making.

As lawyers, we must adapt to this new landscape responsibly, ensuring that technology enhances our work without compromising the fundamental principles of justice.  AI is a tool—not a replacement for human legal expertise.
