I. Introduction
- This submission responds to the Federal Court’s Notice to the Profession dated 29 April 2025, which flagged that the Court is considering issuing a practice note or guidelines regarding the use of generative AI. The purpose of this submission is twofold:
- To set out the baseline understanding of generative AI reflected in guidance already issued by courts across Australia.
- To explain what can be done differently, showing that AI systems need not be limited to probabilistic text generation but can instead be designed to reflect law's discipline: verifiable, auditable, and structured according to legal method (a brief illustrative sketch follows this list).
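- To make the second purpose concrete, the following is a minimal, purely illustrative sketch. The names used (such as Proposition and KNOWN_AUTHORITIES) are hypothetical, not a description of any existing system; the sketch simply shows one way "verifiable and auditable" can be realised in software: every generated proposition carries its claimed authority, a verification status, and a timestamped audit record, and unverified material is never silently passed through.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical register of authorities confirmed against a primary source
# (e.g. an authorised report series or court database).
KNOWN_AUTHORITIES = {
    "Smith v Jones [2020] FCA 1",
}

@dataclass
class Proposition:
    """A single generated statement, tied to its claimed authority."""
    text: str
    citation: str
    verified: bool = False
    audit_log: list = field(default_factory=list)

    def verify(self) -> None:
        # Record the check itself, not just its result, so the process
        # can be reviewed after the fact (an audit trail).
        checked_at = datetime.now(timezone.utc).isoformat()
        self.verified = self.citation in KNOWN_AUTHORITIES
        self.audit_log.append(
            f"{checked_at}: citation '{self.citation}' "
            f"{'confirmed' if self.verified else 'NOT found'} in register"
        )

def assemble_submission(propositions: list[Proposition]) -> list[Proposition]:
    """Return only propositions whose citations were verified; flag the rest."""
    for p in propositions:
        p.verify()
    unverified = [p for p in propositions if not p.verified]
    if unverified:
        raise ValueError(
            f"{len(unverified)} proposition(s) rest on unverified citations "
            "and require human review before filing."
        )
    return propositions
```

- The point is not the particular code but the design discipline it represents: the system records what it relied on and whether that reliance was checked, rather than producing unattributed text.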
II. National Baseline Understanding
- Across jurisdictions, courts have converged on a baseline understanding of generative AI:
- Large language models (LLMs) are probabilistic text generators that produce output by predicting the likely next word (see the illustrative sketch following this list).
- They do not reason in a human or legal sense.
- They are prone to hallucinations, including the citation of non-existent cases.
- Their processes are opaque and leave no audit trail.
- They conflate fact, inference, and opinion.
- Human verification of all citations is essential.
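- The following is a deliberately simplified, illustrative sketch of the first point above. It does not describe any particular model; real systems learn their probabilities from vast text corpora and condition on the entire preceding context. It shows only the core mechanism the courts' guidance describes: the system assigns probabilities to candidate next words and samples one, so the output is a statistically plausible sequence rather than a statement retrieved and checked against an authoritative source.

```python
import random

# Toy, hand-written probabilities for illustration only.
NEXT_WORD_PROBABILITIES = {
    ("the", "court"): {"held": 0.45, "found": 0.30, "noted": 0.15, "dismissed": 0.10},
    ("court", "held"): {"that": 0.80, "the": 0.12, "it": 0.08},
}

def predict_next_word(context: tuple[str, str]) -> str:
    """Sample the next word from a probability distribution over candidates."""
    candidates = NEXT_WORD_PROBABILITIES.get(context)
    if candidates is None:
        return "<unknown>"
    words = list(candidates)
    weights = list(candidates.values())
    # The choice is probabilistic: plausible-sounding, not checked for truth.
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation word by word.
sequence = ["the", "court"]
for _ in range(2):
    sequence.append(predict_next_word((sequence[-2], sequence[-1])))
print(" ".join(sequence))  # e.g. "the court held that"
```

- Nothing in that process consults a law report or confirms that a cited case exists, which is why the final point, human verification of all citations, remains essential.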