The Human Judgment Behind AI Law: Laina Chan on Keeping Lawyers in the Loop

In a recent Xraised interview, host Myles spoke with barrister and CEO Laina Chan about the intersection of law and artificial intelligence, and about why judgment, ethics, and storytelling still define great lawyers even in an algorithmic age.

When asked whether AI can now retrieve and rank every precedent in seconds, Chan made an important distinction: ranking is not reasoning. “AI doesn’t decide what matters,” she explained. “It follows whatever algorithm a human gives it.” While technology can process data faster than any human, she reminded listeners that speed is not wisdom. “It’s not instant, and even when it is, you still need to know what to do with the result.”

She described lawyering as an art that transcends technical knowledge. “Being a lawyer is much more than understanding the law theoretically,” she said. “When you come out of law school, you really know nothing. You learn through trial and error, by trying, failing, and understanding what judges accept and what they don’t.” Experience teaches what theory cannot: which arguments have a chance of being accepted, and which are too far a stretch for the court. “Judges don’t want to be overturned on appeal,” she added. “That kind of judgment can’t be programmed.”

Chan emphasized that understanding the law is only half of the craft; applying it in context is what separates good lawyers from great ones. “AI can identify every clever argument, but it can’t tell you which one will work,” she said. “It can’t read the room, gauge the court’s tone, or sense when to hold back. That’s human territory.”

Her company, MiAI Law, is designed with that philosophy in mind. The platform automates research and contract review but deliberately keeps lawyers in control. “I’ve designed MiAI Law not to replace human judgment,” Chan said. “It identifies issues, provides reasoning and case authority, but the lawyer decides what aligns with the client’s goals.” Each contract analysis, for instance, shows clause-by-clause concerns, explains the reasoning, and cites authorities, yet the ultimate decision rests with the lawyer.

To illustrate, Chan shared a recent example from her own work. “I had to draft an agreement and uploaded it into MiAI for review. The tool flagged several clauses and suggested alternatives,” she explained. “But a lawyer’s role is not to accept every suggestion blindly. For example, a restraint of trade clause might not be enforceable unless it’s reasonable. You have to consider the client’s objective, maybe limit it to six months instead of twelve. That’s judgment. The AI can highlight risks, but it’s the lawyer who determines what’s acceptable.”

This approach, she added, preserves accountability and safeguards against professional negligence. “If lawyers hand over all responsibility to AI, we’ll see a massive increase in malpractice cases,” she warned. “A properly trained lawyer must still think critically, review outcomes, and make decisions.”

Ethical responsibility is built directly into MiAI Law’s framework. Each report includes direct citations and hyperlinks to supporting legislation and case law. The system can even detect if users fail to check their references, prompting them to review their sources. “It’s about responsible AI,” Chan said. “We can’t force lawyers to check, but we can encourage them to. The idea came from a friend at Google Asia, who said, ‘You can make them want to check.’” She took that advice to her developers, creating an accountability feature that reflects her belief in ethical technology.

Chan also addressed the limitations of AI in the courtroom. “Trials depend on human judgment,” she said. “A judge observes witnesses, reads their body language, senses hesitation or confidence. Those are cues no algorithm can fully interpret.” While facial recognition can detect micro-expressions, it cannot comprehend the theatre of the court, the interplay of counsel, evidence, and instinct honed over years of experience. “Maybe one day AI will come close,” she mused, “but it would have to replicate human intuition itself.”

The discussion also touched on the evolving capabilities of large language models. “Every AI model uses a mixture of experts,” she explained. “You never know exactly which sub-model you’re accessing, and some are better than others. The same query can yield slightly different results. That’s why human oversight is essential.”

According to Chan, MiAI Law is not about changing what lawyers do, but how efficiently they can do it. “We’re increasing productivity,” she said. “We help uncover authorities you might have missed under tight deadlines, but the essence of lawyering remains the same: reasoning, persuasion, and judgment.”

For Chan, what separates great advocates from the rest isn’t access to better tools, but the ability to use them wisely. “AI can give you memory,” she said, “but only humans can give it meaning. Memory preserves law; judgment progresses it.”

She cautioned that lawyers who rely entirely on AI risk making themselves obsolete. “If you let AI do your whole job, you’re showing that you bring no additional value,” she said. “The ones who thrive will be those who use AI to amplify their thinking, not replace it.”

Ultimately, Chan sees AI as an assistant, not an authority. “AI can process precedent,” she concluded, “but only humans can persuade. It’s a tool; it doesn’t replace the art of lawyering.”
