Post: Technology in the professional toolkit

Check against delivery

Before I begin, I would like to acknowledge the traditional owners and custodians of the land on which we meet today, the Wurundjeri people of the Kulin nation, and pay my respects to their Elders past, present and future. I extend that respect to Aboriginal and Torres Strait Islander people here today.

Thank you for the opportunity to speak to you this evening.

I commend the University of Melbourne Centre for AI and Digital Ethics, and the Ninian Stephen Law Program, for their ongoing commitment to building the capability of the legal profession to respond to the challenges of rapid sociotechnical change.

Sir Ninian was known for his clarity of thinking and his extraordinarily wide-ranging interests. Over 40 years ago – in 1983, addressing a seminar at the Australian National University – he said that we must be ‘skilful in our use of the electronic tools which the information revolution has given us’.[1]

Sir Ninian was right. Let me elaborate.

Introduction
The more technology advances, the more human judgement can be seen as a defining feature of legal practice.

You may have heard the view expressed that technological developments present a challenge to law and its practice.

But I think the view that the legal system struggles to keep pace with the rapid – and, it’s usually assumed, inevitably positive – developments in technology doesn’t do justice to the dynamic ways in which lawyers have adapted to these changes.

In essence, it’s a matter of lawyers using, and continuing to use, critical human and intellectual skills – including, in particular, judgement – to adapt to new circumstances.

In my view, technology is a necessary tool in a lawyer’s toolkit – but it’s not the only one. To put it in a nutshell: the more technology advances, the more it shows that lawyers are going to be with us for some time yet.

And this evening I’d like to explore what that means.

Law and technology
So first, the relationship between law and technology.

Twenty-five years ago, a company called Napster argued that law couldn’t keep up with technology – while its users shared songs illegally. The copyright infringement suits that followed suggested otherwise.[2]

On the other hand, when law does step in, the argument is then made that law and regulation are preventing innovation. But, given the changes we’ve all seen in our own lifetime, I think it’s pretty clear that innovation still happens.

I think if we look closely at the practice of law, we can see that law and technology have a long history of working together – as lawyers exercise judgement and employ critical thinking to adapt to technology’s developments and advances. Let me give a few examples.

I’ll start with copyright law. The ‘idea-expression’ distinction has accommodated a range of innovations – from the printing press, to peer-to-peer networks, through to large language models or LLMs. Although software stretched copyright – and copyright lawyers – for some time, eventually it was incorporated into the regime as a literary work.

The rise of the open-source movement – originally ‘copyleft’ – was in part a philosophical response to the monopolies granted by copyright law. But open-source innovation has thrived – Linux is an example of this. The success of the open-source movement can be partly credited to creative copyright lawyers with a strong grasp of the underlying technologies, who developed innovative licensing regimes – such as the Creative Commons licence.

In recent times, we are seeing this play out with large language models. Both technologists and lawyers are grappling with questions such as: Does training LLMs infringe copyright? Do the fair dealing defences apply? Are the works generated by LLMs protected by copyright – that is, is there sufficient ‘sweat of the brow’ to receive protection? And so on.

These are examples of people – lawyers – seeking to understand new technology and its ramifications, using the skills we expect of lawyering in general: curiosity, open-mindedness and, most importantly, judgement.

Another example of lawyers applying existing principles and skills in the realm of new technology might be found in anti-spam law, and the concept of trespass to chattels.

The invention of the fence was, in its time, an advance in technology. It also gave clarity to legal disputes: if you cross my fence, you’ve trespassed. But when you stand on the other side of my fence and scare my sheep, I go to a lawyer, who makes the case that you are sort of trespassing – you’re trespassing to chattels.

A few centuries pass, and the principle is applied to a new kind of ‘property interference’. That is, the concept of ‘trespass to chattels’ – which originally applied as a claim against someone who didn’t exactly trespass, but messed with your livestock – is later used in court cases against spammers. The argument is that the spammers aren’t quite trespassing – just as a postman isn’t trespassing when he or she comes to your door to deliver a parcel. But, since they are clogging your servers, they are ‘messing with’ your property – that is, trespassing to chattels.[3]

This is not a story of law and technology – it’s a story of lawyers and technology.

So we can see that lawyers can and do apply longstanding principles and employ critical thinking to help our understanding of new technological experiences. But is the opposite also true? Can technology contribute to the practice of lawyering?

I think it’s very clear that it can. And that it does.

The speed and scale of the progress of technology in the law can easily be overlooked. Why? Because many of us take for granted the technology that was once innovative.

I remember when I started as a lawyer in the early 1980s, the most advanced technology in the firm was the telex machine. My firm had one – and, believe it or not, we would share that infrastructure with other firms.

Then, of course, the mighty fax machine turned up, and telex disappeared; and then, the now ubiquitous email arrived. Typewriters were rapidly overtaken by word processors, and all of a sudden, documents became digital.

Then came the Australasian Legal Information Institute, or ‘AustLII’. Most of us today couldn’t imagine a world without it. Yet it was one of the first efforts to digitise and democratise legal resources. It spearheaded the adoption of medium-neutral citation protocols, which enabled the automation of content gathering and retrieval in a way not previously seen – certainly not in any free resource.

And today, many legal practitioners – and law enforcement agencies – increasingly rely on ‘Technology Assisted Review’ or ‘TAR’, particularly in the e-discovery space. TAR processes – such as de-duplication of documents, email threading or a ‘find similar’ function – are as groundbreaking as they are increasingly necessary.

With our ever-growing digital footprints, it is arguable that lawyers must turn to technological advances such as TAR if they are to continue to discharge their duty of due care and skill to their clients.

These are all examples of technology evolving and being adopted by lawyers to improve their practice of the law – and lawyers adapting to changing circumstances.

Technology: challenge or opportunity?
But of course, while technology can very clearly be a benefit, like many things, it can also bring challenges.

New technology brings with it risks that have to be managed – ethical considerations, confidentiality issues, and due diligence – to name but a few. As one lawyer in the United States infamously found out last year, using ChatGPT without independently verifying its ‘findings’ can get you in hot water.[4]

Which brings me to artificial intelligence.

We all see that AI is an astonishing development. The potential benefits to businesses and individuals are enormous – estimated to add an ‘additional $170 billion to $600 billion a year to Australia’s GDP by 2030’.[5]

And AI can bring both challenges and opportunities to lawyers.

I’ll start with the challenges.

First off, let’s be frank: large language models can’t think. They’re language generators, but they do not think – as human beings usually understand the term. They can create extensive, comprehensible, even – mostly – accurate linguistic output. But they lack judgement.

While LLMs can provide detailed answers about how certain ethical frameworks apply in respect of certain decisions, they are themselves without ethics. So while AI might assist in applying logic, it does not – and cannot – apply ethical judgement about which logic to follow or why.

Nor can generative AI balance the ‘best interests’ of the client, or employ critical thinking. For example, if you ask an LLM a question that assumes something untrue, it will take that assumption as true. A human being, on the other hand, may find – through research – that the assumption was false.

This leads us to the much-publicised issue of ‘AI hallucination’. Recent research from Stanford University has shown that ‘legal hallucinations are pervasive and disturbing’, ranging from ‘69% to 88% in response to specific legal queries for state-of-the-art language models. Moreover, these models often lack self-awareness about their errors and tend to reinforce incorrect legal assumptions and beliefs.’

So, while an unscrupulous lawyer might conceivably try to deceive the court, they would do so in full knowledge that they’d made the case up. AI, on the other hand, ‘thinks’ it’s done its job well, simply because it has provided an answer to the question it was asked.

As we’ve seen in the case of the lawyer in the United States, AI hallucinations can be a major risk to an individual lawyer’s case, career and firm. But the irresponsible use of AI poses a broader, more pervasive and potentially more destructive challenge for legal practitioners.

If stories of lawyers using and trusting ‘AI legal hallucinations’ abound, public trust in the legal system – and in the rule of law – is eroded. As professionals, lawyers have a duty to the court, to their profession, to their client and, fundamentally, to the rule of law itself.[6]

So AI can be harmful to lawyers and the legal profession when it is used without bringing to bear a healthy scepticism, a critical eye, and the exercise of judgement.

But what of the opportunities that AI presents? Like the telex machine of old, or the use of TAR in e-discovery, can AI contribute positively to a lawyer’s work?

Well, according to Goldman Sachs research from last year, an estimated 44% of legal work tasks could eventually be automated by AI.[7] A recent survey also showed a strong belief among Australian law firms that, by 2028, AI will be doing at least some legal work.[8]

But – before you rush to consider a career change – let’s look a little closer.

Joint research between Microsoft and the Tech Council of Australia suggests that only 10% of solicitor tasks could be fully automated[9] – the rest would involve AI and human lawyers working together.

So what kind of legal work can AI do? The most obvious is research and review – again, provided there are sufficient checks and balances. In this case, just as a senior lawyer can’t accept blindly whatever a junior gives them, neither should they blindly accept whatever comes out of their LLM of choice.

Nevertheless, AI can be useful for lawyers in research. Another possible use might be in assisting with e-discovery – provided its ‘findings’ aren’t trusted implicitly.

Interestingly, AI can also be used, to an extent, in the assessment of rights. For example, the self-styled ‘AI consumer champion’ donotpay.com, a US-based online legal service and chat-bot, has helped individuals overturn more than 160,000 parking tickets.[10] This might have interesting consequences for improving the decisions of the regulators whose job it is to issue such tickets. If those regulators know that there are freely available tools which can readily check the validity of a ticket, it may be that fewer appealable tickets would be issued in the first place.

Technological literacy
It should be clear by now that technology’s influence on the law is a given. But what that influence is depends on lawyers themselves. Which brings me to my final point – it’s clearly in lawyers’ interests, and in the interests of their clients, for lawyers to stay ‘tech-smart’.

Indeed, some have suggested that legal thinking itself is already moving in this direction. With legal databases becoming more and more a key part of legal practice, it’s argued that ‘lawyers are beginning to think of the law as a collection of facts and principles that can be assembled, disassembled, and reassembled in a variety of ways for different purposes’.[11] This echoes Thomas Carlyle’s lament, nearly 200 years ago, that ‘men are grown mechanical in head and in heart, as well as in hand.’[12]

The point is that only to the extent this ‘mechanising’ of the legal mind is real will AI replace it. If, as I’ve shown, there’s space for technology and lawyers to work together, this is not achieved through any kind of routine mechanical thinking. A machine will always beat a human at being mechanical. And so it should.

Now, with the increased use of technology in law, lawyers need to employ some critical skills – skills they have always used – to operate effectively in this environment.

What are those critical skills? To reiterate, they are:

curiosity and willingness to learn
a healthy scepticism, and
application of judgement.

These are skills that mark out a good lawyer irrespective of any questions about technology. But they are needed more than ever now, in our evolving technological environment.

Much of what is routine, linear, or mechanical can – and will – be replaced by something that excels in the routine, in the linear, in the mechanical. In fact, evidence suggests that LLMs are more efficient and effective at compiling lists of relevant cases than junior lawyers.[13] But, as I’ve said, mechanical research must be tempered by real-world fact-checking.

Ultimately, as AI advances towards matching human prediction and analysis, it only highlights the greater need for that uniquely human skill: judgement.

When Steven Schwartz effectively outsourced his research to ChatGPT in Mata v. Avianca, he was exhibiting an open mind about the potential benefits of AI. And that’s good. But he failed to display a healthy curiosity about it. That meant he didn’t fully understand it, and treated it like a search engine rather than a generative language model. He also failed to show the healthy scepticism that would have led him to cross-check the AI’s output.

So wouldn’t it be safer for lawyers to reject technological developments such as AI? I think not.

Lawyers today clearly need a level of technical proficiency.[14] In fact, some have argued failing to take advantage of technology ‘could be considered unethical, even malpractice, because technology-assisted legal practice can yield substantively better legal results.’[15]

As far back as 2018, it was being suggested by an Australian academic, Dr Michael McNamara from Flinders University, that the evolution of technology:

‘should be welcomed as a natural evolution and innovation in the delivery of legal services (just like the emergence of the modern solicitor and barrister) rather than as a cause for concern. Without a wave of legally and technologically literate graduates, who either join existing law firms with fresh ideas, compete with legal practitioners by operating outside the purview of legal practice, or completely disrupt legal services with innovation, the legal profession risks stagnating’.[16]

In essence, the legal profession’s interaction with emerging technologies should be proactive, strategic and bold.

Lawyers need to be able to use technology – including AI – in order to actually do their job and discharge their duties. And they must have a level of data capability – the expertise to find, analyse, understand, and utilise data.

Now, you might be wondering how this aligns with what I’ve said to market participants on other occasions about the dangers of rushing into innovations such as AI. My response is that our warnings have always been grounded in the risks that arise when there is a lack of appropriate controls or proper governance. And lawyers have always played a key role in governance. Effective supervision and oversight are critical – to ensure technology is used responsibly.

By being proactive, strategic, and bold in their interaction with technology, lawyers can solve problems, not just identify risks.

Conclusion
In conclusion, we must be careful with generative AI but not afraid of it. Technology has, in one form or other, always been here – and the history of law is very often in lockstep with the history of technological development.

Sometimes that means the law helps shape our understanding and experience of technology – other times, it’s technology that changes the legal profession.

In any case, with the ever-increasing use of technology in law, being a lawyer means understanding those changes. Just as the public expects ASIC to be a bold regulator, I believe it also expects the legal profession to be bold in the face of technological advances – particularly if those advances could, eventually, offer greater ‘democratisation’ of access to justice.

At the end of the day, this means all of us in the legal profession asking the right questions, to make sure we understand the benefits – and the harms.

Above all, in a world of AI and big data, being a lawyer means having a range of tools in your toolkit.

So, while technology is very much a necessary tool in that toolkit, being a lawyer also requires responsible oversight of that technology. And it requires – now more than ever – cultivating an open-minded curiosity, a willingness to learn, and a persistent scepticism.

In a word, it requires judgement.

[1] Sir Ninian Stephen, address to a seminar at the ANU Faculty of Law, 27 May 1983 (austlii.edu.au).

[2] See A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004, 1014 (9th Cir. 2001). Napster argued that it did not infringe copyright – its users did. The Court held Napster liable for contributory infringement and affirmed an injunction against it.

[3] See School of Visual Arts v. Kuprewicz, 771 N.Y.S.2d 804 (Sup. Ct. 2003); eBay, Inc. v. Bidder’s Edge, Inc., 100 F. Supp. 2d 1058, 1061 (N.D. Cal. 2000); Ticketmaster Corp. v. Tickets.com, Inc., No. CV997654HLHVBKX, 2003 WL 21406289, at *3 (C.D. Cal. Mar. 7, 2003).

[4] The New York Times, 27 May 2023: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

[5] Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response, p. 4

[6] See Rule 3 of the Australian Solicitors Conduct Rules: “A solicitor’s duty to the court and the administration of justice is paramount and prevails to the extent of inconsistency with any other duty.”

[7] Briggs and Kodnani, ‘The Potentially Large Effects of Artificial Intelligence on Economic Growth’, Goldman Sachs Global Economics Analyst, March 2023: https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf

[8] Macquarie Bank, Legal Benchmarking Report 2024: https://www.macquarie.com.au/assets/bfs/documents/business-banking/bb-legal-industry/legal-benchmarking-report-2024.pdf

[9] Microsoft and the Tech Council of Australia, Australia’s Generative AI Opportunity, July 2023: https://news.microsoft.com/wp-content/uploads/prod/sites/66/2023/07/230714-Australias-Gen-AI-Opportunity-Final-report.pdf

[10] Ethan Wolff-Mann, ‘This Chat Bot Lawyer Has Beaten 160,000 Parking Tickets’, Money, 29 June 2016: https://money.com/donotpay-chat-bot-traffic-ticket-new-york-london/

[11] Allan Hanson, Technology and Cultural Tectonics: Shifting Values and Meaning (Palgrave Macmillan, 2013), p. 131.

[12] Thomas Carlyle, ‘Signs of the Times’ (1829) 49 Edinburgh Review 439; quoted in Penny Crofts and Honni van Rijswijk, Technology: New Trajectories in Law (Taylor & Francis Group, 2021), p. 8.

[13] Lauren Martin, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson and Rivindu Perera, ‘Better Call GPT, Comparing Large Language Models Against Lawyers’: https://arxiv.org/html/2401.16212v1

[14] Michael Murphy, ‘Just and Speedy: On Civil Discovery Sanctions for Luddite Lawyers’ (2017) 25 George Mason Law Review, p. 36.

[15] Penny Crofts and Honni van Rijswijk, op. cit., p. 9.

[16] Michael McNamara, ‘University Legal Education and the Supply of Law Graduates: A Fresh Look at a Longstanding Issue’ (2018) 20 Flinders Law Journal, p. 244.
