The A.I. Revolution - will lawyers repeat the mistakes of others?

Posted on 22 September 2017 by George Michael

Of all the sectors to experience an A.I. revolution, legal is perhaps a surprising candidate given its notoriously traditional tendencies. Nonetheless, legal tech is increasingly being used to take over certain aspects of the work that lawyers do, and like a PG-13 crossover of Terminator and Suits, it is conceivable that some legal roles could soon be replaced by artificial intelligence.


However, lessons can be taken from another sector that has already seen similarly widespread adoption of machine learning technology to enable automation – the cyber-security industry. Cyber-security teams and businesses around the world have embraced automated detection and response approaches because they can be cheaper, easier to manage, and more efficient. So what’s the problem?


The implications of entirely automated security solutions were not fully assessed, and companies are now investing in solutions that are not only less effective than human-led approaches, but also inadequate at defending against modern attackers. Machine learning and A.I.-led solutions have been shown to fail in the face of targeted attacks [1], as a human adversary will consistently find ways to outsmart an automated opponent.


Looking at the prevalence of A.I. adoption, parallels can be drawn between the cyber-security and legal industries. The lessons learnt in the former should be applied in the latter, so the same mistakes don’t have to be made twice: namely, understanding the pros and cons of A.I. in its current state, and using it to supplement a human-led approach rather than to replace it with a fully automated one.


One key strength of A.I. lies in its ability to process, aggregate and analyse large volumes of data, for both security and legal uses. For example, a recent report [2] found that A.I. in the legal sector is having only a “light” effect on areas such as fact-checking, advising clients, and court appearances, whereas it is having a “strong” impact on document review – as expected, given the strengths of the technology.


This initial observation could partly be driven by clients demanding more value from law firms. Clients don’t want to pay for lawyers to do routine work, such as reviewing documents; they want to see the value of the service in more cognitive tasks such as expert and accessible legal advice, case-building, and court representation. These are all areas that are currently beyond the capabilities of A.I.


The security industry’s drive towards automation led to a widespread reliance on technology, pushing people out of the driving seat. While this provided cheap and automated capabilities, the technology has been applied to functions where it cannot effectively replace its human counterparts. Technology has its limitations, and as the legal sector follows the same path, these must be considered. A.I. has already proven that it can play a vital role in augmenting the capabilities of human professionals, but it is not a replacement. At least, for now…


[1] “Machine Learning-Based Security Tools Fail to Detect New Cerber Ransomware Strain”, Apr 2017. https://themerkle.com/machine-learning-based-security-tools-fail-to-detect-new-cerber-ransomware-strain/

[2] “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law”, Dec 2015. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2701092