Insight from Peter Caldwell - Will Artificial Intelligence result in artificial disclosure?

As we await the results of the disclosure review of criminal cases, intended to address what the outgoing DPP described as “deep-rooted and systemic disclosure issues”, the Serious Fraud Office (SFO) has announced that with immediate effect it will permit the use of Artificial Intelligence (AI) in all of its new casework. For the SFO, the use of AI might be considered a logical response to the pressure of managing digital data, but it is a step change of approach and brings significant risks of injustice to the trial process.

Fraud cases have always been “document-heavy”, but the growth in digital data in recent years has been exponential. It is relatively easy to seize large amounts of data (regularly measured in terabytes), but more difficult to ensure an effective review of that data for material that may assist the accused. While the collapse of a series of rape trials highlighted disclosure failures in relation to data stored on personal electronic devices, the implications for the review of digital data in corporate investigations are of a wholly different order.

Automated document analysis is nothing new. Prosecution agencies (as well as defence lawyers) have been using “key word” searches to trawl large volumes of digital material for many years. The proposed use of AI, however, is a departure. In the past, decisions on disclosure of relevant material, whether by way of a physical sift or the use of electronic searches, have always been made by a lawyer or investigator. The purpose of deploying AI is to move beyond defined word searches and permit the robot prosecutor an element of discretion in the conduct of a search; to “learn” from the process and to apply what it has learned in deeper searches of the material. By this means AI can be used to search for and group data thematically. This is not merely matching like for like, but making judgment calls.
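To illustrate the distinction in the simplest terms, the sketch below contrasts a lawyer-defined keyword search with a basic machine-learning step that groups documents by statistical similarity. It is a minimal illustration only: the library (scikit-learn), the sample documents and the parameters are assumptions chosen for the example, and nothing here purports to reflect the platform the SFO actually uses.

```python
# Minimal illustration (not the SFO's tooling): a fixed keyword search
# versus unsupervised thematic grouping of documents.
# Assumes Python with scikit-learn installed; the documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Invoice approved for payment to the agent in Jakarta",
    "Board minutes: discussion of intermediary commissions",
    "Email: please route the consultancy fee via the usual account",
    "HR notice about the summer holiday rota",
]

# 1. Traditional review: hits are defined entirely by terms a lawyer chose.
keywords = {"agent", "commission", "fee"}
keyword_hits = [d for d in documents if any(k in d.lower() for k in keywords)]

# 2. Machine-learning review: documents are grouped by textual similarity,
#    with no human-defined search terms at all.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

print("Keyword hits:", keyword_hits)
for doc, cluster in zip(documents, clusters):
    print(f"cluster {cluster}: {doc}")
```

The contrast matters for disclosure: the first approach can be recorded and explained simply by listing the search terms used, whereas the second produces groupings for which no search term was ever specified, which is precisely the transparency problem discussed below.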

The use of AI is now commonplace in corporate internal investigations, where platforms using natural language processing and machine learning can assist in identifying fraud and misconduct quickly. Pressure of time (as well as the cost of resources) can be particularly acute where there is an obligation to make a report to a regulator or to trace and freeze stolen assets. These are considerations which place a premium on a corporate getting the best answer it can in the shortest time possible. From an investigator’s point of view, the search for the needle should not be deterred by the size of the haystack.

The obligations on a prosecutor, however, are not the same as for a corporate conducting an internal investigation.  The duty of the prosecutor is to get it right, and to err, where the judgment call is close, on the side of disclosure to the accused.

The use of AI had been “piloted” by the SFO in its investigation of Rolls-Royce Plc, where some 30 million documents were submitted to automated document analysis to identify material that might attract Legal Professional Privilege (LPP). Though hailed as a success by the SFO, the Rolls-Royce investigation was not a typical criminal case. It was not a test of disclosure in the context of adversarial litigation, but a collaborative effort by both parties towards a deferred prosecution agreement (DPA). For those purposes Rolls-Royce, as the company under investigation, had itself provided the keys to its own warehouse of material. It had access to its own data in a way that most defendants accused of economic crime do not. The investigation had in very large part been conducted, and voluntarily revealed to the SFO, by Rolls-Royce itself. The DPA approved by Sir Brian Leveson PQBD did not, by its very nature, involve adversarial proceedings against individual defendants. The use of AI was restricted to a search for items that might be subject to LPP, a task that would previously have been conducted by independent disclosure counsel.

Would the use of AI be considered such an unqualified benefit in a contested prosecution of serious economic crime? The SFO evidently considers it will benefit the prosecution, but will it assist the defence? In such cases an accused’s defence is likely to be derived not from the material relied on by the prosecution, but from the surrounding emails and notes which, in ordinary business life, provide the context and very often the evidence capable of explaining alleged misconduct. It is questionable whether AI will help or hinder the process of locating that relevant material.

The rules underpinning disclosure in document-heavy cases have been revisited on a number of occasions. Historically, the Courts have expressed concern that the defence should not be given the “keys to the warehouse door”, a phrase derived from the physical storage of unused material. This dictum reflected a policy decision based on cost and manageability: that the defence should not be permitted to make unnecessary use of time and resources. It cannot have been intended to confer a proprietorial advantage on the prosecution.

Though usually a step or two behind the development of technology and the growth of data sources, guidance given in the disclosure protocols has emphasised the need for transparency in the process, while allowing prosecutions to be manageable.  There is inevitably a tension between these two principles.

In R. v. R [2016] 1 WLR 1872,  the Court of Appeal reviewed the approach to disclosure that should be taken in such cases, emphasising the principle that “the burden of disclosure should not render the prosecution of economic crime impractical.”  One consequence of this principle is that the prosecution’s obligation to account for its handling of unused material has been reduced.  Thus, the 2013 Guidelines qualify the requirement to keep a “record or log” of all digital material seized as a duty to record only the “strategy and the analytical techniques used to search the data”.  Similarly, the scheduling duty imposed on the disclosure officer separately to list each item of unused material is modified in favour of block listing the search terms used and any items of material which might satisfy the disclosure test.

Central to the decision in R. v. R. was the principle of transparency in the conduct of the prosecution’s review. The prosecution must explain what it is doing and what it will not be doing at this stage, ideally in the form of a “Disclosure Management Document” (DMD). This document, as recommended by the Review and the Protocol, is intended to clarify the prosecution’s approach to disclosure (for example, which search terms have been used and why) and to identify and narrow the issues in dispute.  

Whereas the Court of Appeal contemplated the use of word searches based on defined terms about which the defence could be informed, it did not have in mind the development of technology that would permit autonomous searching. The change from sift by word-search to sift by algorithm is not merely quantitative (more and faster), but qualitative. It marks a wholly different approach to reviewing material.

The fact that the defence is informed that AI is being used does not avoid the unfairness or lack of transparency inherent in its use. Although a prosecutor may inform the defence of the nature of the review the AI platform was tasked to perform, the prospect that the defence will have an opportunity to check that process or hold it to account seems remote.

Given the carte blanche the SFO has awarded itself following the Rolls-Royce case, it is unlikely that the use of AI will be limited to post-charge disclosure reviews; rather, it will inform the strategy of the investigation and decisions to prosecute from the very first. The clear implication is that an AI platform will be tasked to review the material with a particular object in mind. Prosecutors directing AI platforms in this way, rather than exploring all reasonable lines of enquiry, will be happy to have their worst suspicions confirmed. These algorithms necessarily look for patterns in the data, and serious errors of bias can occur. Far from offering transparency, the process of AI decision-making is likely to be opaque: there may be an input of data and an output of analysis, but no reasons given for the decision the robot has made. Informed IT specialists have noted that there is substantial scope for oversight and confirmation bias in a decision-making process that is not truly accountable.

The use of AI is likely to diminish transparency and increase the risk of injustice in criminal cases. Moreover, it renders any injustices that may arise so much harder to identify and redress. It must be recalled that, while prosecutors and defence lawyers have a duty to manage disclosure issues professionally, the process is at heart not collaborative but adversarial. If, however, the use of AI is sanctioned, then fairness surely requires equality of arms. At the very least this should permit the defence to participate and have access to the AI platforms – if not the keys to the warehouse door, then at least the codes.