Hallucinatory Judgments and Automated Vehicles: AI and the law

07/03/2024

Artificial Intelligence (AI) is not new and is already an integral part of life for most people: it works with Global Positioning Systems (GPS) to show us the best way to get from A to B and powers the countless Alexas and Siris embedded in mobile telephones, TVs and smart speakers. However, more recent developments have brought AI back into the news headlines and raised public awareness of what it can, and cannot, achieve.

Does extreme embarrassment come within the definition of injury? If it does, then AI has already broken the first of the Three Laws of Robotics (namely, for anyone who is not familiar with Isaac Asimov’s work: ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm’).

In Harber v HMRC[1] Mrs Harber, acting in person, appealed to the First-tier Tribunal (Tax Chamber) (FTT) against a penalty issued by HMRC. In her written submissions she relied on nine FTT decisions. It transpired that, unbeknownst to her, the cases were not genuine: they had been generated by the AI used either by Mrs Harber or by her ‘friend in a solicitor’s office’. She lost her appeal – not because of the fabricated cases but on the merits. The FTT emphasised that she was not penalised because of the error, but that will not always be the case. Mrs Harber was a litigant in person and did not know how to cross-check the case names that the AI search results yielded. A lawyer will not escape penalty so lightly.

Imagine the toe-curling position in which two American lawyers found themselves in Mata v Avianca[2], a decision of the United States District Court for the Southern District of New York. The lawyers sought to rely on cases which, it transpired, had been generated by ChatGPT. The Judge held that whilst there is nothing inherently improper about using a reliable AI tool for assistance with drafting court submissions, the lawyers had abandoned their responsibilities by submitting the non-existent judgments. The lawyer who had drafted the submissions admitted that ChatGPT had given him the case names and citations and that he had been unable to find the judgments in those cases when he had searched for them. He had cited the cases nonetheless, ‘operating under the false assumption and disbelief that this website could produce completely fabricated cases’. As is now clear from these two cases, AI can and does produce completely fabricated cases.

The Solicitors Regulation Authority (SRA) published a report on 20.11.23 entitled ‘Risk Outlook report: the use of artificial intelligence in the legal market’, which highlights that AI language models such as ChatGPT can be prone to making mistakes because they predict the text that should follow a particular input but have no concept of ‘reality’. The report describes the result as ‘hallucination’: where a system produces highly plausible but incorrect results.

It is interesting to note that in Mata the lawyer had asked the chatbot to ‘provide case law in support [of his case position]’. Was it this closed question – which suggested the desired answer – that prompted the obliging AI to hallucinate the required cases? Would more accurate results be achieved by asking open questions? The conclusion must now be that any cases or statements of the law obtained from generative AI must be treated with extreme caution and cross-checked for accuracy and authenticity.
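What might such a cross-check look like in practice? The sketch below (in Python) is illustrative only: the hard-coded ‘verified’ set stands in for a search of an authoritative source such as The National Archives’ Find Case Law service, and the second citation is an invented placeholder.

```python
# Illustrative only: flag AI-supplied citations that cannot be verified.
# VERIFIED_CITATIONS is a hard-coded stand-in for a search of an
# authoritative source; a real workflow would query an official database
# rather than a local list.

VERIFIED_CITATIONS = {
    "[2023] UKFTT 01007",  # Harber v HMRC, a genuine decision cited above
}

def cross_check(citations):
    """Report whether each citation can be matched to a verified source."""
    for citation in citations:
        if citation in VERIFIED_CITATIONS:
            print(f"verified:   {citation}")
        else:
            print(f"UNVERIFIED: {citation} - do not cite without checking")

# One genuine citation and one invented placeholder:
cross_check(["[2023] UKFTT 01007", "[2099] EWHC 12345 (QB)"])
```

The code itself matters less than the discipline it encodes: verification must come from a source independent of the tool that produced the citation – precisely the step omitted in Mata.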

Although the AI-generated cases were initially innocently (if carelessly) used by the lawyers in their submissions, when they were asked to provide further details of those cases they compounded the error by confirming to the court that the cases were authentic. What research did they carry out to give themselves that reassurance? They asked ChatGPT: “are the cases you provided fake?”. ChatGPT, presumably following Asimov’s Third Law (‘a robot must protect its own existence’), confirmed that it had supplied ‘real’ authorities. It may come as no surprise to learn that at the subsequent show cause hearing the lawyers were ordered to pay a fine of $5,000 to ‘advance the goals of specific and general deterrence’.

One would hope that a further conclusion to be drawn from these cases is that concerns that lawyers’ jobs will be replaced by AI are greatly exaggerated. Whilst AI can take data and generate plausible answers, its ability to analyse legal concepts and, for example, prepare an argument that a line of authority has been wrongly decided, cannot (yet) match that of humans. In Mata the judge observed that the AI-generated cases showed stylistic and reasoning flaws, that the legal analysis was ‘gibberish’ and that much of the content ‘borders on nonsensical’. It is not setting the bar too high to suppose that lawyers aim for a better result than that.

The Judge in Mata highlighted the harm that can flow from the submission of ‘fake’ judgments, including the waste of time and money, the harm to the reputation of judges whose names are falsely attributed to such judgments, the promotion of cynicism about the legal profession and the judicial system, and the possibility of tempting future litigants to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

Guidance is available to legal practitioners keen to avoid falling into an AI-laid trap. In addition to the SRA report cited above, on 12.12.23 the Lady Chief Justice, the Master of the Rolls, the Senior President of Tribunals and the Deputy Head of Civil Justice issued guidance to the courts and tribunals judiciary concerning their use of AI. The guidance includes advice on how to minimise the inherent risks of using such technology, such as potential bias.

Similarly, on 30th January 2024 the Bar Council issued guidance to members of the Bar on the appropriate use of ChatGPT and other forms of generative artificial intelligence. The guidance (titled ‘Considerations when using ChatGPT and generative artificial intelligence software based on large language models’[3]) highlights problems that might arise from its use (such as hallucinatory judgments) and the need for lawyers to continue to exercise independent judgment when using the technology, particularly so as to maintain confidentiality and legal professional privilege.

Thanks to AI, the job security of drivers in California is not so assured. In 2022 General Motors’ driverless car division (Cruise) obtained permission (from the California Public Utilities Commission and the Department of Motor Vehicles, which share regulatory responsibility) to operate its robotaxis in San Francisco. Their operation was initially limited to carrying Cruise employees and friends, but in August 2023 permission was extended to carrying paying passengers at any time of day. The decision was a contentious one, opposed by many, including residents with safety concerns, drivers’ unions fearing job losses, and fire and police departments concerned about obstruction of emergency vehicles.

The objections were based on experience of the robotaxis to date. Residents had experienced driverless cars malfunctioning and blocking roads, and more serious incidents had occurred. In August 2023 a passenger in a Cruise driverless taxi was injured when the taxi was struck by a fire engine on an emergency call-out, with lights and siren activated, as it crossed a road junction. The response from Cruise to the accident indicates that its AI programming did cover how to respond to the presence of emergency vehicles, but the driverless car was unable to brake and stop in time to avoid the collision. In October 2023 Cruise’s licence to operate was suspended following an accident in which a pedestrian was dragged 20 feet along the street by a driverless car. The pedestrian had been thrown into the path of the driverless car after being struck by another vehicle; the driverless car initially stopped but then drove a further 20 feet in what Cruise described as a safety manoeuvre to avoid causing a hazard to other vehicles. This was clearly not the way in which a human driver would have responded to the collision, but as the initial accident was not one that had been anticipated by the AI programmers, the driverless car did not have the capacity to respond appropriately.

In Great Britain, Part 1 of the Automated and Electric Vehicles Act 2018 (“AEVA”), which makes provision in relation to automated vehicles, came into force on 21st April 2021 and provides a strict liability regime applicable to automated vehicles. Where an accident is caused by an automated vehicle driving itself on a road or other public place in Great Britain and the vehicle is insured at the time of the accident, the insurer is liable to the insured person or to any other person who suffers damage as a result of the accident. The aim of AEVA is to avoid claimants having to run complex and costly product liability claims against manufacturers of automated vehicles. Section 8 defines ‘driving itself’ as ‘operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual’, which leaves room for argument as to whether the driver needed to monitor the car. To aid interpretation of s8, s1 requires the Secretary of State for Transport to maintain a list of self-driving cars. The stated aim of the list is to inform consumers and the insurance industry which vehicles require automated vehicle insurance. It seems likely that insurance premiums for owners of automated vehicles will be higher than for vehicles without that function.
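As summarised above, the strict liability test reduces to a conjunction of conditions. Purely as an illustrative aide-mémoire – a loose paraphrase of the regime as described in this article, not a statement of the law – that structure can be expressed in code as follows.

```python
# Illustrative decision logic only: a loose paraphrase of the AEVA s2(1)
# conditions as summarised above, not legal advice or a complete
# statement of the Act.
from dataclasses import dataclass

@dataclass
class Accident:
    automated_vehicle: bool       # vehicle of a type listed under s1
    driving_itself: bool          # s8: not being controlled, and not needing
                                  # to be monitored, by an individual
    road_or_public_place_gb: bool # on a road or other public place in GB
    insured: bool                 # vehicle insured at the time of the accident

def insurer_liable(a: Accident) -> bool:
    """Insurer liable to the injured person where all conditions hold."""
    return (a.automated_vehicle and a.driving_itself
            and a.road_or_public_place_gb and a.insured)

print(insurer_liable(Accident(True, True, True, True)))   # True
# Driver in control, so not 'driving itself': strict liability falls away.
print(insurer_liable(Accident(True, False, True, True)))  # False
```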

In the same way that the Government website allows you to easily renew road tax or check state pension age, anyone can check whether a vehicle is listed as self-driving for use in Great Britain[4]. As at 1st March 2024, the list says: ‘At present, there are no self-driving vehicles listed for use in Great Britain’. AEVA therefore currently has no application (insofar as it relates to automated vehicles). A rough programmatic check of the list is sketched below.

Whilst driverless cars do not yet operate in the UK, there are already cars on the road that use automated systems: parking assist, for example, where the car steers automatically into a parking space. Such systems require no steering input from the driver, but the driver remains in control of the car and operates the accelerator, brake and clutch. Should an accident occur whilst a driver was using such an automated system, AEVA would not assist the claimant. APIL (the Association of Personal Injury Lawyers) is lobbying for a wider scope of strict liability to be included in the Automated Vehicles Bill, which was debated in the House of Lords at the end of 2023.
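As trailed above, the list can be checked programmatically as well as in a browser. The sketch below (Python, standard library only) fetches the guidance page at the address in footnote [4] and looks for the wording quoted above; it assumes that wording has not changed, so a failed match means only that the page should be checked manually.

```python
# Rough sketch: fetch the GOV.UK guidance page and look for the statement
# that no self-driving vehicles are currently listed. The exact wording
# may change over time, so treat the result as indicative only.
import urllib.request

URL = ("https://www.gov.uk/guidance/"
       "self-driving-vehicles-listed-for-use-in-great-britain")

request = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(request) as response:
    page = response.read().decode("utf-8")

if "no self-driving vehicles listed for use in Great Britain" in page:
    print("List is empty: AEVA's automated vehicle regime remains dormant.")
else:
    print("Expected wording not found - check the page manually.")
```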

AEVA does not provide any mechanism for approving which automated vehicles are safe to use: that will be determined by future regulation, most likely based on international standards. Sections 2(7) and 5 reserve the right of insurers to recover from parties responsible for an accident and s3 preserves the application of contributory negligence principles to apportionment of liability. Section 4 allows insurers to exclude or limit their liability to the insured where the vehicle’s software has been altered by that person, or that person has failed to install safety-critical software updates. Section 6 ensures that claims under the Fatal Accidents Act 1976 can be brought against the insurer: accidents involving automated vehicles are deemed to be due to ‘wrongful act, neglect or default’ for the purposes of the FAA.

The Highway Code has already been updated to cover automated vehicles. In respect of technology that already exists and is used on the roads, rule 150 reminds drivers that they must exercise proper control of their vehicle at all times and that driver assistance systems such as motorway assist, lane departure warnings, or remote control parking are there to assist, but the driver should not reduce their concentration levels.

Reference to ‘self-driving vehicles’ (namely those listed as automated vehicles by the Secretary of State for Transport under AEVA) is also included. The Code states: ‘while a self-driving vehicle is driving itself in a valid situation, you are not responsible for how it drives. You may turn your attention away from the road and you may also view content through the vehicle’s built-in infotainment apparatus, if available… If a self-driving vehicle needs to hand control back to the driver, it will give you enough warning to do this safely. You must always be able and ready to take control, and do it when the vehicle prompts you. For example, you should stay in the driving seat and stay awake. When you have taken back control or turned off the self-driving function, you are responsible for all aspects of driving’. It remains the case that the occupant of a self-driving car must still follow all relevant laws: they must be fit to drive (i.e. within the drink-drive legal limits and not under the influence of drugs) and must not do anything illegal ‘like using a handheld mobile phone, or similar hand-held device’.

This regime is a far cry from the role of a passenger in a driverless taxi in San Francisco and highlights the difficulty of legislating and making provision for technology that is not yet available in Great Britain. As that technology develops, we are likely to see many further iterations of AEVA.


[1] [2023] UKFTT 01007

[2] 22-cv-1461 (PKC)

[3] https://www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf

[4] https://www.gov.uk/guidance/self-driving-vehicles-listed-for-use-in-great-britain

Featured Counsel: Linda Nelson (Call 2000)
