Will Artificial Intelligence Reduce Clinical Negligence?

Artificial Intelligence (“AI”) has long been tipped to transform our world, and it will change the nature of many jobs as machines come to complement the human workforce. With the partial automation of tasks, many responsibilities will be reconfigured so that a human touch is no longer needed.

Recently, a fully-autonomous robot has successfully performed keyhole surgery on pigs – without the guiding hand of a human surgeon. Apparently, the robot surgeon produced “significantly better” results than its human counterparts. The surgery has been described as a “breakthrough” and is another step towards the day when fully autonomous surgery can be performed on human patients. (https://www.theguardian.com/technology/2022/jan/26/robot-successfully-performs-keyhole-surgery-on-pigs-without-human-help) 

But does AI have the capability to reduce incidences of clinical negligence for the NHS, and will that mean fewer people being unnecessarily injured, or dying, in a hospital setting?

A quick overview of the law on clinical negligence: to pursue a claim successfully, a person must clear two legal hurdles. Firstly, the treatment complained of must have amounted to a “breach of duty” – that is, it was so poor that no reasonable body of medical opinion would have considered it reasonable or normal. Secondly, that breach of duty must have caused the person to suffer injury (“causation”).

Going back to the pig surgery, the Smart Tissue Autonomous Robot (STAR) carried out laparoscopic surgery to connect two ends of an intestine in four pigs. This process of connecting two ends of an intestine (“anastomosis”) is a highly technical and challenging procedure in gastrointestinal surgery, requiring a surgeon to apply sutures with a high degree of accuracy and consistency. Whilst anastomotic leaks can occur naturally or non-negligently, one misplaced stitch, or poor technique, can result in a leak that could lead to the patient suffering fatal complications. Thus, breaches of duty arising from anastomotic leaks are, sadly, quite commonplace.

In contrast, according to a paper published in Science Robotics, the STAR robot excelled at the robotic anastomosis, with the resultant suturing reported to be better than that produced by human surgeons.

On this basis, it is easy to see the potential to revolutionise surgery, and for robots to reduce the incidence of harm caused by human error and avoidable complications, such as a missed stitch or an untoward hand tremor. This, naturally, is a good thing, and would correspondingly reduce the number of claims made against the NHS.

However, a word of caution: we have been here before.

AI has been touted as the saviour of the medical profession before. Back in February 2016, Google’s AI subsidiary, DeepMind, announced it was working with NHS Trusts to analyse patient data. The company intended to combine AI and machine learning with bulk medical data to develop models that could predict or diagnose acute kidney injury. However, issues arose around patient confidentiality: in 2017, the UK Information Commissioner’s Office found that the arrangement with DeepMind Health (later a division of Google Health) had not complied with UK data protection law (https://www.cnbc.com/2017/07/03/google-deepmind-nhs-deal-health-data-illegal-ico-says.html).

Similarly, in February 2020, Google Health, the branch of Google focused on health-related research, clinical tools, and partnerships for health care services, claimed that its AI models could “beat” humans when interpreting mammograms and detecting breast cancer. However, as studies have found, you can show the same early-stage lesions to a group of doctors and get completely different interpretations of whether the lesion is cancerous or not. Even if the doctors do agree on what a lesion shows — and their diagnoses are actually correct — there is no way of knowing whether that cancer will prove to be fatal. This leads to over-diagnosis, triggering a chain of painful medical interventions that can be costly and life-changing. In the case of breast cancer, it may lead to radiotherapy, chemotherapy, the removal of breast tissue (a lumpectomy), or the removal of one or both breasts entirely (a mastectomy). These are not decisions to be rushed, and they may ultimately lead to treatments that are not medically necessary – and, in turn, to an increase in claims for medical negligence. (As an aside, in August 2021, Google’s parent company, Alphabet, said it was shutting down its Google Health division, so clearly all is not well in the land of AI (https://www.forbes.com/sites/johanmoreno/2021/08/21/google-dismantling-health-division/?sh=71316d9de401).)

Clearly, there is tremendous potential for AI to help change the provision of care for patients for the better. But it is not a silver bullet or panacea to eradicate human error in the clinical decision making process or during the performance of surgery. It is not designed to remove humans from the equation. Instead, AI should be regarded as a tool which clinicians have at their disposal – just like a scalpel or stethoscope – to help them carry out their clinical duties effectively and, most of all, safely.

Therefore, it may be a little while yet before we see fully autonomous robot doctors roaming the halls of hospitals and GP surgeries across the country…