If you’re a hospital board member or senior manager, your duty of care, your oath and your legal and governance responsibilities mean this article is worth your time: it is a guide to avoiding the consequences of failing to measure the right things (fines, sackings, scandals, maybe even prison).
The Australian reports that “the Health Care Complaints Commission responded to a notification by three surgeons who had claimed a fellow surgeon was ‘not fit to operate’ and had alleged a ‘failure of proper processes’ at Royal Prince Alfred Hospital by undertaking the assessment rather than a formal investigation.”
https://www.theaustralian.com.au/nation/politics/watchdog-failed-to-investigate-eight-rpa-surgery-deaths/news-story/762b0d0bcb58326789c0fb7b0b93fd5e
The scary part is that far more complications and deaths may be occurring, simply because hospitals are not using a system that can identify them.
I’m going to explain why the problem in Australia is tricky unless you use the right systems for measurement, and how we resolve these issues early, before they become scandals (with five case studies described below).

Firstly, you want your surgeons to take on higher-risk patients. You might be one in the future, and you don’t want them ducking operations because ‘they are worried it will affect their mortality ratings if things go wrong’, as the Daily Telegraph reported*. Those ratings are simply deaths as a percentage of operations, so you can immediately see the problem and how surgeons would (and indeed do) think defensively when measured in this way. Trust me, we’ve been banging the drum about this for years.
Our CMO wrote in HSJ on this topic in 2016 – ‘Why the NHS needs to get better at assessing surgical risk’ (https://www.hsj.co.uk/topics/patient-safety/why-the-nhs-needs-to-get-better-at-assessing-surgical-risk/7014195.article?blocktitle=Comment&contentID=7808)
The only way to properly investigate the situation highlighted by The Australian today is a properly risk-adjusted comparison of observed results against expected results. This is what we do for every patient in every hospital that uses our system, and on request for regulators and authorities, with no change to hospital workflows and no disruption.
Without understanding the physiology of the patient, any co-morbidities and the risk of the prospective operation, you are trying to compare apples with oranges. We look at all of those factors, so we can benchmark what happens (observed results for mortality and complications) against what should happen, using our AI-backed system built on 25 years of research and 120 million patient records from 46 countries.

455 lives saved in our partner hospitals.
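To make the observed-to-expected idea concrete, here is a minimal sketch of the arithmetic. It is not our production system: the risk model outputs and the figures are illustrative assumptions. Each patient gets a predicted probability of death from a risk model; the expected count is the sum of those probabilities across a surgeon’s cases; and the O/E ratio compares what actually happened with what the case-mix predicted.

```python
# Illustrative sketch only: the risk-model outputs and numbers below are
# hypothetical assumptions, not our production system or real hospital data.

def expected_deaths(predicted_risks):
    """Expected deaths for a case list = sum of each patient's predicted probability of death."""
    return sum(predicted_risks)

def o_to_e(observed_deaths, predicted_risks):
    """Observed-to-expected (O/E) ratio: below 1 means better than the case-mix predicts."""
    return observed_deaths / expected_deaths(predicted_risks)

# Two hypothetical surgeons, 100 cases each.
low_risk_list = [0.01] * 100   # average predicted risk 1%, 1 death observed
high_risk_list = [0.04] * 100  # average predicted risk 4%, 3 deaths observed

print(round(o_to_e(1, low_risk_list), 2))   # 1.0  -> exactly as expected
print(round(o_to_e(3, high_risk_list), 2))  # 0.75 -> better than expected, despite higher raw mortality
```

The point of the sketch is simply that the same raw death rate means very different things depending on the risk the surgeon took on, which a percentage alone can never show.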
To give a view of how effective our systems are, we looked at a recent 12-month period in our partner hospitals in the UK alone. The improvements made in that period equate to 455 lives saved, nearly 4,000 instances of harm avoided and more than £20m in the direct costs of treating those harms. Rolling this out across the NHS would save three times the number of lives lost on the roads each year.
CASE 1 – Study of outcomes of 6 orthopaedic surgeons over 2 years
The study identified a ~3-fold variation in the incidence of raw mortality and complications between surgeons. However, the case-mix-adjusted picture (the observed-to-expected ratio, O/E) shows that those with a higher incidence were not poorer performers; they were operating on higher-risk patients. Compare, for example, Mr A with Mr D: on raw rates Mr D seems better, but Mr A was dealing with more complex patients and actually getting the same or better results.
Table 1 – Raw Mortality (deaths per 100 cases) and our risk-adjusted Observed to Expected Ratio for the same cases
Table 2 – Raw Morbidity (complications per 100 cases) and our risk-adjusted Observed to Expected Ratio for the same cases
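To illustrate how the two tables are read, here is the kind of reversal the adjustment can reveal. The figures below are hypothetical, not the actual data behind Tables 1 and 2; only the names Mr A and Mr D are taken from the case study.

```python
# Hypothetical figures only, not the data behind Tables 1 and 2.
# Raw rate = deaths per 100 cases; O/E = observed deaths / expected deaths,
# where "expected" comes from summing each patient's predicted risk.

surgeons = {
    # name: (cases, observed deaths, expected deaths given the case-mix)
    "Mr A": (200, 6, 8.0),  # complex case-mix, so more deaths are expected
    "Mr D": (200, 2, 1.6),  # simpler case-mix, so fewer deaths are expected
}

for name, (cases, observed, expected) in surgeons.items():
    raw_rate = 100 * observed / cases
    oe = observed / expected
    print(f"{name}: raw mortality {raw_rate:.1f} per 100 cases, O/E {oe:.2f}")

# Mr A: raw mortality 3.0 per 100 cases, O/E 0.75  (better than expected)
# Mr D: raw mortality 1.0 per 100 cases, O/E 1.25  (worse than expected)
```

On raw mortality alone the ranking is one way round; once the expected risk of each surgeon’s actual patients is taken into account, it can flip.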
CASE 2 – Comparing one surgeon’s mix of patients to a broader group
Mr Z was taking on a more complex case-mix than his colleagues, as shown in the chart below: he took more of his cases (blue) in the higher-risk categories than his colleagues did (yellow). However, when we looked at the observed-to-expected ratio for Mr Z’s cases, his performance was good. In this case, the choice of higher-risk patients was justified.
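As a rough sketch of how a case-mix comparison like the one in the chart can be assembled, the snippet below groups each surgeon’s cases by predicted-risk band and compares the proportions. The risk bands and case counts are assumptions for illustration, not Mr Z’s actual data.

```python
from collections import Counter

# Illustrative sketch only: the risk bands and case counts are assumptions,
# not Mr Z's actual data.

def risk_band(predicted_risk):
    """Place a case into a coarse predicted-risk band."""
    if predicted_risk < 0.01:
        return "<1%"
    if predicted_risk < 0.05:
        return "1-5%"
    return ">5%"

def case_mix_profile(predicted_risks):
    """Share of a surgeon's cases falling in each risk band."""
    counts = Counter(risk_band(r) for r in predicted_risks)
    total = len(predicted_risks)
    return {band: counts[band] / total for band in ["<1%", "1-5%", ">5%"]}

# Hypothetical case lists: "Mr Z" takes more of his work in the higher bands.
mr_z = [0.005] * 30 + [0.03] * 40 + [0.08] * 30
colleagues = [0.005] * 60 + [0.03] * 30 + [0.08] * 10

print("Mr Z:       ", case_mix_profile(mr_z))        # {'<1%': 0.3, '1-5%': 0.4, '>5%': 0.3}
print("Colleagues: ", case_mix_profile(colleagues))  # {'<1%': 0.6, '1-5%': 0.3, '>5%': 0.1}
```

The case-mix profile explains why raw rates differ; the O/E ratio then tells you whether the results achieved on that mix were good.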