
Let’s share learning more efficiently and effectively by using the same methodologies and improvement pathways

At CRAB Clinical Informatics we spend time with medical directors who are increasingly aware that commonly used methods of measuring quality (such as mortality indicators) can be misleading. This is because these measures do not provide the most useful insight into patient-specific risk; for example, they may use only the first few diagnostic codes recorded during a patient admission.

This view is now being echoed throughout the sector, from regulators to arm's-length bodies and other agencies. The result is that each organisation has started to use a different outcomes methodology to measure clinical quality, which in turn undermines the ability to compare trusts and analyse clinician performance within the NHS.

Misleading red flags create a great deal of noise and wasted effort in determining whether the highlighted indicators are truly abnormal, rather than looking for underlying causation and devoting limited resources to health improvement.

A good example of this can be seen in our work with a leading hospital trust in the north west of England, where chronic obstructive pulmonary disease (COPD) was red-flagged as a comorbidity by SHMI. The trust didn't believe this was a problem, but wasn't sure whether it was seeing more patients with an acute exacerbation of COPD, whether poor COPD care was leading to higher-than-expected mortality, or whether it was an incidental finding. We carried out a bespoke analysis and found that the higher-than-expected mortality, as shown by SHMI, was down to incidental comorbidity, within the normal range for incidence, and was having no detrimental impact on mortality within the trust. This gave the trust reassurance and spared its stretched clinical staff lengthy case-note reviews and audits.

However, different methodologies can work the other way and lead to false assurance. What is deemed a problem in one trust may not be interpreted as a problem in another with similar challenges, because the data is being interpreted in different ways and is not being risk-adjusted for case-mix complexity.

That's why we need to act now to ensure everyone uses the same datasets and health outcomes methodology. By doing so, best practice could be translated across trusts, and effort more usefully dedicated to improving outcomes rather than retrospective analysis.

By using the same set of methodologies and making the improvement pathways explicit, learning can be shared more efficiently and effectively across the health system.

