Variation in the Adoption of Patient Safety Practices among New Zealand District Health Boards

Abstract

Objective. To investigate the adoption and impact of quality improvement measures in New Zealand hospitals.

Method. Structured interviews with quality and safety managers of District Health Boards (DHBs). Correlation of use of measures with adjusted 30-day mortality data.

Results. Eighteen of New Zealand's 21 DHBs participated in the survey. Structural or policy measures to improve patient safety, such as credentialing and event reporting procedures, had been introduced into all DHBs, whereas changes to general clinical processes such as medicine reconciliation, falls prevention interventions and disease-specific management guidelines were less consistently used. There was no meaningful correlation between risk-adjusted mortality rates for three common medical conditions and related quality measures.

Conclusion. Widespread variation exists among New Zealand DHBs in their adoption of quality and safety practices, especially in relation to clinical processes of care.

What is known about the topic? Hospital inpatients experience a significant number of adverse events, many of which are preventable. In response, quality and safety processes and measures are being adopted across the sector.

What does this paper add? The paper provides a description of the frequency with which a range of processes and measures have been adopted and demonstrates that adoption of these by New Zealand hospitals is patchy and monitoring is uneven. It suggests that the measures implemented do not appear to have impacted common mortality outcomes, though the findings may reflect the limits of feasible measurement of a probabilistic system.

What are the implications for practitioners? Managers should monitor the implementation of quality and safety measures and evaluate them in terms of their direct effects.

Received 14 December 2010, accepted 12 October 2011, published online 25 May 2012

Introduction

Hospitals work to improve the quality of their services and the safety of their patients in the interests of achieving the best possible outcomes. Nonetheless, variation in hospital processes1 and quality2 remains. Further, results from studies conducted in several western countries, including New Zealand, consistently suggest that there are significant rates of adverse events in hospitals, many of which are preventable.3-6 Although a wide range of effective and practical measures, including structural changes and process improvements, have been found to improve hospital quality and safety (Q&S), there is uncertainty about the extent to which they have been adopted in New Zealand.7,8

This paper presents the results of a survey exploring the adoption of Q&S measures by the District Health Boards (DHBs) that are responsible for planning and funding health services in New Zealand. It is the first publication from a project exploring the application of modern statistical methods to the assessment of hospital performance in New Zealand.9,10 The paper also relates the adoption of Q&S measures to preliminary data on hospital 30-day mortality rates (adjusted for risk factors and comorbidities) for three common acute conditions.

Methods

The Chief Executive Officer (CEO) of each DHB was contacted by mail in 2008 and asked to authorise the research. The person responsible for Q&S in the DHB was then contacted, provided with an information sheet describing the research and asked to provide data by telephone interview. If willing, he or she submitted a consent form.

Interviews were conducted with informants in 18 of the 21 DHBs. The report from Otago included Southland; Tairawhiti and Nelson-Marlborough were unable to participate. In some cases, follow-up interviews were conducted with informants suggested by the Q&S manager. Interviews were completed in 2009. The DHB Q&S managers indicated that the information they supplied applied to all hospitals in the district and independent data were not collected from satellite hospitals.

Questions included in the survey aimed to assess hospital Q&S measures at two levels: organisational level attributes and processes of care related to clinical activities. Organisational attributes were included if there was strong evidence of a beneficial effect on patient outcomes or if they were part of national policy that represented a coherent statement of best practice in New Zealand.8,11,12 Indicators of clinical quality were based on work by the US-based Agency for Healthcare Research and Quality.13 These indicators have been subjected to a robust development process, and empirical analysis has shown that they are valid and reliable measures of clinical performance across a range of settings.14-16 Measures that were included in the survey are listed in Table 1, which summarises the questionnaire.

The questionnaire was designed by the research team and was peer-reviewed by senior Ministry of Health officials, and by DHB and academic staff members familiar with Q&S in New Zealand before being piloted in two DHBs. Questions established the presence and characteristics of organisational attributes and clinical processes; in some cases, informants were asked to assess a policy or compliance with it using a Likert scale.

As indicated in Table 1, questions were combined into 17 domains and the domains were combined into three groups. Indices were derived for each domain, giving equal weight to each subquestion. Indices were also derived for each group, giving equal weight to each domain. The first group was hospital structures and policies, the second was general clinical processes and the third was disease-specific protocols.

An overall score for each DHB was calculated, and scores were correlated across domains and groups to assess the degree to which the use of Q&S measures in one domain or group by a DHB predicted their use in others.
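As a minimal sketch of this scoring scheme (the domain names, subquestions and values below are hypothetical, not drawn from the study's questionnaire, which is summarised in Table 1), equal-weight indices and their cross-domain correlations could be computed as follows:

```python
import pandas as pd

# Hypothetical per-subquestion scores (scaled 0-1) for three illustrative DHBs.
responses = pd.DataFrame(
    {
        "credentialing_q1": [1.0, 1.0, 0.5],
        "credentialing_q2": [1.0, 0.5, 0.0],
        "event_reporting_q1": [1.0, 1.0, 1.0],
        "event_reporting_q2": [0.5, 1.0, 0.5],
    },
    index=["DHB_A", "DHB_B", "DHB_C"],
)

# Each domain index is the unweighted mean of its subquestions.
domains = {
    "credentialing": ["credentialing_q1", "credentialing_q2"],
    "event_reporting": ["event_reporting_q1", "event_reporting_q2"],
}
domain_scores = pd.DataFrame(
    {name: responses[cols].mean(axis=1) for name, cols in domains.items()}
)

# Each group index is the unweighted mean of its domains, rescaled to 0-10
# to match the presentation used in Table 4.
group_score = domain_scores.mean(axis=1) * 10

print(domain_scores)
print(group_score)
print(domain_scores.corr())  # cross-domain correlations, as examined in Results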

Population-standardised 30-day post-admission mortality risks for acute myocardial infarction (AMI), acute stroke and community-acquired pneumonia were derived from analysis of hospital discharge data using a hierarchical model that controlled for age, sex, ethnicity and deprivation level of the area of domicile, as well as for the 30 'Elixhauser' comorbidities.17,18 The first level of the model related the mortality risks for individual patients to a hospital-specific intercept term and individual, patient-level, demographic and case-mix variables. The second stage of the model related the hospital-specific intercept terms to hospital characteristics, including hospital type and condition-specific volumes. A more detailed description of this methodology has been published.19
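In outline, and using notation introduced here purely for illustration (the study's own specification is given in the cited methods paper19), such a two-level model can be written as:

```latex
% Level 1: patient-level mortality risk with a hospital-specific intercept
\operatorname{logit} \Pr(Y_{ij} = 1) = \alpha_j + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}

% Level 2: hospital intercepts related to hospital characteristics
\alpha_j = \gamma_0 + \mathbf{z}_j^{\top}\boldsymbol{\gamma} + u_j,
\qquad u_j \sim N(0, \tau^2)
```

where Y_ij indicates death within 30 days of admission for patient i at hospital j, x_ij collects the patient-level demographic and case-mix covariates (age, sex, ethnicity, deprivation and the Elixhauser comorbidities), z_j the hospital characteristics (type and condition-specific volumes) and u_j the residual hospital effect.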

Correlations were sought between indices of the implementation of Q&S measures related to each condition and the point estimates of population-standardised mortality risks.
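One standard way to attach a confidence interval to such a correlation is the Fisher z-transformation. The sketch below uses invented data for eight hypothetical DHBs and is not necessarily the interval method used in the study; with so few units, intervals of this kind are necessarily wide. It assumes scipy is available.

```python
import math
import numpy as np
from scipy import stats

# Illustrative data only: a protocol-implementation index and standardised
# 30-day mortality point estimates for eight hypothetical DHBs.
protocol_index = np.array([4.0, 6.5, 7.0, 5.5, 8.0, 3.5, 6.0, 7.5])
mortality_risk = np.array([0.095, 0.081, 0.078, 0.088, 0.072, 0.101, 0.085, 0.079])

r, _ = stats.pearsonr(protocol_index, mortality_risk)

# Approximate 95% CI via the Fisher z-transformation.
n = len(protocol_index)
z = math.atanh(r)            # Fisher transform of r
se = 1.0 / math.sqrt(n - 3)  # standard error on the z scale
lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)

print(f"r = {r:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```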

Results

Results are expressed as the number of DHBs where a particular policy was in force over the number of DHBs for which we have information. There were some missing data and, in some cases, a particular question did not apply, so the denominator may be less than 18.

Hospital structure and policies

The majority (15 out of 18) of DHBs had a quality and safety director responsible to the CEO or Chief Operating Officer, and almost all (16 out of 17) had a Q&S board.

Most (16 out of 18) DHBs were accredited and most (13 out of 16) of these had achieved accreditation on first inspection.

Most (16 out of 17) DHBs credentialed new Senior Medical Officers (SMOs) and had an ongoing program of credentialing existing ones (15 out of 16). The credentialing process usually included a review of Continuing Medical Education and peer review (15 out of 17), and often included a clinical audit or a review of clinical records; in nine cases, all four were employed.

All informants indicated that staff members were encouraged to 'fully and openly' report 'serious incidents, adverse events and near misses.' On a five-point scale, two said staff were encouraged reasonably well (score = 3), eight that they were encouraged well (score = 4) and eight that they were very strongly encouraged (score = 5). The mean score was 4.3 out of 5. Only two DHBs had surveyed staff on their perceptions of their hospital's commitment to open disclosure.

Informants were asked how well the orientation of new employees emphasised open disclosure. On a five-point scale, one said not well (score = 1), two said somewhat (score = 2), five said reasonably well (score = 3), seven said strongly (score = 4) and two said very strongly (score = 5). The mean score was 3.4 out of 5.

All 18 DHBs had a formal definition of reportable events and used a standard reporting form. Most (16 out of 18) coded such events and produced reports recording trends (14 out of 18). Most (16 out of 18) also had a formal process of investigation and a policy of informing patients who had been affected by such an event (15 out of 18).

Most (16 out of 18) DHBs produced standardised monitoring reports of patient complaints, and most (15 out of 18) used a standardised reporting process.

General clinical processes

A minority of DHBs (8 out of 18) had a formal process of medication reconciliation where current medication was checked against hospital records of medication use; those with such a process reported that it was undertaken in all (2 out of 8), most (5 out of 8) or some (1 out of 8) departments.

Table 2 summarises the responses to questions on the specifically nurse-sensitive issues of decubitus ulcers and in-hospital falls. In general, most DHBs monitor both outcomes and make a specified risk assessment tool available for each; monitoring the use of the tool is less common. Many DHBs have a prevention protocol for each issue but are less likely to monitor compliance formally. Compliance with both risk assessment and prevention procedures was considered to be in the mid-range. Scores were slightly higher in connection with falls.

All DHBs had a concerted program of hand-washing and all but one assessed compliance; all provided alcohol rubs in clinical areas. All DHBs for which there was information (n = 17) routinely measured organism-specific infection rates, although there was some uncertainty as to which organisms were included.

Only 10 DHBs had quantified standards on nurse : patient ratios; all but one of these monitored compliance at the shift level. Most of these DHBs reported moderate to good compliance, with a mean value of 3.6 out of 5 (range = 2-4). Standards were sometimes adjusted for case mix (7 out of 10) and, more rarely, for nurse skill level (4 out of 10).

Six of the 18 DHBs had quantified standards for availability of SMOs after hours (time from call to attendance) but performance was monitored in only two cases. Compliance was assessed at 3.4 (range = 2-5) by the five informants who answered the question.

Two-thirds (12 out of 18) of DHBs had a clinical pathway for the detection of physiological instability and, in most of these, nurses were able to make changes to the level of care. In only five cases were staff tested on the process; assessment of the level of compliance had a mean value of 2.8 out of 5 (range = 2-5).

Disease-specific protocols

The frequency with which protocols were adopted is shown in Table 3. Only for AMI did all DHBs have written guidelines; guidelines for community-acquired pneumonia were only used in half the cases. Use of guidelines was not always mandatory and compliance was assessed as mid-range (mean = 3.1).

Grouped scores

Table 4 shows values by DHB of the grouped indices: of a possible score of 10, the mean score was 7.9 (range = 4.8-9.3) for structures and policies, 5.4 (range = 3.4-8.9) for general clinical and 4.1 (range = 1.5-7.0) for disease-specific processes. When all indices were combined, the mean score (out of 10) was 5.8 and the range was 4.3-8.0.

Relationship between scores

When correlations between the indices for the domains within each grouping were examined, no significant correlation was found. Many coefficients were low and, among the stronger correlations, some were positive and some negative. When correlations between the indices for each group were examined, scores for hospital structure correlated with those for clinical process (r = 0.35) and scores for clinical processes correlated with those for disease protocols (r = 0.51). There was no correlation between scores for hospital structure and scores for disease protocols, nor between DHB size (as number of discharges) and indices of Q&S.

Relationship between Q&S scores and outcomes

The range across hospitals of the estimated 30-day mortality risk was 6.8-10.3% for AMI, 19.6-28.4% for stroke and 7.5-11.6% for community-acquired pneumonia. The correlation between the estimated 30-day mortality risk for stroke and the index related to the stroke protocol was -0.32 (95% confidence interval (CI): -0.60 to -0.03). Correlations between the AMI or pneumonia 30-day mortality risk and the relevant protocols were 0.33 (95% CI: 0.04-0.58) and 0.30 (95% CI: 0.01-0.53), respectively. Note that a negative correlation supports the hypothesis that stronger protocol use is related to lower mortality.

Discussion

Although some of the quality and safety measures have been widely adopted, others are implemented in only a minority of DHBs. Almost all hospitals had structures and policies to improve quality and safety such as the Q&S officer reporting to the CEO, credentialing and event reporting. General clinical processes such as a hand-washing program and protocols for decubitus ulcer and fall prevention were commonly in place, but formal medicine reconciliation, and standards for SMO availability and nurse : patient ratios were less widespread. Adoption of protocols for common illnesses was variable: all DHBs had a protocol for AMI but only half had one for community-acquired pneumonia. Although many Q&S measures were in place, their implementation was not always monitored and, in some cases, Q&S managers believed that compliance was only fair.

This accords with findings in the literature of varying adoption of Q&S measures within individual institutions both in the USA20 and in Europe.2

It has been suggested that patient safety is difficult to improve because incidents have low visibility and may be ambiguous, their causes are complex and clinicians wish to retain autonomy.21 These factors may explain the patchy implementation of Q&S measures; in particular, the desire for autonomy may explain the lower level of adoption of measures related to specific diseases. It is likely that compliance with policies is sometimes not monitored because of the demands this places on staff time.

There is little correlation across DHBs in the use of measures within each domain - scoring high in one area does not predict high scores in others. There is some correlation across DHBs by group (hospital structures and policies, general clinical processes and disease-specific protocols). Despite within-DHB variation, there is significant variation across DHBs in total scores.

Q&S measures related to specific clinical conditions (use of an evidence-based pathway or guideline, and estimates of the intensity of its use) would have been expected to be related to improved outcomes.15 The lack of correlation between relative 30-day mortality risk and the related Q&S measure may be due to the relatively small number of cases and low variation in mortality rates. The measures may also have been adopted in different circumstances; in some cases, high-performing units may have instigated Q&S measures, but in others, Q&S measures may have been instigated in response to less than ideal performance. Further, measures may only recently have been adopted and improvement may not yet have occurred. Finally, it is likely that hospital outcomes are determined not only by defined Q&S measures but also by other factors such as adequate staff numbers, the availability of experienced people, well thought-out operating processes, and traditions of staff communication and cooperation.

In considering the information presented, clinicians and hospital managers may identify Q&S measures that they could institute. These measures have been shown to improve outcomes but may only produce benefit over the long term; they are unlikely to compensate for less than ideal staff numbers or lack of effective teamwork. Staff engagement in choosing and implementing Q&S measures is essential and compliance should be monitored. It is striking that even where measures were in place, monitoring or audits of compliance were often not carried out.

Future work on hospital quality might include the Q&S measures in place and the strength of their implementation, monitoring of staff : patient ratios and estimates of staff cooperation. Outcomes could include the frequency of adverse events as well as mortality risk.

Conclusion

There was significant variation in the frequency with which DHBs had implemented the Q&S measures examined in the survey. Although some DHBs had implemented more measures, significant within-DHB variation remained. Measures related to structure and policy are more likely to have been implemented than those related to general clinical processes; measures related to disease protocols are least likely to be used. Managers believed that compliance with the measures was variable. It appears that, at present, there is no correlation between the adoption of Q&S measures and adjusted 30-day mortality risk for three common illnesses.

Competing interests

The authors declare there are no competing interests.

Acknowledgements

The authors appreciate the support of the DHB CEOs, and the time given by the quality and safety managers in responding to our extended telephone survey.

References

1 Wennberg J, Fisher ES, Stukel TA, Skinner JS, Sharp SM, Bronner K. Use of hospitals, physician visits, and hospice care during the last six months of life among cohorts loyal to highly respected hospitals in the United States. BMJ 2004; 328(7440): 607-611. doi:10.1136/bmj.328.7440.607

2 Shaw C, Kutryba B, Crisp H, Vallejo P, Suno R. Do European hospitals have quality and safety governance systems and structures in place? Qual Saf Health Care 2009; 18: i51-i56. doi:10.1136/qshc.2008.029306

3 Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.

4 Leape L, Brennan T, Laird N. The nature of adverse events in hospitalised patients: results of the Harvard Medical Practice Study. N Engl J Med 1991; 324: 377-84. doi:10.1056/NEJM199102073240605

5 Wilson R, Runciman W, Gibberd R, Harrison B, Newby L, Hamilton J. The quality in Australian health care study. Med J Aust 1995; 163: 458-71.

6 Davis P, Lay-Yee R, Briant R, Schug S, Scott A. Adverse events in New Zealand public hospitals. Wellington: Ministry of Health; 2001.

7 Quality Improvement Committee. Scoping the priorities for quality in the health and disability sector. Wellington: Ministry of Health; 2006.

8 Malcolm L, Wright L. Clinical leadership and quality in District Health Boards in New Zealand: report commissioned by the clinical leaders association of New Zealand for the Ministry of Health. Wellington: Ministry of Health; 2002.

9 Goldstein H, Spiegelhalter D. Statistical aspects of institutional performance: issues and applications. J R Stat Soc 1996; 159: 385-444. doi:10.2307/2983325

10 Normand S, Glickman M, Gatsonis C. Statistical methods for profiling providers of medical care: issues and applications. J Am Stat Assoc 1997; 92: 803-14.

11 Anon. Reportable events guidelines. Wellington: Ministry of Health; 2001.

12 Anon. Credentialling framework for Senior Medical Officers in New Zealand - self-assessment tool. Wellington: Ministry of Health; 2003.

13 US Agency for Healthcare Research and Quality (USAHRQ). http://www.qualityindicators.ahrq.gov/psi_overview.htm [verified 5 May 2012].

14 Henderson KE, Recktenwald A, Reichley RM, Bailey TC, Waterman BM, Dekemper RL, et al. Clinical validation of the AHRQ postoperative thromboembolism patient safety indicator. Jt Comm J Qual Patient Saf 2009; 35(7): 370-6.

15 Romano PS, Mull HL, Rivard PE, Zhao S, Henderson WG, Loveland S, et al. Validity of selected AHRQ patient safety indicators based on VA National Surgical Quality Improvement programme data. Health Serv Res 2009; 44(1): 182-204. doi:10.1111/j.1475-6773.2008.00905.x

16 Grobman W, Feinglass J, Murthy S. Are the Agency for Healthcare Research and Quality obstetric trauma indicators valid measures of hospital safety? Am J Obstet Gynecol 2006; 195(3): 868-74. doi:10.1016/j.ajog.2006.06.020

17 Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care 1998; 36: 8-27. doi:10.1097/00005650-199801000-00004

18 Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi JC, et al. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care 2005; 43: 1130-9. doi:10.1097/01.mlr.0000182534.19832.83

19 Graham P, Hider P, Cumming J, Raymont A, Finlayson M. Variation in New Zealand hospital outcomes: combining hierarchical Bayesian modeling and propensity score methods for hospital performance comparisons. Health Serv Outcomes Res Method 2012; 12: 1-28.

20 Jha A, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals - the hospital quality alliance program. N Engl J Med 2005; 353(3): 265-74. doi:10.1056/NEJMsa051249

21 Leistikow I, Kalkman C, de Bruijn H. Why patient safety is a tough nut to crack. BMJ 2011; 342: d3447. doi:10.1136/bmj.d3447

[Author Affiliation]

Antony Raymont1,5 MBBS, PhD, Senior Research Fellow

Patrick Graham2 MSc, PhD, Senior Research Fellow

Philip N. Hider2 MB, ChB, MPH, MMedSci, FAFPHM, Senior Lecturer

Mary P. Finlayson3 RN, B Soc Sci (Hons), PhD, Director

John Fraser4 BMS, Manager Implementation Services

Jacqueline M. Cumming1 BA, MA (1st class honours), Dip. in Health Economics, PhD, Director

1Health Services Research Centre, Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand.

2University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand.

3Research Centre for Health and Wellbeing, Faculty of Engineering, Health, Science and The Environment, School of Health, Charles Darwin University, Darwin, NT 0909, Australia.

4New Zealand Guidelines Group, PO Box 10665, The Terrace, Wellington 6011, New Zealand.

5Corresponding author. Email: raymonts@vodafone.co.nz.
