Research Article Volume 5 Issue 5
1The Care Quality Research Group, Chuncheon, Korea
2Department of Medical Affairs and Planning, Taipei Veterans General Hospital & National Yang-Ming University School of Medicine, Taipei, Taiwan
Correspondence: Wui-Chiang Lee, Department of Medical Affairs and Planning, Taipei Veterans General Hospital & National Yang-Ming University School of Medicine, Taipei, Taiwan, Tel 886-2-28757120, Fax 886-2-28757200
Received: April 13, 2017 | Published: April 28, 2017
Citation: Jeong HJ, Lee WC. A novel indicator for how fairly hospitals treat healthcare professionals’ safety improvement efforts: differences in hospital responses to medical error reports depending on the type of healthcare professional reporting the event. Biom Biostat Int J. 2017;5(5):205-208. DOI: 10.15406/bbij.2017.05.00147
Error reports and the improvement actions they trigger are critical components of patient safety, as they can prevent the same events from recurring. This study investigates how fairly, or how differently, hospitals respond to healthcare professionals’ medication error reports according to the reporter’s job type (i.e., physician, nurse, or pharmacist). Data from a national voluntary medication error reporting system in the US (MEDMARX), covering 1997 through 2007, were utilized for this study. A logistic regression model was applied to estimate the difference in the odds that an error report leads a hospital to take action, depending on the type of reporter. Actions were acknowledged only when solid changes took place following reports. Because harmful errors generally induce more actions than nonharmful errors, we examined the effect of the reporter separately for harmful and nonharmful error reports. In addition, differences in responses between types of hospital owners, namely government hospitals (GH) and nongovernment hospitals (NGH), were analyzed to see whether ownership influences responses to error reports. The phase of the medication use process (prescribing, transcribing, dispensing, administering, or monitoring) is related to reporter type (e.g., pharmacists detect most prescribing errors) and was therefore controlled for in the analysis. In addition, to address clustering of reports within hospitals, the model included a random intercept for hospitals. Ultimately, 605,422 medication error reports from 574 hospitals (153 GH and 421 NGH) were analyzed. For nonharmful error reports, compared to hospital responses to reports from physicians, reports from nurses induced significantly fewer actions in both GH (OR=0.63, 95% CI: 0.52–0.77) and NGH (OR=0.62, 95% CI: 0.53–0.72). The ORs for pharmacists (compared to physicians’ reports) were 0.25 (95% CI: 0.20–0.30) in GH and 0.56 (95% CI: 0.48–0.66) in NGH. For harmful errors, in GH, reports from nurses and pharmacists were also less likely to lead to action than physicians’ reports (nurses: OR=0.58, 95% CI: 0.42–0.79; pharmacists: OR=0.60, 95% CI: 0.41–0.87). Ideally, error reports should initiate improvement actions regardless of the type of reporter. However, this study shows that hospitals are less likely to respond to reports from nurses and pharmacists than to those from physicians. When healthcare professionals’ reports are not addressed well, the professionals’ morale can suffer. Healthcare organizations with error reporting systems need to take full advantage of these underutilized error reports, not only because they are a rich potential source of information, but also because acting on them propels healthcare providers to pursue safety improvements.
Patient safety culture is well acknowledged as one of the most important factors for keeping patients safe from preventable adverse events.1–5 To manage or even improve safety culture, the first step is to be able to measure it precisely. To this end, several tools for quantifying safety culture have been developed and are in use, most of them self-administered survey questionnaires targeting healthcare professionals.6–11 Methodologies for analyzing the survey responses have also evolved to get the most out of the collected data. For example, Jeong et al., in their series of articles, proposed several novel approaches for processing and analyzing already collected safety culture questionnaire data and for presenting the results most effectively.12–21 These efforts have certainly helped depict how various aspects of safety culture are intertwined in healthcare professionals’ minds.22
Yet such survey-based measurement of safety culture has fundamental limitations. First, the measurement unit is almost always the individual employee; thus, these instruments may not effectively capture how much value hospitals, and especially their senior management, place on safety and on each employee’s endeavors to improve it. Some might argue that most such questionnaires include perception of management as one of the survey domains,7 so that such information can be obtained by aggregating all individual healthcare professionals’ responses within a hospital. However, we should still be cautious. Evaluating one’s superior is not always easy, or even honest; indeed, the evaluation of management is a section of safety culture surveys that healthcare professionals frequently leave empty, resulting in missing values. Simply put, more often than not, seniors are deceived, or at least under-informed, about how they behave and appear.
Realizing this issue, we looked for a different approach to measure how healthcare professionals are treated by hospitals and their seniors in terms of their safety efforts. We found an answer in medical error reporting systems. To illustrate, if we know what kinds of actions a hospital takes in response to a healthcare professional’s medical error reports, we can use such information as a surrogate measure of the senior-level safety culture and attitudes.
We first briefly describe how a medical error reporting system works. The primary purpose of these systems is clear: collecting error data and using them to prevent the same events from occurring again.23–27 Collected reports not only provide in-depth understanding of incidents, but also serve as a signal that actions should be taken to prevent the same error. Thus, the success of reporting systems depends on two factors: i) how actively errors are reported by healthcare professionals and ii) how appropriately these reports lead a hospital to take an action to fix the problem.
The first premise is obvious: in a hospital where medical errors can be discussed freely and openly, without fear of reprimand or blame, many more errors will be reported to the system. How easily a healthcare professional can report an error has been widely studied; even differences in reporting across job types have been examined.28 However, most studies have overlooked the second part, namely how appropriately a hospital takes action in response to submitted reports and, more importantly, whether the rate of taking action differs by attributes of the reporter such as job type or seniority. Why is this important? We will share a real story that sparked this study.
An exemplary case of how a nurse felt ignored
We cannot even imagine how frustrated the nurse in this case was, but we fully anticipate that she will not report any errors in the future. From the patient safety perspective, this example highlights a huge lost opportunity to prevent future events. Given this situation, we wanted to develop a new method to measure and analyze how a hospital and its decision-makers value healthcare professionals’ time and efforts to prevent future accidents. In addition, we pushed the envelope to examine whether any discrimination occurred in responding to error reports across job types, as the above vignette depicts. In other words, how many of these efforts are being ignored because of the reporter’s job type? In this study, we looked into the most frequently used treatment modality, the medication use process, where the key players are physicians, nurses, and pharmacists.
We utilized data from MEDMARX, a national voluntary medication error reporting system in the US, which collected errors from 1997 through 2007. A logistic regression model was developed to examine the difference in the odds that a medication error report leads a hospital to take action, depending on the reporter’s job type (i.e., physician, nurse, or pharmacist).
Actions were acknowledged only when solid changes followed the reports.29 Any softer actions, such as informing the staff member who made the initial error, were disregarded.
Harmful errors are known to attract more attention and may lead hospitals to take more actions; thus, we examined the effect of the reporter’s job type separately for harmful and nonharmful errors. In addition, we examined whether responses to error reports differed between types of hospital owners, namely government hospitals (GH) and nongovernment hospitals (NGH).
Errors from the different phases of the medication-use process (i.e., prescribing, transcribing, dispensing, administering, and monitoring) are highly related to reporter type; for example, many prescribing errors are detected by pharmacists. This relationship was therefore controlled for in the analysis. To account for hospital-level clustering of reports, we added a random intercept for hospitals to the logistic model, yielding a two-level model.
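To make the modeling step concrete, the following is a minimal sketch of such a two-level (random-intercept) logistic model in Python. It assumes a report-level data set with columns for the action outcome, reporter job type, medication-use phase, harm level, hospital identifier, and ownership type; the file name, column names, and the choice of statsmodels’ Bayesian mixed GLM estimator are illustrative assumptions, not the authors’ actual analysis code.

```python
# Minimal sketch (not the authors' code) of a two-level logistic model:
# the binary outcome "action" is regressed on reporter job type and
# medication-use phase, with a random intercept for each hospital.
# All column names and the input file are assumed for illustration.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("medmarx_reports.csv")  # hypothetical report-level extract

# Fit the model separately within each stratum (ownership type x harm level),
# mirroring the stratified analysis described above.
for (ownership, harmful), stratum in df.groupby(["ownership", "harmful"]):
    model = BinomialBayesMixedGLM.from_formula(
        # Physicians serve as the reference category for reporter job type.
        "action ~ C(reporter, Treatment('physician')) + C(phase)",
        vc_formulas={"hospital": "0 + C(hospital)"},  # random intercept per hospital
        data=stratum,
    )
    result = model.fit_vb()  # variational Bayes estimation
    print(ownership, "harmful" if harmful else "nonharmful")
    print(result.summary())
```

Exponentiating the fixed-effect coefficients for the reporter terms in each stratum yields odds ratios of the kind reported below.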
A total of 605,422 medication error reports collected from 574 hospitals (153 GH and 421 NGH) were analyzed (Table 1).
Hospital Type | Count | %
Government | 153 | 26.70%
Nongovernment | 421 | 73.30%
Total | 574 | 100.00%
Table 1 Characteristics of Hospitals Where Data were Collected
As described in Table 2, among those errors, 594,232 (98.2%) were nonharmful and 11,190 (1.8%) were harmful errors. Nurses reported 58.1% of the errors, followed by pharmacists (39.7%) and physicians (2.2%).
Error Characteristics | Count | %
Reported by | |
Physicians | 13,487 | 2.20%
Nurses | 351,473 | 58.10%
Pharmacists | 240,462 | 39.70%
Total | 605,422 | 100.00%
Harm Level | |
Harmful | 11,190 | 1.80%
Nonharmful | 594,232 | 98.20%
Total | 605,422 | 100.00%
Table 2 Error Reports by Reporter Type and Harm Level
Table 3 shows the main findings of this study. For nonharmful error reports, reports from nurses and pharmacists were less likely to lead a hospital to take action than those from physicians. Compared to hospital responses to reports from physicians, reports from nurses induced significantly fewer actions in both GH (odds ratio (OR)=0.63, 95% confidence interval (CI): 0.52–0.77) and NGH (OR=0.62, 95% CI: 0.53–0.72). The ORs for pharmacists (compared to physicians’ reports) were 0.25 (95% CI: 0.20–0.30) in GH and 0.56 (95% CI: 0.48–0.66) in NGH.
Reporter | Government: Nonharmful | Government: Harmful | Nongovernment: Nonharmful | Nongovernment: Harmful
Physician | 1 | 1 | 1 | 1
Nurse | 0.63* | 0.58* | 0.62* | 0.56*
Pharmacist | 0.25* | 0.60* | 0.56* | 1.36
Table 3 Odds Ratios of Actions Following Error Reporting
Note. *p<.05; reference category: physician in each column
For harmful errors, in GH, reports from nurses and pharmacists were also less likely to lead to action than physicians’ reports (nurses: OR=0.58, 95% CI: 0.42–0.79; pharmacists: OR=0.60, 95% CI: 0.41–0.87). In NGH, no statistically significant difference was observed between responses to error reports from physicians and pharmacists, but reports from nurses still received significantly less attention than those from physicians (OR=0.56, 95% CI: 0.43–0.74).
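For readers who want to connect the reported figures back to the fitted model, the short sketch below (illustrative only, using the nurse-versus-physician estimate for nonharmful errors in government hospitals from Table 3) shows how an odds ratio and its 95% confidence interval correspond to a log-odds coefficient and its standard error.

```python
# Illustrative only: relate a reported odds ratio and 95% CI to the
# underlying log-odds coefficient. Values are the nurse-vs-physician
# estimates for nonharmful errors in government hospitals (Table 3).
import math

or_point, ci_low, ci_high = 0.63, 0.52, 0.77

beta = math.log(or_point)  # log-odds coefficient, about -0.46
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # implied standard error, about 0.10

# Re-derive the interval to confirm the round trip.
print(math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))  # approximately (0.52, 0.77)
```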
As fellow safety researchers, you may already understand that the null hypothesis of this study was that there is no difference in hospitals’ responses to error reports from healthcare providers of different job types. Quite honestly, we sincerely wanted to fail to reject that hypothesis, even though publication bias might then have kept this article from being published. The fact that you are reading this article now, however, suggests we had the unhappy success of rejecting the null hypothesis. There was indeed a difference in responses to error reports from different job types; we might even call it discrimination, period.
Despite this gloomy result, we had written a discussion section of more than 2,000 words, but we suddenly realized that the result itself explained everything. So we removed most of what we had written, except for a quote by Eugene O’Neill: “Man is born broken. He lives by mending. The grace of God is glue”.30
We all know that, regardless of how well a system is designed, there will be faults. What we should do is accept reality and fix such flaws as soon as possible; the faster we can, the more lives we can save. Thanks to God, we have received the gift of glue, the tool necessary for fixing the flaws. We may always want more glue than is available, given the scarcity of resources, but at least the available glue should be distributed fairly to those who are eager to fix the system. The distribution should be free from job type, seniority, or power gradient in an organization. In this way, we can propel our healthcare toward the world of safety as fast as possible. Succeeding, however, requires everybody’s devotion, which means we never have the right to disregard anybody’s efforts.
In the current study, we unfortunately succeeded in demonstrating that the job types of error reporters led hospitals to respond differently to their reports. From our extensive experience, we are certain that such job-type-based prioritization, or discrimination, will eventually push our hospitals into a spiral of silence,31 where words like mending and glue do not even exist. That is not the kind of hospital we want taking care of our family members.
Acknowledgments: None.
Conflicts of interest: None.
©2017 Jeong, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.