eISSN: 2378-315X

Biometrics & Biostatistics International Journal

Editorial Volume 1 Issue 1

Exposure misclassification in observational studies: we can do better

Victor A Kiri1,2

1FV & JK Consulting Limited, UK
2Centre for Biostatistics, University of Limerick, Ireland

Correspondence: Victor A Kiri, FV & JK Consulting Limited, Guildford, Surrey GU1 1NX, UK, Tel +441483838717, Fax +441483838717

Received: September 23, 2014 | Published: September 23, 2014

Citation: Kiri VA. Exposure misclassification in observational studies: we can do better. Biom Biostat Int J. 2014;1(1):11-12. DOI: 10.15406/bbij.2014.01.00003


Editorial

Observational studies account for a growing share of research on the effectiveness and safety of pharmaceutical products, as the focus in the post-marketing phase shifts from evidence of efficacy to evidence from the real world of routine clinical practice, free of study-based restrictions. There is good reason for this shift. Although randomized controlled clinical trials (RCTs) remain the gold standard for assessing the efficacy of pharmaceutical products, they are grossly inadequate for addressing questions about the effectiveness and safety of these interventions, largely because of inadequate power, unrepresentative patient populations and controlled processes that make for unrealistic clinical settings.1,2 Real-world evidence comes mostly from observational studies involving demographic as well as healthcare data on patients outside the randomized controlled trial environment.3 In these studies, the investigators merely collect data on the treatments (i.e. exposure type) received by patients in routine clinical practice, together with information about their health status. In other words, reflection of real-life clinical practice is without doubt the most compelling strength of observational studies. And yet, as is common with RCTs, a vast number of observational studies rely on the intention-to-treat (ITT) assumption for exposure classification.

Whilst ITT may be suitable in an efficacy trial setting, where randomization and control mechanisms on exposure ensure its validity, the assumption may not be appropriate in the observational setting because of the need to reflect real-life (i.e. actual) drug use. Problems such as poor treatment compliance and intermittent exposure are more common in real-life medical practice than is generally acknowledged in observational studies; if not adequately addressed, they can result in exposure misclassification, a measurement error that may exert more impact on causal inference than any other source of bias in a given observational study. Indeed, time is chiefly responsible for much of the exposure misclassification we encounter in observational cohort studies, where we are compelled to make simplistic assumptions about actual versus intended duration of treatment, the number of times a drug has been used and the time of last use. The problem can also be induced by inappropriate use of time windows for exposure classification. Some of the problems associated with such simplistic assumptions about the influence of time on exposure classification in these studies have already been described elsewhere.4,5 Indeed, inappropriate adoption of the ITT approach can result in non-differential exposure misclassification (i.e. similar error in patients both with and without the event), which biases estimates towards the null. In other words, we may end up with exposure effect estimates suggesting little or no association when in fact an association exists.6–10
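The attenuation towards the null under non-differential error can be shown arithmetically. The sketch below (illustrative cell counts only, not taken from any study) applies the same sensitivity and specificity of exposure measurement to cases and controls and compares the observed odds ratio with the true one:

```python
def misclassified_or(a, b, c, d, sens, spec):
    """Odds ratio after non-differential exposure misclassification.

    a, b: truly exposed/unexposed cases; c, d: truly exposed/unexposed controls.
    sens/spec: probability of correctly recording an exposed/unexposed subject,
    applied identically to cases and controls (i.e. non-differential error).
    """
    # Observed "exposed" counts = true positives + false positives
    a_obs = sens * a + (1 - spec) * b
    c_obs = sens * c + (1 - spec) * d
    b_obs, d_obs = (a + b) - a_obs, (c + d) - c_obs
    return (a_obs * d_obs) / (b_obs * c_obs)

true_or = (200 * 200) / (100 * 100)                      # 4.0 with perfect measurement
obs_or = misclassified_or(200, 100, 100, 200, 0.8, 0.9)  # ~2.6: biased towards 1
```

Even with fairly good measurement (80% sensitivity, 90% specificity), the observed odds ratio falls from 4.0 to roughly 2.6; heavier misclassification pushes it closer to the null.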

Exposure is seldom continuous in real life, intermittent drug intake is common practice, and exposure misclassification can result in biased effect estimates, wrong conclusions and contradictory, irreproducible results. This is a problem we ignore when we design a simple registry and analyse the resulting data as a cohort, using the time-to-event approach to assess the safety or effectiveness of treatment. To reduce the effect of selection bias, which arises primarily from the absence of treatment randomization, it is usual practice to select from the study population only exposed and unexposed patients who are comparable on key confounding factors. To minimize the impact of exposure misclassification, we often also match each person who experienced the event of interest (i.e. the cases) to one or more of those who are free of the event at that time (i.e. the controls) from within the same cohort, thus ensuring that the cases and matched controls are nested within the cohort. Consequently, we often end up discarding much of the collected data, which in the case of a registry means a less than optimal use of resources. As with the cohort design, the nested case-control design is liable to selection bias, in addition to the loss of sample size that results from this less than optimal use of the available information (i.e. the full study sample, the cohort). Nevertheless, the design offers a simple method for avoiding unreasonable assumptions in the evaluation of time-dependent treatment effects.6 Although, in general, matching controls to cases on a confounding variable can improve the precision of the comparison of exposure groups, we cannot estimate the exposure effect if, in the matched data, either all the cases or all the controls have the same exposure status, or indeed if the exposure status is the same within each matched set.
This is because effect estimation is based entirely on the off-diagonal (discordant) data in the resulting stratum-based 2x2 tables of the conditional logistic regression. Consequently, in theory, the more such discordant matched sets our sampling can generate, the more improvement we can expect in efficiency. For this reason, the counter-matched version of the nested case-control design has been proposed as a more viable alternative to both the cohort and the classical nested case-control designs, and even more so if an appropriate group-balancing instrument can also be incorporated, thus offering the opportunity to reduce the likely impact of both exposure misclassification and selection bias.11 Indeed, according to evidence from simulated data, in situations involving an important effect modifier (i.e. a risk factor), the classical nested case-control design can be spectacularly less efficient than the counter-matched version, even to the extent of failing to yield any meaningful effect estimates.12
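In the simplest 1:1 matched case, the point about off-diagonal data can be made concrete: the conditional maximum-likelihood odds ratio reduces to the ratio of the two discordant pair counts, and concordant pairs drop out entirely. A minimal sketch with hypothetical pair counts:

```python
def matched_pair_or(pairs):
    """Conditional odds ratio from 1:1 matched case-control pairs.

    Each pair is (case_exposed, control_exposed) as booleans. Only the
    discordant pairs (the off-diagonal cells of the 2x2 pair table)
    contribute; concordant pairs carry no information about the effect.
    """
    b = sum(1 for case, ctrl in pairs if case and not ctrl)  # case exposed only
    c = sum(1 for case, ctrl in pairs if ctrl and not case)  # control exposed only
    return b / c  # undefined (ZeroDivisionError) if c == 0: no estimate possible

# 40 concordant pairs plus 15 + 5 discordant pairs
pairs = [(True, True)] * 25 + [(False, False)] * 15 \
        + [(True, False)] * 15 + [(False, True)] * 5
```

Here the estimate is 15/5 = 3.0, and it would be exactly the same if all 40 concordant pairs were discarded; a sample made up entirely of concordant sets yields no estimate at all, which is what motivates sampling designs that manufacture discordance.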

However, despite being more efficient than the classical approach, the counter-matched nested case-control design remains less popular in practice, indicating an apparent, widespread reluctance by researchers to take advantage of its many potential benefits. On both theoretical and practical grounds, there is no obvious explanation for this very low uptake or for the continued preference for the classical approach. It seems likely that concerns about the additional computational requirements associated with counter-matching are responsible: a major handicap of older versions of the leading statistical packages, but one which has since been resolved in virtually all their more recent versions. In other words, in observational settings where exposure misclassification is likely, we can indeed do better by adopting the counter-matched nested case-control design in preference to the classical approach and to the simple cohort design.
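In outline, counter-matching stratifies each risk set by exposure (or a surrogate for it) and fills a fixed quota per stratum, so every sampled set is guaranteed to be discordant; the induced sampling probabilities enter the analysis as weights in the conditional likelihood. A simplified sketch of the sampling step for one risk set (function and variable names are illustrative, not from any package):

```python
import random

def counter_match(case_id, risk_set, stratum_of, quotas, rng):
    """Sample one counter-matched set from a risk set (simplified sketch).

    risk_set:   ids of subjects still at risk at the case's event time
    stratum_of: id -> exposure (or surrogate) stratum label
    quotas:     stratum -> total number sampled from it, the case included
    Returns the sampled ids and, per stratum, the weight n_l / m_l that is
    carried into the weighted conditional likelihood.
    """
    sampled, weights = [case_id], {}
    for stratum, m in quotas.items():
        members = [s for s in risk_set if stratum_of[s] == stratum]
        need = m - (1 if stratum_of[case_id] == stratum else 0)
        pool = [s for s in members if s != case_id]
        sampled.extend(rng.sample(pool, need))
        weights[stratum] = len(members) / m  # n_l / m_l sampling weight
    return sampled, weights

# Ten subjects at risk: four in the exposed stratum 'E', six in 'U'; the case is exposed
stratum_of = {i: ('E' if i < 4 else 'U') for i in range(10)}
sampled, weights = counter_match(0, list(range(10)), stratum_of,
                                 {'E': 1, 'U': 1}, random.Random(0))
```

With quotas of one per stratum, the sampled set always contains one exposed and one unexposed subject, so no matched set is wasted on concordance; the weights (here 4.0 for 'E' and 6.0 for 'U') correct the likelihood for the non-random control selection.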

Acknowledgments

None.

Conflict of interest

The author provides consultancy services on research methods to both the pharmaceutical and biotechnology industries.

References

  1. Guidance for Industry–Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment. USA: Food and Drug Administration; 2005.
  2. Dreyer NA, Schneeweiss S, McNeil BJ, et al. GRACE Principles: Recognizing high–quality observational studies in comparative effectiveness. Am J Manag Care. 2010;16(6):467–471.
  3. Ray WA. Evaluating medication effects outside of clinical trials: new–user designs. Am J Epidemiol. 2003;158(9):915–920.
  4. Kiri VA, MacKenzie G. How real is intention–to–treat (ITT) analysis in non–interventional PASS? We can do better. Curr Drug Saf. 2009;4(2):137–142.
  5. Stampfer MJ. ITT for observational data: worst of both worlds? Epidemiology. 2008;19(6):783–784.
  6. Copeland KT, Checkoway H, McMichael AJ, et al. Bias due to misclassification in the estimation of relative risk. Am J Epidemiol. 1977;105(5):488–495.
  7. Gullen WH, Bearman JE, Johnson EA. Effects of misclassification in epidemiologic studies. Public Health Rep. 1968;83(11):914–918.
  8. Newell DJ. Errors in the interpretation of errors in epidemiology. Am J Public Health Nations Health. 1962;52:1925–1928.
  9. Keys A, Kihlberg JK. Effects of misclassification on estimated relative prevalence of a characteristic. Part I: two populations infallibly distinguished. Part II: errors in two variables. Am J Public Health. 1963;53:1656–1665.
  10. Kiri VA. A pathway to improved prospective observational post–authorization safety studies. Drug Saf. 2012;35(9):711–724.
  11. Suissa S. Novel approaches to pharmacoepidemiology study design and statistical analysis. In: Pharmacoepidemiology. 3rd edn. Chichester, UK: John Wiley & Sons; 2000.
  12. Kiri VA, Tepie–Feudjo M, Rodham A, et al. Utility of the nested case control design for risk assessment in the presence of an important risk modifier in pharmacoepidemiological studies: Evidence from Simulated Data. Pharmacoepidemiology and Drug Safety. 2012;21(3):S758.
Creative Commons Attribution License

©2014 Kiri. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.