eISSN: 2378-315X

Biometrics & Biostatistics International Journal

Editorial Volume 6 Issue 2

Considerations on survey validation: focusing on international survey adaptation

Wui Chiang Lee,1 Hsun Hsiang Liao,2 Heon Jae Jeong3

1Department of Medical Affairs and Planning, Taipei Veterans General Hospital & National Yang-Ming University School of Medicine, Taipei, Taiwan
2Deputy Executive Officer, Joint Commission of Taiwan, Taiwan
3The Care Quality Research Group, Chuncheon, Korea

Correspondence: Hsun-Hsiang Liao, Deputy Executive Officer, Joint Commission of Taiwan, 5F, No. 31, Sec. 2, Sanmin Rd., Banqiao District, New Taipei City, Taiwan, Tel 886-2-89643902

Received: June 28, 2017 | Published: July 3, 2017

Citation: Lee WC, Liao HH, Jeong HJ. Considerations on survey validation: focusing on international survey adaptation. Biom Biostat Int J. 2017;6(2):308-309. DOI: 10.15406/bbij.2017.06.00162


Disclaimer

As a short editorial, this article does not contain detailed methods of survey adaptation. The authors’ sole intention is to call readers’ attention to a fundamental rule that is too often ignored or misunderstood.

No one doubts that the survey is one of the most frequently used methodologies for collecting information about how the circuitry of respondents’ minds is wired for a specific topic. Thus, if there is even a slight inconsistency between the topic of interest and the developed instrument, we should modify the instrument, sometimes significantly. This is a rule that should not be broken, although it frequently is. Let us extend the rule to the international level. If we adopt a survey developed in another country, especially in a different language, we can safely say that mere translation will never let the new survey convey the original meaning. First, it is a groundless hope that the topic or construct measured in one language has a one-to-one word match with the very same concept in a different language. Describing the working definition of the construct at the beginning of an instrument might be a work-around, but a hard-wired concept consolidated over a person’s lifetime certainly hinders respondents from answering about the target construct as we intend. Beyond the problems arising from the language difference, the unique combination and interaction between the instrument and the original population in which it was developed and tested cannot be quantified; thus, we cannot be sure that the instrument will work as intended for the topic under investigation either.

The above difficulty naturally begs a question that certainly scares us: Is developing our own survey questionnaire for every study the only answer? We do not think so. The resources needed to develop a questionnaire from scratch almost always far exceed those needed to adopt an existing survey instrument. Meanwhile, when it comes to international studies, developing a completely new survey for each country would keep us from comparing or benchmarking across countries. Thus, in the name of practicality, we make a leap of faith: We translate the original version into the language of our target population, while trying to minimize semantic changes in the questionnaire items. Of course, we should first ensure that the constructs underlying the two versions are the same, or at least very similar, although this is, strictly speaking, an impossible task.

This is essentially what we call survey adaptation, more frequently called instrument validation, of an adopted questionnaire. Almost everyone who deals with a survey questionnaire has probably heard this term. However, if someone asks a point-blank question (“What exactly should I do to validate a survey?”), we may not have many answers off the top of our heads. One might come up with confirmatory factor analysis (CFA), but what else?

Ruminating on our modus operandi would help: If you search for academic articles about instrument adaptation, you may be quite astonished by their surprisingly similar structure and contents. To illustrate, the articles generally begin with background information; then, in the methods section, they describe the translation process, perhaps including back-translation as a safeguard, although not always. Then some reliability coefficients are displayed, mostly Cronbach’s alpha. After that, the articles hit the gas and dive straight into CFA, usually via structural equation modeling (SEM), as a venue for presenting the factor loadings of the questionnaire items. Finally, as the main dish, the alphabet stew of model fit indices is put on the table, including the GFI, RMSEA, NFI, and TLI, among others (introducing the full terminology of these is not our purpose). Stating that the indices are good or acceptable based on some external reference values, the adopted instrument is pronounced fit for field use; case closed.
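To make that routine concrete, here is a minimal sketch in Python of the pipeline just described. It is an illustration only, not a recommended validation protocol: the item names q1–q9, the three-factor structure, the data file name, and the use of the third-party semopy package for the CFA/SEM step are all our assumptions rather than features of any particular study.

# A minimal sketch of the routine described above; illustration only.
# Assumptions: responses for hypothetical items q1..q9 in a CSV file, a
# hypothetical three-factor structure, and the third-party semopy package
# for the CFA/SEM step.
import pandas as pd
import semopy


def cronbach_alpha(items: pd.DataFrame) -> float:
    # Classic formula: k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


# Hypothetical three-factor CFA specification in lavaan-style syntax.
MODEL_DESC = """
F1 =~ q1 + q2 + q3
F2 =~ q4 + q5 + q6
F3 =~ q7 + q8 + q9
"""

data = pd.read_csv("translated_survey_responses.csv")  # hypothetical file name
items = data[[f"q{i}" for i in range(1, 10)]]

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

model = semopy.Model(MODEL_DESC)
model.fit(data)                    # estimate factor loadings for the translated items
print(semopy.calc_stats(model).T)  # fit indices such as GFI, CFI, TLI, RMSEA

Obtaining “acceptable” numbers from a script like this is precisely the point at which many adaptation reports stop, and that is exactly what worries us.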

We are worried about this approach; developing a tool to look into people’s minds can never be that simple. Most importantly, the indices introduced above only show that the translated instrument works as it is; they do not show that it works as its original version did. Some might think this issue is so fundamental that it cannot be resolved by any means. That might be true, but it does not justify the mode of operation described above. Those indices are not the Holy Grail. Cronbach’s alpha and CFA are not synonyms for validation. Cronbach’s alpha is just one reliability measure with a built-in weakness: its value is easily inflated simply by increasing the number of items, which tempts us to conclude too quickly that internal consistency is good. CFA, on the other hand, is a means of assessing construct validity, and actually establishing construct validity requires several other pieces of evidence, such as convergent, discriminant, and, of course, factorial evidence. Furthermore, besides construct validity, other types of validity should be checked (e.g., face, content, and criterion-related validity). Some of these are quite qualitative, which might be why researchers tend to avoid or deprioritize them. Nevertheless, they must be considered. Especially when we adapt instruments internationally, we must remember that none of the coefficients mentioned above provides evidence that the concept of interest has been precisely transplanted into a different culture.
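To illustrate the inflation issue just mentioned, the short sketch below evaluates the standardized form of Cronbach’s alpha while the average inter-item correlation is held fixed; the correlation value of 0.3 is an assumption chosen purely for illustration.

# Illustration only: standardized Cronbach's alpha,
#   alpha = k * r_bar / (1 + (k - 1) * r_bar),
# with the average inter-item correlation r_bar held fixed at an assumed 0.3.

def standardized_alpha(k: int, r_bar: float) -> float:
    return k * r_bar / (1 + (k - 1) * r_bar)

for k in (5, 10, 20, 40):
    print(f"{k:2d} items -> alpha = {standardized_alpha(k, 0.3):.2f}")
# Prints roughly 0.68, 0.81, 0.90, and 0.94: the items measure nothing better,
# yet alpha climbs simply because more of them were added.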

No circumstances can vindicate sloppiness in survey adaptation and validation. Once the instrument is validated and rolled out, hundreds or thousands of people will spend their time completing it. We are obliged to guarantee that any questionnaire respondents fill out is scientifically sound and not tainted by our mistakes and negligence. After all, we adopt a survey to use the information it yields; incorrect information misleads us into designing incorrect or incomplete programs and eventually wastes our resources, even at the international scale. So the bottom line of this editorial is loud and clear: It is time to open our textbooks. Without a doubt, we will find a chapter not only on instrument validation but also on international adaptation, where so many important considerations that we have forgotten or ignored reside. Remind yourself of them and execute them: At the end of the day, we will obtain much richer and more precise information from administering our instruments, through which we understand the human mind more thoroughly.

Acknowledgments

None.

Conflicts of interest

None.

Creative Commons Attribution License

©2017 Lee, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.