eISSN: 2576-4470

Sociology International Journal

Research Article Volume 3 Issue 1

Influence of the questionnaire design in self-administered surveys

Vidal Diaz de Rada

Department of Sociology, Public University of Navarre, Spain

Correspondence: Vidal Diaz de Rada, Department of Sociology, Public University of Navarre, Campus of Arrosadía s/n, 31006 Pamplona, Spain

Received: June 12, 2018 | Published: January 30, 2019

Citation: Rada VD. Influence of the questionnaire design in self-administered surveys. Sociol Int J. 2019;3(1):115-121. DOI: 10.15406/sij.2019.03.00163


Abstract

A review of the international literature on self-administered instruments points to poorer responses in the completion of the questionnaire. To determine what drives better or poorer responses to a questionnaire (made up of questions of different types and formats), it is necessary to consider the impact of the questions on the responses. Is it feasible to think that questions which provide high-quality data in web surveys do not work in mail questionnaires? The purpose of this paper is to assess how question format and question type influence the number of non-responses in traditional self-administered (pen-and-paper) questionnaires and in Web questionnaires. To do so, a general population survey is used, which combines four question types and five question formats.

Keywords: self-administered questionnaire, mail survey, web survey, question format, question type, response quality

Introduction

Halfway into the second decade of the 21st century, predictions made prior to the year 2000 have been confirmed in terms of the prevalence of self-administered questionnaires, as opposed to those administered by an interviewer. Couper1 explained it as being due to the increasing time and cost necessary to locate, select, persuade and interview an individual.2,3 He also reported the greater honesty of respondents due to the lower level of social desirability bias in their answers.4–7

The second point made by Couper when predicting the future expansion of self-administered interviews is related to the major development of audio-visual media, which has made communication considerably easier by combining oral, high-quality visual and even sensory stimuli.3,8 All of these contribute to improving data collection, as they make it possible to record the responses in digital format and facilitate their manipulation. It is equally important to have metadata reporting on how the information was ‘collected’ and the way in which the questionnaire was responded to (time spent, completed in one sitting or at different times, etc.). All these advancements obviously refer to a specific type of self-administered interview, namely online surveys.8

A review of the international literature about self-administered instruments (both traditional and modern) accounts for these advantages of online surveys, as well as their speed and economy, but these benefits ‘pay a price’, such as a low total response rate.9–12 The traditional mail survey, despite obtaining a higher (total) response rate, presents a more ‘awkward’, more primitive, system of communication, which may explain the poorer responses in the completion of the questionnaire. In order to find out what drives better or poorer responses to a questionnaire (made up of questions of different types and formats), it is necessary to consider the impact of the questions on the responses. Is it feasible to think that questions which provide high-quality data in web surveys ‘do not work’ in mail questionnaires? This is the purpose of this paper: to assess how question format and question type influence the number of non-responses in traditional self-administered (pen-and-paper) questionnaires and in Web questionnaires. To do so, a general population survey is used, which combines four question types and five question formats. More specifically, the objectives of this study are, firstly, to discover the influence of the method of administration (web-based or paper-based) on item non-response rates and, secondly, to analyse how item non-response rates vary depending on the format and type of questions used. The differences with respect to studies previously carried out are, on the one hand, the use of a general population and, on the other, the wide range of questions considered (in terms of type and format): demographic, factual, attitudinal and behavioural, with regard to type; and, with regard to format, dichotomous, nominal and ordinal forced-choice questions with different numbers of categories (from 3 to 8), multi-item questions (forced-choice, check-all-that-apply and grid questions) and open-ended questions (numerical and text).

Background

Item non-response is an important source of errors in surveys.13,14 This is why one of the major problems in self-administered surveys has been poor questionnaire completion. The meta-analysis carried out by De Leeuw15 clearly showed the higher number of unanswered questions in self-administered surveys than in face-to-face and telephone surveys. Although the item non-response rate of self-administered questionnaires may increase when certain aspects of the questionnaire layout are changed,16,17 this situation persisted in the study conducted by De Leeuw15 in The Netherlands and in more recent research studies.18

Some experts consider that item non-response rates vary according to the format of questions used,19 whether the questions require qualitative or quantitative answers,20–22 or whether the questionnaire contains open or closed questions.23,24 The type of question is no less important, and differences have been established between demographic, factual, attitudinal and behavioural questions.13,18,25–27 Concerning this point, a large number of researchers think that too little attention has been paid to the quality of responses in Web questionnaires and, more specifically, to ‘the way that Web questionnaires might affect respondents’ willingness to complete individual items within a questionnaire’.24

Starting with the format of the questions, the most important distinction is that between open-ended and closed questions.10 Open-ended questions can be classified according to the type of response requested: numerical, list of items and description.10,28,29 Numerical questions show the lowest item non-response, which increases for those requesting a list of items, and increases even further for open-ended questions requesting a description. Practically all research studies have shown that open-ended questions generate lower item non-response rates in Web questionnaires than in postal surveys.24,26,30,31

Closed questions are classified according to their scope, scale (nominal or ordinal) and number of response categories. Those with a narrower scope, with two options, obtain a lower response rate, although there are also few differences between modes.26,27 Nominal questions receive poorer responses than ordinal questions, and this difference is greater in Web surveys, due to the lower item non-response rate in the latter.27 Multiple answer questions may be asked in two formats, check-all-that-apply or forced-choice, the latter receiving more responses than the former. Several studies19,32,33 recommend using forced-choice questions because a higher number of responses is obtained and also because they prevent the more frequent choice of the first response options that occurs in check-all-that-apply questions (primacy effect). Multiple answer questions are the questions with the highest item non-response rate, although it is noticeably lower in Web surveys.27,34

Grid questions are sometimes considered a form of multiple answer question. Although few research studies have been conducted on this issue to date, Israel35 found higher item non-response rates in Web surveys in two of his three studies, although the differences were not significant. Regarding the type of question used, Dillman10 differentiated between factual, attitudinal and behavioural questions. Denscombe24 found few differences between factual and attitudinal questions, with the Web survey showing lower non-response rates for opinion questions. The study conducted by Messer27 is, however, more exhaustive, as it was based on three general population studies. Additionally, in that work four types of questions were used: demographic, attitudinal, behavioural and other factual. A multivariate OLS regression revealed the lack of influence of question type, except for the other factual questions, which showed the highest non-response rate. In the specific context of self-administered questionnaires, the dilemma currently lies in considering to what extent Web surveys improve questionnaire completion. Some studies have reported higher rates of item non-response in Web surveys,11,21,36 others have not identified any differences,37 whereas others have suggested that Web surveys attract lower item non-response rates.25,31,38–41 Consequently, there is no unanimous position on this issue.

Methods

The data used in this paper came from a research study commissioned and funded by the Regional Government of Andalusia, whose objective was to understand the situation of people from Andalusia who were resident in other countries.

The questionnaire

Question format: More than half of the questionnaire (30 questions) was made up of forced-choice questions, the majority with five or two response choices, using round radio buttons in the Web questionnaire. Of these, 13 questions were ordinal, 11 were nominal and 6 presented two response categories (Yes/No). There were no filter questions. There were also seven multiple-choice questions: three were check-all-that-apply questions and the remaining four had a forced-choice format with a yes/no choice for each item.10,33

Lastly, the questionnaire had three grid questions (6, 6 and 8 items respectively) that used 0-10 rating scales with verbal labels only for the endpoints, and also seven open questions:

In the four open-ended numerical questions, a blank space large enough to write a number of several digits was left in the postal questionnaire. Next to this space a template was placed to specify the unit of measurement.10,42 This template provided a guide for the respondent without the need for long explanations: ‘People’ in the question referred to the number of people who lived together in the household, and ‘rooms’ referred to the number of rooms in the house. In the question concerning monthly income the respondent was asked to indicate the amount, followed by the currency of the country. In the Web-based questionnaire, two of these questions were answered with drop-boxes or a select-list, following Couper’s42 recommendations: specifically, the number of people living in the household and the number of rooms in the dwelling. Free space was left for the rest of the questions, with no digit restriction.

The three open questions that required a phrase were designed to collect up to three answers. Each answer had two lines available in the postal questionnaire, bearing in mind that the space provided for an open question is interpreted by respondents as an indicator of the ‘expected’ length of their response.10,42–44 In the Web-based questionnaire, a 1-line text area10,42 was made available with no indication of space. The space to write the answer was intended to be visually the same size in both questionnaires. One difference between the mail and Web surveys was that responses to open-ended items in the Web version were limited to 255 characters, while there was no such limit in the paper version. Because none of the Web respondents entered responses this long, this difference does not affect the comparisons made in this paper.

Question type

The questionnaire included 6 demographic questions (marital status, level of education, occupation, level of income, nationality of the respondents and nationality of their children) and 9 defined as other factual, following the categories established by Messer.27 The 12 questions about attitudes dealt with satisfaction with their home and neighbourhood, opinion about the environmental situation in Andalusia, assessment of their family situation in relation to the level of income, main problems and needs, degree of identification with their Andalusian roots, opinion about the political and economic changes that occurred in the last 10 years and opinion about the degree of modernisation in recent years. The three grid questions were also concerned with attitudes: two measured the degree of trust in six institutions in their country of residence and in Spain (12 items), and a third (8 items) was related to satisfaction with work, politics, family, friends, leisure time, religion and education. The group with the largest number of questions concerned behaviour, considering aspects such as occupation, changes in expenses (in the last year) due to economic problems, demand/use of social support institutions, contact with public services in their country of residence, enjoyment of benefits from the Spanish Government or the regional Government of Andalusia (Junta de Andalucía), type of health cover, payment patterns when buying medicines and number of times they have travelled to Andalusia. The different number of questions of each type is balanced out when considering the number of items (Chart 1): 35 (29+6) factual items, 37 attitudinal items and 39 behavioural items. All 47 numbered questions, with 111 variables, were included in a 5-page questionnaire designed following recommendations by experts in self-administered questionnaires10,32,45,46 and unified mode construction principles.10

With the aim of avoiding effects caused by different ways of viewing the questionnaire, it was decided that all respondents would ‘view the same’, regardless of the mode chosen to respond. In other words, of the different existing options for presenting questions and answers in Web-based surveys,10,42 in this study several questions appeared at the same time on each computer screen, a division that, in broad terms, mirrored each of the pages of the paper-based questionnaire. The Web-based questionnaire did not use pop-ups, help screens, or any form of automated keyboard response that was not also available in the paper-based questionnaire, and therefore, after answering each question, respondents had to scroll down to answer the next one. The Web-based questionnaire also allowed respondents to move from one question to another without having to answer the previous question.47,48

We are aware that this is not the most common way of asking questions in a Web-based survey, since various pieces of research have identified significant correlations between the questions appearing on the same page3,49 as a consequence of context effects.45 However, we consider that, in this way, the response process for the Web-based questionnaire is as similar as possible to that for the mail survey.42 Lastly, it should be noted that the response options were shown in the same way in all the questionnaires, that they were not randomised, and that the Web questionnaire did not use the ‘forced answer’ option. This is fundamental for a study wishing to compare the quality of the response to two questionnaires. It is important to note that, as in the postal survey, the Web-based questionnaire allowed the option of going back, reading the full questionnaire first and answering it afterwards, correcting errors, etc. It was also possible to interrupt the response process; if this happened, the questionnaire would be re-opened at the last question answered, just as with a paper-based questionnaire that the respondent had, for example, left at question 12. That is, the procedure for answering the postal survey was reproduced exactly, except for the ‘instrument’ used, where the pen was replaced by the computer.

No ‘don’t know’ and ‘no answer’ categories were included for fear that they would be used as a ‘refuge’ by respondents who did not want to give their opinion.50 Abundant research on attitude questions has shown that including these options does not in any way reduce random error in responses,51,52 and that their inclusion sometimes leads to significant differences in the reliability and validity of the answers.53,54 Based on these findings, and taking into account that the majority of the questions in the questionnaire were factual, it was decided that no questions, whether open or closed, would offer the options ‘don’t know’ or ‘no answer’, as in previous research with similar aims.2

The object of study is the number of partial item non-responses to the questions in the questionnaire (items left blank), expressed as the item non-response rate for each question: the number of missing responses divided by the number of possible complete responses, multiplied by 100. We used the z test for differences in proportions, with the Bonferroni-Holm correction, to determine whether there were statistically significant differences in the rates of item non-response across modes (a code sketch of these computations is given below). At a later stage, OLS regression is used to find out the influence of the mode of administration, question type and question format on item non-response.

The implementation procedures resemble those used in similar research. However, no monetary incentive was used, the contact method was not changed, and no successive reminders were sent (only a single reminder). Financial issues, related to the cost involved in sending letters to 20 countries, and time concerns (given the need to allow time for the questionnaires to be returned) prevented the use of any further reminders. Specifically, each of those selected received by ordinary mail a package with a covering letter customized with an identification (ID) number, a questionnaire and a pre-paid envelope with which to reply. The covering letter gave assurance that the data would remain confidential and explained how to answer the questionnaire. With the aim of highlighting the importance of the research, the heading of the paper-based questionnaire included the coat of arms of the Government of Spain and the logo of the Spanish High Council for Scientific Research (Consejo Superior de Investigaciones Científicas), as well as that of the Regional Government of Andalusia. These three symbols also appeared on each of the pages of the Web questionnaire. The request for ‘help’ made by these institutions to people originally from Andalusia who live away from their place of birth should increase their wish to collaborate, thus increasing the response rate.
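As a minimal sketch of the measures and tests just described (item non-response rate per question, two-proportion z test across modes, Bonferroni-Holm correction), the following Python fragment illustrates how they could be computed. It is not the authors' original code: the data layout (one row per respondent, one column per item, blanks stored as NaN) and all names are illustrative assumptions.

# Minimal sketch of the measures described above; not the authors' original code.
# Assumed layout: one row per respondent, one column per item, blanks stored as NaN.
import numpy as np
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

def item_nonresponse_rate(responses: pd.DataFrame) -> pd.Series:
    """Missing responses divided by possible complete responses, multiplied by 100."""
    return responses.isna().mean() * 100

def compare_modes(mail: pd.DataFrame, web: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """Two-proportion z test per item, Holm-corrected across all items."""
    rows = []
    for item in mail.columns:
        misses = np.array([mail[item].isna().sum(), web[item].isna().sum()])
        totals = np.array([len(mail), len(web)])
        z_stat, p_value = proportions_ztest(misses, totals)
        rows.append({"item": item, "z": z_stat, "p_raw": p_value})
    result = pd.DataFrame(rows)
    reject, p_holm, _, _ = multipletests(result["p_raw"], alpha=alpha, method="holm")
    result["p_holm"], result["significant"] = p_holm, reject
    return result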

During the first days of September 2009 we sent 11,959 questionnaires, which yielded 1,873 responses. Three months later we sent 7,168 reminder letters, of which 1,867 never reached their destination; in total, 2,197 responses were obtained, which represented a cooperation rate (COOP2) of 21.9%. This is reduced to 20% when Response Rate formula 3 is used.4,21,27 These figures are similar to those obtained in comparable general population surveys conducted in other contexts.11,12,40,55 The analysis of the response pattern obtained by each survey mode showed that the large majority of the questionnaires (1,835) were received by post (83.5%), 321 by Internet (14.6%) and 41 (1.9%) by CATI. The latter will not be considered in this paper.
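As a rough, back-of-the-envelope check (an illustration, not the exact AAPOR calculation applied by the author), the reported cooperation rate is consistent with dividing the total returns by the number of presumably delivered questionnaires:

# Back-of-the-envelope check of the reported cooperation rate (COOP2 = 21.9%).
# Assumption (illustrative only): the 1,867 undeliverable letters are the only
# cases excluded from the denominator; this is not the exact AAPOR formula used.
sent = 11_959          # questionnaires mailed in September 2009
undeliverable = 1_867  # letters that never reached their destination
responses = 2_197      # questionnaires returned in total

cooperation = responses / (sent - undeliverable) * 100
print(f"Approximate cooperation rate: {cooperation:.1f}%")  # ~21.8%, close to the reported 21.9%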

Results

Item non-response in the questionnaires

Table 1 shows the percentage of respondents who left items unanswered, by mode of administration. In the Web survey, 9% of respondents replied to ALL the items in the questionnaire and 63% left less than 5% of the items unanswered; these percentages fall to 4% and 38% in the ordinary mail survey. Adding these figures shows that 72% of respondents to the Web survey replied to at least 95% of the questions: around 200 respondents failed to reply to an average of 3 questions, most of which concerned the level of income. The 1.3% of respondents who replied to less than half of the questionnaire were dropped from the study, applying the criterion used by Messer.27 The small size of this group (24 people) reduced the sample from 2,197 to 2,173 respondents (Table 1).

Percentage distribution of non-responses

% of unanswered questions           Mail      Web       Total
None                                4.00%     9.00%     4.70%
Less than 5% of the questions       38.00%    62.90%    41.70%
Between 5.1% and 10%                30.80%    20.90%    29.40%
Between 11% and 20%                 17.80%    4.00%     15.70%
Between 21% and 30%                 5.40%     0.90%     4.70%
Between 31% and 50%                 2.90%     0.30%     2.50%
Over half of the questionnaire      1.10%     1.90%     1.30%
N                                   1,836     321       2,157

Table 1 Response quality, considering the % of respondents who left questions unanswered (vertical percentages)

Table 2 shows the item non-response rate by question format and question type for each survey mode, either Web-based or paper-based (mail). An analysis of the overall table shows an average item non-response rate of 19%. This rate is noticeably higher in paper-based questionnaires than in electronic questionnaires, with a difference of almost four percentage points. A detailed analysis of the question format reveals that the poorest responses are those given to check-all-that-apply questions, followed by open-ended questions and grid questions. Excluding these three formats reduces the average number of unanswered items to fewer than two. Lower item non-response rates are shown by ordinal and dichotomous questions. The values for ordinal questions can be accounted for by the fact that more than half had five categories (Chart 1), although it is difficult to explain the poorer response obtained by dichotomous questions (with only two categories) compared to nominal questions, most of which have five categories (two questions had more than six categories). The analysis itemised by survey mode should shed some light on these results. Looking at the second and third columns of the table, substantial differences can be observed between modes in almost every forced-choice question, and these differences increase when the question is part of a multiple response question. In the mail survey, item non-response in grid questions and nominal questions doubles the rate of Web surveys, and this difference increases even further for ordinal questions. Bearing in mind that the reading pace of questionnaires on a computer is significantly faster than on paper,56,57 this better response can be explained by the fact that ordinal questions have ordered categories (graded from high to low). Nevertheless, the most considerable differences were found in the forced-choice questions included in a multiple response question, where the differences between the survey modes were multiplied by five. More similarities between modes were found in check-all-that-apply questions, dichotomous questions, and open-ended questions (Table 2).

                                          Number of   Item non-response rate in…
                                          items       Web       Mail      Both
Total                                     111         17.114    21.018    19.081

Question format
  Yes/No (dichotomous)                    6           2.017     2.783     2.400
  Nominal (forced-choice)                 11          0.727     2.038     1.382     *
  Ordinal (forced-choice)                 13          2.200     4.938     3.569     **
  Multi-item
    Forced-choice (4 q, 31 items)         31          1.242     6.694     3.968     ***
    Check-all-that-apply (3 q, 17 items)  17          80.918    81.353    81.135
    Grid question (3 q, 20 items)         20          5.340     10.525    7.933     ***
  Open-ended                              13          25.646    32.977    29.312

Question type
  Demographic                             6           1.650     3.450     2.550
  Other factual                           29          21.493    23.966    22.729
  Attitudinal                             37          9.689     14.441    12.065
  Behavioural                             39          23.367    27.769    25.568

(*) Significant at 0.10. (**) Significant at 0.05. (***) Significant at 0.01.

Table 2 Item non-response rate by question format and question type

Regarding question type, behavioural questions had the poorest response, followed by other factual questions. It must be noted that these two types of questions account for 68 of the 111 items in the questionnaire (39+29). The values for the sociodemographic items (only 6 questions) can be accounted for by the high percentage of people who left the question regarding level of income unanswered. The analysis itemised by mode shows that the most important differences were identified in demographic and attitudinal questions, and that these differences are not significant. Without underestimating these findings, a multivariate analysis permits a more accurate assessment of the effect of each of the aspects considered, independently of the rest. To do so, an OLS regression was performed, using as dependent variable the item non-response rate for each question by mode and, as independent variables, question type, question format and mode. All of the independent variables have been coded as dummy variables, using the coding system shown at the bottom of Table 3.

                                      Model 1                         Model 2
                                      Unstandardized   Standard       Unstandardized   Standard
                                      coefficient      error          coefficient      error
MODE (1 = mail)                       0.064            4.093          0.064 **         1.687

Question Type
  Demographic                                                         -0.120 **        5.631
  Other factual                                                       -0.182 **        3.921
  Behavioural                                                         -0.170 **        3.576
  Attitudinal                                                         Reference

Question Format
  Nominal (forced-choice)                                             -0.003           3.792
  Yes/No                                                              -0.027           4.010
  Ordinal (forced-choice)                                             -0.101 **        4.046
  Open-ended (numerical and text)                                     0.214 ***        3.410
  Multi-item
    Check-all-that-apply                                              0.913 ***        2.683
    Grid question                                                     -0.097 **        4.384
    Forced-choice                                                     Reference

R2                                    0.004                           0.838
N                                     222                             222

(*) Significant at 0.10. (**) Significant at 0.05. (***) Significant at 0.01.

Table 3 Multivariate OLS regression models predicting item non-response rates by question type and format, by mode

Model 1 is a bivariate OLS regression with mode as the only independent variable, and in Model 2 we included the survey mode controlled for question characteristics. In Model 1 the mode of administration of the questionnaire is statistically significant, which indicates that, in itself, it produces a noticeable change in the item non-response rate. Its coding indicates that the mail survey increases the item non-response rate. Despite this being remarkable, the model has low explanatory value, with an R2 of 0.004. When question type and question format are introduced into the regression, the explanatory power of the model becomes substantially higher, since an R2 of 0.84 is obtained, while also making it possible to assess the influence of the mode compared to the rest of the aspects considered here. The mode coefficient does not change with respect to the previous model, although it shows a lower value than those found for question type and for some components of question format, which indicates the high explanatory value of both variables. Starting with question type, attitudinal questions have higher item non-response than the rest, whereas other factual and behavioural questions show the opposite result. The influence of question format reveals a decrease in the item non-response rate for ordinal questions and grid questions with respect to multi-item forced-choice questions, but also a considerable increase in the item non-response rate produced by check-all-that-apply questions and open-ended questions.
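As a minimal sketch of how the two models in Table 3 could be specified, assuming an item-level dataset with one row per question and mode (111 items x 2 modes = 222 rows), the following Python fragment is illustrative only; the variable names, category labels and reference-level strings are assumptions, not the author's original code.

# Minimal sketch of the two OLS models reported in Table 3; not the author's code.
# Assumed data: one row per item and mode (222 rows), with the item non-response
# rate as dependent variable and categorical predictors coded as strings.
import pandas as pd
import statsmodels.formula.api as smf

def fit_models(items: pd.DataFrame):
    # Model 1: bivariate regression with mode only (1 = mail, 0 = web).
    model_1 = smf.ols("nonresponse_rate ~ mode", data=items).fit()
    # Model 2: mode controlled for question type and question format; Treatment()
    # sets the reference categories used in Table 3 (attitudinal questions and
    # multi-item forced-choice questions).
    model_2 = smf.ols(
        "nonresponse_rate ~ mode"
        " + C(question_type, Treatment(reference='attitudinal'))"
        " + C(question_format, Treatment(reference='multi-item forced-choice'))",
        data=items,
    ).fit()
    return model_1, model_2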

Conclusion

This study has shown the great influence that the mode of administration has on the item non-response rate, thus confirming the results obtained for specific populations (e.g. students, teachers/faculty, and counsellors). When respondents were invited to use the mode of their choice, a considerable preference was observed for traditional mail (83.5%), to the detriment of the Web survey (14.6%) and CATI (1.9%). The analysis of item non-response rates showed that less than half (exactly 42%) of the respondents who used the mail survey completed most of the questions, a percentage that rose to 72% in the Web survey. One in every three mail respondents failed to respond to between 5% and 10% of the questions, and almost 20% of them left between 11% and 20% of the questions unanswered. In line with the criterion proposed by Teclaw et al. (2010), which defined as ‘partial’ any survey with between 51% and 80% of the questions answered, with the recommendation not to analyse any which exceeded that percentage, this would involve discarding 27.2% of the mail surveys (500) and only 7.1% of the Web surveys.

The analysis of the question format shows that the poorest responses were those given to check-all-that-apply questions, open-ended questions and grid questions, although, of these, only grid questions showed a significant difference between modes. These conclusions are consistent with findings in other contexts,27,34,35,58 which showed that multi-item questions obtained poorer responses, although breaking down the general concept ‘multi-item’ into specific question formats constitutes a substantial enhancement when designing the questionnaire. The questions presenting the most substantial differences between survey modes are the forced-choice questions that form part of a multiple response question, where the item non-response rate in Web surveys is one fifth of that found in mail surveys, as well as grid questions and nominal questions, whose item non-response rates double in mail questionnaires. The analysis by question type indicates that behavioural questions obtained the poorest responses, followed by other factual questions.

The use of OLS regression has shown the great relevance of survey mode, question format and question type for item non-response rates, although the last two have a greater influence than the first. Attitudinal questions presented a higher rate of item non-response than other factual, behavioural and demographic questions, whereas ordinal questions and grid questions showed lower item non-response rates than forced-choice questions that are part of a multiple response question. The questions that obtained the poorest responses were check-all-that-apply questions and open-ended questions. The high non-response percentages for open-ended questions require taking into account that, in free-text open questions, respondents were offered space for three possible answers, but not all respondents had had three occupations, nor did they all declare three most important problems, nor did they state three reasons for staying in their country of residence. In fact, 55% of those interviewed did not state a second problem and 75% did not express a third; in other words, only 25% gave three answers to this question. Something similar happened with the question regarding the reasons for staying in the country, with 68% giving two answers and 42% three. In fact, considering only the answers to the first option reduces the non-response rate to 8.23%, ranging between 6.5% in the Web survey and 11.5% in the mail survey.

Acknowledgments

None.

Conflict of interest

The author declares there are no conflicts of interest.

References

  1. Couper MP. Technology Trends in Survey Data Collection. Social Science Computer Review. 2005;23(4):486–501.
  2. Fricker S, Galesic M, Tourangeau R, et al. An Experimental Comparison of Web and Telephone Surveys. Public Opinion Quarterly. 2005;69(3):370–392.
  3. Couper MP. The Future of Modes of Data Collection. Public Opinion Quarterly. 2011;75(5):889–908.
  4. Dillman DA, et al. Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the Internet. USA: Paper presented at the 57th annual conference of the American Association for Public Opinion Research (AAPOR); 2011. 18 p.
  5. Currivan DB, Nyman AL, Turner Ch, et al. Does Telephone Audio Computer-Assisted Self-Interviewing Improve the Accuracy of Prevalence Estimates of Youth Smoking? Evidence from the UMass Tobacco Study. Public Opinion Quarterly. 2004;68(4):542–564.
  6. Kreuter F, Presser S, Tourangeau R. Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity. Public Opinion Quarterly. 2008;72:847–865.
  7. Yang Y, Callegaro M, Dillman DA. IVR and web administration in structured interviews utilizing rating scales: exploring the role of motivation as a moderator to mode. International Journal of Social Research Methodology. 2011;14(1):1–5.
  8. Couper MP, Bosnjak M. Internet Surveys. In: Marsden PV, Wright JD, editors. Handbook of Survey Research. Bingel: Emerald Group Publishing Limited; 2010. Pp. 527–550.
  9. Saunders MN. Web versus Mail: The Influence of Survey Distribution Mode on Employees’ Response. Field Methods. 2012;24(1):56–73.
  10. Dillman D, Smyth JD, Christian LM. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. New York: John Wiley; 2009. 528 p.
  11. Lozar Manfreda K. Web surveys versus other survey modes: a meta-analysis comparing response rates. International Journal of Market Research. 2008;50(1):79–104.
  12. Shih TH, Fan X. Comparing Response Rates from Web and Mail Surveys: A Meta-Analysis. Field Methods. 2008;20(3):249–271.
  13. Dillman DA, et al. Survey nonresponse in design, data collection, and analysis. In: Groves RM, Dillman DM, Eltinge JL, editors. Survey non response. New York: Wiley; 2002. 520 p.
  14. Dixon J, Tucker C. Survey nonresponse. In: Marsden PV, Wright JD, editors. Handbook of Survey Research. 2nd Ed. Bingley, UK: Emerald Group Publishing Ltd; 2010. 250 p.
  15. De Leeuw ED. Data Quality in Mail, Telephone and Face to Face Surveys. Amsterdam: TT-Publikaties; 1992. 177 p.
  16. Jenkins CR, Dillman DA. Towards a Theory of Self-Administered Questionnaire Design. In: Lyberg LE, editors. Survey Measurement and Process Quality. New York: Wiley; 1997. 26 p.
  17. Redline CD, Dillman DA. The Influence of Alternative Visual Design on Respondent’s Performance with Branching Instructions in Self-Administered Questionnaires. In: Groves RM, Dillman DA, et al. editors. Survey non response. New York: Wiley; 2002. 193 p.
  18. De Leeuw ED, Hox J, Huisman M. Prevention and treatment of item nonresponse. Journal of Official Statistics. 2003;19(2):153–156.
  19. Smyth JD, Dillman DA, Christian LM, et al. Comparing check-all and forced-choice question formats in web survey: The role of satisficing, depth of processing, and acquiescence in explaining differences. SESRC Technical Report 05-029, Pullman, Washington. 2005. 30 p.
  20. Etter R. Do response rates and responses differ between mail and internet answer in a mixed mode survey?. Zurich: Swiss Institute for Business Cycle Research, Swiss Federal Institute of Technology; 2002. 85 p.
  21. Lozar Manfreda K, Vehovar V. Do mail and web surveys provide the same results?. In: Ferlijow A, Mrvar A, editors. Development in social research methodology. 2002;18(2):1–21.
  22. Stangl A. Mode effects on data quality: Benefits and drawbacks of mixed mode surveys. 27th Centre for International Research on Economic Tendency Surveys Conference. Survey Research Methods. 2004;7(3):157–168.
  23. Reja U, Manfreda KL, Hlebec V, et al. Open-ended vs. close-ended questions in web questionnaires. Metodoloski zvezki. 2003;19:159–177.
  24. Denscombe M. Item non-response rates: a comparison of online and paper questionnaires. International Journal of Social Research Methodology. 2009;12(4):281–291.
  25. Denscombe M. Web-based questionnaires and the mode effect. An evaluation based on item non-response rates and data contents of near-identical questionnaires delivered in different modes. Social Science Computer Review. 2006;24(2):246–254.
  26. Israel GD, Lamm AJ. Item Non-Response in a Client Survey of the General Public. Survey Practice. 2012.
  27. Messer BL, Edwards ML, Dillman DA. Determinants of Item Nonresponse to Web and Mail Respondents in Three Address-Bases Mixed-Mode Surveys of the General Public. Washington State University: Social & Economic Sciences Research Center; 2012. 33 p.
  28. Dillman DA, Smyth JD, Christian LH. Internet, Phone, Mail and Mixed-Mode Surveys. 4th ed. New York: John Wiley; 2014. 824 p.
  29. Toepoel V. Doing Surveys Online. SAGE; 2015.
  30. Lesser VM, Newton L, Yang D. Item Non-Response in Web and Mail Responses to General Public Surveys. Survey Practice. 2012;5(2):1–9.
  31. Kwak N, Radler B. A Comparison between mail and web surveys: response pattern, respondent profile and data quality. Journal of Official Statistics. 2002;18(2):257–273.
  32. Dillman DA. The logic and psychology of constructing questionnaires. In: de Leeuw ED, Hox JJ, Dillman DA, editors. International Handbook of Survey Methodology. New York: Lawrence Erlbaum Associates and European Association of Methodology; 2008. 558 p.
  33. Smyth JD, Dillman DA, Christian LM, et al. Comparing check-all and forced-choice question formats in web surveys. Public Opinion Quarterly. 2006;70(1):66–77.
  34. Millar MM, Dillman DA. Do Mail and Internet Surveys Produce Different Item Nonresponse Rates? Survey Practice. 2012;5(2):35–48.
  35. Israel GD. Item Non-Response in a Client Survey of the General Public. Paper presented at the 66th annual conference of the American Association for Public Opinion Research-AAPOR, Phoenix, AZ. 2011;5(2):1–4.
  36. Breko B Neza, Carstens R. Online data collection in SITES 2006: Papers versus web survey–Do they provide comparable results?. Washington, DC: Proceedings of the 2nd IEA International Research Conference; 2006.
  37. Wolfe EW, Converse PD, Airen O, et al. Unit and item nonresponse and ancillary information in Web-and paper-based questionnaires administered to school counselors. Measurement and Evaluation in Counseling and Development. 2009;21(2):92–103.
  38. Kiesler S, Sproull LS. Response effects in the electronic survey. Public Opinion Quarterly. 1986;50(3):402–413.
  39. Boyer KK, Olson JR, Calantone RJ, et al. Print versus electronic surveys: a comparison of two data collection methodologies. Journal of Operations Management. 2002;20(4):357–373.
  40. Bech M, Bo Kristensen M. Differential response rates in postal and web-based surveys among older respondents. Survey Research Methods. 2009;3(1):1–6.
  41. Schaefer DR, Dillman DA. Development of a standard e-mail methodology. Public Opinion Quarterly. 1998;62(3):387-397.
  42. Couper MP. Designing effective web surveys. New York: Cambridge University Press; 2008. 398 p.
  43. Smyth JD, Dillman DA, Christian LM, et al. Open-Ended questions in web surveys. Public Opinion Quarterly. 2009;73(2):325–337.
  44. Stern MJ, Smyth JD, Mendez J. The Effects of Item Saliency and Question Design on Measurement Error in a Self-Administered Survey. Field Methods. 2012;24(1):3–27.
  45. Dillman DA. Mail and telephone surveys. New York: Wiley. 1978;(6):37–85.
  46. Manglione TW. Mail Surveys. Thousand-Oaks: Sage; 1995.
  47. Albaum G, Roster CA, Smith SM. A Cross National Study of Topic Sensitivity: Implications for Web-Based Surveys. Journal of Marketing Development and Competitiveness. 2012;22(1):71–82.
  48. Decieux JP. Implementation of the forced answering option within online surveys: Do higher item response rates come at the expense of participation and answer quality?. Psihologija. 2006;48(2):311–326.
  49. Tourangeau R, Couper MP, Conrad R. Spacing, position and order: Interpretive heuristic for visual features of survey questions. Public Opinion Quarterly. 2004;68(3):368–393.
  50. Krosnick JA, Hollbrook AL, Berent MK, et al. The Impact of "No Opinion" Response Options on Data Quality: Non-Attitude Reduction or an Invitation to Satisfice?. Public Opinion Quarterly. 2002;66(3):371–403.
  51. Alwin D, Krosnick JA. The Reliability of Survey Attitude Measurement. Sociological Methods and Research. 1991;20(1):139–181.
  52. Gilljam M, Granberg D. Should we take don't know for an answer?. Public Opinion Quarterly. 1993;57(3):348–357.
  53. Scherpenzeel AC, Saris WE. The validity and reliability of survey questions: A meta-analysis of MTMM studies. Sociological Methods & Research. 1997;25(3):341–383.
  54. Wikman A, Wärneryd B. Measurement errors in survey questions: Explaining response variability. Social Indicators Research. 1990;22(2):199–212.
  55. Stephenson LB, Crete J. Studying political behavior: a comparison of internet and telephone surveys. International Journal of Public Opinion Research. 2010;23(1):25–55.
  56. Heerwegh D. Mode differences between face-to-face and web surveys: An experimental investigation of data quality and social desirability effects. International Journal of Public Opinion Research. 2009;21(1):111–120.
  57. Heerwegh D, Loosveldt G. Face to face versus web surveying in a high internet coverage population. Public Opinion Quarterly. 2008;72(5):836–846.
  58. Couper MP, Kreuter F. Using paradata to explore item level response times in surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society). 2004;176(1):271–286.
Creative Commons Attribution License

©2019 Rada. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and build upon your work non-commercially.