Can an online poll be used as a valid alternative to a traditional paper-based survey?

Mr Philip Sefton, Director of Client and Online Services, PO Box 789, Charles Sturt University [HREF1], Albury NSW, 2640. psefton@csu.edu.au

Dr John Atkinson [HREF2], Senior Lecturer, PO Box 789, Charles Sturt University [HREF1], Albury NSW, 2640. jatkinson@csu.edu.au

Abstract

The traditional paper-based survey has long been used as a means to gather both qualitative and quantitative data. Such surveys can be quick to develop and easy to administer; however, a number of problems can affect the validity of the results. In particular, it can be difficult to obtain a large enough cohort willing to participate in such surveys and, more disturbingly, the typical response rate may be low, making it difficult to perform worthwhile statistical analysis on the data.

An alternative to the paper-based survey is the online survey, which has the advantage of reaching a much wider cohort of people and, as a result, obtaining a significantly higher number of responses.

Similar to online surveys is the online poll. Rather than replicating a paper-based survey in an online environment, a poll presents individual questions to users, giving them the flexibility to respond to as many or as few questions as they choose. By their nature, online polls can reach much larger numbers of respondents than online surveys, but they have the disadvantage of being unable to correlate responses.

This research used the Doll and Torkzadeh (1988) measure of end-user computing satisfaction (EUCS) as the instrument to compare the results obtained when administered as a paper-based survey with those obtained when administered as an online poll. The findings show that online poll results do not closely match those obtained from a paper-based survey when using continuous dependent data.

Introduction

Advantages and disadvantages of online surveys.

Surveys have been a dominant research technique for social research methodologies for at least the past thirty years (Fontana & Frey 2000). Paper surveys have been used as a self-reporting measurement technique to question people about their attitudes, behaviour, personality, and demographics (Cozby et al 1989).

Online surveys and polls have a number of advantages and disadvantages. The advantages include being able to administer the survey to a potentially wider audience than a paper-based survey: in principle, any person with access to the internet can complete an online survey, which is not the case for a paper-based survey. Online surveys are also cheaper to administer, as the cost of printing and postage is removed. With a large paper-based survey there is not only the cost of printing and distribution but also the cost of returning the materials to the originator for processing; this is not the case for online surveys, although it is acknowledged that there are costs associated with online support. Finally, web-based surveys can offer the option for the results to be processed electronically, in real time, once they have been submitted by the respondent. This online processing of the data can greatly assist the timely dissemination of research results.

Disadvantages include the problem of being unable to accurately monitor the people completing the survey. Paper-based surveys tend to be more controlled in that the cohort for the survey is normally invited to participate in the research, after which the instrument is sent to them to complete. An online survey, on the other hand, is potentially available to anyone who has access to a particular internet site. The online survey also has the disadvantage that it can only be completed by people who have computer access, although with increasing internet availability this is less of an issue than in the past. Further, some online surveys may not be appropriate when specific cohorts of respondents are required, for example when surveying lower socio-economic groups or geographically isolated communities. There is also the assumption that people who complete online surveys have a certain level of computer proficiency and the confidence to first locate the required web site and then complete the survey. Another potential problem lies in identifying an appropriate cohort and the most appropriate way to advise respondents on how to complete the survey. In these days of spam email, people can become cynical about 'another survey' and disregard it, potentially biasing the results obtained.

Online polls / online surveys.

In an online environment, two different survey methods are considered for this research. The first, an online survey, places the whole survey online and the same respondent completes it in a single time frame; in effect, this type of survey replicates a paper-based survey in an online environment. Typically such surveys require anything from a few minutes up to one hour to complete. Another technique for collecting online data is the 'online poll'. An online poll normally asks one or perhaps two questions in one time frame, usually taking less than one minute to complete. This type of survey can be used on an organisation's web portal to gain quick responses to single-issue questions. Once the poll question has been answered, cumulative results can be processed electronically and the outcomes instantly provided back to the respondent. This real-time feedback can act as an incentive for the respondent to complete the poll; however, the validity of online polls for collecting more sophisticated data is problematic. This research compares the results obtained in a paper-based survey with those obtained through a series of online polls.

Figure 1 - Example of poll question


[image: poll tool example]

Internet portal

Internet portals have been developed as the main entry point into an organisation's intranet site, gathering useful information resources into a single, 'one-stop' web page (von Allmen et al 2002). Many organisations now provide such portals to simplify navigation to the various online resources at their sites and also to provide additional resources that encourage people to return (Green 1995). For example, at Charles Sturt University (CSU), one of the features available to users of the portal is an option to complete an online poll. This poll consists of a bank of questions that are presented to the respondent sequentially. Once a question is completed and the 'vote' button clicked, the results to date are displayed and another question is offered. A bank of questions remains available to respondents for a two-week period before a new bank of questions is made available. In the case of Charles Sturt University, poll questions are asked both to stimulate interest in the site and to collect descriptive quantitative research data.

In the past, an online poll has not been used as a valid surveying tool because such polls can only prompt the respondent to answer one question in a particular time frame. Where multiple questions need to be answered, it is difficult to correlate the results back to a particular user. That is, the answers obtained from an online poll cannot be associated with a particular respondent, making it difficult to perform valid statistical analysis across respondents. Each question in the online poll effectively becomes a standalone survey.
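
To make this limitation concrete, the following sketch contrasts the two data shapes. The structures, question labels and counts are illustrative assumptions for this paper, not the actual CSU poll tool schema:

    # Paper-based survey: one record per respondent, so answers to
    # different questions can be linked through the same person.
    survey_responses = [
        {"respondent": 1, "C1": 4, "C2": 5, "E1": 4},
        {"respondent": 2, "C1": 2, "C2": 3, "E1": 5},
    ]
    # Cross-question analysis is possible, e.g. pairing each person's
    # C1 answer with their C2 answer for an item correlation:
    pairs = [(r["C1"], r["C2"]) for r in survey_responses]

    # Online poll: anonymous standalone tallies per question.
    poll_tallies = {
        "C1": {1: 10, 2: 25, 3: 40, 4: 55, 5: 20},  # answer -> vote count
        "E1": {1: 5, 2: 15, 3: 30, 4: 80, 5: 45},
    }

    # Per-question summaries can still be computed from the tallies,
    # but no voter's C1 answer can be paired with their E1 answer,
    # so item correlations and factor analysis are impossible.
    def poll_mean(tally):
        total = sum(tally.values())
        return sum(score * count for score, count in tally.items()) / total

This is why, in the analysis reported below, each poll question must be treated as an independent sample.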

Thus we are led to the research question for this paper: Is it possible to correlate the survey data obtained from an online polling tool to the same data collected using a paper based survey?

Research methodology

Doll and Torkzadeh's (1988) instrument for the measurement of end-user computing satisfaction (EUCS) has become one of the more popular tools for measuring IT success. Through survey and factor analysis they developed a 12-item instrument measuring end-user satisfaction in terms of content, accuracy, format, ease of use and timeliness.

The same research instrument was used by Xiao and Dasgupta (2002) to evaluate the success of an internet portal at a US university. They found that all but one question (C4 - Appendix A) remained valid for web-based systems.

This research takes the Doll and Torkzadeh (1988) instrument and the work of Xiao and Dasgupta (2002) and attempts to compare the data obtained from a paper-based survey with that obtained through an online poll. The aim is to determine whether the two techniques can be used interchangeably as tools to survey large numbers of respondents. Using this instrument, the research compared the results obtained from administering the same questions to respondents in both a paper-based survey and an online poll environment.

A pilot evaluation was completed by surveying 32 university students, with no significant problems identified with the instrument other than minor formatting errors. The final survey (Appendix A) consisted of 16 multiple choice questions. Twelve of these were identical in wording to the original Doll and Torkzadeh (1988) instrument except for the replacement of the words 'the system' with 'my.csu'. Such changes have been common in previous research where the Doll and Torkzadeh (1988) instrument has been customised to suit a particular research setting (Seddon & Yip 2002).

Of the four additional questions, two related to global measures of IS success and thus replicated the research by Doll and Torkzadeh (1988) in validating their survey. The remaining two questions were used to determine which faculty the student belonged to and their frequency of use of the my.csu portal.

The cohort for the paper-based survey was a sample of students drawn from the five faculties within Charles Sturt University using a non-probability sampling method (Doherty 1994). The survey was administered to a total of 136 students between 1 August 2003 and 30 September 2003. The survey did not collect any personally identifying information.

In administering the online poll, the same questions were used as in the paper-based survey, and they were placed into the online poll in the same order as they were positioned in the paper-based survey. A significant difference in using the poll tool compared with the paper survey, however, is the inability to administer the questions to the same respondents.

Due to the design of the polling tool, as discussed above, the questions used for this research were mixed with other, non-related questions. This was done to ensure that the research questions did not dominate the poll and effectively turn it into an online survey. The 16 questions were made available to students during the period 30 July 2003 to 30 September 2003. At any one time, only two poll questions specific to this research were available to students (figure 2).

Figure 2 - Presentation sequence of poll questions


[image: poll question time frames]

Unlike paper-based multiple choice questions, where a respondent makes a mark on paper to signify a response, the CSU polling tool defaults to the first choice for each question. To avoid the potential bias of a respondent simply submitting this default answer, which would affect the validity of the results, an additional first option, "No comment / I do not want to answer", was provided for all questions. Respondents who took no action therefore selected this response by default, and it was subsequently treated as a skipped question.
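
A minimal sketch of how such default responses might be screened out before analysis follows; the option labels and tally figures are assumed for illustration, not taken from the actual poll tool output:

    # Hypothetical raw tally for one poll question. The first option is
    # the default "No comment / I do not want to answer" choice.
    DEFAULT_OPTION = "No comment / I do not want to answer"
    raw_tally = {
        DEFAULT_OPTION: 120,        # selected by default; treated as skipped
        "Almost never": 30,
        "Some of the time": 85,
        "About half of the time": 140,
        "Most of the time": 210,
        "Almost always": 95,
    }

    # Drop the default option so that it counts as a skipped question,
    # leaving only deliberate responses for analysis.
    valid_tally = {opt: n for opt, n in raw_tally.items() if opt != DEFAULT_OPTION}
    valid_responses = sum(valid_tally.values())
    print(f"{valid_responses} valid responses after excluding defaults")  # 560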

The poll tool as designed is unable to track and collate responses from individuals, so no complete set of responses could be recorded for any single respondent. Therefore, the poll data cannot be validated using the same analysis methods as Doll and Torkzadeh (1988). Nevertheless, some basic analysis techniques are used to compare the results and findings presented in this paper.

The research methodology process is graphically presented below:

Figure 3 - Research methodology


[image: research methodology model]

Data Collection

Following the pilot survey, a modified paper-based survey (Appendix B) was developed consisting of 16 questions. It was administered to students at Charles Sturt University over the period 1 August 2003 to 30 August 2003. To obtain a more varied cohort, the survey was administered to different groups of students from each of the five faculties within Charles Sturt University. In total, 136 respondents completed the paper-based survey.

As indicated, the poll version of the survey was made available online from 30 July 2003 to 30 September 2003. During that time any student or staff member with access to the CSU web portal could elect to answer the poll questions. Due to the nature of the poll, however, each question was only placed online for a two-week period before it was removed; a respondent could answer a particular question at any time during this period. There was a one-week overlap between poll questions, so that at any given time respondents had the opportunity to answer two poll questions (figure 2).
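
The overlapping schedule in figure 2 can be expressed as a simple calculation. The sketch below assumes each research question went live one week after the previous one and remained online for two weeks; only the start date comes from the published survey window, and the rest is an illustrative reconstruction rather than the poll tool's actual scheduler:

    from datetime import date, timedelta

    START = date(2003, 7, 30)       # opening date of the online poll
    LIVE_FOR = timedelta(weeks=2)   # each question stays online two weeks
    STAGGER = timedelta(weeks=1)    # next question starts one week later

    def live_questions(on_day, n_questions=16):
        """Return the research question numbers available on a given day."""
        live = []
        for q in range(n_questions):
            opens = START + q * STAGGER
            closes = opens + LIVE_FOR
            if opens <= on_day < closes:
                live.append(q + 1)
        return live

    # With a one-week stagger and a two-week lifetime, at most two
    # research questions are live at any time, as shown in figure 2.
    print(live_questions(date(2003, 8, 12)))  # -> [1, 2]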

Excluding non-validated questions, the number of responses received for each poll question varied from 807 ("Is 'my.csu' successful?") down to 450 ("Does 'my.csu' provide up to date information?"). This compares with the 136 responses received for the paper-based survey.

Data Analysis

As previously discussed, the poll tool cannot associate responses with specific individual respondents. Therefore the original statistical analyses carried out by Doll and Torkzadeh (1988), namely factor analysis and item correlations, are not valid for this online poll data. As a consequence, only descriptive statistics were calculated, together with a t-test, which is appropriate for continuous dependent and categorical independent data (Alreck & Settle 1985).

The main descriptive statistics for both the online poll responses and the paper survey are given below. G1, G2, etc. represent the questions that were asked in the surveys (Appendix B) and correspond to the Doll and Torkzadeh (1988) survey instrument (Appendix A).
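
Because the poll tool reports only per-question frequency counts, statistics such as those in table 1 must be derived from the counts alone. A minimal sketch with made-up counts follows; the study's actual figures appear in the tables below:

    import math

    # Hypothetical frequency counts on a 5-point response scale;
    # the real counts underlying tables 1 and 2 are not reproduced here.
    tallies = {
        "G1": {1: 12, 2: 45, 3: 180, 4: 390, 5: 180},
        "G2": {1: 20, 2: 60, 3: 200, 4: 350, 5: 150},
    }

    def describe(tally):
        """Compute n, mean and sample standard deviation from a tally."""
        n = sum(tally.values())
        mean = sum(score * c for score, c in tally.items()) / n
        var = sum(c * (score - mean) ** 2 for score, c in tally.items()) / (n - 1)
        return n, mean, math.sqrt(var)

    for question, tally in tallies.items():
        n, mean, sd = describe(tally)
        print(f"{question}: n={n}, mean={mean:.2f}, sd={sd:.2f}")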

Table 1 - Statistics for online poll responses


[image: table of online poll response statistics]

Table 2 - Statistics for paper responses


[image: table of paper survey response statistics]

In comparing the online and paper surveys, an independent samples t-test was conducted. This test is an appropriate statistical measure where the independent variable is categorical (type of survey) and the dependent variable is continuous (individual responses) (Alreck and Settle 1985). That is, the independent samples t-test is used 'to compare the mean scores of two different groups of people', identifying whether the two sets of scores (by type of survey) come from the same population (Pallant 2001). The difference between the groups is evaluated by considering the Sig. (2-tailed) column, that is, the p value, provided by the statistical package SPSS, where p values lower than 0.05 were considered significantly different.
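
The authors ran this test in SPSS; an equivalent check in Python with scipy would look like the sketch below, where the response vectors are illustrative stand-ins rather than the study data:

    from scipy import stats

    # Illustrative 5-point responses to one question (e.g. E1) from
    # each survey mode; the real data are summarised in tables 1 and 2.
    paper_scores = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]
    poll_scores = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 2, 4]

    # Independent samples t-test: categorical independent variable
    # (survey type) against a continuous dependent variable (responses).
    t_stat, p_value = stats.ttest_ind(paper_scores, poll_scores)

    print(f"t = {t_stat:.3f}, p (2-tailed) = {p_value:.3f}")
    # p < 0.05 is taken to indicate a significant difference between
    # the mean scores of the two survey modes.
    print("significantly different" if p_value < 0.05 else "comparable")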

Table 3 - Independent-samples t-test


[image: independent samples t-test results]

Discussion

The descriptive statistics show that the number of responses to the online poll was much higher than that obtained with the paper-based survey. This is not surprising given that an online poll is a more effective and cost-efficient means of collecting survey data than a paper-based survey. If the results of the two survey techniques could be shown to be comparable, an online poll would therefore offer a researcher many advantages over administering a traditional paper-based survey. The problem is that the poll and paper-based surveys cannot be compared directly, as the poll questions are not necessarily completed by the same people. It is, however, possible to compare the means of the answers obtained by the respective surveys to determine whether they differ significantly. The statistical test chosen was the independent samples t-test.

Given a threshold of 0.05, eleven of the fourteen items have p values indicating significant differences between the paper and online responses (table 3). These low p values for the majority of the dependent variables suggest that it is not possible to directly compare questions asked through paper-based surveys and online polls. In particular, the questions in the survey instrument relating to 'content', 'accuracy' and 'timeliness', which have low p values, cannot be compared (Appendix A).

Interestingly, two of the questions that showed no significant difference between the survey modes (E1 and E2) related to 'ease of use'. With p values of 0.353 and 0.749 respectively, there is a high probability that responses to these questions can be compared across the two modes. These findings are surprising in light of the low p values of the other questions, and indicate that certain questions on a particular topic may be appropriate to ask in an online poll environment. This anomaly needs to be investigated in future research.

Another apparent anomaly was identified between questions G1 and G2 (Appendix A). The G1 question showed no significant difference between the two groups (p = 0.052), while the similarly worded question G2 did (p = 0.00). G1 asks directly about success, while G2 relates to satisfaction (a surrogate measure of success). The relationship between success and satisfaction measures in an IT environment is well documented (Powers & Dickson 1973, Seddon & Kiew 2003, Bailey & Pearson 1983), yet the results for such questions when asked in an online poll as against a paper-based survey are not necessarily comparable. This result is troubling in terms of what this research is attempting to address, and future research needs to ascertain whether the type of question affects whether an online poll is appropriate for obtaining such results.

Implications of the results for people conducting surveys.

While there is good preliminary evidence to suggest that the online poll environment does not have the potential to provide valid survey data, this research must be replicated with a different cohort to revalidate these findings. Nonetheless, the implications of this research are important and need to be disseminated for further discussion.

These findings are significant for researchers who may wish to use polls either to minimise costs or to increase the number of responses. Cost may not be an issue where the research is fully funded; otherwise, removing such costs would have been an incentive to survey using the online poll approach.

The finding that online polls appear to be an invalid replacement for paper-based surveys suggests that they cannot be used as a substitute when carrying out such data collection, and the opportunity to survey large numbers of respondents is therefore lost.

Another interesting finding relates to a trend observed in the descriptive statistics: a gradual decrease in the number of people completing the 16 online poll questions (table 1). The first question in the survey received 807 responses, but by the end of the survey period the last question received only 450 responses. For similar poll questions asked through the CSU portal, the number of responses remained consistently high. Why, then, did the number of responses drop for this survey? Anecdotal evidence suggests that poll questions of the same genre lose the interest of respondents, particularly when placed one after the other, as was the case in this survey. This suggests that respondents to poll questions expect greater variety in the type of question being asked. This differs from traditional surveys, where respondents have a mindset to answer the complete survey once they commence it. We therefore recommend that research poll questions be mixed with other, unrelated poll questions to maintain consistently high response numbers.

Conclusion

This research shows that online polls cannot be considered an alternative to paper-based surveys. The independent samples t-test results for the questions administered through a paper-based survey and through an online poll showed that in the majority of cases there was a significant difference between the means. The implication is that online polls cannot be used to survey a cohort of people in place of the more costly paper-based survey. These findings need to be replicated in further research to validate them.

References

Alreck, P. L. and Settle, R. B. (1985) The survey research handbook. Irwin: Illinois, USA.

Bailey, J.E. and Pearson, S.W. (1983) "Development of a tool for measuring and analysing computer user satisfaction" in Management Science v.29, p.530-545.

Cozby, P. C., Worden, P. E. and Kee, D. W. (1989) Research methods in human development. Mayfield Publishing: California, USA.

Doll, W.J. and Torkzadeh, G. (1988) "The Measurement of End-User Computing Satisfaction" in MIS Quarterly v.12, p.259-274.

Doherty, M. (1994) "Probability versus Non-Probability Sampling in Sample Surveys" in The New Zealand Statistics Review March issue 1994, p.21-28.

Fontana, A. and Frey, J. H. (2000) "From structured questions to negotiated text" in Handbook of Qualitative Research, (Eds.) N. K. Denzin and Y. S. Lincoln, p.645-672. Sage: Thousand Oaks, CA, USA.

Green, D. G. (1995) "From honeypots to a web sin - building the world-wide information system" in Proceedings of AUUG '95 & Asia-Pacific World Wide Web '95 Conference and Exhibition, Sydney.

Pallant, J. (2001) SPSS survival manual: a step by step guide to data analysis using SPSS. Allen & Unwin: Crows Nest, Australia.

Powers, R.F. and Dickson, G.W. (1973) "MIS Project Management: Myths, Opinions, and Reality" in California Management Review v.15, p.147-156.

Seddon, P. and Kiew, M.Y. (2003) "A Partial Test and Development of DeLone and McLean's Model of IS Success" in Australian Journal of Information Systems.

Seddon, P. and Yip, S.K. (2002) "An Empirical Evaluation of User Information Satisfaction (UIS) Measures for Use with General Ledger Accounting Software", p.1-32.

von Allmen, S., Deans, K.R. and Bartosiewicz, I. (2002) "Portals - Are we going in or out?" in AusWeb 2002 Conference Proceedings.

Xiao, L. and Dasgupta, S. (2002) "Measurement of User Satisfaction with Web-Based Information Systems: An Empirical Study" in Eighth Americas Conference on Information Systems.

Appendix A

G1 (global 1)     Is the system successful?

G2 (global 2)     Are you satisfied with the system?

C1 (content 1)     Does the system provide the precise information you need?

C2 (content 2)     Does the information content meet your needs?

C3 (content 3)     Does the system provide reports that seem to be just about exactly what you need?

A1 (accuracy 1)     Is the system accurate?

A2 (accuracy 2)     Are you satisfied with the accuracy of the system?

F1 (format 1)     Do you think the output is presented in a useful format?

F2 (format 2)     Is the information clear?

E1 (ease of use 1)     Is the system user friendly?

E2 (ease of use 2)     Is the system easy to use?

T1 (timeliness 1)     Do you get the information you need in time?

T2 (timeliness 2)     Does the system provide up-to-date information?

Appendix B

Click here [HREF3] for details of the survey questions used

Hypertext References

HREF1
http://www.csu.edu.au
HREF2
http://csusap.csu.edu.au/~jatkinso
HREF3
http://csusap.csu.edu.au/~jatkinso/AUSWeb04/AppendixB.html

Copyright

Philip Sefton and John Atkinson 2004. The authors assign to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.