Mathew Parackal, Lecturer, Department of Marketing, University of Otago [HREF1], PO Box 56, University of Otago, Dunedin, New Zealand. Email: mparackal@business.otago.ac.nz
The Internet has become a popular survey medium among marketing researchers. Current Internet coverage, however, prevents it from being used as the sole medium in probability surveys. This paper explains a hybrid survey approach that used the Internet and the postal system to collect data from a probability sample. The paper presents the rationale for the approach and reports the results of a study that implemented it on two sub-groups. The paper also reports on the representativeness of the survey participants, the response rates received for the two survey media, and the overall response rate.
It is estimated that over 498 million people around the world have Internet access from their homes (Nielsen//NetRatings, 2002 [HREF9]). Internet penetration in many European and Asia-Pacific countries has crossed the 50% mark. For instance, in New Zealand 52% of those aged 16 and above have Internet access from home (Nielsen//NetRatings, 2001a [HREF10]; Nielsen//NetRatings, 2001b [HREF11]). This is over half the target population of most marketing surveys. The prospect of reaching a large audience in many countries at a low cost has led researchers to develop the Internet for survey purposes (Aoki & Elasmar, 2000; Askew, Craighill, & Zukin, 2000; Nathan & Brennan, 1998; Parackal & Brennan, 1999; Vasja, Zenel, & Katja, 1999; Schillewaert, Langerak, & Duhamel, 1998; Kottler, 1997a; Kottler, 1997b; Knoth, 1997). The studies cited above used non-probability surveys, and their results were therefore restricted to the particular samples (Berrens et al., 2001 [HREF2]; Dillman & Bowker, 2001 [HREF5]; Bradley, 1999b; Vasja, Zenel, & Katja, 1999; Batagelj & Vehovar, 1998; Coomber, 1997 [HREF4]). In order to extend results to the population, researchers need to use probability surveys, characterised by probability samples selected from appropriate sampling frames (Couper, 2000; Bradley, 1999b).
Researchers have employed probability surveys over the Internet with reasonable success (Dillman et al., 2001 [HREF6]; Couper, 2000; Jones & Pitt, 1999). However, these were carried out on samples drawn from populations in which everyone had Internet access (e.g. students of the University of Michigan). The approach may not be viable for populations that comprise both Internet and non-Internet users (e.g. the general population). With current Internet coverage at about 50% in many countries (Nielsen//NetRatings, 2002 [HREF9]; Nielsen//NetRatings, 2001a [HREF10]), probability surveys implemented over the Internet will cover only half the population in these countries. Results of such surveys would invariably be biased towards those who participated (Internet users) and therefore may not be representative of the population. This paper reports the results of a study that employed an Internet-based survey and a mail survey to collect data from a probability sample selected from the general population.
Research syndicates have developed their own methods to employ the Internet to collect survey data for making inferences about the target population. Gordon Black of Harris Interactive (HI) developed a large panel of willing respondents to conduct Internet-based surveys (Berrens et al., 2001 [HREF2]; Rademacher & Smith, 2001; RFL Communications, 2000). Panel members were recruited by running advertisements and sweepstakes, the Harris/Excite poll, telephone surveys, and product registrations on the Excite and Netscape Web sites (Taylor et al., 2001). The Harris Interactive panel consisted of about seven million adults from which samples were drawn for survey purposes.
Harris Interactive came into the limelight after accurately predicting the closely contested 2000 presidential election between George W Bush and Al Gore (RFL Communications, 2000). The National Council on Public Polls (NCPP) in the United States commended the company and placed the Harris Interactive Internet polls ahead of the traditional telephone polls and an innovative automated-telephone method pioneered by Rasmussen Research (Rademacher & Smith, 2001).
Knowledge Networks (KN), another research syndicate, founded in 1998 by political scientists Norman Nie and Douglas Rivers, pioneered an approach that generated a panel of randomly selected willing respondents (Berrens et al., 2001 [HREF2]; Forsman & Varedian, 2002 [HREF7]; Huggins & Eyerman, 2001 [HREF8]; Krotki & Dennis, 2001). Telephone numbers were selected using the random digit dialling (RDD) method from geographical areas covered by the Web-TV network. Postal addresses for the selected telephone numbers were identified from a list, and advance letters together with incentives of $5 and $10 were sent prior to contacting the households. Up to 15 attempts were made to contact an adult in a household before abandoning a telephone number. Individuals who agreed to participate as panel members were provided with free Web-TV units, Web access, email addresses and ongoing technical support. Panel members also received separate incentives for participating in surveys and the occasional reward for remaining on the panel. In return, panel members were required to participate in at least one survey every week for a period of two to three years. Survey notices were emailed to panel members who met the screening criteria of each survey. The email message included a hyperlink that, when clicked, triggered a multimedia questionnaire on the TV screen. Panel members returned completed questionnaires via the Internet and the responses were collated into a database on the host server. The KN-panel consisted of 250,000 members at the end of 2001 (Berrens et al., 2001 [HREF2]).
Krotki & Dennis (2001) compared a random sample selected from the KN-panel with the US population. The authors observed that the sample was representative of the US population on gender and Hispanic population. The sample, however, under-represented the elderly and low-income households (see Table 1). Knowledge Networks rectified such discrepancies by including in the sample additional panel members who matched the survey criteria. They also regularly updated the panel to mirror the US population by age, gender, region, ethnicity, and education.
Table 1: Comparison of a KN-panel sample with the US population

|                        | KN (%) | US (%) |
|------------------------|--------|--------|
| Female                 | 51     | 51     |
| Black                  | 11     | 12     |
| Hispanic               | 9      | 11     |
| Over 55 years          | 19     | 28     |
| Low income (<$25,000)  | 16     | 28     |

Source: Krotki & Dennis (2001)
Berrens et al. (2001) [HREF2] compared the demographic make-up of samples selected from the two proprietary panels of willing respondents (HI and KN) with that of a standard probability sample. The authors provided the results of comparisons made between a telephone survey that used the RDD method and three Internet-based surveys. Of the three Internet-based surveys, two drew samples from the panel maintained by Harris Interactive (HI) and one from the panel maintained by Knowledge Networks (KN). The telephone survey (RDD) and the first Internet-based survey (HI1) were carried out at the same time in January 2000. The second Internet-based survey (HI2) was carried out in July 2000 and the third (KN) in November 2000. All four surveys fielded the same questionnaire.
The proportion of male respondents in the four surveys was identical (RDD = 48; HI1 = 48; HI2 = 48; KN = 48). The mean age of the respondents in the four surveys was also comparable (RDD = 42; HI1 = 44; HI2 = 44; KN = 45). The proportions of respondents with at least a college degree in the three Internet-based surveys were very similar (HI1 = 22%, HI2 = 23%, KN = 21%); this proportion, however, was comparatively high (41%) in the telephone survey. The figures obtained for this variable in the Internet-based surveys were closer to that of the US Census Bureau (23%). All four surveys underestimated the Hispanic (RDD = 7; HI1 = 9; HI2 = 10; KN = 10) and African-American (RDD = 8; HI1 = 12; HI2 = 12; KN = 11) populations, but once again the figures obtained in the three Internet-based surveys were closer to those of the US Census Bureau (both being 13%). Berrens et al. (2001) [HREF2] concluded that the Internet-based surveys produced similar results to the telephone survey, and that in instances where differences occurred, the figures produced in the Internet-based surveys were more comparable to those of the US Census Bureau.
Academic researchers have tested the Internet as part of hybrid survey approaches to collect survey data from probability samples. Quigley et al. (2000) compared the response rate of a conventional mail survey (Treatment One in Table 2) with those of two hybrid survey approaches. One used a mail survey with an added option of completing the survey via the Internet (Treatment Two in Table 2) and the other used an Internet-based survey with an added option of completing a paper version of the questionnaire by mail (Treatment Three in Table 2). The test was built into the 2000 Information Services Survey of the Defence Manpower Data Centre, USA, which surveyed military personnel. Respondents were approached via their postal addresses with a request to participate in the survey. Results of this study are summarised in Table 2.
Table 2: Response rates obtained in the three treatments

|                 | n     | Phase 1 mode | Phase 1 (%) | Phase 2 mode | Phase 2 (%) | Total (%) |
|-----------------|-------|--------------|-------------|--------------|-------------|-----------|
| Treatment One   | 7279  | Mail         | -           | -            | -           | 40        |
| Treatment Two   | 21805 | Mail         | 77          | Internet     | 23          | 42        |
| Treatment Three | 7209  | Internet     | 73          | Mail         | 27          | 37        |

Source: Quigley et al. (2000)
Response rates obtained in the three treatments were comparable (40%, 42% and 37%). In Treatment Two (mail survey with Internet option), 77% chose to complete the survey by mail while the remaining 23% did so via the Internet. In Treatment Three (Internet-based survey with mail option), 73% chose to complete the survey via the Internet while 27% did so by mail. The pattern of responses in Treatments Two and Three suggests that the survey mode offered at the start attracted the largest share of responses. If so, the approach of Treatment Three (Internet-based survey with mail option) could be considered the best of the three, given the overall efficiency and cost effectiveness of the Internet-based survey.
Schonlau, Fricker, Jr., & Elliott (2002) [HREF13] reported the results of a hybrid survey approach that implemented an Internet-based survey with a mail survey option, similar to that tested by Quigley et al. (2000). This approach was used in a survey designed to ascertain the intentions of high school graduates to enrol in military services. Respondents were contacted by mail via their parents' addresses with a request to participate in a survey on the Internet. A paper version of the questionnaire was included in the reminder letter, thereby offering non-respondents the option of completing the survey either via the Internet or by post. The survey produced 2583 valid responses (a 21% response rate), of which 976 (38%) were completed via the Internet and the remaining 1607 (62%) by post. The authors attributed the overall low response rate to the fact that the majority of respondents were not contactable via their parents' addresses. All the same, they reported a saving of $2000 by eliminating the cost of editing, data entry, and questionnaire printing for those who participated via the Internet.
Dillman et al. (2001) [HREF6] provided further support for hybrid survey approaches in a study that tested five approaches, each offering an alternative survey mode to non-respondents (see Table 3). In all five approaches the response rate improved when non-respondents were offered an alternative survey mode. The best overall response rate (83%) was obtained for the approach that offered a mail survey followed by a telephone option; in this case the Phase One response rate (75%) was already so high that Phase Two added little (see Table 3). Of the three comparable approaches (Treatments 1, 4, and 5 in Table 3), the Internet-based survey produced the lowest Phase One response rate (12.7%). Offering an alternative telephone survey mode to non-respondents in this treatment (Treatment 5) increased the response rate to 48%. The increase in response rate attributable to the alternative survey option was highest for this approach (an increase of 73%).
Table 3: Response rates for the five treatments

| Groups        | Original sample | Phase One mode | n    | Rr* (%) | Phase Two mode | n**  | Rr* (%) | Overall Rr* (%) | Increase in Rr*@ (%) |
|---------------|-----------------|----------------|------|---------|----------------|------|---------|-----------------|----------------------|
| Treatment 1   | 2000            | Mail           | 1499 | 75      | Phone          | 157  | 32      | 83              | 10                   |
| Treatment 2 # | 1500            | Phone          | 651  | 43      | Mail           | 1094 | 66      | 80              | 46                   |
| Treatment 3 # | 1499            | Phone          | 667  | 44      | Mail           | 1094 | 66      | 80              | 45                   |
| Treatment 4   | 2000            | IVR            | 569  | 29      | Phone          | 438  | 36      | 50              | 42                   |
| Treatment 5   | 2000            | Internet       | 253  | 13      | Phone          | 700  | 45      | 48              | 73                   |

# Treatments 2 and 3 cannot be compared in Phase Two because of the different assignment methods used
* Rr = Response rate
** Includes non-respondents and refusals of Phase One
@ Increase in response rate = [(Overall response rate - Phase One response rate)/Overall response rate] x 100

Source: Dillman et al. (2001) [HREF6]
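The footnote formula for the increase in response rate can be checked directly. The sketch below (the function name is ours, not Dillman et al.'s) reproduces the Treatment 5 figure: a Phase One rate of 13% and an overall rate of 48% yield an increase of roughly 73%.

```python
def increase_in_rr(overall_rr: float, phase_one_rr: float) -> float:
    """Increase in response rate attributable to the alternative mode,
    per the Table 3 footnote: [(overall - phase one) / overall] * 100."""
    return (overall_rr - phase_one_rr) / overall_rr * 100

# Treatment 5: Internet in Phase One (13%), phone follow-up lifting
# the overall rate to 48%
print(round(increase_in_rr(48, 13)))  # → 73
```

The same formula reproduces the other column entries, e.g. Treatment 1's figure of 10 from rates of 83% and 75%.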
In any population there are individuals who are characterised as being innovative and individuals who are not so innovative (Foxall & Goldsmith, 1994; Rogers, 1983; Robertson, 1971). The sophistication of the Internet attracts the innovative individuals in the population to this medium (Mai & Mai, 2002; Citrin et al., 2000; Well & Chen, 1999 [HREF12]). Survey data can be collected from these individuals by requesting them to participate in an Internet-based survey (Parackal & Brennan, 1999; Brennan, Rae, & Parackal, 1999), while data from the less innovative individuals can be collected through a mail survey. Innovation adoption theory thus provides the rationale for employing a hybrid survey approach to collect survey data from a probability sample.
Results obtained so far for the hybrid survey approach have been encouraging. All the same, the approach may not be appropriate for all types of research. For example, in a population in which non-Internet users displayed the same degree of innovativeness towards a product as Internet users, surveying either one of these sub-populations would be sufficient to draw inferences about the population. In such an instance, the hybrid survey approach would have very little value. Conversely, if the two groups displayed different degrees of innovativeness, it would be imperative to collect data from both groups to make inferences about the population. The literature indicates that Internet users (innovators) tend to differ from non-Internet users (non-innovators) on both demographic and non-demographic characteristics (McDonald & Adam, 2003; Carini et al., 2003). In such instances, the hybrid survey approach would have immense value in collecting survey data that are representative of the population.
The current research aimed to find out how useful the hybrid survey approach was to collect survey data from two sub-groups (non-mobile phone users and mobile phone users) that displayed different degrees of innovativeness. The research achieved this objective by comparing purchase probability data of Wireless Application Protocol (WAP)-capable mobile phones collected from the two sub-groups for two time periods (twelve and six months). WAP-capable mobile phones were being launched in the market when the current study was carried out. For this study, interest in this new technology was seen as the individual's innovativeness.
The research used purchase probability data to gauge respondents' interest in the WAP-capable mobile phone. Purchase probability data were collected on an 11-point probability scale popularly known as the Juster Scale (Juster, 1966). The mean probability score obtained on this scale was treated as the proportion of respondents who would adopt the product (Day et al., 1991). The mean probability score of Internet users was compared with that of non-Internet users for the two time periods in each of the groups.
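As an illustration of how the Juster Scale data were treated, the sketch below computes a mean probability score from 0-10 scale responses. The k/10 mapping and the sample responses are our simplifying assumptions; published applications of the scale attach slightly different probabilities to the endpoints (e.g. 0.01 and 0.99).

```python
def mean_purchase_probability(responses):
    """Mean probability score from 11-point (0-10) Juster Scale responses.
    Each scale point k is read here as probability k/10 (an assumed,
    simplified mapping). Following Day et al. (1991), the mean is treated
    as the expected proportion of respondents who will buy the product."""
    return sum(r / 10 for r in responses) / len(responses)

# Hypothetical responses from five respondents showing low interest
print(round(mean_purchase_probability([0, 1, 0, 2, 1]), 2))  # → 0.08
```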
Data required for the comparisons were obtained by implementing a hybrid survey approach on a random sample of 3000 respondents selected from the 2001 New Zealand electoral roll. Respondents were contacted via their postal addresses with a request to participate in a survey about WAP-capable mobile phones. The letter provided each respondent with a unique username and an access code, and all respondents were encouraged to complete the survey on the Internet. For those who did not have Internet access, a reply paid postcard was included so that they could request a hard copy of the questionnaire. Coincidentally, the initial contact letter of the survey was despatched on September 11th, 2001. To distance respondents from the events of September 11th, a reminder letter was mailed to non-participants after a gap of one month. A second reminder letter was sent to non-respondents after three weeks.
The remainder of this paper reports the results of comparisons between Internet users and non-Internet users in the two groups. The paper also reports on the representativeness of the sample and the response rate obtained by using the hybrid survey approach.
Non-mobile phone users were asked to indicate on the Juster Scale their probability of purchasing a WAP-capable mobile phone. Mean purchase probability scores of Internet users and non-Internet users were compared using t-tests for the two time periods (see Table 4).
Table 4: Mean purchase probability scores of non-mobile phone users

|                          | n   | Mean probability score | t-test | Sig  |
|--------------------------|-----|------------------------|--------|------|
| Twelve months prediction |     |                        |        |      |
| Non-Internet users       | 126 | 0.08                   | 0.74   | 0.47 |
| Internet users           | 81  | 0.10                   |        |      |
| Six months prediction    |     |                        |        |      |
| Non-Internet users       | 127 | 0.07                   | -0.73  | 0.46 |
| Internet users           | 81  | 0.06                   |        |      |
Results shown in Table 4 indicate that there were no significant differences in the mean purchase probability scores of Internet users and non-Internet users for either time period. This suggests that Internet users and non-Internet users amongst non-mobile phone users were homogeneous in the degree of innovativeness they expressed towards WAP-capable mobile phones. The adoption rate based on the combined probability data of Internet users and non-Internet users was the same as the rates obtained for the two groups separately. In this case, survey data from either one of the groups would be sufficient, and the hybrid survey approach would have very little value.
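The kind of comparison reported in Tables 4 and 5 can be reproduced with a two-sample t statistic. The sketch below implements Welch's (unequal-variance) version in plain Python; the study does not state which t-test variant was used, so this is an assumption, and the sample data are hypothetical.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic for comparing two group means,
    e.g. mean probability scores of non-Internet vs Internet users.
    A p-value would then be read from the t distribution."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Identical groups give t = 0; a lower first-group mean gives t < 0
print(welch_t([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))  # → 0.0
```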
Mobile phone users were asked to indicate on the Juster Scale their probability of replacing their current mobile phone with a WAP-capable one. The mean purchase probability scores of Internet users and non-Internet users were compared for both time periods using t-tests. The scores obtained were statistically different for the two groups in both time periods: the mean probability scores of Internet users (0.29 for twelve months and 0.18 for six months) were significantly greater than those of non-Internet users (0.18 and 0.10 respectively; p = 0.000 and p = 0.001) (see Table 5).
Table 5: Mean purchase probability scores of mobile phone users

|                          | n   | Mean probability score | t-test | Sig    |
|--------------------------|-----|------------------------|--------|--------|
| Twelve months prediction |     |                        |        |        |
| Non-Internet users       | 189 | 0.18                   | -3.94  | 0.000* |
| Internet users           | 294 | 0.29                   |        |        |
| Six months prediction    |     |                        |        |        |
| Non-Internet users       | 189 | 0.10                   | -3.48  | 0.001* |
| Internet users           | 294 | 0.18                   |        |        |

* Significant at the 1% level
Internet users and non-Internet users in this group were heterogeneous in the degree of innovativeness they expressed towards WAP-capable mobile phones. As such, an adoption rate based on just one group (Internet users or non-Internet users) would not be representative of the population. In such instances it is obligatory to obtain purchase probability data from both Internet users and non-Internet users, and the hybrid survey approach was successful in doing so.
The literature suggests that Internet users tend to differ from non-Internet users on both demographic and non-demographic characteristics. One study reported that when information about computing and information technology was sought, responses obtained via the Internet differed from those obtained by the traditional paper-and-pencil method (Carini et al., 2003). McDonald & Adam (2003) reported that Internet users differed from non-Internet users in their demographic make-up (income, age, membership, occupation and lifestyle; p < 0.01). The same study compared responses to non-demographic questions and found that the two groups differed on 26 of the 65 questions (40%). The difference persisted for 22 of the 65 questions (34%) even after the two groups were made demographically equivalent. In other words, while the responses of Internet and non-Internet users were the same for over 60% of the questions, they differed for the remaining 40%. These observations provide further support for using the hybrid survey approach to collect survey data from samples in which Internet and non-Internet users are typically different.
Studies that have compared different survey media (Internet versus traditional survey methods) by keeping the demographic variables constant have found only marginal differences in the data collected (Forsman & Varedian, 2002 [HREF7]; Chatman, 2002 [HREF3]; Burr, Levin, & Becher, 2001; Daly, Thomson, & Cross, 2000). These studies suggest that data collected over the Internet could be safely combined with data collected by the traditional method to form a single data set without introducing bias from the mode of participation. This observation provides support for the practice of merging the data from the Internet and mail survey into a single dataset, employed in the hybrid survey approaches.
The crucial test of this survey approach was the extent to which respondents who participated in the survey resembled the New Zealand population. To gauge this, comparisons were made using descriptive statistics on two demographic variables (age and gender) between the respondents who participated in the survey, the original sample (the random selection from the electoral roll), and the 2001 census data. Comparison with the electoral roll itself was not possible as the list from which the original sample was selected no longer existed. Instead, the original sample was included in the comparison, as it theoretically resembled the list (electoral roll). Age and gender were used purely because this information was available in all three data sets (see Tables 6 and 7).
Comparing the gender split of survey participants with the original sample showed that the proportion of female participants was higher by 3% and that of male participants lower by 3%. Comparison with the 2001 census data showed that the proportion of female participants was higher by 4% and that of male participants lower by 4%. In both comparisons the differences were small and in the same direction, and the proportion of female participants was over 50% in all three datasets (see Table 6).
Table 6: Comparison of survey participants, the original sample and the 2001 census by gender

|        | Survey participants (%) (A) | Original sample (%) (B) | Difference (A-B) | 2001 Census (%) (C) | Difference (A-C) |
|--------|-----------------------------|-------------------------|------------------|---------------------|------------------|
| Female | 55                          | 52                      | 3                | 51                  | 4                |
| Male   | 45                          | 48                      | -3               | 49                  | -4               |
| Total  | 100                         | 100                     |                  | 100                 |                  |
To investigate whether the survey participants were similar to the original sample and the 2001 census data by age, the actual ages obtained from the electoral roll were categorised as young adults (20-24 yrs), adults (25-44 yrs), older adults (45-59 yrs) and elderly (60 yrs and above) (see Table 7). Comparing the age categories of survey participants with the original sample showed that the proportions of young adult and adult participants were lower by 3% and 2% respectively, while the proportions of older adult and elderly participants were higher by 4% and 1% respectively.
Comparison between survey participants and the 2001 census data showed that the proportions of young adult and adult participants were lower by 4% and 3% respectively. Proportions of older adults and elderly participants were higher by 4% and 3% respectively.
In both of the above comparisons, the differences were in the same direction for all age categories, and the rank ordering of the age categories by proportion was consistent across the three datasets (see Table 7). While there were marginal differences between age categories, these observations (consistent rank ordering and direction of differences) suggest that the overall make-up by age was comparable across the three datasets.
Table 7: Comparison of survey participants, the original sample and the 2001 census by age

|                            | Survey participants (%) | Rank | Original sample (%) | Rank | 2001 Census (%) | Rank |
|----------------------------|-------------------------|------|---------------------|------|-----------------|------|
| Young adults (20-24 yrs)   | 7                       | 4    | 10                  | 4    | 11              | 4    |
| Adults (25-44 yrs)         | 39                      | 1    | 41                  | 1    | 42              | 1    |
| Older adults (45-59 yrs)   | 29                      | 2    | 25                  | 2    | 25              | 2    |
| Elderly (60 yrs and above) | 25                      | 3    | 24                  | 3    | 22              | 3    |
| Total                      | 100                     |      | 100                 |      | 100             |      |
The number of survey participants after two reminder letters was 729. The response rate, calculated after removing refusals and GNAs (gone no address) from the sample, was 30%. This was low compared to other survey modes (Brennan, 1992). One reason for the low response rate may be that the timing of the survey coincided with the September 11th disaster in the United States. The response rate, however, was within the range (9% to 44%) reported in the literature for studies that have used similar hybrid survey approaches (Schonlau et al., 2002 [HREF13]). The 30% obtained in the current survey was also an improvement over previous studies by the author of this paper (22% in Parackal & Brennan, 1999; 15% in Parackal & Parackal, 2002).
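The 30% figure follows from removing ineligible cases from the denominator. The sketch below shows the calculation; the count of 570 removed refusals and GNAs is implied by (not stated in) the paper, which reports only the 729 completes, the sample of 3000, and the resulting 30% rate.

```python
def response_rate(completes: int, sample: int, removed: int) -> float:
    """Response rate (%) after removing refusals and GNAs
    ('gone no address') from the denominator."""
    return completes / (sample - removed) * 100

# 729 completes from a sample of 3000; removing ~570 refusals/GNAs
# (an implied, not reported, figure) gives the reported 30%
print(round(response_rate(729, 3000, 570)))  # → 30
```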
An obvious advantage of the hybrid approach used in the current research was its effect on the response rate. The numbers of respondents who completed the survey via the Internet and by filling in a hard copy of the questionnaire were 403 (55%) and 326 (45%) respectively. The response rate obtained for the Internet-based survey alone was 16%; offering the alternative mail survey mode pushed the overall response rate to a modest 30%. The proportion of Internet participants in the current study (55%) was comparable to the Internet population of New Zealand (52%) at the time of the study (Nielsen//NetRatings, 2001b [HREF11]).
The study also assessed the best method of providing a hard copy of the questionnaire to non-Internet users. The third wave included a split-test designed to compare two ways of achieving this. In one group the reminder letter included the reply paid postcard that respondents had to return to receive a hard copy of the questionnaire; in the other group, a hard copy of the questionnaire was included with the reminder letter. The latter method returned an overwhelming 120 (89%) completed questionnaires, compared to only 15 (11%) postcards requesting hard copies of the questionnaire (p = 0.000). On the positive side, including a hard copy of the questionnaire with the reminder letter increased the overall response rate. The disadvantage is that it could prompt respondents who would otherwise have completed the survey on the Internet to fill in and return the hard copy instead.
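The significance of the split-test result (120 completed questionnaires versus 15 postcard requests) can be verified with an exact binomial test, sketched below in plain Python. The paper does not say which test produced its p = 0.000, so this is an assumed, illustrative check under a null of no preference between the two methods (p = 0.5).

```python
from math import comb

def binom_two_sided_p(k: int, n: int) -> float:
    """Exact two-sided binomial test against p = 0.5: twice the upper
    tail P(X >= k) over n trials, capped at 1."""
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)

# 120 of the 135 third-wave responses arrived as completed hard copies
# rather than postcard requests -- overwhelmingly significant
print(binom_two_sided_p(120, 135) < 0.001)  # → True
```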
In this paper, a hybrid survey approach for surveying probability samples was explained. The rationale for developing this approach was based on innovation adoption theory, which suggests that in most populations there are individuals who are innovative and others who are not so innovative. The hybrid survey approach enabled the collection of survey data from these individuals separately, by employing an Internet-based survey and a mail survey. The data obtained in the two surveys were merged into a single dataset that was representative of the population.
The paper also reported the results of a study that aimed to find out how useful the hybrid survey approach was for collecting survey data from two groups (mobile phone users and non-mobile phone users) characterised as having different degrees of innovativeness. Mean purchase probability scores for WAP-capable mobile phones were compared between Internet users and non-Internet users in the two groups. The scores were the same for Internet and non-Internet users amongst non-mobile phone users but different amongst mobile phone users. These observations suggest that the approach is useful for collecting data from samples that are heterogeneous with respect to respondents' innovativeness towards purchasing WAP-capable mobile phones.
The hybrid survey approach adopted in the current study secured a response rate of 30%. This response rate was comparable to other similar hybrid survey approaches reported in the literature. The approach in the current study produced a sample that closely resembled the New Zealand population by age and gender.
To date, results of hybrid survey approaches have been encouraging (Schonlau et al., 2002 [HREF13]; Dillman et al., 2001 [HREF6]; Quigley et al., 2000). The improved efficiency achieved (Parackal & Parackal, 2002) and the cost effectiveness (Schonlau et al., 2002 [HREF13]) should make this survey approach popular amongst marketing researchers. All the same, the approach is weighed down by reports of low response rates, and more research is required to make it as robust as other established survey methods.
Aoki, K. and Elasmar, M. (2000). "Opportunities and challenges of a Web survey: A field experiment'. Proceedings of the 55th Annual Conference of the American Association for Public Opinion Research, Portland, Oregon, May 18-21.
Askew, R., Craighill, P.M., and Zukin, C. (2000). "Internet surveys: Fast, easy, cheap, and representative of whom?" Proceedings of the 55th Annual Conference of American Association for Public Opinion Research, Portland, Oregon, May 18-21, 2000.
Batagelj, Z., and Vehovar, V., (1998). "Technical and methodological issues in WWW surveys" in Software and methods for conducting Internet surveys. Proceedings of the 52nd Annual Conference of the American Association for Public Opinion Research 1998, St. Louis.
Bradley, N. (1999b). "Sampling for Internet surveys: An examination of respondent selection for Internet research". International Journal of Market Research, 41, 4, 387-395.
Brennan, M. (1992). "Techniques for improving mail survey response rates". Marketing Bulletin, 1992, 3, 24-37.
Brennan, M., Rae, N., and Parackal, M. (1999). "Survey-Based Experimental Research via the Web: Some Observations". Marketing Bulletin, 1999, 10, 83 – 92
Burr, M.A., Levin, K.Y., and Becher, A. (2001) "Examining Web versus paper mode: Effects in a federal government customer satisfaction study". Proceedings of the 56th Annual Conference of the American Association for Public Opinion Research, 2001, Montreal, Quebec, May 17-20, 2001.
Carini, R.M., Hayek, J.H., Kuh, G.D., Kennedy, J.M.,. and Ouimet, J.A. (2003). "College student responses to web and paper surveys: Does mode matter?" Research in Higher Education, 44, 1, 1-19.
Citrin, A.V., Sprott, D.E., Silverman, S.N., and Stem, D.E. Jr.(2000). "Adoption of Internet shopping: the role of consumer innovativeness". Industrial Management & Data Systems, 100, 294-300.
Couper, M.P. (2000). "Web survey: A review of issues and approaches". Public Opinion Quarterly, 64, 464-494.
Daly, B., Thomson, G., and Cross, J.(2000). "Web vs. paper surveys: Lessons from a direct large-scale comparison". Proceedings of the 25th Annual Conference of California Association for Institutional Research (CAIR), 2000, November 1 - 3, 2000 Pasadena, California.
Day, D., Gan, B., Gendall, P., and Esslemont, D. (1991). "Predicting purchase behaviour". Marketing Bulletin, 2, 18-30.
Foxall, G.R., and Goldsmith, R.E. (1994). Consumer Psychology for Marketing. New York: Routledge.
Jones, R., and Pitt, N. (1999). "Health surveys in the workplace: Comparison of postal, email and World Wide Web methods". Occupational Medicine, 49, 556-558.
Juster, F.T. (1966). Consumer Buying Intentions and Purchase Probability. National Bureau of Economic Research, Columbia University Press.
Knoth, J. (1997). "Web survey prompts worldwide response". Computer Aided Engineering, 16, 10, 18-23.
Kottler, R.E. (1997a). "Exploiting the research potential of the World Wide Web". Paper presented at Research '97, London, October 1997.
Kottler, R.E. (1997b). "Web surveys, the professional way". Paper presented at the ARF Conference, New York, April 1997.
Krotki, K., and Dennis, J.M. (2001). "Probability-based survey research on the Internet". Proceedings of the 53rd Conference of the International Statistical Institute, Seoul, Korea, August 22-29, 2001.
Mai, L.W., and Mai, L.C. (2002). "The personality attributes and leisure activities of Taiwanese Internet users". The International Journal of Applied Marketing, 1, 1, 69-82.
McDonald, H., and Adam, S. (2003). "A comparison of online and postal data collection methods in marketing research". Marketing Intelligence and Planning, 21, 2, 85-95.
Nathan, R., and Brennan, M. (1998). "The relative effectiveness of sound and animation in Web banner advertisements". Marketing Bulletin, 9, 76-82.
Parackal, M., and Brennan, M. (1999). "Obtaining purchase probabilities via a Web-based survey: The Juster Scale and the Verbal Probability Scale". Marketing Bulletin, 9, 67-75.
Parackal, M., and Parackal, S. (2002). "Database driven Web-based survey approach for forecasting adoption of new technology based products". Paper presented at the 26th CIRET Conference, Taipei, October 2002.
Quigley, B.R.A., Riemer, R.A., Cruzen, D.E., and Rosen, S. (2000). "Internet versus paper survey administration: Preliminary findings on response rates". Proceedings of the 42nd Annual Conference of the International Military Testing Association, Edinburgh, Scotland, 2000.
Rademacher, E.W., and Smith, A.E. (2001). "Poll Call". Public Perspective, March/April, 36-37.
Robertson, T.S. (1971). Innovative Behaviour and Communication. New York: Holt, Rinehart and Winston.
Rogers, E.M. (1983). The Diffusion of Innovations. New York: Free Press.
RFL Communications (2000). "Harris Interactive uses election 2000 to prove its online MR efficacy and accuracy". Research Business Report, November, 1-2.
Schillewaert, N., Langerak, F., and Duhamel, T. (1998). "Non-probability sampling for WWW surveys: A comparison of methods". Journal of the Market Research Society, 40, 307-323.
Taylor, H., Brenner, J., Overmeyer, G., Siegel, J.W., and Terhanian, G. (2001). "Touchdown! Online polling scores big in November 2000". Public Perspective, March/April, 38-39.
Vasja, V., Zenel, B., and Katja, L. (1999). "Web surveys: Can weighting solve the problem?" Proceedings of the American Statistical Association, Alexandria, 1999.
HREF1
Department of Marketing, University of Otago, Dunedin, New Zealand
HREF2
Berrens, R.P., Bohara, A.K., Jenkins-Smith, H., Sivia, C., and Weimer, D.L. (2001). "The advent of Internet surveys for political research: A comparison of telephone and Internet samples". <www.lafollette.wisc.edu/FacultyStaff/Faculty/Weimer/Resources/tellnet.pdf>
HREF3
Chatman, S. (2002). "Going beyond the conversion of paper survey forms to Web surveys". Student Affairs Online [Online], 3, 1. <www.studentaffairs.com/ejournal/Winter2002/surveys.htmls>
HREF4
Coomber, R. (1997). "Using the Internet for survey research". Sociological Research Online [Online], 2, 2. <www.socresonline.org.uk/socresonline/2/2/2.html>
HREF5
Dillman, D.A., and Bowker, D.K. (2001). "The Web questionnaire challenge to survey methodologists", in Dimensions of Internet Science, edited by Ulf-Dietrich Reips and Michael Bosnjak. <survey.sesrc.wsu.edu/dillman/zuma_paper_dillman_bowker.pdf>
HREF6
Dillman, D.A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., and Berck, J. (2001). "Response rate and measurement differences in mixed mode surveys using mail, telephone, interactive voice response and the Internet". Proceedings of the 56th Annual Conference of the American Association for Public Opinion Research, Montreal, Quebec, May 17-20, 2001. <survey.sesrc.wsu.edu/dillman/paper/Mixed Mode Oppr _with Gallup POQ.pdf>
HREF7
Forsman, G., and Varedian, M. (2002). "Mail and Web surveys: A cost and response rate comparison in a study of students' housing conditions". Proceedings of the International Conference on Improving Surveys (ICIS), Copenhagen, Denmark, August 25-28, 2002. <www.icis.dk/ICIS_papers/C2_4_3.pdf>
HREF8
Huggins, V., and Eyerman, J. (2001). "Probability based Internet surveys: A synopsis of early methods and survey research results". Federal Committee on Statistical Methodology Conference, 2001. <www.fcsm.gov/01papers/Huggins.pdf>
HREF9
Nielsen//NetRatings (2002). "A record half billion people worldwide now have home Internet access". <www.eratings.com/news/2002/20020306.htm>
HREF10
Nielsen//NetRatings (2001a). "Home Internet access dominates; Asian mobile penetration booms; Aussies and Kiwis keen online shoppers". <asiapacific.acnielsen.com.au/news.asp?newsID=38>
HREF11
Nielsen//NetRatings (2001b). "Nearly 15 million people worldwide gained Internet access in Q3".
HREF12
Wells, W.D., and Chen, Q. (1999). "Surf's up: Differences between Web surfers and non-surfers: Theoretical and practical implications". Proceedings of the 1999 Conference of the American Academy of Advertising, pp. 115-126. <www.cba.hawaii.edu/qchen/Research/publications/Wells&ChenAAA99.pdf>
HREF13
Schonlau, M., Fricker, R.D. Jr., and Elliott, M.N. (2002). "Conducting research surveys via email and the Web". RAND publication. <www.rand.org/publications/MR/MR1480/>
Mathew Parackal, © 2000. The author assigns to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction, provided that the article is used in full and this copyright statement is reproduced. The author also grants a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers, and for the document to be published on mirrors on the World Wide Web.