A Technical Response to Online Marketing Research Issues

Hossein S. Zadeh, Lecturer, School of Business Information Technology, RMIT University, Level 8, 239 Bourke Street, Melbourne, 3000.
Email: Hossein.Zadeh@rmit.edu.au

Stewart Adam, Senior Lecturer, School of Marketing, RMIT University, Level 14, 239 Bourke Street, Melbourne, 3000. Email: stewart.adam@rmit.edu.au

Kenneth R Deans, Senior Lecturer, Department of Marketing, University of Otago, P.O. Box 56, Dunedin, New Zealand. Email: kdeans@commerce.otago.ac.nz


Abstract

Many marketing research agencies in Australia and New Zealand, and their clients, are now turning to online marketing research and as a consequence are facing a number of issues. While the Internet (Net) and its graphical interface, the World Wide Web (Web), are well accepted in the United States of America, commercial uptake has been slower in Australia and New Zealand. This paper is written in the context of business needing to undertake marketing research into the growing Web user base, both business and personal, using online methodologies. The paper examines methodological issues concerning electronic marketing research in consumer and business markets raised elsewhere, and provides a response to the issues involved. Online marketing research may take the form of experiments, focus groups, observational techniques and surveys using Internet technology; this paper concentrates on the last of these. A review of the business and technical literature is provided in order to examine more deeply both the issues and the solutions suggested by the technology. Future directions in online marketing research are also examined.


Introduction

As part of a broader multi-stage study into business and government use of the Internet, Adam and Deans [HREF 1] initiated an online study in 1999, involving a sampling frame randomly drawn from the population of Australian and New Zealand domain names and using email and Web form technology. The process of literature review, pretesting and the final study itself highlighted a number of issues which marketing researchers wishing to scrutinise their growing online target markets may need to overcome (Deans and Adam 1999). A number of advantages are claimed for studying such markets using online techniques: lower costs; faster turnaround; higher response rates; lower refusal rates; lower respondent error; broader stimuli potential; flexibility in the form of adaptive questioning; and even greater enjoyment (Smee, Brennan, Hoek and Macpherson 1998; Forrest 1999; Kehoe and Pitkow 1996). The claimed benefits are treated in outline only in this paper. There are, however, issues facing online market researchers, in that not all of these claimed advantages are available in all instances. Moreover, where businesses decide to seek these real or illusory benefits using the Web, they may face a range of hardware and software issues. In addition, they may face misunderstandings with Web administrators and even uncertainty over which direction the Web is taking when choosing technological solutions. The intent of the paper is to make a technological response to some of the online researchers' concerns rather than appear to hold a pro-technology bias by espousing the benefits of online market research data collection. Thus, the paper concentrates on the matter of practicality, and to a lesser extent deals with validity and reliability. In Weible and Wallace's terms, practicality, or efficiency, "concerns the complexity of the data collection process, and includes cost, ease of administration, and the ease of analyzing and interpreting the data" (1998, p.20).

 

Background

Many firms are turning to the Web as a means of conducting market research at a lower cost than traditional methods such as personal interviews and postal surveys, to name but two well-known survey contact methods. Online alternatives include experiments such as virtual buying simulations (Hanson 2000), online focus groups conducted in real time with geographically dispersed respondents (Montoya-Weiss, Massey and Clapper 1998), and observational techniques such as content analysis of commercial Websites (Dholakia and Rego 1998; Adam and Deans 2000). Commercial and academic researchers alike seek these benefits, while recognising that they must still achieve validity and reliability, as well as the greater practicality that online research purports to offer (Weible and Wallace 1998). Market research activity is likely to increase as greater use is made of what Hoffman and Novak (1996) termed 'computer-mediated' marketing. The issues discussed in this section fall into four areas:

Server and software

The first important decision to be made in designing an online survey concerns the Web server. This decision is often dictated by legacy software, the network itself, and perhaps a Web administrator's preferences; certainly the organisation's Web development platform and Internet trends come into play. While some may argue that an online survey is not as mission-critical as an e-Commerce (transaction) Website, the reality is that online surveys rely on the goodwill of respondents and must have as close to 100 percent uptime as possible. If this uptime is not achieved, incomplete responses or, worse still, low response rates may result. If a respondent is unable to connect to the server, she or he is unlikely to try again later; such a situation introduces a self-selection bias in favour of more persistent users. Perhaps the worst-case scenario occurs when a server goes down while respondents are answering the questionnaire. Thus, Website availability must have a high priority, making it unwise to use any Web server that has an underlying operating system capable of causing lengthy interruptions. This matter is addressed further when examining the details of the system used for the WebQUAL Audit.

Online survey costs

Lower costs may eventuate where development, or fixed, costs are negligible, such as when using publicly available cgi-bin scripts or their equivalent on an existing computer system. However, when developing commercial software for this purpose, the fixed cost can be much higher, and may need to be amortised over many research projects to gain a commercial rate of return on investment. As the WebQUAL Audit software development detailed later in the paper illustrates, there are ways of stretching research budgets further with Internet technology. The magnitude of the possible savings in variable cost is illustrated by a market research firm in the United States which charges 15 cents per email survey, or one tenth of the cost of an equivalent postal survey (Weible and Wallace 1998). These authors conducted a study about the Net in which members of a technically competent sample were randomly allocated to one of four distribution methods: mail, fax, email, and Web form. Mail surveys carry variable costs for paper, printing, collating, and manual or automatic stuffing and folding into envelopes, as well as postage. Weible and Wallace (1998) arrived at similar fixed costs for the four methods, but at much lower per-unit variable costs for email and Web form, as the following comparison indicates: mail, $US1.56; fax, $US0.56; email, $US0.01; Web form, $US0.01.

Response rates, speed and related issues

In addition to the need for systems reliability, market researchers need to overcome such problems as respondent identity. This issue can be resolved in business-to-business (B2B) research but remains a problem in Web user or business-to-consumer (B2C) research. The latter point is illustrated by the ten GVU studies, where respondent identity is established only at the IP level (Kehoe and Pitkow 1996). While high response rates are a major claimed benefit (63.2 percent and 46.0 percent are reported by Brennan, Rae and Parackal (1998), and around 70 percent elsewhere in the literature (Weible and Wallace 1998)), these may represent a peak attributable to the perceived novelty value of online market research for early respondents (Deans and Adam 1999).

A major issue with online surveys is the fact that the respondent may commence completion of the survey but, for a number of reasons, may terminate without their response being recorded. While online market research must remain an opt-in exercise, unless precautions are taken incomplete responses can be confused with a low response rate. Again, there are technological solutions to this matter, just as there are technological reasons for such terminations by respondents. Reminders are often the key to lifting response rates, and an online survey method offers benefits in this regard. As with many benefits, however, the sword is double-edged, in that technology can be a prompt to action for some and an intrusion to others. Web technology permits adaptive questioning, just as it permits adaptive banner advertising. However, the fact that it can do this may not be justification enough for its use. For example, many potential respondents might prematurely abort their online response if they are unable to print and read the survey questionnaire at their leisure, and an adaptive questionnaire cannot be printed in full in advance. This is reason enough not to use adaptive questioning, and also makes the use of drop-down menu response alternatives, as well as other complicated 'rank and rate' techniques that Web technology permits, questionable. In the latter case a single question can become a multi-item questionnaire in its own right and often leads to an incomplete response. Similarly, the inappropriate use of a graphics-rich survey may also lead to an incomplete response.

The use of 'cookies' to track respondents in a study is also questionable: proxy servers and caches stand as obstacles, and many users' browsers are set locally not to accept cookies (Smee, Brennan, Hoek and Macpherson 1998).
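For illustration, the mechanism in question is simple enough; a minimal sketch in current PHP syntax (the cookie name and lifetime are arbitrary choices, not taken from any study cited here):

    <?php
    // Issue a tracking cookie on the respondent's first visit.
    // setcookie() must be called before any page output is sent.
    if (!isset($_COOKIE['respondent_id'])) {
        $id = bin2hex(random_bytes(8));                       // random 16-character token
        setcookie('respondent_id', $id, time() + 30 * 86400); // expires in 30 days
    }
    ?>

A proxy or cache can answer the request without this exchange ever reaching the respondent's browser, and a browser set to refuse cookies defeats the scheme silently, which is precisely the objection raised above.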

Online sampling and respondent contact issues

Direct marketing organisations rely on lists of potential buyers, which they hope to convert into a customer database. Such lists may in the first instance be rented from other organisations in the value chain. The temptation to carry this practice into the online realm is irresistible for some businesses. The sourcing and nature of such information gives rise to privacy concerns, and leads into the arguments for industry self-regulation versus more restrictive government legislation and regulation. Internet user surveys and most online consumer research do not involve probabilistic studies. In such non-probabilistic studies, a Website is set up to both attract and capture responses from electronic passers-by [HREF 2]. Conducting an online probabilistic study means first identifying the population and then contacting each member of the population in such a way that each has an equal opportunity to respond. The dilemma then is how to do this in a statistically correct manner and yet not intrude on potential respondents. Adam and Clark make the following point concerning such online intrusions:

"firms should ensure that they follow the National Principles for the Fair Handling of Personal Information [HREF 3] and the revised Privacy Principles released by the Government in January of 1999 [HREF 4]. The Human Rights and Equal Opportunity Commission developed these, following wide consultation with industry, with the principles setting standards for the collection, use, disclosure, security, access and quality of information and bring Australian law into line with the EUs data protection framework" (2000, p.75). 

These authors further point out:

"The Internet Industry Code of Practice [HREF 5] imposes further restrictions in relation to the collection and use of user details. Section 8.3, for example, provides that 'Code Subscribers will collect details relating to a user only: a) if relevant to or necessary for the provision of the service or product that the Code Subscriber is engaged to provide, or b) for other legitimate purposes made known to a user prior to the time the details are collected." Code Subscribers must also take reasonable steps to ensure that information collected is up to date and accurate and that the information is kept confidential (s. 8.4). Subscribers will not collect personal information from any user who they know or might reasonably suspect to be under the age of 18 (s. 8.8). Vendor's obligations under the Code require the vendor 'not to engage in, nor encourage the sending of, Unsolicited Email' unless there is a pre-existing business relationship or to persons who have previously indicated their consent to receive email (section 10.7)" (2000, p.76). 

The Internet industry's code of practice has implications for market researchers wishing to use the Net to contact business and consumer users. It is argued here, however, that it is impractical to restrict online researchers to non-email invitations to participate in online studies, that is to say, non-email means of opting in. The method used in the WebQUAL Audit is put forward in a later section of the paper as appropriately meeting the needs of researchers and the intent of the code, even if the approach does not meet the requirements of the code to the letter.

A technical response: the case of WebQUAL

The WebQUAL Audit is a three-stage, inter-country comparative and longitudinal study of business, institutional and government use of the Web. Thus far, the three stages have been completed in Australia and New Zealand and are underway in the U.K. and Europe. Stage One involved an email invitation sent to a sampling frame comprising every thirty-second domain name in Australia and New Zealand; the publicly available list of domain names was used to draw a systematic sample with a skip interval of 31, and each Website was visited to obtain an email address. Stage One seeks results from which to analyse Web usage (Internet, Intranet and Extranet) by ANZSIC classification and to establish how organisations evaluated their expenditure on Web presence, as well as the criteria used, among other outcomes [HREF 1]. The study examines strategic reasons for Web use, which earlier studies suggested were largely absent (Deans and McKinney 1997; NUA Surveys). Stage Two involves content analysis of respondent Websites using a descriptive model and comparison of the findings with user responses. Stage Three involves follow-up interviews with a range of respondents from Stage One. Thus, unlike user surveys such as the earlier-mentioned GVU Web user surveys, WebQUAL is a probabilistic study offering suggested solutions to some online issues, as well as raising new practical problems. As the WebQUAL Audit is extended to other countries to enable inter-country comparisons, the intention is to point users from those countries to the WebQUAL server in Australia. In each case, co-researchers have proprietary rights to their own results, and access to other results for publication purposes. Technical aspects are not the major issue with such an extension into other countries. One issue is the availability of a census of domain names, preferably one encompassing email contacts. As an example, reports from local researchers suggest that Singapore may not make such a census available. Without such census information it is not possible to undertake the statistical analysis that such a rich study deserves.
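The mechanics of such a systematic draw are straightforward; a minimal sketch in PHP, the language adopted for the Audit (the file names are hypothetical, as the paper does not document the tooling actually used for the draw):

    <?php
    // Systematic sample: select every thirty-second name from a census of
    // domain names, starting from a random point within the first interval.
    $domains = file('domains.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $start   = mt_rand(0, 31);
    $sample  = array();
    for ($i = $start; $i < count($domains); $i += 32) {
        $sample[] = $domains[$i];
    }
    file_put_contents('sample.txt', implode("\n", $sample) . "\n");
    ?>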

Server and software solutions

As already explained, Website availability must have a high priority, making it unwise to use any Web server that has an underlying operating system capable of causing lengthy interruptions. In addition to high availability, financial considerations are of concern in Web server design and implementation. There is a general belief that software licence fees constitute only a fraction of Total Cost of Ownership (TCO), and that hardware requirements, development costs, running costs and maintenance costs form a large proportion of TCO. In the WebQUAL Audit case, licence fees were not a prime concern, as they were mostly covered by existing site licence agreements. Hardware requirements, however, were important. A fast and powerful (and hence very expensive) server could not be devoted to the Audit, so the project had to rely on existing hardware. As the available hardware consisted of production servers, the WebQUAL Audit implementation could not interfere with other processes on the same servers. Gregory Yerxa summarises his experience in this regard as follows:

"Windows NT and IIS almost always caused us far more grief in this regard than the other products. We found ourselves taking coffee break after coffee break as we waited for another system reboot. No such problems arose with Apache and Netscape running on Solaris or Linux" [HREF 6].

This confirmed local experiences with NT and IIS. On a small, dedicated Web server this might amount to a mere inconvenience; on large-scale production servers, however, the interruption to other services on the same server is simply not acceptable.

In addition to the above, access to a programming language that could produce Dynamic Web Pages (DWP) was needed. One example of the use of DWP in the WebQUAL Audit was to overcome possible parochialism in each country: DWP was used to present university logos and researchers' names in the order appropriate for the respondent (domain) company's country of domicile. Programming languages are available for all the Web servers considered (e.g. ASP for NT/IIS, and PHP for Apache). From a performance viewpoint, recent versions of ASP and PHP are on a par. PHP does, however, have a few features missing from ASP, such as Object Oriented Programming support.
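By way of illustration, such per-country adaptation requires only a few lines of PHP (a sketch only: the $domain variable, logo file names and ordering rule are assumptions, not the Audit's actual code):

    <?php
    // Present the partner universities in an order appropriate to the
    // respondent's country of domicile, taken here from the domain suffix.
    $country = (substr($domain, -3) == '.nz') ? 'nz' : 'au';
    $order   = ($country == 'nz') ? array('Otago', 'RMIT') : array('RMIT', 'Otago');
    foreach ($order as $uni) {
        echo '<img src="logos/' . strtolower($uni) . '.gif" alt="' . $uni . '">' . "\n";
    }
    ?>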

Last but not least, data storage was an important consideration. There is a wide range of options available, from a simple text file through to a SQL database server. Text file output is simple to set up and use, but is neither fault-tolerant nor scalable. SQL database servers provide fast, efficient, scalable and fault-tolerant storage; however, they tend to be very expensive, and difficult to set up and use.

After careful examination of the available facts, it was decided to use a combination of Apache, PHP and a SQL database as the infrastructure for the survey. This decision resulted in a lightweight, fast, inexpensive and reliable system.

A low-end multi-processor server was used. The system has two Pentium 166 processors and enough memory to prevent having to swap to hard disk under heavy load (in this case 128MB of RAM). This may not seem a powerful system, and indeed it would not have been adequate had the NT/IIS combination been chosen. Using this hardware/software combination, the system was able to adapt and serve both email invitations and Web pages to users, and to process the incoming responses, while maintaining low CPU usage and fast response times. It should be noted that, due to the open and modular design of the system, components could have been run on different computers had the need arisen; as it transpired, this was not needed.

Regardless of processing requirements, any database operation can benefit from a fast hard disk subsystem. Four very fast SCSI hard disks were installed in the system and configured as a software-driven RAID (Redundant Array of Inexpensive Drives) subsystem. The array was set up to 'stripe' data across the hard disks: data written to the array was divided into four chunks and written to the disks in parallel (see Figure 1). This proved to be an efficient configuration in the case of WebQUAL. There is, however, a negative side to this configuration, in that data spans several hard disks, increasing the possibility of data loss. To address this, a backup strategy was put in place whereby a complete backup of the system was made every night. Fortunately, there was no need to resort to the backup system at any stage.

Figure 1. Software-driven RAID.

Software Setup

Figure 2 shows the software setup of the WebQUAL system. Online respondents connect to the server via the Internet. The Web server calls on the scripting engine (PHP) to parse any embedded scripts in the pages. The scripting engine, when needed, interacts with the PostgreSQL database server that stores the resulting information. Results of the operation are sent back to the client's browser.


Figure 2. WebQUAL software setup.
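A minimal sketch of this round trip, written against current PHP's PostgreSQL functions (the connection string, table and column names are illustrative assumptions, not the Audit's actual schema):

    <?php
    // Figure 2 flow: PHP receives the submitted form, stores the answers in
    // PostgreSQL, and returns a confirmation page to the respondent's browser.
    $conn = pg_connect('dbname=webqual user=survey');
    $ok   = pg_query_params($conn,
        'INSERT INTO responses (password, q1, q2) VALUES ($1, $2, $3)',
        array($_POST['password'], $_POST['q1'], $_POST['q2']));
    echo $ok ? '<p>Thank you. Your response has been recorded.</p>'
             : '<p>Sorry, your response could not be saved.</p>';
    ?>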

Online survey costs: stretching research budgets

Programming Language

PHP is a free server-side scripting language for creating dynamic Web pages. When a respondent opens the page, the server processes the PHP commands and then sends the results to the respondent's browser, just as with ASP or ColdFusion. Unlike ASP or ColdFusion however, PHP is Open Source and cross-platform. PHP runs on Windows NT and many Unix versions. When built as an Apache module, PHP is especially lightweight and speedy. In addition to manipulating the content of Web pages, PHP can also send HTTP headers, set cookies, manage authentication, and redirect users. It offers excellent connectivity to many databases, including PostgreSQL, Oracle, and Sybase among others. PHP also integrates with various external libraries that let it do everything from generating PDF documents to parsing XML. PHP goes right into Web pages, so there is no need for a special Integrated Development Environment (IDE). A block of PHP code starts with <?php and finishes with ?>.  It can also be configured to use ASP-style <% %> tags or even <SCRIPT LANGUAGE="php"></SCRIPT>. The PHP engine processes everything between those tags. PHP's language syntax is similar to C and Perl, so programmers familiar with these languages should feel comfortable. For C++ coders, PHP has some object-oriented features, providing a helpful way to organise and encapsulate code. Although PHP runs fastest when embedded in Apache, there are instructions on the PHP Web site for seamless setup with Microsoft IIS and Netscape Enterprise Server. A Netcraft survey [HREF 7] shows that PHP usage has jumped from 7,500 hosts in June 1998 to 410,000 in March 1999 and to over 1,000,000 in January 2000.
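By way of illustration, the smallest possible dynamic page using the tags just described (purely illustrative, not WebQUAL code) is shown below.

    <html>
     <body>
      <p>Survey opened on:
       <?php
         // Code between the tags runs on the server; only its
         // output (here, today's date) reaches the respondent's browser.
         echo date('j F Y');
       ?>
      </p>
     </body>
    </html>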

Database server

The main asset of the WebQUAL Audit is the user responses. The fact that the data cannot be regenerated, even if the Audit were to be repeated, adds to its value; therefore the responses must be stored in a fault-tolerant storage system. It was also necessary to take into account the future roll-out to cover more domains and a number of additional countries, so it was decided to use a SQL database server for storage. Having a massively complex Relational DataBase Management System (RDBMS) is all well and good if one knows what to do with it; this study had neither the resources nor the desire to use an expensive database. In addition to low cost, the database selected needed to be Open Source, fast, standards-compliant and of proven reliability, and it was a requirement that the database ran under Unix. PostgreSQL was selected because it is a cross-platform, Open Source, (almost) ANSI-compliant database. PostgreSQL comes from the developers of the Ingres database system and is available for almost all flavours of Unix as well as Windows NT.
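For illustration, the two-table arrangement described in the next section could be created through PHP's PostgreSQL functions as follows (a sketch; the schema shown is assumed, not the actual Audit schema):

    <?php
    // One table holds invitation details and passwords; a separate table
    // holds the responses, keeping respondent details apart from the data.
    $conn = pg_connect('dbname=webqual user=survey');
    pg_query($conn, 'CREATE TABLE passwords (domain text, email text, password text)');
    pg_query($conn, 'CREATE TABLE responses (password text, submitted timestamp,
                                             q1 text, q2 text)');
    ?>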

Integrating the WebQUAL hardware and software

Installation started by setting up the RAID in software (in the Linux kernel). This was done to maximise hard disk throughput, as the disks had been identified as the system's bottleneck. Next, the Apache Web server (including PHP) was installed and configured. Then the database engine was installed and configured so that it could be accessed by the Web server. This gave a 'blank' working system, the basis for any data-driven Web service (including e-Commerce sites). Sampling frame domain names and email addresses were entered into the database. A Perl program was developed to generate a unique random password for each of the participants. These passwords were stored in a separate table in the SQL database, keeping potential respondents' details separate from the resulting response data. An administration page was provided which permitted editing of the email template, among other features such as exporting the resulting data as and when required. It was this feature which enabled the researchers to identify the response quality issues detailed in the next section of the paper. The template was personalised for individual users, using a process similar to a word processor's 'mail merge' feature, and included their unique password in the resulting email. When participants entered the Web page at the URL emailed to them, the login page was generated by PHP and adapted to their country of origin. After entering a correct password, the participant was presented with the questionnaire. On completing the questionnaire and pressing the 'Submit' button, PHP parsed the page, extracted the entered information, and placed it in the database. From the administration page the researchers could export the resulting data in comma-delimited format, readable by Microsoft Excel or SPSS software for analysis.
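The export feature could be as simple as the following sketch, reusing the illustrative tables above (fputcsv and pg_fetch_assoc are later PHP additions; the original 2000-era code would have differed):

    <?php
    // Admin export: dump the responses table as comma-delimited text
    // that Microsoft Excel or SPSS can read directly.
    header('Content-Type: text/csv');
    $conn   = pg_connect('dbname=webqual user=survey');
    $result = pg_query($conn, 'SELECT * FROM responses');
    $out    = fopen('php://output', 'w');
    while ($row = pg_fetch_assoc($result)) {
        fputcsv($out, $row);   // quotes any field containing a comma
    }
    fclose($out);
    ?>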

Technology's impact on response rates, speed and response quality

Table 1 illustrates comparative response speeds in a study by Weible and Wallace in which a "technologically sophisticated population [was] chosen in order to better explore [the] effectiveness issues" (1998, p.22). In that study, 23 out of 52 (44 percent) responses via Web form were received within 14 days.

Days          Mail    Fax    Email   Web form   All methods

Mean          12.9    8.8    6.1     7.4        9.6
Mode          12      12     0       1          13
Median        12      12     2       5.5        12

Table 1. Response speeds from four survey methods.

Source: Weible, R. and Wallace, J. (1998), "Cyber Research: The Impact of the Internet on Data Collection," Market Research, Fall, 10(3), p.23.

The WebQUAL Audit results shown in Table 2 indicate faster response speeds. A total of 450 complete responses were received, and 336 of these (75 percent) arrived within the first 10 days, prior to any reminder being sent.

Time periods             Sampling frame       Bad emails/requests    Response     Response rate   Response
                         (adjusted for bad)   to delete per period   per period   per period      speed

Days 1-7                 2991                 6                      214          7.2%            47.6%
Days 8-10                2985                                        122          4.1%            27.1%
Reminder 1 sent day 10                        5
Reminder 2 sent day 15                        4
Days 11-28               2976                                        114          3.8%            25.3%
Total                    2976                 15                     450          15.1%

Table 2. WebQUAL response rates and response speed.

The table of passwords in the SQL database indicated that some 500 respondents (16.8 percent of the adjusted sampling frame) had accepted the email invitation and used their passwords to enter the survey, but that some did not press the submit button at the end of the Web form. Thus, around 50 had elected to abandon the survey without submitting. Of the 450 responses ultimately recorded in the results database, 399 were complete responses. As a consequence, the overall response error was some 20 percent (101 of the 500 who entered the survey did not lodge a complete response). When it was realised that 61 respondents had commenced the survey but not pressed the submit button, the researchers contacted them by email to seek clarification as to whether the incomplete response was due to a technical matter or a deliberate withdrawal from the survey. Just over half responded within 12 hours; the remainder did not reply at all. Half of those who replied thought they had completed the survey, but indicated a willingness to try again. The overall response to this query demonstrates the flexibility of online research and the willingness of respondents to become engaged in it, but also that there are technical issues to be overcome with electronic surveys. It may be that those who entered the survey and then departed were merely curious or had a professional interest in seeing the online survey.

While the WebQUAL results in Table 2 indicate that the email addresses collected from Websites in the sampling frame were accurate, address quality remains an issue in many studies and impacts on their representativeness. Comley [HREF 8] compared data collection methods involving email (1,221 sample), post (1,769) and a postal invitation to complete a Web-based form (1,000). Incorrect email addresses were high, at 35 percent. This is the more concerning in that the sample was drawn from a database, less than a year old, of U.K. Internet subscribers to an online magazine. Work-based email addresses change quite often and, with over 800 ISPs in Australia, it is quite possible that churn has an impact on Net user and B2C online studies that was not encountered with the WebQUAL sampling frame.

Another issue is whether or not mail servers bar email because it is regarded as SPAM. This issue arises because the WebQUAL Audit sends a large number of invitation emails, which may trigger a false alarm in systems configured to block mass email distributions. In addition, some people regard any sort of unsolicited email as SPAM, and this remains a very real concern in online academic research. Comley [HREF 8] found difficulties in measuring response quality; as with WebQUAL, the dimensions of item omission, response error and completeness of answer serve as indicators. In the WebQUAL case, item omission was low except for the turnover and Net budget items. Completeness of answer was high, in that over 90 percent gave a response to open-ended text questions.

 

Conclusion

Online research is topical in the commercial world, mainly due to the claims made concerning practicalities such as the reduced cost of obtaining customer information. Similar claims are made concerning 'disintermediation' cost benefits in online fulfilment. In both cases there is some truth, but there are also many issues that need resolving before the reduced-cost claims outweigh other considerations in both online research and the fulfilment of online transactions. In the case of online research, a major issue concerns the ability to draw statistical inferences and move beyond mere descriptive statistics on the numbers of online users and a faint geo-demographic profile. The WebQUAL Audit was used to illustrate a number of issues in this regard, particularly the way in which the adopted hardware/software configuration enabled matters such as response error to be identified and provided the flexibility to assist with problem resolution. Academic research budgets are limited, and the suggested configuration provided the main benefit, gathering the data, while meeting budget restrictions. At no time, however, was there any compromise in ensuring that a representative sampling frame was used and that follow-ups were used either to overcome issues or to remind those yet to respond. Nonetheless, response rates are increasingly an issue as the novelty value of this research medium wears off.

References

Adam, S. and Clark, E.E. (2000), "My Business and the Net", in iNet Pages Guide to Australian Business 2000. Big Colour Pages, Melbourne, Australia.
Adam, S. and Deans, K.R. (2000), "Online Business in Australia and New Zealand: Crossing a Chasm", Refereed paper, AusWeb2K Conference Proceedings, Southern Cross University, Cairns, Australia (12-17 June 2000): CD-ROM.
Brennan, M., Rae, N. and Parackal, M. (1998), "Survey-based Experimental Research via the Web: Some Observations", Refereed paper, Australian and New Zealand Marketing Academy (ANZMAC) Conference Proceedings, 30 Nov - 2 December, University of Otago, Dunedin, New Zealand, pp.223-233.
Deans, K.R. and Adam, S. (1999), "Internet Survey Data Collection: The Case of WebQUAL", Refereed paper, ANZMAC Conference Proceedings, University of New South Wales, Sydney, (29 November - 1 December):CD-ROM.
Deans, K.R. and McKinney, S. (1997), "A Presence on the Internet: the New Zealand perspective", Refereed paper, ANZMEC'97 Conference Proceedings, (December 1-3), Monash University, Caulfield.
Dholakia, U. and Rego, L.L. (1998), "What makes commercial Web pages popular? An empirical investigation of Web page attractiveness", European Journal of Marketing, Vol. 32 No. 7/8, pp.724-736.
Hanson, W. (2000), Principles of Internet Marketing. South-Western College Publishing, Cincinnati, Ohio.
Hoffman, D.L. and Novak, T.P. (1996), "Marketing in Hypermedia Computer-Mediated Environments: Conceptual Foundations", Journal of Marketing, 60, pp.50-68.
Hofacker, C.F. and Murphy, J. (1998), "World Wide Web banner advertisement copy testing", European Journal of Marketing, Vol. 32 No. 7/8, pp.703-712.
Montoya-Weiss, M.M., Massey, A.P. and Clapper, D.L. (1998), "On-line focus groups: conceptual issues and a research tool", European Journal of Marketing, Vol. 32 No. 7/8, pp.713-723.
Weible, R. and Wallace, J. (1998), "Cyber Research: The Impact of the Internet on Data Collection", Market Research, Fall, 10(3), pp.19-2

Hypertext References

HREF 1
http://ausweb.scu.edu.au/aw99/papers/adam
HREF 2
http://www.gvu.gatech.edu/user_surveys
HREF 3
http://www.privacy.gov.au/news/p6_4_1.html
HREF 4
http://www.privacy.gov.au http://www.lawlink.nsw.gov.au/pc
HREF 5
http://www.iia.net.au
HREF 6
http://www.nwc.com/1020/1020f1.html
HREF 7
http://www.netcraft.com/survey/
HREF 8
http://www.sga.co.uk/esomar.html

Copyright

Hossein S. Zadeh, Stewart Adam and Kenneth R. Deans © 2000. The authors assign to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.



