Agent Technology in Electronic Commerce and Information Retrieval on the Internet


Bassam Aoun
IMAGE Technology Research Group [HREF 1], Curtin University of Technology
Email: saounb@cc.curtin.edu.au
Phone: (09) 351 7890
Home Page: http://www.ece.curtin.edu.au/~saounb/ [HREF 2]
Keywords: Search Agents, Electronic Commerce, Information Retrieval

Abstract

Electronic commerce is a rapidly growing area on the Internet. For the full potential of online shopping to be realised, several online capabilities must be in place, including secure transactions, practical payment methods, and intelligent search agents.

This paper describes BargainBot, a multi-threaded, multi-connection, autonomous search agent prototype designed to reduce the complexity of navigating the rapidly growing number of online shops on the Web and to assist users in locating specific product items. Different agent architectures are discussed, and a comparison is made with conventional software robots. The use of agent technology in electronic commerce and general information retrieval, as well as the future of agent technology on the Internet, is also discussed.

Introduction

The World Wide Web is a burgeoning mass of interconnected data that stretches from computer to computer across the world. Contrary to popular belief, however, the amount of information currently on the Web is relatively small compared to other publicly available information. Estimates put it at 70 gigabytes, representing a fraction of 1% of the world's publicly available information [Ross & Hutheesing, 1995]. As a quantitative comparison, the Web contains the equivalent of 70,000 "books" of text, whereas the Library of Congress contains approximately 16 million books. However, Lycos [HREF 3] estimates that as much as 80% of publicly available information will be on the Web by the year 2000. This exponential growth poses inevitable problems that need to be addressed if the Web is to remain a useful information resource. With the explosion of the World Wide Web and online resources in general, information overload is a very real problem, and with rapid growth comes the increasingly difficult task of navigating the Internet. The ability to efficiently extract what is relevant to a user's information needs will become crucial to the Web's effective use.

Much of the current content on the Internet is commercial, and indeed much of the forecast growth will be stimulated by electronic commerce. Forrester Research [HREF 4] has predicted that electronic commerce on the Internet will be worth $US45 billion by the year 2000, compared to approximately $US240 million in 1995. Many are suggesting online shopping will become the next growth area on the Internet. The ability to accomplish an effective search for specific product items is crucial if Internet electronic commerce is to realise its full potential. An "effective search" is one that finds everything of relevance and, equally, nothing irrelevant. Given the explosion of online shopping, searching for a particular product amongst an online wasteland of commercial content could become a fundamental obstacle for Internet electronic commerce. In addition to practical and secure payment methods, the presence of intelligent search agents will be crucial to the success of Internet electronic commerce.

Information Retrieval and Agent Technology

Search Engines

Search engines have aided information retrieval on the Internet. However, the resources they index are often outdated, and they do not adequately index the mass of unorganised data that spans the Internet. Search engines also lack the facility to provide contextual searches, returning links that have little or no relevance to a user's immediate information needs. For example, how does a user search for a particular book with the intent to purchase it online? Providing arguments to a search engine to perform such a request is difficult. The problem is further compounded by the inability of search engines to index the contents of searchable databases interfaced to the Web via a forms front end. For example, a search engine cannot learn of the books a bookstore holds within its database because it lacks the intelligence to fill in a form to search, and hence index, the contents of that database. Currently, the only solution is for a user to associate books with bookstores, manually search for different online bookstores (which in itself can be difficult), and sequentially query each of these sites to find the particular book of choice. Agent technology will have an important role in reducing this problem.

Agent Technology

Agents are software entities that operate autonomously, performing operations on behalf of a user. Intelligence is the ability of an agent to reason (by inference, or at the very least through user-programmed rules) and to learn about its user. Search agents are autonomous agents that search the Internet on behalf of the user. UK-based consultancy firm Ovum has predicted that the agent technology industry will be worth $US3.5 billion by the year 2000 [Houlder, 1994].

The confluence of intelligent software agents [HREF 5] and distributed hypermedia systems was first exhibited with the emergence of Web Spiders [HREF 6] in 1993. The convergence of these technologies highlighted the need for autonomous agents to traverse the Web on a user's behalf. Knowbots [HREF 7] are intelligent information gathering agents that roam the Internet searching for items of interest on behalf of a user. The advent of electronic commerce on the Internet has emphasised the need for similar agent technology to help users locate specific product items on the Web.

Electronic Commerce Agents

Electronic shopping agents assist in searching the Internet for product items on behalf of a user. Users interact with a shopping agent by submitting agent requests. Upon receiving a request, the agent searches relevant online shops throughout the Internet for items that match the search criteria. The agent returns to the user a detailed description of each item found, its price, and a direct link to the virtual store where the user can purchase it. The agent formats this information in a manner that facilitates comparison shopping; the items returned might be sorted by price, for example.
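
As a minimal illustration of this presentation step, the following Perl sketch sorts a set of matches cheapest-first. The record structure, titles and prices are hypothetical, not BargainBot's actual internals.

    #!/usr/bin/perl -w
    # Sketch: presenting matches cheapest-first for comparison shopping.
    # The records below are hypothetical.
    use strict;

    my @matches = (
        { title => 'Software Agents', store => 'Bookstore A', price => 39.95 },
        { title => 'Software Agents', store => 'Bookstore B', price => 34.50 },
    );

    foreach my $m (sort { $a->{price} <=> $b->{price} } @matches) {
        printf "%-18s %-12s \$%.2f\n", $m->{title}, $m->{store}, $m->{price};
    }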

Previous and Related Work

One of the earliest examples of an electronic shopping agent was Andersen Consulting's BargainFinder [HREF 8], a comparison shopping agent that automates searching multiple virtual music retailers for a CD of choice. The introduction of BargainFinder highlighted the need for technical cooperation amongst online shops when its operation was thwarted by stores blocking the agent's access.

General Information Retrieval

Agent technology has a large part to play in facilitating general information retrieval on the Internet. Early examples include WebDoggie [HREF 9] (formerly WebHound), a personalised Web document-filtering system that employs a "collaborative filtering" process to automate the "word of mouth" approach to recommending sites of interest. Users submit their personal evaluations of various Web pages to specialist servers; WebDoggie builds a user profile from these, determines which users have similar opinions, and recommends new documents accordingly. Etzioni and Weld [Etzioni & Weld, 1994] have developed an intelligent assistant (softbot) which helps users access Internet resources. The Distributed Systems Technology Centre [HREF 10] has developed a search engine called BabyOil that provides a consistent interface to information resources delivered over various application-level protocols (Gopher, Z39.50, X.500). Other Web-based information agents include Jasper [Davies et al., 1995] and the TkWWW Robot [Spetka, 1994].

IBM's Netcomber Activist [HREF 11] service is an attempt to personalise Internet traversal. Activist is a personal proxy that monitors users' traversal of the Yahoo [HREF 12] index. It informs users of new links that have been added in those sections of the index in which the user showed an interest. The service is limited to the Yahoo index and is not sensitive to users' traversal of the greater Internet. Quarterdeck's [HREF 13] WebCompass is a personal search agent that uses major search engines to routinely search the Internet for information of interest to the user. Other personalised agents include Personal Excite [HREF 14] and the Pointcast Network [HREF 15].

Personalised News Services

Personalised news retrieval services create daily customised newspapers according to a user's interests. For example, if a user is interested in financial news and uninterested in sport, their personalised newspaper will reflect those preferences. BBN's [HREF 16] Personal Internet Newspaper (PIN) promises agents that extract specific information from multiple sources, including external sources such as the Web and Usenet news as well as internal sources such as corporate databases. Mercury Center's [HREF 17] NewsHound is a similar service that searches a range of newspapers for recent articles matching the user's interest profile and automatically emails the user their personalised paper. Aaron Fuegi's Personal Newspaper [HREF 18], whilst not a personalised newspaper service, illustrates the use of an autonomous agent to generate a daily news page by accumulating news items from around the Internet.

BargainBot Shopping Agent Prototype

BargainBot [HREF 19] is an electronic shopping agent prototype that assists users in tracking down specific product items on the Web. BargainBot's multi-threaded, multi-connection architecture facilitates the task of simultaneously searching multiple virtual shops on the Web. BargainBot is currently being trialled with various online bookstores. As Figure 1 illustrates, users specify the details (title and author) of the book they want to purchase and BargainBot transparently searches multiple online bookshop databases worldwide. BargainBot presents the user with the details of all books that match the search. The user has the opportunity to compare prices from different bookstores, and upon making a decision can purchase the book by selecting the appropriate link. Figure 2 shows the results of a hypothetical search.

Figure 1. Screenshot of BargainBot's Web front end

Figure 2. BargainBot Search Results

BargainBot has been implemented using Perl [HREF 20] (Practical Extraction and Report Language) on an HP 720 workstation. Perl has excellent text processing capabilities and support for UNIX system calls (network programming facilities in particular), both of which BargainBot relies upon heavily.

BargainBot Multi-Agent Architecture

Because BargainBot is an interactive search agent, its response time is critical. BargainBot addresses this by spawning (forking) "sub-agents" that independently search particular sites on the Web. As Figure 3 illustrates, BargainBot works on a user/agent -> sub-agent/server architecture, with sub-agents instigating individual network (HTTP) connections with sites world-wide.

Figure 3. BargainBot parallel, multiagent architecture
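
The forking model can be sketched in a few lines of Perl. The site list and the search_site() routine below are hypothetical placeholders for BargainBot's site database and per-site query logic, not its actual code.

    #!/usr/bin/perl -w
    # A minimal sketch of the forking sub-agent model. The site list and
    # search_site() routine are hypothetical placeholders.
    use strict;

    my @sites = ('bookstore-a.example.com', 'bookstore-b.example.com');

    $| = 1;    # unbuffer output so each sub-agent's results appear immediately

    my @children;
    foreach my $site (@sites) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {         # child process: one sub-agent per site
            search_site($site);  # query the site and print any matches
            exit 0;
        }
        push @children, $pid;    # parent: keep track of the sub-agent
    }
    waitpid($_, 0) for @children;    # wait until every sub-agent has reported

    sub search_site {
        my ($site) = @_;
        # ... open an HTTP connection to $site, submit the query,
        #     parse the reply and print the matching books ...
        print "results from $site\n";
    }

Because each sub-agent runs in its own process, a slow or unreachable site delays only its own results.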

Online shops allow Web users to search their databases via a Web forms interface. The CGI [HREF 21] (Common Gateway Interface) is a standard for interfacing external programs or resources to the Web. For example, a bookstore might interface its SQL database to the Web via a CGI gateway. As Figure 4 illustrates, BargainBot queries a site's database through this gateway.

Figure 4. BargainBot sub-agent querying a database through a Web forms interface
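
A sub-agent's query amounts to an ordinary form submission. The sketch below, assuming a hypothetical search URL and form field names, posts a title/author query with the standard LWP library and receives the site's HTML reply:

    #!/usr/bin/perl -w
    # Sketch: querying a bookstore's CGI search form as a browser would.
    # The URL and form field names are hypothetical.
    use strict;
    use LWP::UserAgent;
    use HTTP::Request::Common qw(POST);

    my $ua  = LWP::UserAgent->new;
    my $req = POST 'http://bookstore.example.com/cgi-bin/search',
                   [ title => 'Software Agents', author => 'Smith' ];
    my $res = $ua->request($req);

    if ($res->is_success) {
        print $res->content;    # the results page, still in the site's own markup
    } else {
        print "query failed: ", $res->status_line, "\n";
    }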

BargainBot presents the user with the results of each sub-agent upon its completion. Sub-agents work independently, so any problems a sub-agent might encounter (down sites, network problems, etc.) will not affect the operation of the other sub-agents. If a sub-agent encounters a heavily loaded site, its results will not delay the presentation of information other sub-agents have already retrieved. BargainBot is implemented as an NPH [HREF 22] (Non-Parsed Header) CGI script to facilitate this.
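
With an NPH script, the server passes the script's output straight through to the client, so the script writes the HTTP headers itself and can flush results incrementally. A minimal sketch of the technique:

    #!/usr/bin/perl -w
    # Sketch of an NPH CGI script: the script emits the raw HTTP response
    # itself and unbuffers output, so results stream to the browser as
    # each sub-agent completes.
    use strict;

    $| = 1;    # disable output buffering

    print "HTTP/1.0 200 OK\r\n";
    print "Content-type: text/html\r\n\r\n";
    print "<html><body><h1>Search results</h1>\n";
    # ... print each sub-agent's results here as it reports back ...
    print "</body></html>\n";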

BargainBot interfaces to its own database containing information on the different sites (bookstores) it accesses. Adding a new site requires only a new entry in this database: BargainBot senses the number of entries and spawns the appropriate number of sub-agents.

Addressing Heterogeneity

BargainBot must address the problem of heterogeneity amongst sites' responses. When BargainBot queries different sites for a particular book, individual sites mark up their returned data in different formats. For example, one site might separate one book title from another with an image. BargainBot must deal with this heterogeneity by sifting through the data, extracting only what is needed (e.g. removing redundant links and inappropriate images), addressing contextual differences in the exchanged data (e.g. converting relative hyperlinks to absolute links), and presenting the user with the information in a homogeneous format. Each sub-agent has prior knowledge of how its associated site formats its replies, allowing it to extract the necessary details.
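
One such contextual fix is link rewriting: a relative link in a site's reply no longer resolves once the reply is served from BargainBot's own pages. The sketch below rewrites hrefs with the standard URI module; the base URL and HTML fragment are illustrative.

    #!/usr/bin/perl -w
    # Sketch: converting relative hyperlinks in a site's reply to absolute
    # links, using the standard URI module. Inputs are illustrative.
    use strict;
    use URI;

    my $base = 'http://bookstore.example.com/cgi-bin/search';
    my $html = '<a href="../books/1234.html">A Book</a>';

    # Rewrite each href so the link still resolves outside the originating site.
    $html =~ s{href="([^"]+)"}{'href="' . URI->new_abs($1, $base) . '"'}ge;

    print $html, "\n";    # href now reads http://bookstore.example.com/books/1234.html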

Preliminary Findings

Advantages Over Conventional Software Robots

A software robot [HREF 23] is a program that automatically traverses the Web's hypertext structure by retrieving a document and recursively retrieving the documents it references. Autonomous recursive retrieval can have unwanted effects, including unrecognised domain aliasing, "black holes", and retrieval of inappropriate data such as irrelevant images. Eichmann [Eichmann, 1994] examines the issue of ethics for Web agents.

BargainBot does not suffer from the common problems associated with conventional software robots. Its aim is not general information discovery; it does not blindly traverse multiple sites asking whether they sell books, for example. Conventional robots can severely impact both overall network performance and the performance of the servers they access. BargainBot has prior knowledge of appropriate sites, knowing exactly where to locate its information. It does not produce recursive hits on a server and does not place excessive stress on the network, conserving valuable bandwidth.

Time Savings

Whilst BargainBot's operations can be performed manually, substantial time savings are to be found by performing a search with the aid of an agent. Searching for a book without the aid of BargainBot would entail:
  1. Searching for an Internet bookstore. This implies using a search engine of some sort, a time-consuming task in itself.
  2. Upon finding a bookstore, searching for the book of choice. This might be as simple as filling out a search form, or may require manually browsing product lists. If neither of these facilities is available, another bookstore must be found.
  3. If the book is located, noting the price and site; if not, returning to step 1.
  4. For comparison shopping, repeating steps 1 to 3.
This process assumes users have a good knowledge of searching and navigating the Web. For Internet electronic commerce to become viable, however, the inexperience of new users must be addressed in a manner that makes them productive with minimal time and effort. This highlights the importance of search agents in electronic commerce on the Internet.

Preparing the Web for Electronic Shopping Agents

A Web front end to an online storefront is not conducive to autonomous browsing by search agents. The first difficulty is that of information markup on the Web: data returned from a query is encapsulated in a markup language (HTML). Future agents will impose their own structure, representation and interaction on the information, dependent on the application or user (see Customising Interaction and Information Representation below). Such agents are interested in the information itself, not in a particular metaphor for representing it (hypertext) or in how it is displayed (marked up). BargainBot must therefore remove much of this markup and present the information in a homogeneous manner. Problems arise when a particular site changes its response format from what BargainBot expects; in this situation BargainBot might omit details or even provide the user with false information.
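
BargainBot's markup removal can be caricatured as tag stripping. The sketch below reduces a reply fragment to plain text; the fragment is hypothetical, and the real sifting is selective, retaining chosen links and fields rather than discarding every tag.

    #!/usr/bin/perl -w
    # Sketch: reducing a marked-up reply to plain text. A naive tag strip;
    # BargainBot's real sifting is selective, keeping prices and links.
    use strict;

    my $reply = '<b>Software Agents</b> <i>$29.95</i> <a href="/buy/1234">order</a>';
    (my $text = $reply) =~ s/<[^>]+>//g;    # drop every tag
    print "$text\n";                        # Software Agents $29.95 order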

Technical Cooperation

Technical cooperation with remote sites is required for search agents to prosper in electronic commerce. For example, the operation of BargainFinder was thwarted by many online CD stores blocking the agent's access to their sites. Remote sites blocked Andersen Consulting's network via domain name restrictions, a facility provided by most, if not all, Web servers. This problem can be overcome, however, by client-side agents that instigate network operations from a user's machine. Furthermore, as Sun's Java [HREF 24] Internet programming language comes to be used to implement Internet catalogue and ordering systems, the task of deciphering the searching mechanisms embedded within these applets for use by an agent becomes increasingly difficult without technical cooperation from online stores.

Effective Information Querying

To query an online storefront for a particular item, BargainBot must compose HTTP requests in a manner similar to those generated by standard Web browsers. The major problem with this approach is that agents must query a storefront's database indirectly through a Web forms interface (and hence a CGI gateway), and must abide by the limited querying power this entails. For example, a number of sites restrict querying to either title or author, but not both simultaneously, resulting in inaccurate responses. Certain sites also apply a "session key" to search sessions: a unique key is applied to each search undertaken, and keys expire after short time intervals. To overcome this, a search agent must request a key prior to instigating a search - a two-way exchange that slows the agent's response time. Certain sites also format search results in an incomplete manner, effectively requiring the agent to follow additional links to learn of an item's price, for example. Furthermore, certain sites make use of Netscape's HTTP Cookies [HREF 25] persistent client state mechanism to store state information on the client side. Sites that use the Cookies mechanism do not lend themselves to autonomous browsing by search agents, which further restricts BargainBot's operations. For electronic shopping agents to prosper, a cooperative environment needs to be developed to facilitate effective information querying by autonomous agents.
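
The session-key exchange can be sketched as a two-step request. The URLs, the hidden field name "session" and the extraction pattern below are all hypothetical; real sites differ in their form markup.

    #!/usr/bin/perl -w
    # Sketch of the two-step session-key exchange: fetch the search page,
    # extract the short-lived key from a hidden form field, then query.
    # The URLs, field names and pattern are hypothetical.
    use strict;
    use LWP::UserAgent;
    use HTTP::Request::Common qw(GET POST);

    my $ua   = LWP::UserAgent->new;
    my $page = $ua->request(GET 'http://bookstore.example.com/search.html')->content;

    # Step 1: pull the key out of the form markup.
    my ($key) = $page =~ /name="session"\s+value="([^"]+)"/;
    die "no session key found\n" unless defined $key;

    # Step 2: submit the actual query, quoting the key before it expires.
    my $res = $ua->request(POST 'http://bookstore.example.com/cgi-bin/search',
                           [ session => $key, title => 'Software Agents' ]);
    print $res->content if $res->is_success;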

Z39.50

One solution to the problem of effective information retrieval is the introduction of a standard for distributed information querying. Z39.50 [HREF 27] is a client/server communications standard for database searching and record retrieval. It was developed to facilitate multiple-database searching, providing a command language and search procedures to support information retrieval in a distributed environment, and is widely implemented for searching remote library catalogues. Version 3 (Z39.50-1995 [HREF 26]) supports non-bibliographic searching, allowing online retailers, for example, to provide an interface to their databases that adheres to the standard. Search agents could then comprehensively interrogate a retailer's database via the Z39.50 interface and be presented with the results of a search in a standard manner.

Inter-Operable Agent Model

A second solution, circumventing the restrictions of an HTTP interface to a database, would be an inter-operable agent model [Genesereth et al., 1994]. In this model a user agent interacts with a server-side agent at the storefront end. The server-side agent handles requests by making use of resources at its end (databases etc.) or by delegating queries to other agents. For example, if a bookstore agent were unable to fulfil a request asking what new books are to be released by a particular publisher, it could forward the request directly to the publisher's agent, or to an appropriate agent elsewhere on the Internet. Information and knowledge interchange amongst agents would adhere to an agent communication language [Genesereth and Ketchpel, 1994] such as KQML (Knowledge Query and Manipulation Language) [Finin, 1993].
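
As an illustration, a user agent's price query might be wrapped in a KQML performative along the following lines. The performative and parameter names (ask-one, :sender, :content, etc.) follow Finin (1993); the agent names and the content expression are hypothetical.

    #!/usr/bin/perl -w
    # Sketch: composing a KQML query message. Agent names and the
    # content expression are illustrative.
    use strict;

    my $msg = '(ask-one
      :sender     user-agent
      :receiver   bookstore-agent
      :reply-with query-1
      :language   KIF
      :ontology   books
      :content    (price "Software Agents" ?p))';

    print $msg, "\n";    # hand the message to the local facilitator for delivery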

Figure 5. Federated system multiagent architecture

The task of interagent interactions is not merely that of providing an appropriate front end for agents to search an online retailer's database. A multiagent communications architecture would facilitate the delegation of requests to other agents on the Internet that are better capable of addressing a particular problem. Figure 5 illustrates the assisted coordination of the federated system approach [Genesereth, 1992]. Interagent communication is instigated via facilitators, which communicate with one another. Agents document their abilities with their local facilitators, and a user agent documents its needs with its local facilitator, which in turn propagates the request in a top-down manner via other facilitators to an agent capable of handling the request. If a user were interested in the cost of a particular book, for example, the following would take place:
  1. The user agent documents the request (the cost of the book) with its local facilitator.
  2. Unable to satisfy the request itself, the facilitator propagates it to other facilitators.
  3. The request moves down the hierarchy until it reaches a facilitator whose agents deal with bookstores.
  4. That facilitator delegates the request to an appropriate bookstore agent, which retrieves the price and returns it along the same path to the user agent.
Facilitators become more specialised as they move down the federated system hierarchy. In the above example, the final facilitator, which delegated to the appropriate agent, might deal specifically with bookstores (neglecting the monetary exchange operations), whilst its immediate parent may deal only with books in general.

Other agent architectures have also been developed to support Web-based agents. Lingnau, Drobnik and Domel [Lingnau et al., 1995] discuss an HTTP-based infrastructure for mobile agents.

A number of problems need to be addressed before agent technology can readily be applied to searching for particular resources, be they product items or otherwise, on the Internet. Most of these problems stem from the lack of standards and technical cooperation amongst online stores.

The Future of Agent Technology

Agents are becoming more intelligent. Future agents may carry out payments on their user's behalf, settling telephone and electricity bills, for example. They will build an "interest profile" of their user, learning about the user's interests and shopping patterns; an agent might learn of its user's interest areas and favourite authors, and recommend any new books that match the user's profile. Intelligent agents will also affect Internet electronic commerce as salespeople: operating on behalf of retailers, they will provide product or service sales advice, help troubleshoot customers' problems, and so on. User interaction with such agents would be completely transparent, with personal agents instigating communication with these "expert" agents via facilitators (see Inter-Operable Agent Model above) as the need arises.

Effective User/Agent Interaction

User/agent interaction will become more sophisticated, with users composing more detailed requests. Ideally, user/agent communication would not suffer the restrictions of a particular application protocol. For example, if an agent has only a Web front end, there is no mechanism for it to communicate with the user periodically, and HTTP's limited support for state information further hinders user/agent communication. Sun's Java programming language may provide for this enhanced interaction, and Java is also regarded as a possible agent implementation language. The Java Agent Template [HREF 28], for example, is a Java application that provides basic agent functionality.

Customising Interaction and Information Representation

The increasing complexity of navigating the Internet is becoming one of the fundamental obstacles to its effective use. One solution would be to reorganise the structure of the Internet itself. This would be a mammoth task, and even then the navigational needs of all users and applications could not possibly be addressed. A better solution is to give each user the ability to organise an individual perspective of the Internet through their own agent. The agent would replace conventional browsers, providing users with their own individual front end to the Internet. Each agent would become accustomed to the information needs of its user and adjust the structure and representation of information, as well as its interaction with the user, in a manner suited to that user or application; an agent would interact differently with a disabled person, for example. In this manner the complexities of the Internet would be addressed at an individual rather than an organisational level.

General Information Retrieval and the Internet

Agent technology has an important role in general information retrieval and discovery on the Internet. Future agents will monitor a user's actions, making inferences from observation. They will take note of the sites a user accesses, learn of the user's interests, and recommend sites, newsgroups, message threads and the like that the user has yet to see, according to those interests. They will also notify the user of any changes to sites since they were last visited. Agents that perform these and other information retrieval and filtering tasks have been prototyped [Maes, 1994].

Resource Migration Transparency

The current Web architecture has no support for referential integrity, resulting in "broken links". Future agents will provide resource migration transparency. For example, when a Web page is relocated, an agent will sense this and reconfigure its operations appropriately and transparently to the user. Faced with a broken link, an agent will transparently seek appropriate mirrors. At worst, the agent will provide the user with a detailed description of the problem and return the requested information as it comes to hand (i.e. when a remote networking problem has been resolved or when the information has been found elsewhere). Where possible, agents will actively seek local mirrors of resources, placing less strain on network traffic. Dynamic reconfiguration would be facilitated by a persistent naming system such as URNs [HREF 29]. Furthermore, during peripheral information discovery, agents should engage the network during less demanding hours.
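
A crude approximation of this fallback behaviour - try the primary location, then known mirrors - might look as follows. The URLs are hypothetical, and a real agent would discover mirrors through a naming service such as URNs rather than a hard-wired list.

    #!/usr/bin/perl -w
    # Sketch: fetching a resource with transparent fallback to mirrors.
    # The URLs are hypothetical placeholders.
    use strict;
    use LWP::UserAgent;

    my $ua   = LWP::UserAgent->new;
    my @urls = ('http://primary.example.com/catalogue.html',
                'http://mirror.example.org/catalogue.html');

    foreach my $url (@urls) {
        my $res = $ua->get($url);
        if ($res->is_success) {
            print $res->content;
            last;    # stop at the first reachable copy
        }
        warn "unreachable: $url (" . $res->status_line . ")\n";
    }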

Searching the Intranet

The task of searching and retrieving corporate information on Intranets will inevitably face problems similar to those encountered on the Internet. External search engines and agents are denied access to internal resources on an Intranet, so searching mechanisms need to be provided internally for individual Intranets. The opportunity then arises to build inherent support for effective information indexing into the Intranet. One approach would be to embed the meta-data required for indexing (keywords, authors, etc.) within individual pages; alternatively, the contents of pages could be structured in a manner facilitating effective indexing by an internal search engine.
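
As an illustration of the embedded meta-data approach, the sketch below shows an internal indexer extracting name/content pairs from HTML META tags. The tag and field names follow common META-tag practice; the page fragment is hypothetical.

    #!/usr/bin/perl -w
    # Sketch: an internal indexer pulling embedded meta-data out of an
    # Intranet page. The page fragment below is hypothetical.
    use strict;

    my $html = '<meta name="keywords" content="quarterly report, sales, 1996">' . "\n"
             . '<meta name="author" content="J. Citizen">';

    # Collect each name/content pair for the index.
    while ($html =~ /<meta\s+name="([^"]+)"\s+content="([^"]+)">/gi) {
        print "$1: $2\n";
    }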

Conclusion

BargainBot is an Internet electronic commerce search agent that gives users the ability to search multiple bookstores simultaneously and presents its findings in a manner that facilitates comparison shopping. BargainBot provides considerable time savings over performing similar tasks manually, makes efficient use of the network, and does not suffer the inefficiencies of a conventional software robot.

Though still in its infancy, agent software has huge potential in electronic commerce and information retrieval on the Internet. The enormous flow of information on the Internet is both an opportunity and a challenge. Intelligent agents will help stem the problem of information and work overload, giving users their own personalised representation of the Internet where information is customised to the task, not the source. Intelligent agents are becoming more autonomous (performing mundane tasks such as paying bills), more intelligent (querying and observing), and more personalised (customising information representation and interaction according to the user's needs, preferences and habits), with user/agent communication becoming a cooperative process.

Internet electronic commerce will benefit greatly from the transparency of effective product searches that intelligent agents provide. Effective information querying mechanisms need to be introduced for agents to prosper in Internet electronic commerce. Search agents must adhere to ethical guidelines, using the network sparingly and not placing high demands on bandwidth. Privacy is of the utmost importance and future agents must be tamper-proof. Social difficulties also need to be addressed. How will intelligent agents interact with people and how might people think about agents?

Acknowledgements

Financial support and assistance from the following organisations is gratefully acknowledged:

  Imago Multimedia Centre [HREF 30]
  Curtin University of Technology [HREF 31]

References

[Davies et al., 1995]
Davies, J., "Jasper: Communicating Information Agents for WWW", Proceedings of the 4th International WWW Conference, Boston, Massachusetts, December 1995.

[Eichmann, 1994]
Eichmann, D., "Ethical Web Agents", Proceedings of the 2nd International WWW Conference, Chicago, IL, October 1994.

[Etzioni & Weld, 1994]
Etzioni, O. and Weld, D., "A Softbot-Based Interface to the Internet", Communications of the ACM, v.37, n.7, July 1994, p.72-76.

[Finin, 1993]
Finin, T. et al., "DRAFT Specification of the KQML Agent-Communication Language", The DARPA Knowledge Sharing Initiative External Interfaces Working Group, June 1993.

[Genesereth, 1992]
Genesereth, M., "An Agent-Based Approach to Software Interoperability", Proceedings of the DARPA Software Technology Conference, 1992

[Genesereth et al., 1994]
Genesereth, M. et al., "A Distributed and Anonymous Knowledge Sharing Approach to Software Interoperation", Computer Science Department, Stanford University, 1994.

[Genesereth and Ketchpel, 1994]
Genesereth, M. R. and Ketchpel, S. P., "Software Agents", Communications of the ACM, v.37, n.7, July 1994, p.48-53.

[Houlder, 1994]
Houlder, V., "Special Agents", Financial Times, 15 August 1994, p.12.

[Lingnau et al., 1995]
Lingnau, A., Drobnik, O. and Domel, P., "An HTTP-based Infrastructure for Mobile Agents", Proceedings of the 4th International WWW Conference, Boston, Massachusetts, December 1995.

[Maes, 1994]
Maes, P., "Agents that Reduce Work and Information Overload", Communications of the ACM, v.37, n.7, July 1994, p.30-40.

[Ross & Hutheesing, 1995]
Ross, P. E. and Hutheesing, N., "Along came the spiders - World Wide Web spider search software", Forbes, v.156, n.10, 23 October 1995, p.210.

[Spetka, 1994]
Spetka, S., "The TkWWW Robot: Beyond Browsing", Proceedings of the 2nd International WWW Conference, Chicago, IL, October 1994.

Hypertext References

HREF 1
http://www.ece.curtin.edu.au/ - IMAGE Technology Research Group

HREF 2
http://www.ece.curtin.edu.au/~saounb/ - Home Page of Bassam Aoun

HREF 3
http://www.lycos.com/ - Lycos, Inc.

HREF 4
http://www.forrester.com/ - Forrester Research

HREF 5
http://www.cs.umbc.edu/agents/ - Intelligent Software Agents

HREF 6
http://info.webcrawler.com/mak/projects/robots/robots.html - World Wide Web Robots, Wanderers, and Spiders

HREF 7
http://redwood.northcoast.com/savetz/articles/knowbots.html - Here Come the Knowbots!

HREF 8
http://bf.cstar.ac.com/bf/ - Andersen Consulting's BargainFinder

HREF 9
http://webhound.www.media.mit.edu/projects/webhound/ - WebDoggie

HREF 10
http://www.dstc.edu.au/ - Distributed Systems Technology Centre

HREF 11
http://activist.gpl.ibm.com/ - IBM's Netcomber Activist Service

HREF 12
http://www.yahoo.com/ - Yahoo Web Index

HREF 13
http://www.quarterdeck.com/ - Quarterdeck

HREF 14
http://home.excite.com/home/ - Personal Excite

HREF 15
http://www.pointcast.com/ - Pointcast Network

HREF 16
http://www.pin.bbn.com/ - BBN's Personal Internet Newspaper (PIN)

HREF 17
http://www.sjmercury.com/ - Mercury Center

HREF 18
http://www.bu.edu/~aarondf/newspaper/newspaperfaq.html - Aaron Fuegi's Personal Newspaper

HREF 19
http://www.ece.curtin.edu.au/~saounb/bargainbot/ - BargainBot Search Agent Home Page

HREF 20
http://www.perl.com/perl/ - Tom Christiansen's Perl Home Page

HREF 21
http://www.ast.cam.ac.uk/~drtr/cgi-spec.html - The WWW Common Gateway Interface Version 1.1 (Internet Draft)

HREF 22
http://hoohoo.ncsa.uiuc.edu/cgi/out.html#nph - NPH documentation

HREF 23
http://info.webcrawler.com/mak/projects/robots/threat-or-treat.html - Robots in the Web: threat or treat?

HREF 24
http://java.sun.com/ - Sun's Java Home Page

HREF 25
http://www.netscape.com/newsref/std/cookie_spec.html - Netscape HTTP Cookies Preliminary Specification

HREF 26
http://lcweb.loc.gov/z3950/agency/1995doc.html - The ANSI/NISO Z39.50-1995 document

HREF 27
http://www.cni.org/pub/NISO/docs/Z39.50-1992/www/50.brochure.toc.html - The ANSI/NISO Z39.50 Protocol: Information Retrieval in the Information Infrastructure

HREF 28
http://cdr.stanford.edu/ABE/JavaAgent.html - Java Agent Template

HREF 29
http://www.w3.org/hypertext/WWW/Addressing/Addressing.html - Names and Addresses, URIs, URLs, URNs, URCs

HREF 30
http://www.devetwa.edu.au/imago.htm - Imago Multimedia Centre

HREF 31
http://www.ece.curtin.edu.au/ - Curtin University of Technology


Copyright

© Bassam Aoun, 1996. The author assigns to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author also grants a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers, and for the document to be published on mirrors on the World Wide Web. Any other usage is prohibited without the express permission of the author.