Mark Nolan, IBM Australia
Robert Redpath, Caulfield School of Information Technology, Faculty of Information Technology
Currently,
service discovery and service request fulfillment are treated as
contiguous steps, on the assumption that the semantic knowledge needed to allow this is present.
Alternatively, it is recognised that this is rarely possible, and service discovery
and service request are treated as two independent steps. In reality,
service discovery and service request need to be treated separately but share
knowledge if both functional and non-functional requirements are to be met in a
consistent fashion.
This paper
proposes an architecture for service selection, service contract negotiation,
dynamic service request configuration and quality of service monitoring. Our
architecture is based on the view that the semantic web is useful and
powerful and enables ontological approaches, but that these approaches remain
underdeveloped. The benefits of the architecture, realized through the use of a
common repository and ontological approaches, are support at service discovery time
for a contract negotiation that takes account of both functional and non-functional
requirements. In addition, the negotiation need
not be based on a particular policy language, and contract compliance can be
monitored at service request time.
Web services
have emerged and developed as a standards-based approach to provisioning business
services over the World Wide Web. While web services are currently used by
many organizations and individuals, they are still at an early stage of
development. Examples of current use of services, by consumers in client-side
applications, show the build process being supplemented by human intervention
to establish suitability and trust, and by monitoring on the part of service consumers and
providers.
The need for
human intervention exists because current architectural approaches are of two
types. The first approach is to consider the services to exist in the ideal
semantic web where discovery and request are dynamically configured, assuming a
perfect match of provider's service to consumer's need [1]. In reality the
assumption of a perfect match cannot be sustained and thus discovery and
request are considered in isolation and separate architectures are proposed and
implemented for each part [2]. By establishing a common repository, the
architecture put forward here allows discovery to be performed separately from
request while sharing data.
The sharing of
data allows service discovery and request execution to be integrated, and a number
of objectives, detailed below, can be met.
" The
design and build process is supported by a semantically rich environment
through the use of a common repository.
" The
common repository is extensible in a dynamic way to permit changes due to
contract negotiation or for other reasons.
" A
negotiation can be carried out by having service discovery separate from
service request and the results of the negotiation can be captured in a
contract.
" The
negotiation can be based on both functional and non-functional requirements.
" Prior to
service request execution a trial request can be tested.
" The
negotiation need not be based on a particular policy language through the use
of an ontological approach.
"
Monitoring of contract compliance can be done at time of service request by a
service broker.
In order to
fulfill the above objectives, a Semantic Service Dissemination Architecture
(SSDA) is proposed, composed of three major components: 1) a Service
Selection Engine, 2) a Dynamic Configuration Manager and 3) a Service Broker.

Figure 1
Semantic Service Dissemination Architecture
Current
architectures for service discovery and service binding can be classified as either
autonomic or single component. The autonomic approach considers the services to
exist in the ideal semantic web, where discovery and binding are dynamically
performed based on an assumption of a perfect match of provider's service to
consumer's need for both functional and non-functional requirements [1, 3, 4].
In [1], Maximilien and Singh propose an architecture that performs discovery,
selection and binding automatically using a service broker that matches on QoS
parameters and assumes that the service offers the required functionality.
There are a number of other studies which assume functional matching based on a
limited application domain such as grid, mobile computing or control of
electronic devices. In these areas it may be reasonable to assume that services
have well defined and agreed specifications and interfaces. [3] proposes the
Web Service Discovery Architecture, which at runtime allows Grid applications to
discover and adapt to remote services. Approaches such as the ReMMoC
architecture [4] are restricted to the mobile service environment and aim to
overcome platform heterogeneity.
In reality the assumption of a perfect functional match cannot be sustained and
thus discovery and request are considered in isolation and separate architectures
are proposed and implemented for each component. The single component
architecture defines a component to partially perform the processes of service
discovery, selection or execution without necessarily considering combined
requirements and the interaction between other components. An example of the
single component architecture is the Service Broker, which provides functions such as:
- work flow coordination
- middleware adaptation
- message routing and translation
- security mapping
- state management
- discovery mechanisms
- transaction monitoring
The service broker is a runtime component and provides support for dynamic
discovery of well defined interfaces. The functionality of a service broker is
similar to that of the mature Object Request Broker in the CORBA standard from
OMG [7] that provides a large number of runtime services to perform discovery,
binding and management of execution of objects. CORBA provides access to this
type of functionality which may be embedded in the Object Request Broker or
provided by distributed services. CORBA does not provide support for design
time activities and as such has no need to include functionality to support
functional service selection. Functional matching is provided by the concept of
horizontal domains which define tightly specified objects to perform common
application functions.
The approach we
propose makes no assumptions concerning well defined functional services. We
hold that the processes of service discovery and request execution
happen in different phases of the systems development life cycle (SDLC) and that the service consumer can only
move from the discovery phase to execution after the establishment of a
contract via negotiation. Our architecture supports this view by
defining components to perform these tasks that share common data.
The major
contribution of the proposed architecture is the integration of the business
process, service configuration rules and NFRs, all stored flexibly in a
UDDI and OWL-S repository, which may be used in both service selection and service
configuration at runtime. This ensures that the documentation and
implementation are equivalent. The implementation of the architecture has been
separated into three projects. A brief overview of the three projects is provided below,
covering the service selection process as implemented by the Service
Selection Engine, the Dynamic Configuration Manager and the Service Broker
components of the SSDA.
Our approach is
to partially automate service selection and to complement the manual processes
of 1) contract negotiation and 2) implementation of the client-side design,
build and test of the business processes using the discovered web services.
Project 1 defines the structure of our Service Selection Engine, which provides
both an automated service selection component and a query engine to further
explore the implementation-related information referenced by the tModels.
A further
research question under investigation in Project 1 is the consideration of the
structure of the UDDI[8]/OWL-S[4] Common Repository and related reference data
defined in the various tModels. The research question posed here is to
determine the structure and format of data for use in automated service
selection. We are also trying to understand how the data contained and
referenced in the various tModels can be structured and integrated with the
data used by the automated processes.
Influencing
forces on our architecture include the requirements that it be WS-* standards based,
that it use the infrastructure of the semantic web, and that it make this data available to both
the Dynamic Configuration Manager and the Service Broker.
The Service
Selection Engine operates as follows. First, automated matching occurs, whereby
syntactic matching takes place using UDDI search and retrieval facilities to
match the WSDL of the service request to that of the web service offerings. For
all returned candidate web services matching on the WSDL, non-functional requirements are
then matched [5] against those on offer as stored in the Common Repository.
Next, semantic matching [6, 7] is performed using the web service configurable
data. Having completed service selection, the service consumer requires
information to help incorporate the selected web service into the client application.
The SSE
is then able to query tModel references for information to help with
implementation issues, including business process information describing
choreography details, build information (for example, the error messages returned)
and testing information, whereby canned responses are made available to allow
trialling and simulation.
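As an illustrative sketch of this three-stage pipeline (the candidate structure, field names and matching predicates below are our simplifying assumptions, not the implemented SSE), the stages might be composed as follows:

```python
# Sketch of the Service Selection Engine's three matching stages:
# 1) syntactic WSDL match, 2) NFR match against the Common Repository,
# 3) semantic match on ontology concepts. All field names are illustrative.

def select_services(request, candidates):
    """Filter candidate services by WSDL syntax, then NFRs, then semantics."""
    # Stage 1: syntactic match on the WSDL (UDDI search and retrieval).
    stage1 = [c for c in candidates
              if c["wsdl_port_type"] == request["wsdl_port_type"]]
    # Stage 2: match the requested NFRs against those on offer.
    stage2 = [c for c in stage1
              if all(c["nfrs"].get(k) == v for k, v in request["nfrs"].items())]
    # Stage 3: semantic match using the service's ontology concepts.
    return [c for c in stage2
            if request["concept"] in c["ontology_concepts"]]
```

A consumer would pass the candidates returned by the UDDI query as `candidates` and receive back only those surviving all three stages.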
Project 2
investigates a major requirement for highly extensible applications:
configurability. Configurability means applications must be capable of
dynamically configuring web service message structures, and this may extend to
the incorporation of changes to the workflow configuration. To this end we
propose a highly extensible architecture. A problem with extensible web service
applications is the constant republishing of WSDL that must take place as
extensions occur. We propose a generic XML structure to overcome this problem,
while recognising the trade-off that the use of generic WSDL means some loss of
SOAP message validation. The major elements of the
configurable architecture for an application are:
1. message structure and validation
2. interfaces to legacy systems
3. configurable work flows
4. process status messages
Project 2 defines the ontology used to support the dynamic configuration manager.
We have created a prototype dynamic configuration manager which uses data
stored in the Common Repository that is also used by the Service Selection
Engine.
Project 3 concerns the Service Broker, which ensures that published service
policies are met. The Service Broker is a run-time component that routes a
service request to the appropriate Application Server for execution. The
Service Broker manages, administers, monitors and provides transparent
implementation of the NFRs of each web service. The NFRs are stored in a
UDDI/OWL-S Common Repository and loaded into the Service Broker at runtime. The
following is an example of the implementation of both the security and
availability NFRs that can be managed by the Service Broker.
Our company's
policy for security [9] is that Business Confidential data must conform to a
secure communication policy using the Microsoft UserNameOverTransport assertion,
and that Business Critical data must conform to a highly secure policy using
the UserNameOverCertificate assertion.
The policy for
availability uses the concept of categories of NFRs, similar to the security
classifications from Microsoft [10]. Example availability classifications are
24*7; near 24*7 (3 hours per month and a maintenance window on the 4th Sunday
of the month); Standard Business Day, 0900 to 1700; and Extended Business Day,
0500 to 2100. If we consider a web service that has been classified as Business
Confidential and has Extended Business Day availability, then the Service
Broker will reject requests that lack a password or that are submitted outside
0500 to 2100. Authentication is performed by an appropriate security server.
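A minimal sketch of how a broker might enforce the two example NFRs above (the request structure, function name and hour-based availability check are illustrative assumptions, not our Service Broker implementation):

```python
# Sketch of Service Broker NFR enforcement for the example in the text:
# a Business Confidential service with Extended Business Day availability.
# The request dictionary and hour parameter are illustrative assumptions.

def broker_accepts(request, hour):
    """Return True only if the request meets both security and availability NFRs."""
    # Security NFR: Business Confidential data requires a username/password
    # (cf. the UserNameOverTransport assertion).
    if not request.get("password"):
        return False
    # Availability NFR: Extended Business Day is 0500 to 2100.
    return 5 <= hour < 21
```

Rejected requests would be returned to the requestor with an error; accepted requests are routed on to the application server.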
The paper
describes the research questions we address concerning each of the major
components of the SSDA and is organized as follows. Section 2 describes the
Service Selection Engine and the Common Repository, which stores details of the
web services and related design documents in UDDI and a linked OWL repository.
In section 3 we describe the Dynamic Configuration Manager and its use of
ontologies to store configurable data. In section 4 we describe the Service
Broker and how it is used to perform transformations and administration and to store
NFR data relating to the policies of the offered web service. In section 5 we
conclude the paper and present areas for future work.
Service discovery is envisaged as occurring at design time, providing the
reassurance of establishing a contract that is understood by both the service
requestor and the service provider. It will also allow a trial run of the work
flow, following the establishment of the contract, using a test harness created
by the service provider.
The discovery process
can be summarized as having a number of major steps.
(i) A syntactic match is performed using a standard set of UDDI query APIs
seeking a match on the basic entities associated with the service including the
business entity and business service.
(ii) Further matching is achieved by then considering whether the technical
fingerprint is acceptable. This is done by inspecting the WSDL as stored in the
associated tModel.
(iii) A more exact semantic match (smart service matching) is obtained by
considering the ontology stored in the location indicated (URI) by the
directory entry.
(iv) Further matching can then occur on the non functional requirements (NFRs)
that are stored in the directory as a tModel.
(v) Once a service is selected, a number of tModels with URLs pointing to design
documentation are made available to service consumers.
If the requestor
obtains a satisfactory match, a contract can be established and stored in the
UDDI directory with the associated service. There may be many contracts
relating to a particular service. At a later time a trial implementation could
take place, based on the tModel in the UDDI directory holding details of the test
harness.
2.2 The Structure of the UDDI directory
The structure of the UDDI directory follows the specification and is enhanced
by the use of links to URIs where ontological details of the service can be
stored for smart service matching. The directory holds each businessEntity,
containing many businessService(s), in turn containing many bindingTemplate(s);
these may reference tModels for a full description. Also held are tModels for
the non functional requirements (NFRs) and for the test harness that allows trials of
the workflow implementation.
The NFRs are contained
in an ontology linked to UDDI via the tModels. We have defined an XML Schema
Definition (XSD) to contain categories of policies, and have provided a
description of categories of policy assertions, structured as an ontology, to
provide an extensible policy language suitable for inclusion in UDDI for
consumption in the service discovery process. Our structure is derived from the
requirements of policy languages, which are used to define a class model of the
Policy Schema. We have created a class model of the Policy Categories to store
categories of policy assertions used in both the service selection and service
request processes.
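The Policy Categories class model described above might be sketched as follows (the class and attribute names are illustrative assumptions, not our actual Policy Schema):

```python
# Illustrative sketch of a class model for categories of policy assertions,
# extensible at runtime rather than tied to one policy language.
# Class and field names are assumptions for exposition only.
from dataclasses import dataclass, field

@dataclass
class PolicyAssertion:
    name: str                 # e.g. "UserNameOverTransport"
    parameters: dict = field(default_factory=dict)

@dataclass
class PolicyCategory:
    name: str                 # e.g. "Security", "Availability"
    assertions: list = field(default_factory=list)

    def add(self, assertion):
        # Categories are dynamically extensible: new assertions can be
        # attached as contract negotiation adds requirements.
        self.assertions.append(assertion)

security = PolicyCategory("Security")
security.add(PolicyAssertion("UserNameOverTransport"))
```

Each category instance would be serialised into the ontology linked from a UDDI tModel for consumption during discovery.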
2.4 A Full-featured UDDI Query Language
There is a requirement for a full-featured query language to search UDDI
directories. Capabilities are needed to permit traversal of ontological
information linked to a particular service. In addition, non functional
requirements, again stored in a linked ontology, need to be searched and
matched against a specified query.
3 Dynamic Configuration Manager
A major requirement for highly extensible applications is configurability. That
is, applications must be capable of implementing new products and services,
which may even involve incorporating changes to the workflow through configuration.
To this end we have created a highly extensible architecture. The major
elements of the configurable architecture for a product provisioning
application are:
1. Product or service structure and validation
2. Interfaces to legacy systems
3. Configurable work flows
4. Process Status Information
The following sections describe each of the elements required to support
configurable work flows.
3.1 Product and Service Structure
The XML service request must be simple and generic in order to make additions
to products and services transparent. Simplicity must be maintained even if
this requires extensive remapping to legacy systems. XML elements or setting
values are obtained via a settings value table, through user input, or are
derived using data-driven functionality. Once a product or service is defined
by the service administrator, the WSDL is dynamically reconfigured using this
information. XML element validations are performed using the concept of a
Dynamic Configuration Manager.
Dynamic Configuration Managers
A Dynamic Configuration Manager (DCM) is a data driven function associated with
a level of the service or product hierarchy. DCMs are used when an element of
the hierarchy is able to be data driven, for example, to modify the display
attributes of a setting. DCMs are designed to be reusable. DCMs are used to
perform:
1) field validations
2) field compatibility validations
3) legacy system translation
4) screen display
5) the logging of status information
DCMs are
programmed with any parameters they need to function. The Administrator
associates the DCM with one or more entities as appropriate, via administration
of the appropriate ontology, and the programmer defines where this DCM may be used
in the application. The DCM category (e.g. validation), combined with the entity
it is associated with, specifies the behaviour of this DCM in the application.
For example, the 'Numeric Validation' DCM enables user input for a setting
value to be validated as numeric. When the administrator has associated the
'Numeric Validation' DCM with a setting, the application will apply the
validation DCM when an XML element is parsed. This avoids the need to
constantly update and republish XML any time a product or
service is added or its structure is updated.
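The administrator's association of a validation DCM with a setting might be sketched as follows (the registry structure and function signatures are illustrative assumptions, not the prototype's API):

```python
# Sketch of data-driven DCMs: the administrator associates a DCM with a
# setting (here via a plain registry standing in for the ontology), and the
# application applies every associated DCM when an element is parsed.
# All names and signatures are illustrative assumptions.

DCM_REGISTRY = {}  # setting name -> list of DCM functions

def register_dcm(setting, dcm):
    """Administrator action: associate a DCM with a setting."""
    DCM_REGISTRY.setdefault(setting, []).append(dcm)

def numeric_validation(value, **params):
    """'Numeric Validation' DCM: accept only numeric user input."""
    return value.isdigit()

def validate_element(setting, value):
    """Apply every DCM associated with the setting at parse time."""
    return all(dcm(value) for dcm in DCM_REGISTRY.get(setting, []))

register_dcm("port_number", numeric_validation)
```

Because the association lives in data rather than code, adding a product or changing its structure needs no republishing of the application's interfaces.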
3.2 Interfacing Legacy Systems
In order to allow extensible services, the interface to legacy systems must
also be configurable. Legacy system translation rules are entered via system
administration. The transformation manager first identifies the element to be
mapped, then identifies the appropriate translation rules and applies the
translation.
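A minimal sketch of the transformation manager's identify-then-translate behaviour (the rule format, a field rename plus a value map, is an illustrative assumption):

```python
# Sketch of configurable legacy translation: rules entered via system
# administration map an XML element and its value onto a legacy field.
# The rule table format is an illustrative assumption.

TRANSLATION_RULES = {
    # element name -> (legacy field name, value map)
    "service_type": ("SVC_TYP", {"broadband": "BB", "dialup": "DU"}),
}

def translate_to_legacy(element, value):
    """Identify the element's rule, then apply the translation."""
    legacy_field, value_map = TRANSLATION_RULES[element]
    # Values without an explicit mapping pass through unchanged.
    return legacy_field, value_map.get(value, value)
```

New products then only require new rule entries, not code changes to the interface layer.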
3.3 Configurable Work Flows
BPEL is an extensible workflow-based language, built on XML standards, for defining
business process flow. It aggregates services by composing their interactions
and is based on a recursive aggregation model. The process exposes WSDL
interfaces to the partners that interact with it, and the corresponding
partners may themselves be part of another business process [10]. A workflow-based application
comprises a process model, which co-ordinates the sequence of activities,
and the individual components, which implement the various activities. In a
BPEL environment, the process model is described in BPEL, and the individual
components are the Web services.
Companies no
longer want to be locked into static environments with proprietary tools and
languages. BPEL provides a solution by choreographing Web services for business
processes in the SOA paradigm [10]. However, BPEL assumes all business partners
are Web services and does not provide explicit support for human tasks. Even
though automation of business processes is an important goal, very often there
is a need for users to interact with the BPEL workflow to specify details about
their goal, make choices, enter information or get information about the various
subtasks involved in achieving their goal [11]. Because BPEL was designed to
implement only the collaboration logic, what it offers are fundamental
activities [12].
BPEL
developed in response to the need to orchestrate the existing Web service
technologies into coherent workflows, and it has conceptual foundations in
work on earlier workflow languages [13]. It is mostly used to create static
compositions [14] through the transport protocols specified in the WSDL documents
that describe the Web services. For transport independence in BPEL, a transport
layer separate from the business process is required, so that the transportation logic
is decoupled from the business process. This layer acts as a gateway/adapter
to reach users via the different transport protocols (e.g. FTP, SMTP,
HTTP).
An area of
recent interest is support for a publish-subscribe (pub-sub) model. The
current Web services model is based on binding and directed messaging, which
limits the ability to support asynchronous messaging in a BPEL
process. For example, where a process requires some form of user
input from a Web browser, the process thread is blocked from the time the process
is invoked until a reply is given from the screen, and only then is it
able to proceed. With the pub-sub model, a Web service (event sink) can
subscribe to another Web service (event source) to receive notification
regarding specific events [15]. WS-Eventing [16] and WS-Notification [17] provide
support for the pub-sub model over the web services protocols and address
event-driven systems. The result is looser and more dynamic coupling across
distributed systems, which at the same time enables parallelism and flexibility in
business processes.
The aim of this
project is to design and implement an event-driven BPEL architectural model
that also offers transport independence. The model aims to overcome
problems in coordinating multiple processes asynchronously, and
to arrive at a model that can support a configurable business process
independent of transport issues, thus permitting development of workflows that
focus on the business processes, with the advantages that flow from such a
separation of concerns.
To provide a
logical separation of concerns to enhance loose coupling and reusability, a
multi-tiered architecture will be used. Our approach to this is described
below:
" Business Process Tier which covers only the business logic or business
workflow. This tier is not affected by changes in transportation protocols or
addition of new ones.
" Event Management Tier which handles events generated by Web services.
The event controller can send out notifications of events to services that have
existing subscription with the event system. This layer enables asynchronous
messaging within the business process and so the process thread does not have
to stop and wait at each point an input is required from users.
" Transport Tier which acts as an adapter/gateway to reach the user
depending on the initiating transport protocol. Protocols can be added or
deleted depending on the business needs without affecting other tiers, thus
providing the independence and reusability.
" User Interaction Tier where forms are generated to provide an interface
for the workflow when input is required from the user through different
transport mechanisms.
Our solution
addresses the two main requirements, namely:
- to allow for externalised definitions of Configurable Work Flows, whereby these
definitions govern the workflow at run time and are stored in an ontology
of events, and are therefore easily reconfigurable;
- to handle both an interactive channel and a B2B channel uniformly over a
variety of transport protocols.
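The subscribe-and-notify behaviour of the Event Management Tier can be sketched as follows (the class and method names are illustrative, not WS-Eventing or WS-Notification syntax):

```python
# Sketch of the Event Management Tier: event sinks subscribe to named
# events, and the controller notifies them without the publisher blocking
# on a reply. Names are illustrative assumptions.

class EventController:
    def __init__(self):
        self.subscriptions = {}  # event name -> list of sink callbacks

    def subscribe(self, event, sink):
        """Register an event sink's existing subscription with the system."""
        self.subscriptions.setdefault(event, []).append(sink)

    def publish(self, event, payload):
        """Notify every subscribed sink; the process thread does not wait."""
        for sink in self.subscriptions.get(event, []):
            sink(payload)

received = []
controller = EventController()
controller.subscribe("user_input_ready", received.append)
controller.publish("user_input_ready", {"field": "address"})
```

In the architecture, the sinks would be Web services (or BPEL process instances) rather than in-process callbacks, but the decoupling is the same.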
3.4 Dynamic Process Status Information
As a business process passes from state to state or activity to activity, we
wish to capture that transition in a transaction history log. Our architecture
allows us to dynamically configure status messages when we enter and leave a
process; the SSDA also optionally captures any messages returned from
other processes or legacy applications. Messages can be given a level number in
order to nest them within an activity. These messages may also be used by the
event server in order to change a workflow, and can cause emails to be
generated to a person or group of people who may be affected by a
particular outcome.
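A sketch of the transaction history log with levelled, nestable status messages (the entry structure and function name are illustrative assumptions):

```python
# Sketch of dynamic process status logging: each transition is recorded,
# with a level number nesting messages within an activity.
# The entry structure is an illustrative assumption.

history = []

def log_status(process, message, level=1):
    """Record a status message; level > 1 nests it within the activity."""
    entry = {"process": process, "message": message, "level": level}
    history.append(entry)
    return entry

log_status("provision_order", "entered process")
log_status("provision_order", "legacy reply: OK", level=2)
log_status("provision_order", "left process")
```

The event server could watch this log to trigger workflow changes or notification emails for particular outcomes.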
4 A Quality of Service Aware Service Broker
For web services to fulfill the expectations of requestors, there must be an
approach that allows Non Functional Requirements (NFRs), including those
that relate to Quality of Service (QoS), to be specified and enforced. The
approach suggested here makes use of an ontology to describe the QoS for
service requestors and providers. This is preferred to standardized
QoS specifications as it permits flexibility for individual needs, and contracts
(also referred to as alternatives) can be established based on service
discovery supported by an ontology for NFRs. If needed, ontology matching and
translation is available [18]. Typical QoS requirements might include
throughput, response times, hours of operation and reliability as measured in
failed transactions over the previous period.
The architecture
implemented would consist of a Service Broker that receives requests and
forwards them to the application server (or to another service broker acting
in a symmetrical way) that most successfully meets the QoS requirements. The
Service Broker would validate, forward and monitor requests, as well as
providing error handling relating to the fulfillment of requests. The service
broker is positioned as a gateway with links to the other necessary software in
the architecture: the UDDI registry, the application server of
the provider (or another Service Broker) and, indirectly, the Service
Selection Engine. Assuming all other lower-level gateway issues, such as ISO
transparency issues, are dealt with, the service broker concerns itself
with NFRs, in particular those relating to QoS.
Existing
approaches include the suggested use of policy description languages such as
WSPL [19], KAoS or Rei [20]. These approaches require both the requestor and the
provider to express their policy in the same language, and the simple matching
of language statements can lose semantic meaning. Our suggested approach is
more flexible in that it employs the ontology used by either the requestor or
the provider and will match that semantic information to another ontology, or
simply to some other policy language if that is what the other party has used.
Approaches that
do not employ ontologies, such as WS-Policy, provide a high level way for
service providers and service requestors to define needs. But without an
agreement on how to express the needs (or assertions) the provider and
requestor will not be able to effectively communicate. In our approach the
service broker will allow for an automation of the matching of assertions
between the provider and requestor relating to the NFRs. The automation of the
matching of assertions could either be done beforehand, as a contract
stored in the UDDI registry, or dynamically, with a translation component
that allows the establishment of a new contract. The translation component
would rely on the NFR/QoS ontologies defined by the requestor and by the
provider to carry out the matching task. Error handling
would encompass indicating when a match could not be achieved by the
translation component.
In a typical
process, the functional requirements of the service have already been
established when service discovery occurred earlier, and the required service is
known. The service broker component will establish whether the NFRs for a particular
service are contained in a contract. If the NFRs are not in a contract,
the message will be inspected for NFR assertions and the request will proceed.
(The broker may also redirect a request for the establishment of a contract, if
requested.) The request will then be forwarded to the application server best
satisfying the NFRs of the requested service (or to another service broker
offering a service, in a symmetrical relationship); load balancing may be a
consideration in the choice of application server to satisfy the request.
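The broker's forwarding decision, including load balancing among the servers that satisfy the NFRs, might be sketched as follows (the server and field names are illustrative assumptions):

```python
# Sketch of the broker's forwarding choice: keep only servers satisfying
# all NFRs of the requested service, then break ties by lowest load.
# Field names are illustrative assumptions.

def choose_server(nfrs, servers):
    """Pick the least-loaded application server meeting all NFRs, or None."""
    eligible = [s for s in servers
                if all(s["nfrs"].get(k) == v for k, v in nfrs.items())]
    if not eligible:
        return None  # error handling: no match could be achieved
    return min(eligible, key=lambda s: s["load"])
```

A `None` result corresponds to the error-handling path above, where the broker reports that no match could be achieved.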
The architecture
as described, by incorporating a service broker gateway, permits flexibility and
interoperability, using the UDDI directory as a rich repository together with existing
standards to ensure QoS needs are met in a web services environment.
5 Conclusion
In this paper we have presented a proposal for a Semantic Service Dissemination
Architecture. This architecture is currently being implemented as a number of
proof-of-concept honours-level projects. We have described the three major
components of this architecture, which integrate service discovery and request
processing. These are: 1) the Service Selection Engine, which processes service
discovery requests and is involved in contract negotiation and establishment;
2) the Dynamic Configuration Manager, which configures web services at runtime using
information stored as an ontology in the Common Repository; and 3) the Service Broker,
which validates service requests against contracted requirements. The Service
Broker then forwards validated requests to the appropriate application
server and returns Quality of Service (QoS) statistics to the Service Selection
Engine as evidence of compliance with the QoS parameters agreed in the contract.
The benefits of
this architecture are its flexibility, obtained through the use of configurable web
services and a common repository holding contract and ontological information for
the service and its non functional requirements, used at both service
selection and runtime; its support for automated and manual activities in the
service selection and implementation processes; and its provision of
compliance monitoring through the capture of QoS statistics.
Future activity involves working with a commercial partner to explore further
aspects of the SSDA and to resolve any issues identified in the current projects.
[1] E. M. Maximilien and M. P. Singh, "Toward autonomic web services trust and selection," presented at Proceedings of the 2nd International Conference on Service Oriented Computing (ICSOC), 2004.
[2] E. Brown, "
[3] W. Hoschek, "The Web Service Discovery Architecture," presented
at Conference on High Performance Networking and Computing, Proceedings of the
2002 ACM/IEEE conference on Supercomputing Baltimore, Maryland, USA 2002.
[4] P. Grace, G. S. Blair, and S. Samuel, "A Reflective Framework for Discovery and Interaction in Heterogeneous Mobile Environments," ACM SIGMOBILE Mobile Computing and Communications Review, 2005.
[5] D. Alur, J. Crupi, and D. Malks, Core J2EE Patterns: Best Practices and
Design Strategies, 2 ed: Prentice Hall / Sun Microsystems Press, 2003.
[6] B. Newport, "Requirements for Building Industrial Strength Web Services: The Service Broker," Serverside.com, 2001.
[7] OMG, "CORBA™/IIOP™ Specification," [HREF1], 2006.
[8] OASIS, "UDDI Version 3.0.2," October 2004, [HREF2].
[9] T. Janczuk, "WSE Security: Protect Your Web Services Through the Extensible Policy Framework," MSDN Magazine, Microsoft, Feb 2006, [HREF3].
[10] S. Weerawarana, F. Curbera, F. Leymann, T. Storey, and D. F. Ferguson, Web Services Platform Architecture: SOAP, WSDL, WS-Policy, WS-Addressing, WS-BPEL, WS-Reliable Messaging, and More. Upper Saddle River, NJ: Prentice Hall PTR, 2005.
[11] M. Kloppmann, D. König, F. Leymann, G. Pfau, and D. Roller, "Business process choreography in WebSphere: Combining the power of BPEL and J2EE," IBM Systems Journal, vol. 43, pp. 270-296, 2004.
[12] J. Pasley, "How BPEL and SOA are changing Web services
development," Internet Computing, IEEE, vol. 9, pp. 60-67, 2005.
[13] "The Workflow Management Coalition,[HREF4] ," 2006.
[14] K. Leune, W. J. van den Heuvel, and M. Papazoglou, "Exploring a
multi-faceted framework for SOC: how to develop secure Web-service
interactions?," presented at 14th International Workshop on Research
Issues on Data Engineering: Web Services for e-Commerce and e-Government Applications
2004, 2004.
[15] M. A. Talib, Z. Yang, and Q. M. Ilyas, "A framework towards Web services composition modeling and execution," presented at IEEE EEE05 International Workshop on Business Services Networks, 2005.
[16] S. Vinoski, "Web Services Notifications," Internet Computing,
IEEE, vol. 8, pp. 86-90, 2004.
[17] S. Graham, P. Niblett, D. Chappell et al., "Web Services Notification (WS-Notification)," [HREF5], 2005.
[18] P. Nolan, "Understand WS-Policy processing: Explore Intersection, Merge, and Normalization," 2004.
[19] A. H. Anderson, "An Introduction to the Web Services Policy Language (WSPL)," presented at IEEE International Workshop on Policies for Distributed Systems and Networks, 2004.
[20] B. Parsia, V. Kolovski, and J. Hendler, "Expressing WS Policies in OWL," [HREF6], presented at Policy Management for the Web Workshop, 14th International World Wide Web Conference, 2005.
HREF1
http://www.omg.org/technology/documents/formal/corba_iiop.htm
HREF2
http://uddi.org/pubs/uddi_v3.htm
HREF3
http://msdn.microsoft.com/msdnmag/issues/06/02/WSE30/default.aspx
HREF4
HREF5
http://www-128.ibm.com/developerworks/library/specification/ws-notification/
HREF6
http://www.mindswap.org/papers
Mark Nolan and Robert Redpath, © 2006. The
authors assign to Southern Cross University and other educational and
non-profit institutions a non-exclusive licence to use this document for
personal use and in courses of instruction provided that the article is used in
full and this copyright statement is reproduced. The authors also grant a
non-exclusive licence to Southern Cross University to publish this document in
full on the World Wide Web and on CD-ROM and in printed form with the
conference papers and for the document to be published on mirrors on the World
Wide Web.