Web Services: yesterday's hype or tomorrow's promise?

Madeleine Wright, Lecturer, Department of Computer Science, Rhodes University, PO Box 94, Grahamstown, South Africa, 6140. m.wright@ru.ac.za


The development of web services is at a significant stage, with much attention focused now on Service Oriented Architecture (SOA), which utilizes web services as a messaging system and as a wrapping mechanism for legacy applications. While there is still controversy over the multiplying and sometimes competing specifications which surround the generally recognized web-service standards of SOAP (no longer an acronym) and WSDL (Web Services Description Language), there has been significant uptake of a different model of web service: REST or Representational State Transfer.

This paper examines some of the historical influences underlying the core standards and attempts to explain the reason for recent instances of the adoption of the REST model. The paper suggests that the increasing complexity of the SOAP-based approach may be counter-productive and that the REST model offers a simplicity and an adherence to the architectural principles of the Web that is seen as both practical and appealing, not only for computing systems with large resources but also for mobile devices which currently have fewer resources and more constraints.

The paper concludes with an attempt to predict the future uptake of the two web-service models, in the context of the Web as a system in which simplicity is paramount and over-complexity has often led to failure.

Acknowledgments: the author wishes to thank the Department of Computer Science at Rhodes University, particularly George Wells and Peter Clayton as supervisors of her Masters Thesis, for their support of this research. She also wishes to thank the sponsors of the Centre of Excellence in the Department for their financial support: Business Connexion, Comverse, Telkom, Thrip and Verso Technologies.


I believe we are at a crossroads for the web-services approach to distributed computing. I think there is value to be gained both from looking at what has gone wrong so far and from considering some future directions which may turn out not to be so new after all. Above all, I think web services need to be considered from the viewpoint of the context given to them by their name — Web Services. We all know (or we wouldn't be attending this conference) that the web is the most spectacularly successful means ever of communication and for distributing information resources. We would be foolish to lose sight of that. As I shall proceed to demonstrate, the most successful web services so far have all run over HTTP and it is that model that still holds out the most promise.

Behind web services we have the wreck of the distributed object model and RPC, and still to be addressed is the specification nightmare. Both of these problem areas deserve some expansion in terms of what has gone wrong with them, and I shall attempt to provide that. But I am going to start by giving you my working definition of web services, and for this I am indebted to Newcomer [HREF1], who defines a web service as follows:

It's the action of sending an XML document to a receiving system that understands how to parse the XML and map it to an underlying execution environment, and optionally receiving a reply, also in the form of an XML document… A Web service must exist independently of any programming language's view of it. If it didn't, we would not achieve the benefit of universal interoperability… The whole thing really has to start and end with the XML, not the Java or the C# or the Perl or the Python or the COBOL, SQL, or whatever.

Web services are singular — and in a grammatical sense as well! (Web services is the name given to the technology as well as the plural of a web-service instance.) Newcomer's definition places XML solidly at the heart of web services, regardless of language, platform or even of protocol, and it also broadens the scope in that web services are not tied to any set of specifications. I cannot improve on Newcomer's definition and am in agreement with it, although my focus favours the simplicity of the HTTP protocol and therefore the web aspect of web services.

Before I can move to explain how powerful web services can be, I have to sweep away some detritus. I shall start with RPC (the Remote Procedure Call) and a good diagram to illustrate its deficiencies as a paradigm for web services (redrawn from Thomas et al. [HREF2]).

Sequence Diagram for the Remote Procedure Call

The Remote Procedure Call

Only two years ago, RPC lay at the heart of web services. It was the default mode for most web-service implementations. Only in the last eighteen months has it been generally recognized as a faulty model for distributed services. Its problems are inherent in its function, which is to make transparent the boundaries between local and remote objects, and therefore to hide the very problems that need to be recognized before they can be handled, problems summed up by Deutsch in his famous 8 Fallacies of Distributed Computing [HREF3], which are that: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology doesn't change; there is one administrator; transport cost is zero; and the network is homogeneous.

Half a decade before the term "web service" was first used, Waldo et al. [HREF4] had pointed out that it was impossible to separate the interface of an object from the context in which it was used: "There are fundamental differences between the interactions of distributed objects and the interactions of non-distributed objects. Further, work in distributed object-oriented systems that is based on a model that ignores or denies these differences is doomed to failure, and could easily lead to an industry-wide rejection of the notion of distributed object-based systems."

Prophetic words indeed. With RPC, stubs (or proxies) on both the client and the server act as handlers for the interface between code and the run-time system. This approach conceals the communication process from the programmer. Not only is RPC an attempt to hide the boundaries, but the systems that use it tend to be distributed-object systems, now perceived as the antithesis to the loose coupling that defines web services. Because there must be a run-time binding between client and server, RPC by its nature is not loosely coupled. RPC does not fail gracefully and is not scalable.
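The boundary-hiding that Waldo warns of can be sketched in a few lines. The `FlakyStub` class below is a hypothetical illustration (not any real RPC library): it wraps a procedure so that the call site looks exactly like a local call, with the result that a network failure surfaces in a place where the calling code had no reason to expect one.

```python
import random

class FlakyStub:
    """A stand-in for an RPC client stub: it wraps a 'remote' procedure so
    that invoking it looks like an ordinary local call, hiding the network
    boundary from the programmer."""

    def __init__(self, procedure, failure_rate=0.0):
        self._procedure = procedure
        self._failure_rate = failure_rate

    def __call__(self, *args):
        # A real stub would marshal the arguments, send them over the wire,
        # and unmarshal the reply. Any of those steps can fail in ways a
        # local call never does -- but the call site cannot tell.
        if random.random() < self._failure_rate:
            raise ConnectionError("network partition (invisible at the call site)")
        return self._procedure(*args)

def add(a, b):
    return a + b

remote_add = FlakyStub(add)   # looks exactly like a local function...
print(remote_add(2, 3))       # ...but only behaves like one while the network holds
```

The point of the sketch is that nothing in `remote_add(2, 3)` signals distribution: the latency, partial failure, and concurrency that Deutsch's fallacies describe are all hidden behind an interface indistinguishable from a local one.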

Sun's increasingly ambivalent relationship with RPC might be said to exemplify the gradual shift in attitudes towards RPC that has occurred over the last few years. RPC was defined, not uniquely, by Sun in 1988 [HREF5], partly as a response to the need for platform-independent communication structures that might access file systems across the Internet. 1995 saw the beginnings of a change in emphasis, with an Open Standards version (or version 2) [HREF6] stating:

The intended use of this protocol is for calling remote procedures. Normally, each call message is matched with a reply message. However, the protocol itself is a message-passing protocol with which other (non-procedure call) protocols can be implemented [my italics].

The late 90s saw Sun enthusiastically espousing a version of web services that put XML hand-in-hand with RPC, although within J2EE servers the implementation was founded on RMI or Remote Method Invocation (even more restrictive in its insistence that Java has to be at both ends of the communication). By 2004, however, an article on the Sun Developer Network carried the explanation:

Although JAX-RPC [Java API for XML-based RPC] and its name are based on the RPC model, it offers features that go beyond basic RPC. It is possible to develop web services that pass complete documents and also document fragments [HREF7].

The final rebuttal of RPC for web services came in May last year with the announcement that "JAX-RPC 2.0 has been renamed to JAX-WS 2.0 (Java API for XML-Based Web Services)" [HREF8].

Implementations of web services on the major platforms inevitably lag behind such pronouncements, and scarcely feature even yet, for example, in the world of the Java Micro Edition or J2ME. The Web Services API or JSR 172, for example, is still an optional addition found only on high-end Java-enabled mobiles. But there is hope, in this movement away from RPC, that more realistic approaches will be taken up. Within Visual Studio, Microsoft has for some time implemented asynchronous web services over HTTP and recent implementations of J2EE servers, such as BEA's WebLogic, have also developed asynchronous messaging patterns over HTTP.


I turn now to the other nightmare that has bedevilled (and still bedevils) web services: the specification spaghetti. And why do I call it a spaghetti? Well, take a look at Jeon Jong-Hong's 2005 image [HREF9].

Web Service Specifications, early 2005

The worsening situation was hinted at in 2004 by Bray [HREF10] when he counted the pages of the then-existing specifications and was horrified to discover that they already comprised 783 pages. The further spaghettification can best be illustrated, however, by Jong-Hong's image, appalling in its complexity. It is hard to see how anyone can view such a proliferation of specifications without incredulity. (And that was a snapshot taken over a year ago! He has not updated it to take into account what has happened since then.)

As long as this proliferation of specifications continues in an ad hoc and competing manner, there can be little prospect for either sensible implementations or for the development of skills in the area, and the prospect of another CORBA (or Common Object Request Broker Architecture) awaits us, in which my service cannot talk to yours — probably because both of us have proprietary features which have been added with the aim of vendor lock-in and neither of us is using the same standards.

Thus, the current accumulation of complex specifications not only narrows the general acceptability of web services but also forms a barrier to interoperability in that, when there are competing specifications, it is unlikely that both will be implemented. It is even more unlikely that implementations will support all the possible features — there are just too many of them.

At the present rate, web services will either become the province of the few who can afford the expensive tools to implement them, because mastering the Babel of specifications will be beyond the scope of any normal developer — or the simmering revolution already evident in the ranks of those who work with XML on a daily basis [HREF11] will take hold and another, simpler solution will be found.

An example of the simmering revolution is a column written by Bray, in which he writes: "No matter how hard I try, I still think the WS-* stack is bloated, opaque, and insanely complex. I think it's going to be hard to understand, hard to implement, hard to interoperate, and hard to secure" [HREF12]. This view has also been reflected in the trade press: "Without clear direction on standards, the payoff of the massive industry bet on Web services could be delayed — or derailed — because customers are sitting on the sidelines of a politicized and contentious standards process" [HREF13]. Sun's [then] President admitted last year: "[Web services have] either got to be simplified, or radically rethought… today's web services initiatives are in danger of vastly overcomplicating a very simple (really simple) solution" [HREF14].

Bosworth, former Chief Architect at BEA, the original architect of XML and MS Access at Microsoft, a major contributor to the HTML basis of Internet Explorer, and now employed by Google, said recently:

I'm trying, right now to figure out if there is any real justification for the WS-* standards and even SOAP in the face of the complexity when XML over HTTP works so well? So, I'm kind of a sceptic of the value apart from the toolkits. They do deliver some value, (get a WSDL, instant code to talk to service), but what I'm really thinking about is whether there can't be a much simpler [kinder] way to do this [HREF15].

Developers such as Bray, who were in at the birth of XML, deplore the complexity of the current realization of web services. They designed XML so that message exchange over the internet would be both simple and capable of encapsulating complexity where needed, and they find the specification proliferation irksome at the very least. Bosworth is well positioned to be the spokesman for the simplicity they crave. He cites it as the major benefit of XML over HTTP:

You don't have to worry about any of the complexity of WSDL or WS-TX [Web Services Transactions Project] or WS-CO [Web Services Coordination]. Since most users of SOAP today don't actually use SOAP standards for reliability (too fragmented) or asynchrony (even more so) or even security (too complex), what are they getting from all this complex overhead[?]. …How do you keep it really simple, really lightweight, and really fast[?]. Sure, you can still support the more complex things, but the really useful things may turn out to be simplest ones [HREF16].

The history of software systems to date suggests that rigid, over-elaborate systems (CORBA, for example) do not survive, not only because they do not have the flexibility to adapt to change but also because people do not like to be constrained by them. Web services in their current state seem to be at the mercy of those who want to control and regulate for every possible eventuality, to such an extent that the whole becomes unmanageable, even incomprehensible — worlds away from the creative, innovative spirit that produced the Internet, HTTP and HTML, and later simplified SGML into XML.

Requirements for web services

So where does that leave us? With a requirement for simplicity, interoperability and loose coupling. And how is that to be achieved? Not, if those qualified to pronounce on it are to be believed, with the SOAP stack of web services.

I'm not going to attempt to demolish the usefulness of the SOAP stack, with its sister components of WSDL and, less significantly, UDDI (an acronym for Universal Description, Discovery and Integration), because there is definitely a place for these standards — perhaps on the intranet, along with CORBA, if not, perhaps, over the global web. We all know that SOAP got going before the W3C Schema specification could become a recommendation — and that a few months might have made a world of difference as to whether there was actually a need for WSDL. We all also know that there was a serious attempt to address the concerns of those most closely involved with HTTP in version 1.2 of SOAP, in its inclusion of the HTTP GET verb and also in its inclusion of URIs in HTTP headers. SOAP 1.2 recommends that, where practical, particularly when using the HTTP binding, separate resources should be identified by separate URIs, so that SOAP endpoints fit into the web architecture in the same way as other web accessible resources.

There are definitely situations in which the SOAP stack works, with its potential for features such as encrypted signatures, and there is a place for its usefulness in integrating legacy applications within a corporate intranet.

UDDI is another matter. Unlike SOAP and WSDL, UDDI has never really caught on, mostly, as Lomow and Newcomer argue [2004], because of the unwillingness of companies to enter into transactions with unknown business entities that have not been approved as trading partners. In terms of its current proprietary implementations and lack of relevance beyond the corporate intranet, UDDI is the most CORBA-esque of the three specifications that make up the core SOAP stack and the one most easily cast aside.

What, then, are the core requirements for web services? Essentially the same as those for a loosely-coupled, distributed architecture, as determined by Slama et al. [2004], and described in this table:

A comparison between tight and loose coupling

The table illustrates features mentioned earlier in this talk, such as the asynchronous and dynamically-bound nature of web services, as opposed to the object-oriented, synchronous, statically-bound nature of distributed object technologies with RPC. A further interesting opposition is that between the strong data typing of tightly-coupled systems and the weak data typing of loosely-coupled systems. The loosely-coupled alternative detailed in the table is an illustration of the concept of payload semantics, which do not require a client to change with each addition, unlike systems employing interface semantics, which do require a change in the client and are therefore much less flexible. Slama et al. [2004] point out that, significantly, "RPC-style interaction is typically based on interface semantics. Every procedure call has a meaningful name that indicates its purpose".
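The contrast between interface semantics and payload semantics can be made concrete with a small sketch. The class and element names below are invented for illustration; the point is where the meaning of a request lives — in the method signature, or inside the document:

```python
import xml.etree.ElementTree as ET

# Interface semantics: one named operation per purpose. Adding a new
# operation means a new method on the interface -- and a matching change
# in every client that is statically bound to it.
class InterfaceStyleService:
    def get_quote(self, symbol):
        return {"AMZN": 36.10}.get(symbol)

# Payload semantics: one generic entry point; the meaning of the request
# travels inside the XML document. New message types can be added without
# changing the client's view of the interface.
class PayloadStyleService:
    def process(self, document):
        msg = ET.fromstring(document)
        if msg.tag == "getQuote":
            return {"AMZN": 36.10}.get(msg.findtext("symbol"))
        # unknown message types can be ignored or routed elsewhere,
        # rather than breaking the contract
        return None

svc = PayloadStyleService()
print(svc.process("<getQuote><symbol>AMZN</symbol></getQuote>"))
```

In the payload-style service, introducing a `getHistory` message tomorrow changes nothing about the single `process` entry point, which is precisely the looseness of coupling the table describes.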

If the SOAP stack does not offer us the kind of loosely-coupled simplicity required for web services (and the current state of the specifications cannot be called simple by any stretch of the imagination), where does that leave us? Well, there's probably only one alternative, first named by Roy Fielding, one of the principal authors of HTTP 1.1 and a co-founder of the Apache Software Foundation. It's named REST, or Representational State Transfer.

Representational State Transfer (REST)

Representational State Transfer

This diagram, taken from Hinchcliffe [HREF17], gives an illustration of the kind of message passing used in Representational State Transfer. As you are probably aware, Fielding defines the term as follows [2000]:

Representational State Transfer is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through an application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use.

Prior to the achievement of W3C Recommendation status by SOAP 1.2, there had been a flurry of arguments concerning the advantages or disadvantages of REST over SOAP (with SOAP here representing the whole SOAP stack). The REST camp is still vocal, but I venture to speak in favour of the concept, not with the evangelical fervour of some of its supporters, for whom it is the only route to distributed computing, but from a purely practical standpoint. The odds in favour of REST have stacked up remarkably during the last eighteen months to two years. Not because there have been heated and triumphant discussions of its advantages, but simply because more and more well-known companies are deciding to expose their web services using what they describe as REST interfaces — where plain old XML (or POX, as Obasanjo acronymizes it! [HREF18]) is used over HTTP. Just look at them:

Many less obviously well-known organizations also offer what they term REST interfaces, as any search on Google will reveal. Late 2005 also saw developers, for example, of Python [HREF22] and of the Rails framework [HREF23] showing an interest in REST.

What companies as successful as Amazon and Flickr have to say about the popularity of their REST interfaces is particularly interesting in the light of the conflict that arose in the W3C TAG [Technical Architecture Group] when supporters of SOAP-based web services such as Manes ridiculed REST as a purely academic pursuit:

W3C is, at heart, an academic organization. And its perfectly reasonable for W3C to pursue its academic goals (REST and the Semantic Web). But if W3C wants to play a major role in business systems, and if W3C wants to continue receiving funding from the big software vendors, then the W3C TAG must be willing to [accommodate] the requirements of big business. If the REST faction continues to try to undermine the existing Web services architecture, it will alienate big business [HREF24].

Fielding's response to this posting is as pointed as it is obviously angry, not only in its rebuttal of the idea of REST as a purely academic pursuit but in its placing SOAP in the context of failed distributed-object architectures:

The only reason SOAP remains in the W3C for standardization is because all of the other forums either rejected the concept out of hand or refused to rubber-stamp a poor implementation of a bad idea. If this thing is going to be called Web Services, then I insist that it actually have something to do with the Web. If not, I'd rather have the WS-I group responsible for abusing the marketplace with yet another CORBA/DCOM than have the W3C waste its effort pandering to the whims of marketing consultants [HREF25].

It need hardly be said that the main advantage of REST-based services is that they are completely interoperable. All that is required of the client is that it be able to send information to the web server hosting the service and receive information from it in the simplest language of all: text-based XML. There is no problem with data binding and serialization in terms of the message transmission because there are no objects to be transmitted. There are no language or platform issues, no complex specifications to incorporate, no WS-I Basic Profile to satisfy. (WS-I is an acronym for the Web Services Interoperability Organization. Its Basic Profile is an attempt to standardize what a (SOAP-based) web service may and may not do. Needless to say, the WS-I Profiles are also bedevilled by versioning problems!) Significantly, of course, no toolkits are required to translate innumerable complex specifications into terms a lay person can understand.
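To see just how little machinery a REST-style exchange needs, consider the sketch below: an HTTP library and an XML parser, both of which every mainstream platform now ships, are the entire toolchain. The endpoint URL and element names are hypothetical, invented purely for illustration:

```python
import urllib.request
import xml.etree.ElementTree as ET

def fetch_rate(base, target, endpoint="http://example.org/rates"):
    """Hypothetical REST client: one plain HTTP GET, one XML document back.
    The resource is identified by a URI, exactly as the web architecture
    intends."""
    url = f"{endpoint}?base={base}&target={target}"
    with urllib.request.urlopen(url) as response:
        return parse_rate(response.read())

def parse_rate(document):
    # The whole "data binding" problem reduces to reading element text:
    # there are no objects to deserialize, only a document to walk.
    rate = ET.fromstring(document)
    return float(rate.findtext("value"))

# The server's reply is just text; any client on any platform can read it.
sample_reply = b"<rate><base>USD</base><target>ZAR</target><value>6.05</value></rate>"
print(parse_rate(sample_reply))
```

Nothing here depends on the language at the other end of the wire, which is the interoperability claim in miniature.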

A major advantage lies in the expressive power of XML itself and its core specifications, such as W3C Schema Language or the increasingly popular RELAX-NG. Although WSDL can be used alongside REST, a simpler approach is to standardize on a schema of some kind as the pattern for the message content.
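As an illustration of how little is needed, a hypothetical currency-rate message could be pinned down as the whole contract in a few lines of RELAX-NG compact syntax (the element names here are invented for the example):

```
element rate {
  element base { xsd:string },
  element target { xsd:string },
  element value { xsd:decimal }
}
```

A schema of this kind describes the message itself, not the operations of an endpoint, which is precisely the data-centric stance described above.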

A second major advantage of working directly with XML is that the processes are seen to be data-centric, rather than object-centric, as in the older model for distributed objects.

Thirdly, it is no insignificant advantage for REST-based web-service styles that they are also seen to conform to the principles described in the W3C's recommendation, Architecture of the World Wide Web [HREF26].

Fourthly, there is some evidence that, depending on client implementations, REST has some advantage of speed over SOAP-based services in that it does not carry the data type conversion and greater textual overheads that SOAP usually necessitates [Barr, 2005, email].

It is with these advantages of the REST architecture that I wish to conclude. But before doing so, I would like to place REST in a context to which it is most eminently suited and where it has not yet been significantly explored: the context of mobile devices, where performance and speed can both be adversely affected by unwieldy applications. While, as has been mentioned, top-end mobile phones may include resources for RPC-style web services and the simple handling of XML messages, developments in web-services technologies over the last eighteen months have left them behind. It is possible that Sun's announcement in February, this year [HREF27], of its intention to join forces with Openwave in creating new platforms for mobile devices may signal an intention to address this problem but that remains to be seen.

Of particular interest in South Africa, or more generally the African context, is the availability of mobile phones, which far exceeds that of PCs and laptops. Mobile phones provide unprecedented and still expanding internet access to millions. Services that can be employed as thin clients on mobile phones are only just beginning to be seen and are considered a potential major growth point in Africa. Of further interest in this context is the current availability for Nokia Series 60 phones of the cut-down mobile Apache server, Raccoon, which serves "mobsites" as opposed to websites, and which now offers the possibility that mobile phones may be not only web clients but also web servers.

A Forum Nokia paper entitled, Optimizing the Client/Server Communication for Mobile Applications, Part 3, Version 1.0 [HREF28] illustrates in every respect the advantages of text-based messaging for devices with such limited resources. The paper details a Currency Converter application that was run on different mobile devices with different application styles. While REST was not actually one of the styles explored, and while the overheads of XML would add to the loads recorded for the text-based message passing application, the differences recorded between text-based approaches and those of old-style web services are still significant.

Performance of different message transmission systems
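The scale of the difference is easy to reproduce for oneself. The sketch below compares the byte counts of a plain-text request, a minimal POX document, and an equivalent SOAP envelope for a hypothetical currency-conversion call; the envelope is hand-written for illustration, not generated by any toolkit, and real toolkit output would typically be larger still:

```python
# Three encodings of the same request: convert 100 US dollars to rand.

plain_text = b"convert USD ZAR 100"

pox = b"<convert><from>USD</from><to>ZAR</to><amount>100</amount></convert>"

soap = (b'<?xml version="1.0"?>'
        b'<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
        b'<soap:Body>'
        b'<convert xmlns="http://example.org/converter">'
        b'<from>USD</from><to>ZAR</to><amount>100</amount>'
        b'</convert>'
        b'</soap:Body>'
        b'</soap:Envelope>')

for name, msg in [("plain text", plain_text), ("POX", pox), ("SOAP", soap)]:
    print(f"{name}: {len(msg)} bytes")

# On a device that must parse every byte over a slow, metered link, the
# envelope's overhead is paid on every single message exchanged.
```

The ordering, not the exact numbers, is the point: each layer of wrapping multiplies the cost precisely where resources are scarcest.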

A simple XML-over-HTTP approach here is particularly important because of context, in this case the necessity for thin-client web services on a device with currently very limited memory. Perhaps ultimately it will be context that settles the argument between REST and SOAP-based web services. It was context that ultimately decided the fate of RPC-style web services — the fact that the context of the internet in distributed computing is not the same as the local context. It may be that context will also decide the fate of the specification contests and the efforts of the WS-I organization to be their arbiter, by answering the question of whether each specification has a sufficiently wide context, or applicability, to be worth adopting.

There is a telling joke shared by Shirky, in which context is everything:

Two old men were walking down the street one day, when the first remarked to the second, "Windy, ain't it?"
"No," the second man replied, "It's Thursday."
"Come to think of it," the first man replied, "I am too. Let's get a coke" [HREF29].

Possibly as significant as context is simplicity, which cannot be ignored when considering the likely uptake of a technology. Simplicity is frequently acknowledged as the key to good software design and, despite its inclusion in the early acronymic meaning of SOAP, it is a feature significantly lacking in the popular web-services stack. It may well turn out that web services will repeat the example set by RSS (or Really Simple Syndication), in which those embroiled in arguments over standards failed to realize that the world had decided on the simpler model of RSS 2.0 and moved on, leaving them behind. The advice of Rasmussen of Google Maps not to break the simplicity of the web [2005] is also excellent common sense.

This paper started with the fact that the web was the most successful means ever of communication and distributing information resources. It might be argued that a large measure of this success lay in its simplicity: as the Cluetrain Manifesto humorously explains: "Here's the instruction manual for a web browser: if it's blue and underlined, click on it" [HREF30]. If its simplicity is indeed the secret of its success, then time will tell whether a vendor-pushed system like the SOAP stack, notorious now for its confusing complexity, actually has a future and whether the simpler REST-based approach will outperform it.


Greg Lomow, Eric Newcomer (2004). Understanding SOA with Web Services. Addison Wesley Professional.

Dirk Slama, Dirk Krafzig, Karl Banke (2004). Enterprise SOA: Service-Oriented Architecture Best Practices. Prentice-Hall PRT.

Roy Fielding (2000). Architectural Styles and the Design of Network-based Software Architectures. PhD thesis, University of California, Irvine, Chapter 6.

Jeff Barr (2005). Email to Madeleine Wright.

Lars Rasmussen (2005), address given at Sydney University, cited in ComputerWorld, 14/10/2005.

Hypertext References



Madeleine Wright, © 2006. The author assigns to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author also grants a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.