Sound advice - blog

Tales from the homeworld

Sat, 2009-Jun-13

The REST Statelessness Constraint

The Statelessness constraint of REST is important, but often poorly understood. It is more restrictive than the SOA statelessness principle and excludes a number of useful communication patterns from a REST architecture.

REST-style Statelessness

Fielding's Dissertation describes the constraint thusly:

We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.

Section 5.3.1 expands on this with the following:

REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability.

The purposes of introducing the statelessness constraint include improvements to visibility, reliability, and scalability. Proxies and other intermediaries are better able to participate in communication patterns that involve self-descriptive stateless messages, server death and failover do not result in session state synchronisation problems, and it is easy to add new servers to handle client load, again without needing to synchronise session state.

REST achieves statelessness through a number of mechanisms:

  1. By designing methods and communication patterns so that they do not require state to be retained server-side after the request.
  2. By designing services that expose capabilities to directly sample and transition server-side state without leaving behind application state.
  3. By "deferring" or passing back state to the client as a message at the end of each request, whenever session state or application state is required.

The downside of statelessness is exposed in that last point: applications that demand that some kind of session state persist beyond the duration of a single request need to have that state sent back to the client as part of the response message. Next time the client wants to issue a request, the state is again transferred to the service and then back to the client.

As noted by Roy, this is a trade-off. More network bandwidth is used in order to achieve visibility, reliability, and scalability benefits for the server side. Other REST constraints such as caching are intended to balance this out so that an acceptable amount of bandwidth is used.

Deferred application state is often in the form of resource identifiers such as URLs. This is an ideal form, because it can be stored indefinitely and easily passed around between clients so other clients can continue the application at a later point in time.

Let's take a simple banking example:

So my client issues a GET request to <http://accounts.example.com/myaccount/transactions>. In my response I get both the current list of transactions, plus a "next" link. This hyperlink is a reference to a resource whose identity incorporates where I am currently up to, in this case it might be <http://accounts.example.com/myaccount/transactions?dtstart=2009-06-11>. This link allows me to unambiguously go back and fetch only new transactions, yet the service doesn't have to remember where I was up to in the transaction list. The dtstart identifier that marks the boundary between the transactions I have seen and those I have not has been deferred back to me as part of the returned message.
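The next-link mechanics above can be sketched in a few lines. This is a minimal, self-contained simulation, not a real client: the response shape is hypothetical, and a fake fetch() stands in for an actual HTTP GET so the example can run on its own.

```python
# Hypothetical canned responses standing in for a real HTTP service.
FAKE_RESPONSES = {
    "http://accounts.example.com/myaccount/transactions": {
        "transactions": ["txn-1", "txn-2"],
        "next": "http://accounts.example.com/myaccount/transactions?dtstart=2009-06-11",
    },
    "http://accounts.example.com/myaccount/transactions?dtstart=2009-06-11": {
        "transactions": [],  # nothing new since the boundary yet
        "next": "http://accounts.example.com/myaccount/transactions?dtstart=2009-06-11",
    },
}

def fetch(url):
    """Stand-in for an HTTP GET; returns the parsed representation."""
    return FAKE_RESPONSES[url]

def sync(url, seen):
    """Fetch new transactions and return the deferred state: the next URL.

    The client, not the service, stores where it is up to. The "next" URL
    is application state handed back as part of each response.
    """
    body = fetch(url)
    seen.extend(body["transactions"])
    return body["next"]

seen = []
bookmark = sync("http://accounts.example.com/myaccount/transactions", seen)
# The bookmark URL can be saved, backed up, or shared; the server keeps
# no record of this client between requests.
bookmark = sync(bookmark, seen)
print(seen)      # ['txn-1', 'txn-2']
print(bookmark)  # the dtstart URL marking the seen/unseen boundary
```

Because the bookmark is just a URL, it survives client restarts and restores from backup, exactly as described below.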

If the client is an accounting program, this will allow me to reconcile my accounts with those of my bank in a nice unambiguous way. If I have to restore my accounts from backup, the backup will have the old URL and will query from the right place. If I lose my data completely I can go back to using the original transaction URL. I can handle services being upgraded, and individual servers being added or failing over. They don't need to know where I was up to until I send my next request.

The gold standard of stateless interaction is that application state is first reduced to its minimum possible set, then any remaining necessary application state is stored with the client and not with the server.

A SOA take on Statelessness

If you read the excellent Principles of Service Design by Thomas Erl you will find statelessness as a key SOA principle. However, the mechanisms described to support statelessness revolve around the use of state databases on the server side. This is not statelessness from a REST perspective, which would see the entire service, including its state database, as "the server side". Deferring state within the service does not meet the stringent REST criteria for this constraint. That is not to say these kinds of databases aren't useful from time to time, but they have no place in a strict REST architecture.

REST Statelessness in detail

Some people will tell you that the statelessness constraint can be worked around by giving the session state a resource identifier on the server side. Doing so certainly could improve the visibility of state, and its addressability through hyperlinks. However, it does not address the core reliability and scalability concerns that the REST constraint seeks to address.

There is a subtlety in the statelessness constraint that can make it difficult to grasp. That subtlety is the difference between application state (or session state) and service state. Application state is the state that is built up as part of a client and server interacting in order to fulfil application requirements. From a SOA perspective this might be seen as the state of a service composition as opposed to the state of an individual service. The difficulty is that you can't just point to state on the server side and say that it must be service state. It could be application state. Certainly taking application state and storing it on the server side does not magically transform the state into service state.

At this point it is probably worth trying to formalise the definition of these two alternative forms of state. I think the best way to do this is to say that service state is the state a service holds independently of the state of any client, while application state is the state being held in some kind of mirroring relationship with a client.

Let's take for example a straightforward PUT request. I generate some state on the client, and through the PUT request I transfer that state (ie intent) to the server side. If the server accepts my request it will update its service state to match my intent. When it returns a response back to me as the client I can forget this state. It has been transferred, and is no longer my responsibility. It is service state, and not application state.
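That hand-over can be sketched as follows. This is a toy in-memory model, with hypothetical resource names, showing only the shape of the exchange: after a successful PUT the state belongs to the service, and the client holds nothing between requests.

```python
# Toy service: a mapping from resource URI to its current state.
service_state = {}

def put(uri, representation):
    """Replace the state behind uri with the client's stated intent."""
    service_state[uri] = representation
    return 200  # success: the transfer of state is complete

# The client generates some state and transfers it with a PUT.
status = put("/accounts/myaccount/nickname", "everyday savings")

# Once the response comes back, the client can discard its copy. The
# service now holds this state independently of any client, so it is
# service state, not application state.
print(status)  # 200
```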

I can do an optimistic lock completely within REST constraints. Say I get an ETag back from an initial GET request. I can now submit a conditional PUT request to update that state only if the ETag matches what I saw in my GET. This is stateless because while the server needs to remember that that particular resource had that particular ETag, and has to remember to update it when its state changes, it doesn't really care about me. How many clients have a reference to this ETag, and therefore hold an optimistic lock? None? One? A thousand? The server doesn't need to keep track. It just has to honour the interface of the conditional PUT request.

One more example of a communication pattern that can be done within the RESTful statelessness constraint is the ability to grab a set of changes to a resource rather than refetch the entire representation. This may at first sound challenging, but consider a service that keeps a buffer of recent changes. It might keep the last hundred changes, or the last hour's worth, or use some other measure. Now, each time a client requests the set of changes from a defined point it can return this set. If the client requests changes that are too old, it can be directed back to the complete representation. Again this is stateless. The server will store the same number of changes and the same set of updates regardless of the number of clients. One? None? A thousand? Again, the server's consumption of limited memory resources is not affected by the number of clients that might be trying to stay synchronised with its resource state.
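The change buffer can be sketched as below. The buffer size and sequence-number scheme are hypothetical; the point is that the buffer is bounded by the server's own policy, not by the number of clients polling it.

```python
from collections import deque

BUFFER_SIZE = 3  # hypothetical policy; could be time-based instead

# Bounded buffer of (sequence number, change) pairs. Its size depends
# only on server policy, never on how many clients are synchronising.
changes = deque(maxlen=BUFFER_SIZE)
seq = 0

def record_change(change):
    global seq
    seq += 1
    changes.append((seq, change))

def changes_since(client_seq):
    """Return changes after client_seq, or None meaning 'refetch everything'.

    The client supplies its own position with each request, so the server
    remembers nothing about any individual client.
    """
    if changes and client_seq < changes[0][0] - 1:
        return None  # too old: direct the client to the full representation
    return [c for s, c in changes if s > client_seq]

for c in ["a", "b", "c", "d"]:
    record_change(c)  # "a" has scrolled out of the bounded buffer

print(changes_since(2))  # ['c', 'd']
print(changes_since(0))  # None -- this client must refetch the resource
```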

Communication patterns prohibited: pub/sub and pessimistic lock

Two communication patterns that are excluded from a REST architecture are pessimistic locking and the publish/subscribe pattern. These are not stateless, and there are two immediate clues that tell us this. Firstly, there is a current client expectation and state that is reflected on the server side between requests. Secondly, the server will have a timeout in both of these cases to deal with a client silently going away. Let's dig into these scenarios a bit further.

A pessimistic lock requests that the service prevent modification of the state behind a resource until the lock is released. This will typically involve some kind of LOCK request being sent to the server, and once the lock is created the client can go about its business before eventually submitting a request with its new intent for the resource or resources. Pessimistic locking is often needed more as systems scale to avoid the race conditions that can result from an optimistic locking approach. Importantly, pessimistic locks are typically the foundation of transactions both across multiple resources exposed by a service and across multiple services. These transactions can make the difference between a particular service composition being possible or impossible.

Unfortunately, the lock maintained by the service demonstrates all of the classic features of application state. The number of locks increases with the number of concurrent clients. Each lock consumes server-side resources and in this case also restricts concurrency between clients. Server-side state and client-side state is synchronised, with both client and service having knowledge of the lock between requests. Finally, the server side will always have to maintain the lock in terms of a time-limited lease. If a client goes away without releasing its lock, the server needs to eventually clean up the lock resources or become unavailable. Giving a pessimistic lock a resource identifier and allowing a DELETE request to be issued to it does not alter these conditions.
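Those classic features can be made concrete in a small sketch. This is an illustration of the problem, not a recommended design: the lock table grows with the number of concurrent clients, and the server must expire leases to recover from clients that silently disappear. The names and lease duration are made up.

```python
import time

LEASE_SECONDS = 30.0  # hypothetical lease; the server must pick something

# One entry per held lock: resource -> (owner, lease expiry time).
# This table grows with the number of concurrent clients -- the telltale
# sign of application state living on the server between requests.
locks = {}

def lock(resource, owner, now=None):
    """Grant or refuse a time-limited pessimistic lock on resource."""
    now = time.monotonic() if now is None else now
    held = locks.get(resource)
    if held and held[1] > now and held[0] != owner:
        return False  # someone else holds a live lease
    locks[resource] = (owner, now + LEASE_SECONDS)
    return True

# Two clients contend for the same resource:
assert lock("/doc", "client-a", now=0.0) is True
assert lock("/doc", "client-b", now=1.0) is False
# If client-a silently goes away, only the lease timeout saves the server:
assert lock("/doc", "client-b", now=31.0) is True
```

Giving the lock a URI and a DELETE method changes none of this accounting; the per-client table and the timeout remain.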

Publish/subscribe carries a similar fate. If I SUBSCRIBE to a resource, I create a subscription state object somewhere within the service that I as a client depend on between requests. The server calling me back is nothing to be concerned about, but the server-side subscription state is application state. Again the number of subscriptions increases with the number of concurrent clients, even between requests. Each subscription consumes server-side resources, server-side state and client-side state are synchronised between requests, and subscriptions will always need to be obtained on a time-limited lease to facilitate server-side cleanup. Roy Fielding explicitly argues against the use of pub/sub on the Web, so you can go and read about this trade-off in his own words.

Bending REST Statelessness

Of all the REST constraints, statelessness and the communication patterns it prohibits grate most strongly at the enterprise scale. This isn't the Internet, where hordes of clients can suddenly appear and knock your service off the Web. There aren't enough clients to warrant extreme attention to scalability. Existing enterprise systems exhibit the banned patterns, lending credence to arguments that statelessness between requests is overly strict in this context.

This argument has merit. REST was designed for the Web, and doesn't particularly have the concerns of service compositions as its core objectives. REST seeks to make things work on the big scale, and for that you need to make some harsh decisions. On the Web it can make sense to require clients to actively poll a server for new updates, rather than maintain subscriptions on their behalf. The cost of extra messages and extra delay is often less than the cost of maintaining per-client state on the server side. The enterprise has a different set of trade-offs that will often shift this balance the other way.

From a REST purist perspective we can tolerate stateful interactions only within the boundaries of a single service or a single client. It is perfectly OK to wrap up non-conforming interactions within a service boundary, so long as the interface it exposes does not include these features. In the same way as a WS-* interface can wrap up a legacy system for simpler and more unified access, a REST interface can wrap up a set of WS-* services for simpler access again and more scalable interactions on the larger scale.

Architects who choose instead to break the REST constraint may certainly do so, but should not claim their architecture is RESTful. It might be REST with some exceptions, and this "with exceptions" approach is probably the right one for most enterprise architectures. Some exceptions even appear on the Web itself. Bear in mind that making exceptions will impact reliability and scalability, but with sufficient money thrown at these problems, only moderately large architectures will not suffer too many adverse effects. As always: understand the consequences of your decisions, and think for yourself.

Conclusion

It is my opinion that constraints will be bent and specifications that bend REST constraints will be more widely accepted as HTTP and REST thinking become more integral parts of the Enterprise. HTTP in the future will have to serve the dual masters of Web and Enterprise, and find the least perverse ways of matching the two. We have already seen one attempt to fork HTTP for the enterprise in the form of the Web Services stack. The lack of synergy between the Web and WS-* has since rebounded back on its authors, and many of us are back at this point of having to decide whether to embrace and extend HTTP or to produce yet another alternative.

I tend to think that both the Web and the Enterprise will be well served by a minimal set of extensions to HTTP that support some features currently common at the enterprise scale. In particular, I see cross-service transactions and publish/subscribe as important in some use cases. I would rather define these in a way that breaks REST but is synergistic with HTTP than start again from scratch. I realise that by adding non-compliant features to HTTP we risk that these will leak out onto the Web in a way that harms the features of Web architecture; however, I see this risk as moderate compared to the potential benefit of bringing the Web and the Enterprise into rough protocol alignment. Whether it is the job of the W3C, IETF, or another body to build these non-conformant synergistic HTTP extensions is not quite clear to me. If this sort of work can't be housed within the IETF, I suspect a new standards body would have to be formed along the lines of OASIS.

Benjamin