Sound advice - blog

Tales from the homeworld

Tue, 2011-Dec-06

Best Practices for HTTP API evolvability

REST is the architectural style of the Web, and closely related to REST is the concept of a HTTP API. A HTTP API is a programmer-oriented interface to a specific service, and is known by other names such as a RESTful service contract or a URI Space.

I say closely related because most HTTP APIs do not comply with the uniform interface constraint in its strictest sense, which would demand that the interface be "standard" - or in practice: consistent enough between different services that clients and services can obtain significant network effects. I won't dwell on this!

One thing we know is that these APIs will change, so what can we do at a technical level to deal with these changes as they occur?

The Moving Parts

The main moving parts of a HTTP API are

  1. The generic semantics of methods used in the API, including exceptional conditions and other metadata
  2. The generic semantics of media types used in the API, including any and all schema information
  3. The set of URIs that make up the API, including the specific semantics that each generic method and generic media type takes on when used with those URIs

These parts move at different rates. The set of methods in use tends to change the least. The standard HTTP GET, PUT, DELETE, and POST are sufficient to perform most patterns of interaction that may be required between clients and servers. The set of media types and associated schemas changes at a faster rate. These are less likely to be completely standard, so will often include local jargon that changes at a relatively high rate. The fastest changing component of the API is the detailed definition of what each method and media type combination will do when invoked on the various URLs that make up the service contract itself.

Types of mismatches

For any particular interaction between client and server, the following combinations are possible:

  1. The server and client are both built against a matching version of the API
  2. The server is built against a newer version of the API than the client is
  3. The client is built against a newer version of the API than the server is

In the first case, where the client and server versions match, there is no compatibility issue to deal with. The second case is a backwards-compatibility issue, where the new server must continue to work with old clients, at least until all of the old clients that matter are upgraded or retired.

Although the first two cases are the most common, the standard nature of methods and media types across multiple services means that the third combination is also possible. The client may be built against the latest version of the API, while an old service or an old server may end up processing the request. This is a forwards-compatibility issue, where the old server has to deal with a message that complies with a future version of the API.

Method Evolution

Adding Methods and Status

The addition of a new method may be needed under the uniform interface constraint to support new types of client/server interactions within the architecture. For HTTP these will likely be any type of interaction that inherently breaks one or more other REST constraints, such as the stateless constraint. However, new methods may be introduced for other reasons such as to improve the efficiency of an interaction.

Adding new methods does not impact backwards-compatibility, because old clients will not invoke the new method. It does impact forwards-compatibility because new clients will wish to invoke the new method on old servers. Additionally, changes to existing methods such as adding a new HTTP status code for a new exceptional condition can break backwards-compatibility by returning a message an old client does not understand.

Best Practice 1: Services should return 501 Not Implemented if they do not recognise the method name in a request

Best Practice 2: Clients that use a method that may not be understood by all services yet should handle 501 Not Implemented by choosing an alternative way of invoking the operation, or raising an exception towards their user in the case that no means of invoking the required operation now exists

Best Practice 3: A new method name should be chosen for a method that is not forwards-compatible with any existing method - i.e. a new method name should be chosen if the new features of the method must be understood for the method to be processed correctly (must understand semantics)

These best practice items deal with a new client that makes a request on an old server. If the server doesn't understand the new request method, it responds with a standard exception code that the client can use to switch to fallback logic or raise a specific error to their user. For example:

Client: SUBSCRIBE /foo
Server: 501 Not Implemented
Client: (falling back to a periodic poll) GET /foo
Server: 200 OK


Client: LOCK /foo
Server: 501 Not Implemented
Client: (unable to safely perform its operation, raises an exception)
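The fallback behaviour in the first trace above can be sketched in code. This is a minimal illustration, not a definitive implementation: the SUBSCRIBE method and the polling strategy are assumptions carried over from the example, and a real client would add timeouts and error handling.

```python
import http.client

def fetch_with_subscribe(host, port, path):
    """Try a newer SUBSCRIBE method; fall back to plain GET on 501."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("SUBSCRIBE", path)   # new method an old server may not know
    resp = conn.getresponse()
    if resp.status != 501:
        return resp.status, resp.read()
    resp.read()                       # drain the 501 error body
    conn.close()
    # Old server: fall back to a periodic poll using GET (Best Practice 2)
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", path)
    resp = conn.getresponse()
    return resp.status, resp.read()
```

Against an old server the first request draws 501 Not Implemented and the client silently degrades to polling; against an upgraded server the first response is used directly.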

Best Practice 4: Services should ignore headers, or components of headers, that they do not understand. Proxies should pass such headers and components on without modification.

Best Practice 5: The existing method name should be retained and new headers or components of headers added when a new method is forwards-compatible with an existing method

These best practice items deal with a new client that makes a request on an old server, but the new features of the method are a refinement of the existing method such as a new efficiency improvement. If the server doesn't understand the new nuances of the request it will treat it as if it were the existing legacy request, and although it may perform suboptimally will still produce a correct result.

Best Practice 6: Clients should handle unknown exception codes based on the numeric range they fall within

Best Practice 7: A new status should be assigned a status code within a numeric range that identifies a coarse-grained understanding of the condition that already exists

Best Practice 8: Clients should ignore headers, or components of headers, that they do not understand. Proxies should pass such headers and components on without modification

Best Practice 9: If a new status is a subset of an existing status other than 400 Bad Request or 500 Internal Server Error then refine the meaning of the existing status by adding information to response headers rather than assigning a new status code.

These best practice items deal with a new server sending a new status to the client, such as a new exception.
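Best Practices 6 and 7 can be sketched as a small classifier. This is an illustrative helper, not part of any standard library: it maps an unrecognised status code to the coarse-grained class implied by its numeric range, so a client written before the code existed can still react sensibly.

```python
def classify_status(code):
    """Map any HTTP status code to its coarse-grained class."""
    if 100 <= code < 200:
        return "informational"
    if 200 <= code < 300:
        return "success"
    if 300 <= code < 400:
        return "redirection"
    if 400 <= code < 500:
        return "client-error"   # don't repeat the request unchanged
    if 500 <= code < 600:
        return "server-error"   # the request may succeed if retried later
    raise ValueError("not an HTTP status code: %r" % code)
```

A client that has never heard of, say, status 418 still knows from `classify_status(418)` that it falls in the client-error range and should not blindly retry.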

Removing Methods and Status

Removing an existing method introduces a backwards compatibility problem where clients will continue to request the old method. This has essentially the same behaviour as adding a new method to a client implementation that is not understood by an old service, with the special property that the client is less likely to have correct facilities for dealing with the 501 Not Implemented exception. Thus, methods should be removed with care and only after surveying the population of clients to ensure no ill effects will result.

Removing an existing status within a new client implementation before all server implementations have stopped using the code or variant has similar properties to adding a new status. The same best practice items apply.

Media Type Evolution

Adding Information

Adding information conveyed in media types and their related schemas has an impact on the relationship between the sender of the document and the recipient of the document. Unlike methods and status which are asymmetrical between client and server, media types are generally suitable to travel in either direction as the payload of a request or response. For this reason in this section we won't talk about client and server, but of sender and recipient.

Adding information to the universe of discourse between sender and recipient of documents means either modifying the schema of an existing media type, or introducing a new media type to carry the new information.

Best Practice 10: Document recipients should ignore document content that they do not understand. Proxies and databases should pass this content on without modification.

Best Practice 11: Validation of documents that might fail to meet Best Practice item 10 should only occur if the validation logic is written to the same version of the API as the sender of the document, or a later version of the API

Best Practice 12: If the new information can be added to the schema of an existing media type in alignment with the design objectives of that media type then it should be so added

For XML media types this means that recipients processing a given document should treat unexpected elements and attributes in the document as if they were not present. This includes the validation stage, so an old recipient shouldn't discard a document just because it has new elements in it that were not present at the time its validation logic was designed. The validation logic needs to be:

  1. Performed on the sender side, rather than the recipient side
  2. Performed on the recipient side only if the document indicates a version number that the recipient knows is equal to or older than its validation logic, or
  3. Performed on the recipient side only after it has checked to ensure its validation logic is up to date based on the latest version of the media type specification

With these best practice items in place, new information can be added to media type schemas and to corresponding documents. Old recipients will ignore the new information and new recipients are able to make use of it as appropriate. Note that information can still only be added to schemas in ways consistent with the "ignore" rules of existing recipients. If the ignore rule is to treat unknown attributes and elements as if they do not exist, then new extensions must be in the form of new attributes and elements. If they cannot be made in compliance with the existing ignore rules then the change becomes incompatible as per the next few Best Practice items.
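The must-ignore rule for XML recipients can be sketched as follows. The element names are illustrative: an old recipient queries only the elements it understands, so a document produced against a newer schema version, with extra elements, yields the same result.

```python
import xml.etree.ElementTree as ET

def read_temperature(doc):
    """Extract the one element this (old) recipient understands."""
    root = ET.fromstring(doc)
    value = root.findtext("value")   # known element
    # Any sibling elements added by a later schema version are simply
    # never queried, implementing the "ignore" rule of Best Practice 10.
    return float(value)

old_doc = "<reading><value>21.5</value></reading>"
new_doc = "<reading><value>21.5</value><accuracy>0.1</accuracy></reading>"
```

Both documents parse to the same reading for the old recipient; the newer `accuracy` element is ignored rather than treated as a validation failure.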

Best Practice 13: Clients should support a range of current and recently-superseded media types in response messages, and should always state the media types they can accept as part of the "Accept" header in requests

Best Practice 14: Services should support returning a range of current and recently-superseded media types based on the Accept header supplied by its clients, and should state the actual returned media type in the Content-Type header

Best Practice 15: Clients should always state the media type they have included within any request message in the Content-Type header

Best Practice 16: Services that do not understand the media type supplied in a client request message should return 415 Unsupported Media Type and should include an Accept header stating the types they do support.

Best Practice 17: Clients that see a 415 Unsupported Media Type response should retry their request with a range of current and recently-superseded media types with due heed to the server-supplied Accept header if one is provided, before giving up and raising an exception towards their user.

Content negotiation is the mechanism that HTTP APIs use to make backwards-incompatible media type schema changes. The new media type with the backwards-incompatible changes in its schema is requested by or supplied by new clients. The old media type continues to be requested by and supplied by old clients. It is necessary for recent media types to be supported on the client and server sides until all important corresponding implementations have upgraded to the current set of media types.
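The retry behaviour of Best Practices 15-17 can be sketched for the request direction. The media type names here are hypothetical, and a real client would also honour a server-supplied Accept header rather than simply walking its preference list.

```python
import http.client

# Preference-ordered list: current type first, recently-superseded second.
SUPPORTED_TYPES = ["application/vnd.example.invoice+xml;version=2",
                   "application/vnd.example.invoice+xml;version=1"]

def post_document(host, port, path, bodies):
    """bodies maps each media type to the document serialised in that type."""
    for media_type in SUPPORTED_TYPES:
        conn = http.client.HTTPConnection(host, port)
        conn.request("POST", path, bodies[media_type],
                     {"Content-Type": media_type})
        resp = conn.getresponse()
        data = resp.read()
        conn.close()
        if resp.status != 415:   # accepted, or failed for an unrelated reason
            return resp.status, data
    raise RuntimeError("no supported media type accepted by the server")
```

A new client talking to an old service gets 415 for the version 2 type, retries with the superseded version 1 type, and succeeds; an upgraded service accepts the first attempt.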

Removing Information

Removing information from media types is generally a backwards-incompatible change. It can be done with care by deprecating the information over time until no important implementations continue to depend upon it. Often the reason for removal is that the information has been superseded by a newer form elsewhere, which will have resulted in information being added in the form of a new media type that supersedes one or more existing types.

URI Space Evolution

Adding Resources or Capabilities

Adding a resource is a service-specific thing to do. No longer are we dealing with a generic method or media type, but a specific URL with specific semantics when used with the various generic methods. Some people think of the URI space as something that is defined in a tree separate to the semantics of operations upon those resources. I tend to take a very server-centric view, thinking of it as a service contract.

Adding new URIs (or more generally, URI Templates) to a service, or adding new methods to be supported for an existing URI, does not introduce any compatibility issues. This is because each service is free to structure its resource identifiers in any way it sees fit, so long as clients don't start embedding (too many) URI templates into their logic. Instead, clients should use hyperlinks to feel their way around a particular service's URI space wherever possible.

However, this can still become a compatibility issue between instances of a service. If it takes 30 minutes to deploy an update to all servers worldwide then there may well be clients out there that are flip-flopping between an upgraded server and an old server from one request to the next. This could lead to a client being directed to use the new resources, but having its request end up at a server that does not support the new request yet. The best way to deal with this is likely to be to split the client population between new users and old users, and migrate them incrementally from one pool to the other as more servers are upgraded and can cope with increased new-pool membership. This can be done with specialised proxies or load balancers in front of the main application servers, and can be signalled in a number of ways, such as by returning a cookie that indicates which pool the client is currently a member of. Each new request will continue to state which pool the client is a member of, allowing it to be pinned to the upgraded set of servers. Alternatively, the transition could be made based on other identifying data such as ranges of client IP addresses.

Best Practice 18: Clients should support cookies, or a similar mechanism

Best Practice 19: Services should keep track of whether a client should be pinned to old servers or new servers during an upgrade using cookies, or a similar mechanism
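The pinning decision a front-end proxy or load balancer would make can be sketched as below. The cookie name, pool names, and hash-based assignment are all illustrative choices, not part of any standard.

```python
def choose_pool(headers, upgraded_fraction, client_hash):
    """Return (pool, set_cookie) for one request during a rolling upgrade.

    headers           - dict of request headers
    upgraded_fraction - share of the fleet already upgraded, 0.0 to 1.0
    client_hash       - any stable per-client integer (e.g. hashed IP)
    """
    cookie = headers.get("Cookie", "")
    # Already-pinned clients stay where they are (Best Practice 19)
    if "pool=new" in cookie:
        return "new", None
    if "pool=old" in cookie:
        return "old", None
    # First visit: assign in proportion to how much of the fleet is upgraded
    pool = "new" if (client_hash % 100) < upgraded_fraction * 100 else "old"
    return pool, "pool=" + pool   # value for a Set-Cookie header
```

As `upgraded_fraction` grows from 0 toward 1, newly arriving clients are steered to the upgraded pool, while every client that has already received a cookie keeps hitting a consistent pool.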

Replacing Resources or Capabilities

Often as a URI space grows to meet changing demands, it will need to be substantially redesigned. When this occurs we will want to tear up the old URLs and cleanly lay down the new ones. However, we're still stuck with those old clients bugging us to deal with their requests. We still have to support them or automatically migrate them. The most straightforward way to do this is with redirection.

Best Practice 20: Clients should follow redirection status responses from the server, even when they are not in response to HEAD or GET requests

Best Practice 21: When redesigning a URL space, ensure that new URLs exist that have the same semantics as old URLs, and redirect from old to new.

RFC2616 has some unfortunate wording that says clients MUST NOT follow redirection responses unless the request was HEAD or GET. This is harmful and wrong. If the server redirects to a URL that doesn't have the same semantics as the old URL then you have the right to bash their door in and demand an apology, but this redirection feature is the only feature that exists for automated clients to continue working across reorganisations of the URI space. It is madness for the standard to try and step in and stop such a useful feature from working.

By supporting all of the 2616 redirection codes, clients ensure that they offer the server full support in migrating from old to new URI spaces.
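A redirect-following client along the lines of Best Practices 20 and 21 can be sketched as follows. This is a minimal illustration that handles only the common 3xx codes with a Location header and caps the redirect depth; a production client would do considerably more.

```python
import http.client
from urllib.parse import urlsplit

def request_following_redirects(method, url, body=None, max_hops=5):
    """Issue a request, following redirects even for non-GET methods."""
    for _ in range(max_hops):
        parts = urlsplit(url)
        conn = http.client.HTTPConnection(parts.netloc)
        conn.request(method, parts.path or "/", body)
        resp = conp = conn.getresponse()
        location = resp.getheader("Location")
        if resp.status in (301, 302, 307, 308) and location:
            resp.read()          # drain before reusing the socket machinery
            conn.close()
            url = location       # retry the same method at the new URL
            continue
        return resp
    raise RuntimeError("too many redirects")
```

With this in place a PUT against a retired URL is transparently replayed against its replacement, so the URI-space redesign is invisible to the client's caller.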


I have outlined some of the key best practice items for dealing with API changes in a forwards-compatible and backwards-compatible way for methods, media types, and specific service contracts. I have not covered the actual content of these protocol elements, which depend on other abstraction principles to minimise coupling and avoid the need for interface change. If there is anything you feel I have missed at this technical level, please leave a comment. At some stage I'll probably get around to including any glaring omissions into the main article text.

Thanks guys!


Wed, 2011-Feb-09

Jargon in REST

Merriam-Webster defines jargon as the technical terminology or characteristic idiom of a special activity or group. In service-oriented or REST-style architecture there are really two different levels where jargon appears:

Jargon (within a service inventory)
Methods, patterns of client/server interaction, media types, and elements thereof that are only used by a small number of services and consumers.
Jargon (between service inventories)
Methods, patterns of client/server interaction, media types, and elements thereof that are only used by a small number of service inventories.

Jargon has both positive and negative connotations. By speaking jargon between a service and its consumers the service is able to offer specific information semantics that may be needed in particular contexts. The service may be able to offer more efficient or otherwise more effective interactions between itself and its consumers. These are positive features. In contrast there is the downside of jargon: It is no longer possible to reuse or dynamically recompose service consumers with other services over time. More development effort is required to deal with custom interactions and media types. Return on investment is reduced, and the cost of keeping the lights on is increased.

Agility is one property that can both be increased and reduced through use of jargon. An agile project can quickly come along and build the features they need without propagating these new features to the whole service inventory. In the short term this increases agility. However, the failure to reuse more general vocabulary between services and consumers means that generic logic that would normally be available to support communication between services and consumers is necessarily missing. Over the long term this reduces the agility of the business in delivering new functionality.

The REST uniform interface constraint is a specific guard against jargon. It sets the benchmark high: All services must express their unique capabilities in terms of a uniform contract composed of methods, media types, and resource identifier syntax. Service contracts in REST are transformed into tuples of (resource identifier template, method, media types, and supporting documentation). Service consumers take specific steps to decouple themselves from knowledge of the individual service contracts and instead increase their coupling on the uniform contract instead.

However, a uniform contract that contains significant amounts of jargon defeats the uniform interface constraint. At one level we could suggest that the world should look just like the HTML web, where everyone uses the same media types with the same low-level semantics of "something that can be rendered for a human to understand". I would suggest that a business IT environment demands a somewhat more subtle interpretation than that.

That the set of methods and interactions used in a service inventory should be standard and widely used across that service inventory is relatively easy to argue. Each such interaction describes a way of moving information around in the inventory, and there are really not that many ways that information needs to be able to move from one place to another. Once you have covered fetch, store, and destroy you can combine these interactions with the business context embodied in a given URL to communicate most information that you would want to communicate.

The set of media types adds more of a challenge, especially in a highly automated environment. It is important for all services to exchange information in a format that preserves sufficiently-precise machine-readable semantics for its recipients to use without guessing. There are far more necessary kinds of information in the world than there are necessary ways of moving information around, so we are always going to see a need for more media types than methods when machines get involved.

The challenges for architects when dealing with jargon in their uniform contracts are to:

  1. Ensure that the most widely used and understood media type available is used to encode a particular kind of information, at least as an alternative supported by content negotiation. This significantly reduces coupling between services within an inventory and between service inventories as they each come to increase coupling on independently-defined standards instead of their own custom jargon.
  2. Ensure that the semantics of jargon methods and media types are no more precise than required, to maximise reusability. In particular, if the required semantics for a field are "a human can read it" then no further special schema is required. This approach significantly reduces coupling between sender and recipient because the recipient does not have to do any custom decoding and formatting of data before presenting it to the human. Changes to the type of information presented to the user can be made without modifying the recipient's logic.
  3. Every new method, interaction, media type, link relation, or any other facet of communication begins its life as jargon. Warnings against jargon should not amount to a ban on new features of communication. When jargon is required, set about on a strategy to promote the new jargon to maximise its acceptance and use both within a service inventory and between service inventories.
  4. Feed back experience from discovered media types into information modelling and high level service design processes to maximise alignment between required and available semantics. For example, vcard data structures can be adopted as the basis for user information within data models used by services.

Only by increasing the quality of agreements and understanding between humans can our machines come to communicate more effectively and with reduced effort. It is the task of humans to reduce the jargon that exists in our agreements, to increase our coupling to independently-defined communication facets, and to reduce our coupling to service-specific or inventory-specific facets.


Wed, 2011-Jan-12

B2B Applications for REST's Uniform Contract constraint

The REST uniform interface constraint (or uniform contract constraint) requires that service capabilities be expressed in a way that is "standard" or consistent across a given context such as a service inventory. Instead of defining a service contract in terms of special purpose methods and parameter lists only understood by that particular service, we want to build up a service contract that leverages methods and media types that are abstracted away from any specific business context. REST-compliant service contracts are defined as collections of lightweight unique "resource" endpoints that express the service's unique capabilities through these uniform methods and media types.

To take a very simple example, consider how many places in your service inventory demand that a service consumer fetch or store a simple type such as an integer. Of course the business context of that interaction is critical to understanding what the request is about, but there is a portion of this interaction that can be abstracted away from a specific business context in order to loosen coupling and increase reuse. Let's say that we had a technical contract that didn't specifically say "read the value of the temperature sensor in server room A", or "getServerRoomATemperature: Temperature", but instead spoke only of the type of interaction being performed and the kind of data being exchanged. Say: "read a temperature sensor value" or "GET: Temperature".

What this would allow us to do is to have a collection of lightweight sensor services that we could read temperature from using the same uniform contract. The specific service we decided to send our requests to would provide the business context to determine exactly which sensor we intended to read from. Moreover, new sensors could be added over time and old ones retired without changing the uniform interface. After all, that particular business context has been abstracted out of the uniform contract.
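The sensor example can be sketched in a few lines. The URLs and the in-memory stand-in for HTTP GET are purely illustrative; the point is that the read function knows nothing about which sensor it is reading.

```python
def read_sensor(fetch, url):
    """fetch is any callable implementing a uniform GET, e.g. over HTTP."""
    return float(fetch(url))

# Stand-in for the network: each URL carries the business context.
sensors = {
    "http://ops.example.com/sensors/server-room-a/temperature": "21.5",
    "http://ops.example.com/sensors/server-room-b/temperature": "19.0",
}

# New sensors can be added to the dict (or the network) without touching
# read_sensor, mirroring how the uniform contract absorbs change.
readings = {url: read_sensor(sensors.get, url) for url in sensors}
```

Retiring server room B or adding a server room C changes only the set of URLs, never the interaction logic.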

This is very much how the REST uniform contract constraint works both in theory and in practice. We end up with a uniform contract composed of three individual elements: The syntax for "resource" or lightweight service endpoint identifiers, the set of methods or types of common interactions between services and their consumers, and the set of media types or schemas that are common types or information sets that are exchanged between services and their consumers. By building up a uniform contract that focuses on the what of the interaction, free from the business context "why" we are free to reuse the interface in multiple different business contexts. This in turn allows us to reuse service consumers and middleware just as effectively as we reuse services, and to compose and recompose service compositions at runtime without modification to message processing logic and without the need for adaptor logic.

On the web we see the uniform contract constraint working clearly with various debugging and mashup tools, as well as in the browser itself. A browser is able to navigate from service to service during the course of a single user session, is able to discover and exploit these services at runtime, and is able to dynamically build and rebuild different service compositions as its user sees fit. The browser does not have to be rebuilt or redeployed when new services come along. The uniform interface's focus on what interaction needs to occur and on what kind of information needs to be transferred ensures that the services the browser visits along the way are able to be interacted with correctly with the individual URLs providing all of the business context required by the browser and service alike.

When we talk about service-orientation and service inventories, we move into a world with a different set of optimisations than that of the Web. There will clearly be cases where the uniform interface constraint significantly reduces complexity. Maybe we have a generic dashboard application. Maybe we have a generic data mining application. By interacting with different services and different capabilities using the same types of interaction and the same types of data, these kinds of service consumers are significantly simplified and the robustness of the architecture as a whole can improve. However, we start to run into some questions about the applicability of the constraint when we reach entity services within a true service-oriented architecture.

One of the key properties of a well-run SOA is that service logic and data housing are normalised. We usually end up with a layer of services that capture different kinds of important business entities and the operations that are legal to perform on these entities. Along with many of these entities we can expect to find special schemas or media types that correspond to them: an invoice type for an invoice service, a customer type for a customer service, a timetable type for a timetable service, and so on.

As each normalised service introduces its own new media types, the uniform contract constraint can quickly retreat. If we are talking about invoices then we are probably talking to the invoice service. If we are talking to the invoice service, and this is the only service that knows about invoices, then what other services are we supposed to have a uniform interface with exactly?

To me there are two basic answers to this. The first is that entity services are not the whole story. There is generally a layer of task services that sit on top of these entity services that will also need to talk about invoices and other entity types. Sharing a common interface between these task services will significantly increase the runtime flexibility of service compositions in most businesses. The second answer is that the uniform contract constraint is particularly applicable when service denormalisation does occur. This may occur within businesses through various accidents of history, but almost certainly will occur between businesses or between significant sectors of a business that operate their own independent service inventories.

Service-orientation generally ends at a service inventory boundary. Sure we have patterns like domain inventory where we all try to get together and play nicely to ensure that service consumers can be written effectively against a collection of service inventories... but ownership becomes a major issue when you start to get different businesses or parts of a business that compete with each other at one level or another. If I am in competition with you, there is no way that your services and my services can become normalised. They will forever overlap in the functionality that we compete against each other in or with. This is where a uniform contract approach can aid service consumer development significantly, especially where elements of the uniform contract of a given service inventory are common to related inventories or comply with broader standards.

Consider the case where we want to have a service consumer do automatic restocking of parts from a range of approved suppliers. Our service consumer will certainly be easier to write and easier to deal with if the interface to supplier A's service is the same as the interface to supplier B's service. Such an interface will be free of the business context of whether we are talking to supplier A or supplier B, and instead will focus on the type of interaction we want to have with each service and the type of information we want to exchange with the service. Moreover, once this uniform interface is in place we can add supplier C at minimal cost to us so long as they comply with the same interface.
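The restocking consumer can be sketched as a loop over supplier endpoints. The supplier URLs and the `order` function are hypothetical; what matters is that adding supplier C means adding one URL to a list, not writing new message processing logic.

```python
def restock(order, supplier_urls, part, quantity):
    """Place an order with the first supplier able to fill it.

    order is any callable implementing the uniform ordering interaction:
    order(supplier_url, part, quantity) -> True if the order was accepted.
    """
    for supplier_url in supplier_urls:   # same interaction for every supplier
        if order(supplier_url, part, quantity):
            return supplier_url          # first supplier that accepted
    return None                          # no approved supplier could fill it
```

Because the interface is uniform, the consumer's logic never branches on which supplier it is talking to; the URL alone carries that business context.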

The uniform contract and the marketplace build each other up in a virtuous cycle, and eventually we see a tipping point as we saw on the early Web where the cost of adding support for the interface to services and to consumers falls drastically compared to the value of participating in the marketplace. The more people use a HTTP "GET" request to fetch data, the easier and more valuable it becomes to add support for that request to services and consumers. The more people use an HTML format to exchange human-readable data, the easier and more valuable it becomes to add support for that type of data to services and consumers. The same is true for more special-purpose media types and even for more special-purpose interaction types.

At another level, consider the problem of trying to keep customer records up to date. Rather than trying to maintain our own database of customer details, what if we could fetch the data directly from a service that the customer owned or operated whenever we needed it? Again, this sort of interaction would benefit from having a uniform contract in place. Our service consumer may itself be our customer service, doing a periodic scrape of relevant data, but whatever form that consumer takes it is valuable for us to be able to grab that data over a uniform interface to avoid needing to develop special message processing logic for each customer we wanted data from. Likewise, it could become valuable enough to have one of these services that the customer would provide it for all of their suppliers. Having one interface in this case benefits the customer as well in not having to support a different interface for each of various suppliers.

The REST uniform contract constraint sets the bar of interoperability high: It sets the bar right where it is needed to select which service to interact with at runtime based on the appropriate business context. This is the right level to start to build up marketplaces for valuable services. It is also careful to separate the interaction part of the uniform contract from the media type part of the contract. This allows the separate reuse of each, and significantly increases the evolvability and backwards-compatibility of these interfaces.

While classical service-orientation at least in theory puts a limit on how valuable the REST uniform contract constraint can be, the real world's denormalised inventories and business-to-business scenarios put a premium on the use of the uniform contract pattern and on related patterns. In turn the uniform contract constraint puts the burden on people to come to agree on the kinds of interaction they wish to support, and the kinds of information they wish to exchange, so that machines are able to exchange that information without the need for excessive transformation and adaptor logic.


Sun, 2010-Oct-03

REST service discovery

Service-orientation is a discipline within computer science that seeks to maximise business value firstly by encapsulating reusable logic and data within well-defined and easily accessible services, and then secondly by actually reusing these services. A service-oriented architecture distinguishes between logic that will not be reused (application logic) and logic that is reusable (service logic). Service logic is reused from application to application as business needs change to avoid reinvention of the wheel, reduce time to market for new applications, reduce the maintenance costs associated with having redundant implementations of reusable logic, and increase the return on investment of reusable logic by amortising costs over a series of applications.

Service-orientation is set up in opposition to traditional silo mentalities of software development where application development teams do not talk to each other across an organisation and where each new application is built up often completely from scratch based on whatever third-party or language framework tools are available.

A major focus of the REST architectural style is on building a uniform contract that decouples consumers from individual service contracts. What service-oriented folks see as loose coupling ("I'm depending on a service's contract, but I have abstracted it to a degree where the service is able to evolve and even change technology foundations without breaking the contract"), REST folk see as tight coupling ("Your consumer can only talk to a single service?? You have to rewrite your consumer each time you swap one business for another within your supply chain??").

REST would have us depend only on a uniform contract in order to access the capabilities of a service. Consumers at design time should not know the details of any particular service contract. Instead, they should be written against the uniform contract based on a generic conceptualisation of the capabilities a service might offer.

The classic example of this approach can be found in the Web browser. A user types in a browser address, then navigates from page to page regardless of what service is being offered. The browser doesn't have to be rewritten in order to navigate from yahoo to google, or from facebook to the library of congress. Defining a uniform contract means that the technology part of the equation - the web browser - is able to interact correctly with whatever it is directed to interact with. If the browser knew the contract of the services it was dealing with ahead of time it would become tightly coupled to these services and browsing as we know it would end.

But let's take another look at this vision of browsing. Some REST proponents would suggest that we have eliminated the service contract in this model, substituting for it a reusable uniform contract shared by all services on the Web. I disagree with this notion entirely, and say that the service contract remains. It is captured in the individual unique sets of URLs each service offers that expose its service capabilities. "getGoogleSearchForm(): SearchForm" has simply been substituted for "GET text/html". The same underlying capability continues to exist and still has a unique form of addressing available to invoke it.

We can take this view further: Say that I deduce from the search form that I can submit a particular query with a "GET" request. What would happen if this resource stopped working or changed its semantics? Well, my operation certainly wouldn't work. This is a contract that if broken will have significant impacts on service consumers. This is the service contract.

So we can take a step back, and say that while service contracts still exist in REST we discover these at runtime. Service consumers don't hard-code knowledge of these capabilities at design time, do they?

Well, in some cases they do. Web browsers these days will almost certainly be hard-coded with knowledge of the URL they need to upgrade themselves when they are out of date. When a web browser is installed on my computer it is almost certainly going to have a "home page" and a set of default bookmarks to access. In fact, quite a bit of contract information is built into a web browser at design time. Much of this can be tailored or replaced at runtime, but nevertheless it is there.

It is this observation that is perhaps most powerful in determining how we should publish the contracts of our RESTful services, and how consumers that are more automated than a typical web browser should discover services and their contract. Discovery of services by developers of automated consumers is critical to the goals of service-orientation in achieving actual reuse of service logic. If the developer of a new application is unable to discover existing service logic and learn effectively how to use it then it is doomed to be reinvented and the silo mentalities continue within your organisation.

REST Service Discovery

The figure above shows a basic model for discovering services within a REST context. We start out with the publication of each service's contract. This is essential in order to achieve reuse of service logic. This doesn't necessarily have to be the whole contract, but has to be sufficient for any service consumer that is previously unequipped with service URLs to discover any other URLs it may need to access over its application lifespan. Key elements of the published service contract for each service capability will be:

  1. The uniform contract method that will be used to invoke the service capability (e.g. GET or PUT)
  2. The URI Template to be used in invoking the capability (e.g. a template containing a {query} variable)
  3. The uniform contract media type alternatives that are supported for the capability (e.g. text/html). Multiple alternatives may be present, using content negotiation to select the most appropriate form for information exchange in any given interaction
  4. Sufficient human-readable semantic information for the application developer to understand the capability and validate it as applicable to their needs
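These four elements could be captured as a simple machine-readable registry entry. A minimal Python sketch follows; the capability, template, and media types are all invented for illustration and are not part of any real registry format:

```python
# A published capability description: uniform method, URI template,
# supported media types, and human-readable documentation.
# Every value here is a hypothetical example.
capability = {
    "method": "GET",
    "uri_template": "/invoice/{invoice_id}",
    "media_types": ["application/xml", "text/html"],
    "doc": "Retrieve the invoice with the given identifier",
}

def expand(template: str, **params: str) -> str:
    """Fill a URI template's {name} placeholders with concrete values."""
    uri = template
    for name, value in params.items():
        uri = uri.replace("{" + name + "}", value)
    return uri

print(expand(capability["uri_template"], invoice_id="42"))  # → /invoice/42
```

A consumer equipped with this entry knows which method to invoke, which identifiers to construct, and which representations to expect, without any service-specific message processing logic.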

The next step is performed by the application developer. The developer would scan a registry of services and related capabilities in order to find one or more that will reduce the cost of developing their application.

Steps 3 and 4 can occur in parallel. (3) is reasonably easy to understand. It is figuring out how to use the uniform contract correctly within the service consumer. In a perfect RESTful world this might be the only step between these two that we would take. It involves studying the specifications for uniform contract elements used by this service and determining how to build the logic of the service consumer based on these specifications. If we find a blog service we might determine that we are going to use GET requests to the service's URLs and make sure we are able to process application/atom+xml responses in return. We might choose to understand a subset of link types in order to discover related URLs and do things like fetch the contents of podcasts and vodcasts on behalf of our user.

By depending only on the uniform contract while determining these key details we are able to remain loosely coupled with the service we initially discovered. If another service happens to reuse those same uniform contract details our consumer can use the capabilities of that other service without needing to change its logic at all. It just needs to select different URLs to interact with. When our consumer follows links it knows how to understand, it doesn't care what service is actually offering the links. It just navigates from resource to resource using the uniform contract as its guide to how to interact correctly.
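That reuse can be sketched in code. In this hypothetical Python example the consumer's logic depends only on a link relation drawn from the uniform contract, never on which service produced the representation; all URIs and representations here are invented:

```python
# 'fetch' stands in for an HTTP GET that returns a parsed representation;
# here it reads from a canned map of two unrelated services' resources.
REPRESENTATIONS = {
    "/feed": {"links": {"enclosure": "/podcast/1.mp3"}, "title": "Service A feed"},
    "/other-feed": {"links": {"enclosure": "/podcast/9.mp3"}, "title": "Service B feed"},
}

def fetch(uri):
    return REPRESENTATIONS[uri]

def find_enclosure(start_uri):
    """Follow the 'enclosure' link relation from any starting resource.
    The consumer logic is identical whichever service served the data."""
    representation = fetch(start_uri)
    return representation["links"].get("enclosure")

print(find_enclosure("/feed"))        # one service
print(find_enclosure("/other-feed"))  # a different service, same consumer logic
```

Swapping services means handing the consumer a different starting URL, not rewriting it.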

Step (4) kicks in wherever the uniform contract detail is insufficient to base our service consumer upon. Typically this step does not specify how to build the logic of the service consumer, as step (3) did. Instead, it defines what the configuration file for our service consumer will look like. It preconfigures the consumer with a set of resource identifiers, and sufficient semantics to figure out what to do with these configured URLs. Typical web browser examples include the "home page" - the page you open when you start - and "bookmarks" - the links and titles to put in the bookmark folder for the user to select. In a more automated world these URLs might be attached to semantics such as "source of data for your reports", "resource to check to see whether operation xxx is permitted", or "place to store to when you want to launch your nuclear missiles". The semantics of these URLs embedded within the configuration file are abstractions of the semantics of the service contracts they came from. The more of an abstraction from the service contract they are, the more loosely coupled this service consumer will be from the service. In REST we are seeking to minimise this coupling wherever it could appear, and to maximise dependency on uniform contract elements instead.
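Such a configuration file might look like the following sketch. The format and every URL here are assumptions for illustration, not any standard:

```python
import json

# Hypothetical consumer configuration: URLs paired with abstract
# semantics, kept out of the consumer's compiled logic.
CONFIG_TEXT = """
{
  "home-page": "https://example.com/",
  "report-data-source": "https://example.com/sales/current",
  "bookmarks": {
    "Invoices": "https://example.com/invoices/"
  }
}
"""

config = json.loads(CONFIG_TEXT)

def url_for(semantic: str):
    """Look up a URL by its abstract role. Swapping one service for
    another means editing this configuration, not the consumer's code."""
    return config[semantic]

print(url_for("report-data-source"))
```

The abstraction lives in the semantic keys: the consumer depends on "report-data-source" meaning something, not on any particular service's URL structure.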

When we have completed (3) and (4) we move on to implementing the service consumer. We should be able to do this based on the briefs produced in analysing the service contract and its related uniform contract elements. Developers of the service consumer should not have to directly reference the service contract itself. They should especially avoid embedding specific URLs or URL templates that have not been through some process of review and abstraction prior to implementation. In a REST-style SOA services are discoverable, but consumers are able to freely navigate and be reconfigured to interact with the capabilities of other services as required.


Fri, 2010-Aug-13

REST-compliant service contract notation

REST and service-orientation are traditional rivals for developer mindshare. Despite this I think they have a lot in common, and in my view a fair amount that each opposing camp can teach the other. I have been looking forward to the convergence between these camps and their styles for some time now, and I have been working on a title called SOA with REST with several other authors that explores this intersection. My understanding of the intersection of REST and SOA has evolved and expanded over this time to a point where I comfortably use some of the techniques of service-orientation to build and define REST architecture. So much so, that the line between what is REST practice and which techniques are from service-orientation has somewhat blurred.

Today I wanted to take the gentle reader through a few basic concepts of the uniform contract: How to define and build services that are compliant with the REST uniform interface constraint, but are able to express their unique service capabilities. I have approached this subject from a few different directions in the past, but this time around I particularly wanted to cover what it means to talk about service contracts within the context of the REST uniform interface constraint.

From a SOA perspective, compliance with the uniform interface constraint requires that across our service inventory (or a significant proportion thereof) we are able to define a single technical contract that can be used to access any resource. This constraint increases the integration maturity of a service inventory by allowing one resource to be substituted for another by service consumers at runtime as required. It allows service consumers to discover a resource at runtime that they wish to interact with, and to interact with it correctly without changing any of their own logic and without introducing any kind of glue logic. At the same time it replaces the conventional single "service endpoint" with often infinite sets of resources that each form a lightweight endpoint for the service.

The uniform interface constraint simplifies the architecture by reusing very general contract elements that services, consumers, and middleware can all understand and participate in effectively. This initially reduces performance, but leads to improvements in scalability and other architectural properties by allowing middleware such as proxies to be involved in scrutinising and supporting network interactions. It can massively reduce coupling within a service inventory and allows for extremely simple dynamic reconfiguration of service compositions.

I see part of the friction between the two camps as coming down to a lack of shared terminology when talking about services, contracts, and interfaces. I would like to see if I can clear this up a little with a few diagrams:

Uniform and Service Contracts

This first diagram shows the key elements of the uniform contract, and the relationship between a uniform contract and a service contract. There are a number of key points to draw from it.

REST advocates will have immediately picked up the obvious extension of a conventional uniform contract with the service contract concept. This is a startling inclusion and one that deserves explanation. Let me begin this explanation with another diagram:

REST-compliant service contract realisation

This diagram shows a simple REST-compliant realisation of an abstract service. The high-level capabilities that analysis suggested this service needs were the capability to convert from degrees Fahrenheit to degrees Celsius, and the corresponding capability to convert back again. These capabilities were refined through reference to the uniform contract into a service contract suitable for use in a REST-style SOA. The name of the service is the same as the authority of the service's resources, to ensure maximum autonomy of this service. Resources in the contract are identified relative to the identifier for the service.

The actual contract definition can be written in many different ways, but I find this one is easy to use to communicate with people from either an object-oriented or a service-oriented background. What this notation does is to provide a clear service boundary for the REST-compliant service, a clear identification of the service capabilities (which remain readable), and a clear relationship to facets of the uniform contract.

The service capabilities in this case have been refined as:

It is clear from a REST perspective that what we are doing is specifying the resource identifiers that the refined service provides to its consumers. It is normal to expect that a REST-compliant service will define its own resource identifiers to use in conjunction with the uniform contract facets. These resource identifiers provide business context for generic and abstract methods such as "GET".

In the example above we are promising to support a GET operation to two distinct families of resource identifiers. One set will return the number of degrees Celsius for a Fahrenheit value, and the other set will do the reverse. Both will return their result as a simple text-formatted number. Each different value that can be input to the service is embodied in a different identifier, so each refined capability is actually defining a family of resources.
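A rough Python sketch of how those two resource families might behave behind a uniform GET follows. The URI templates /celsius/{fahrenheit} and /fahrenheit/{celsius} are assumed for illustration rather than taken from the actual contract:

```python
# Each family of resource identifiers embodies one refined capability.
# Every input value is a distinct resource; GET returns a plain-text number.
def get(uri: str) -> str:
    """Dispatch a uniform GET to the matching resource family."""
    prefix, _, value = uri.strip("/").partition("/")
    if prefix == "celsius":          # Fahrenheit in, Celsius out
        return str((float(value) - 32) * 5 / 9)
    if prefix == "fahrenheit":       # Celsius in, Fahrenheit out
        return str(float(value) * 9 / 5 + 32)
    raise KeyError(uri)

print(get("/celsius/212"))     # → 100.0
print(get("/fahrenheit/100"))  # → 212.0
```

Note that the business context lives entirely in the identifiers; the method itself stays the generic, uniform GET.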


Services and consumers are more loosely coupled with each other in a REST-style SOA than in a conventional SOA. While SOA always encourages the use of loosely-coupled contracts to support implementation changes behind the contract, REST encourages us to abstract the consumer away from design-time knowledge of the service contract itself. Service consumers and middleware are coupled to the uniform contract, and are sufficiently abstracted from specific service contract details to be reusable from service to service.

With this objective in mind, I think it is useful to talk explicitly about service contracts as an element of REST. We need to talk about them because they are important in governing, developing, maintaining, and supporting REST-compliant services. We need to talk about them so that we can offer explicit guidance about what constitutes coupling of a consumer to a service contract. We need to talk about them so that we can point to patterns that help build the loosely-coupled architectures that REST demands.

If we are able to explicitly talk about service contracts, we will be in a better position to understand how they shape our service inventories and our architectures.


Fri, 2010-Apr-02

SOA With REST Patterns preview

I have been working on a book for some time now with Raj Balasubramanian, Thomas Erl, and Cesare Pautasso (in alphabetical order). The title is SOA with REST and it is something I am pretty excited about being able to share with the world in the coming months. I'm sure that it will cause a few double takes on both sides of what has traditionally been a political divide between the REST and SOA camps, and I see this as a good thing. SOA with REST is intended to be a roadmap for bringing REST and SOA experience together into a cohesive conceptual framework and to allow adults to have adult discussions about what REST means to SOA and vice versa. I see the book as describing an architecturally pure yet also practical approach to the REST style and the SOA model.

Some attendees of last year's SOA Symposium will have gone home with a galley of chapters from the book. For the wider audience we now have parts of our patterns chapter available as candidates in the patterns catalogue. The set of patterns is partially based on trying to explain some of the REST constraints in a patterns language, and partly derived from Web experience. The published candidates are:

Uniform contract
A partial explanation of the whys and wherefores of the Uniform Interface constraint.
Entity endpoint
Why do we address resources in REST, rather than whole services as endpoints?
Entity linking
What is the point of hyperlinking in REST? (hint: Combine with uniform contract and entity endpoint to achieve runtime service discovery)
Message-based state deferral
How does REST ensure services are stateless between requests if it doesn't allow us to defer state to session databases or session management services?
Response caching
How does REST overcome the loss of publish/subscribe and other event-based communication patterns due to client/server and stateless constraints?
Endpoint redirection
How does the Web support services in deprecating old resources?
Content negotiation
How does the Web simultaneously support old and new service consumers as its uniform contract changes over time?
Code on demand
How does REST defer processing to service consumers, and allow consumers to be extended and customised as required by services?
Consumer-processed Composition
What does a typical service composition look like on the Web, anyway?
Idempotent capability
How does the Web statelessly provide reliable messaging when there are no humans in the loop? (hint: idempotency is one half of the solution, and waiting until the last response has come back before you send another request that depends on the previous succeeding is the other half)
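That idempotent-capability hint can be sketched in a few lines of Python. Because the hypothetical PUT below is safe to repeat, the consumer simply retries until it sees a response, and only then would it send any request that depends on this one succeeding:

```python
import random

random.seed(1)  # make the simulated message loss reproducible
STORE = {}

def unreliable_put(uri, body):
    """A PUT whose response is sometimes 'lost' in transit. The effect
    still happens, but repeating it leaves the same end state, so a
    retry after a lost response is harmless."""
    STORE[uri] = body
    if random.random() < 0.5:
        raise TimeoutError("response lost")
    return "200 OK"

def reliable_put(uri, body, attempts=10):
    """Retry the idempotent request until a response arrives. Dependent
    requests must wait until this returns successfully."""
    for _ in range(attempts):
        try:
            return unreliable_put(uri, body)
        except TimeoutError:
            continue
    raise RuntimeError("gave up")

print(reliable_put("/orders/1", "quantity=5"))
print(STORE["/orders/1"])
```

A non-idempotent request (say, "add one payment") could not be retried this blindly; that is why the pattern pushes capabilities toward idempotent forms.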


Fri, 2010-Apr-02

Programme for WS-REST 2010

The programme for WS-REST 2010 has been published.

I am especially looking forward to the presentation of Federico Fernandez and Jaime Navon's paper, "Towards a Practical Model to Facilitate Reasoning about REST Extensions and Reuse".

I would like to put out a big thankyou to everyone who submitted.


Sun, 2010-Jan-17

Scaling through the REST "stateless" constraint

Uniform Interface is the poster boy of the REST constraints. It attracts much of the interest, and much of the controversy. Less-often discussed is the equally important and far more controversial Stateless Constraint. Specifically, the constraint is that services in a REST-style architecture are stateless between requests. They don't carry any session state on behalf of clients that aren't currently in the process of making requests.

Services are typically near the core of a network. Network cores often have great storage, bandwidth, and compute resources but also great demands on these resources. Services are responsible for handling requests on behalf of their entire base of consumers, which on any large network will be a significant set. Nearer to the edge of the network are the consumers. Paradoxically, the cheap desktop hardware and networking equipment at this network edge typically has more spare capacity than is present near the network core where big-iron scaling solutions are being employed. This is due to the large number of consumers out there, typically orders of magnitude more than exist in a data centre. Spare resources are relatively fast, large, and responsive nearer to the ultimate user of the system. On the down-side, nodes near the edge of a network tend to be less reliable and more prone to unwanted manipulation than those near the network core.

While big-iron scaling solutions near the core are important, any architecture that really scales will be one that seeks to make use of the resources available near the network edge. Roy envisages an architecture where most consumers are in a "REST" state most of the time. This is a concept intrinsically linked to statelessness, as well as more obliquely to notions of code on demand, cache, and the Web's principle of least power.


The first step towards a REST scalability utopia is to move as much storage space from services to the edge of the network as possible. This is a balancing act. You don't normally want to move security-sensitive storage to the edge of the network, nor store any information that you have promised to keep in the less-reliable edge nodes of the network. There is also some state associated simply with underlying transport protocols such as TCP that cannot be eliminated. However, the less information that is stored by the service the better it will be able to cope with the demands of its consumers. REST sets the bar for statelessness at the request level: No session state needs to be retained by the service between requests in a REST architecture for normal and correct processing to occur. The service can forget any such state and will still understand the consumer's request within the session.

The scalability effect of this constraint is that session state is moved back to the service consumer at the end of each request. Any session state required to process subsequent requests is included in those subsequent requests. The session state flows tidally to and from the service rather than being retained within the service. Normal service state (information the service has promised to retain) still resides within the service and can be read, modified, or added to as part of normal request processing. I have written before about the difference between session state and service state, so I won't go over that ground again today.
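This tidal flow of session state can be sketched in Python. Here a hypothetical paging capability receives its cursor with each request and hands the next cursor back, so the service retains nothing between requests:

```python
# Service state: information the service has promised to retain.
ITEMS = ["a", "b", "c", "d", "e"]

def get_page(cursor: int = 0, page_size: int = 2):
    """Stateless capability: the 'cursor' session state arrives with
    the request, and the next cursor is returned to the consumer. The
    service can forget everything between requests."""
    page = ITEMS[cursor:cursor + page_size]
    next_cursor = cursor + page_size if cursor + page_size < len(ITEMS) else None
    return page, next_cursor

# The consumer carries the session state between requests.
cursor, pages = 0, []
while cursor is not None:
    page, cursor = get_page(cursor)
    pages.append(page)
print(pages)  # → [['a', 'b'], ['c', 'd'], ['e']]
```

A stateful variant would instead key the cursor by consumer inside the service, and would need one stored session per currently-active consumer rather than per in-flight request.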

Applying this constraint has positive and negative effects. On the plus side, the service must only provision storage capacity sufficient to deal with its own service state. It no longer has to deal with a unit of session state for each currently-active service consumer. The service can control the rate at which it processes requests and only has to cope with the session storage requirements of those it is currently processing in parallel. It may have a million currently-active consumers, but if it is only processing ten requests at a time then its session storage requirements are bounded to ten concurrent sessions. The other 999,990 sessions are either stored within the related service consumer or are currently in transit between the service and related consumer. Sessions are expensive for services to store, but cheap for consumers. Session state is also often invalid if the consumer terminates, so if the session happens to be lost when this occurs there is typically no negative effect.

The negative impacts of statelessness include the extra bandwidth usage for that tidal flow of state, as well as the prohibition of really useful patterns such as publish/subscribe and pessimistic locking. If the service is able to forget a subscription or forget a lock, then these patterns really don't work any more. These patterns are stateful and force a centralisation of state back to services near the network core.


Caching is often talked about as a scalability feature of REST. However, it exists primarily to counter the negative effects of statelessness on the architecture. Statelessness introduces additional bandwidth requirements between services and consumers as session state is transferred more frequently, and we may have additional processing overhead on the service to deal with consumers polling for updates when they previously could have made use of a stateful event-based message exchange pattern. Caching seeks to eliminate both problems by eliminating redundant message exchanges from the architecture. This reduces bandwidth usage as well as service compute resources down to the minimum possible set, ensuring that the stateless architecture is a feasible one.

A cache positioned within a service consumer reduces latency for the client as it makes a series of network requests, some of which will be redundant. The cache detects redundant requests and reuses earlier responses to respond quickly in place of the service. A cache positioned at a data centre or network boundary is principally concerned with reducing bandwidth consumption due to redundant requests. A cache positioned within the service itself is primarily concerned with reducing processing overhead due to redundant requests.
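A consumer-positioned cache of the kind described can be sketched as follows. The freshness model is a bare max-age lifetime, a deliberate simplification of real HTTP caching, and the origin function is a stand-in for a real service:

```python
import time

CACHE = {}                    # uri -> (expires_at, response)
ORIGIN_HITS = {"count": 0}    # counts requests that actually reach the service

def origin_get(uri):
    """Stand-in for the real service."""
    ORIGIN_HITS["count"] += 1
    return f"response for {uri}"

def cached_get(uri, max_age=60, now=None):
    """Serve a still-fresh cached response if one exists; otherwise
    forward the request to the origin and remember the response."""
    now = time.monotonic() if now is None else now
    entry = CACHE.get(uri)
    if entry and entry[0] > now:
        return entry[1]                   # redundant request eliminated
    response = origin_get(uri)
    CACHE[uri] = (now + max_age, response)
    return response

cached_get("/invoice/42", now=0)
cached_get("/invoice/42", now=30)  # within max_age: served from cache
print(ORIGIN_HITS["count"])        # → 1
```

The same logic, positioned at a network boundary, saves bandwidth instead of latency; positioned inside the service, it saves compute.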

The Web's principle of least power and code on demand

Now that we have moved unnecessary storage requirements to the edge of the network and reduced network bandwidth to a minimum, the obvious next step is to try and reduce our service-side compute requirements. The Web offers its standard approach of the principle of least power. This principle essentially says that if you provide information instead of a program to run, consumers of your service will understand the content and be able to process it in useful and novel ways. The compute implication of this is that you will often be able to serve a static or pre-cached document to your consumers with practically zero compute overhead. The service consumer can accept the document, understand it, and perform whatever processing it requires.

REST adds the concept of code-on-demand. While something of an anti-principle-of-least-power, it serves more or less the same purpose as far as scalability is concerned: It allows the service to push compute power requirements out to the edge of the network. Instead of actually executing the service composition, a BPEL engine could simply return the BPEL and let the consumer execute it. Hell, it could happily drop the BPEL processor itself into a virtual machine space offered by the service consumer and run it from there. So long as there is nothing security-sensitive or consistency-sensitive in the execution you have just saved yourself significant compute resources over the total set of capability invocations on the service. If you are lucky, the files will already be cached when the consumer attempts to invoke your capability and the request won't touch the service at all.

The Web's use of applets, JavaScript, HTML, and pretty much everything else it can or does serve up demonstrates how compute resources can be delegated out to browsers and other service consumers in order to keep services doing what they should be doing: Ensuring that the right information and the right processing is going on without necessarily doing the hard work themselves.


Between the offload of storage space offered by the REST stateless constraint and the offload of compute resources offered by code on demand and the principle of least power, REST significantly alters the balance of resource usage between services near the core of the network and service consumers nearer to the edge of the network. Service consumers place no demands on bandwidth, cpu, or storage except when they have requests outstanding. Services are able to control the rate at which they process requests, and the network itself controls the bandwidth that can be consumed by requests and responses. Caching ensures this approach is feasible in most circumstances for most applications. If you are considering investing in additional hardware scaling mechanisms, make sure you also consider whether applying these architectural constraints would make a difference to the scalability of your services.


Mon, 2009-Aug-31

WADL for REST-style SOA

In the words of Mark Nottingham:

CORBA has IDL; SOAP has WSDL. Developers often ask for similar capabilities for HTTP-based, "RESTful" services. While WSDL does claim support for HTTP, it isn't well-positioned to take advantage of HTTP's features, nor to encourage good practice.

WADL is designed to fill this gap. It is HTTP-centric and attempts to provide a straightforward syntax for describing the semantics of particular methods on resources.

Resources and SOA

Resources are a key concept in WADL, and also in REST-style SOA. A service expresses its interface as a set of resources. All resources share the same Uniform Contract. However, different resources have different associated semantics.

Resources effectively replace the traditional service-specific contract in a REST-style SOA. In doing so they introduce a meta-data gap. Where the contract previously described the interface to the service in a single coherent place, the Uniform Contract of resources does little to describe the interface of a given service.

Filling this gap is an important part of applying the REST style to SOA. This occurs in two parts: One part is the application of additional hypermedia so that clients can locate the correct resource to invoke requests upon based on the information that they are likely to have at hand. This hypermedia is incorporated into the Uniform Contract in terms of defined link relationship types and dedicated hypermedia-intensive media types. The second part is the context where WADL can be applied, and is incorporated into the service description.

Objectives of WADL in SOA

The kind of machine-readable description WADL could offer is required to fulfil a number of specific needs:

The information is not fundamentally for service consumer consumption beyond a basic level of discovery. Importantly, knowledge of relationships between resources should not normally be known by service consumers ahead of time. Consumers should make use of links between resources to discover relationships. Failure to adhere to this principle undoes a number of REST features such as the ability to link freely from one service to another with confidence that service consumers will successfully discover and exploit the specified resource at runtime.

Key Resource Metadata

I tend to look on the kind of interface description needed at this level as more of a table than a tree. I think that it is generally advisable to include most of the path when describing the semantics of methods on a given resource. Nevertheless, XML is adequate to describe this structure. I would tend to include the following features:

An example description in table format might be:

Base URL:

| Resource Identification | Method (Uniform) | Media Types (Uniform) | Cache | Documentation |
| ----------------------- | ---------------- | --------------------- | ----- | ------------- |
| /{invoice id} | GET | application/, application/ | Must revalidate | Retrieve invoice for invoice id |
| /{invoice id} | PUT | application/ | No cache | Update invoice for invoice id to match specified state |
| /{invoice id}/payment/ | POST | application/ebxml.transaction+xml | No cache | Add a payment relating to this invoice |
| /{invoice id}/payment/{trans id} | GET | application/ebxml.transaction+xml | 5 minutes | Fetch a specific transaction for this invoice |
| /?[date=]&[customer=]&[paid=] | GET | application/ | Must revalidate | Query for invoices with specified properties |

Each line describes a set of resources corresponding to the URL template. The template is filled out with parameters that the server will interpret when the request is processed. A URL can be seen as a message from the service to itself: it should not generally be parsed outside of the service, nor constructed outside of the service. I tend to use the query convention in the last URL template to indicate that this rule is being broken and explicit service/consumer coupling is being introduced.
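To make the "message from the service to itself" point concrete, here is a minimal sketch of how a service might expand and parse its own URL templates. The function names are hypothetical, and the template parameters use underscores rather than the spaces shown in the table so that they are valid identifiers:

```python
# Minimal sketch of server-side URL template handling (names hypothetical).
# Only the service itself should expand or parse these templates; consumers
# should follow links rather than construct URLs.
import re

def expand(template: str, **params) -> str:
    """Fill a URL template such as '/{invoice_id}/payment/{trans_id}'."""
    return template.format(**params)

def match(template: str, url: str):
    """Parse a request URL back into its template parameters, or None."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template) + "$"
    m = re.match(pattern, url)
    return m.groupdict() if m else None

url = expand("/{invoice_id}/payment/{trans_id}", invoice_id="42", trans_id="7")
params = match("/{invoice_id}/payment/{trans_id}", url)
```

Keeping both operations inside the service is what preserves the freedom to restructure the URI space later without breaking consumers.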

Methods and Media Types are referred to by their identifier only. There is no need to include them in a service description, because they should already be adequately described in uniform contract specification documents. Supporting multiple media types for a given method is important in an architecture with an evolving set of media types. It allows old services and consumers to continue to interact with new ones over the transition period without having to perform simultaneous upgrade.

Cache is also important, as this is a key REST constraint that needs to be described to support governance activities in compliance with the constraint.

While nesting exists to a point, there is no strongly-implied relationship between different nested or non-nested URLs. Each resource has its own distinct semantics. Hypermedia is incorporated into multiple resources by way of links embedded in invoice representations. For example, invoices are likely to include a link to the customer entity that the invoice was made out to. The set above also includes hypermedia in the form of query URLs that a service consumer who has a number of starting parameters can use to find the invoices they want.
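A hypothetical invoice representation might carry its hypermedia like this. The element names, link relations, and URLs below are purely illustrative, not part of any published schema:

```xml
<!-- Hypothetical invoice representation. The atom:link elements let a
     consumer navigate to related resources without constructing URLs. -->
<invoice xmlns:atom="http://www.w3.org/2005/Atom">
  <id>42</id>
  <total currency="AUD">99.95</total>
  <atom:link rel="customer" href="https://example.com/customer/7"/>
  <atom:link rel="payments" href="https://example.com/invoice/42/payment/"/>
</invoice>
```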

Applying WADL to the problem

The WADL equivalent for the above service metadata is as follows:

	<application xmlns="http://wadl.dev.java.net/2009/02"
		xmlns:html="http://www.w3.org/1999/xhtml"
		xmlns:xsd="http://www.w3.org/2001/XMLSchema">
	<doc><html:p>Meta-data for the Invoice service, corresponding to the
	<html:a href="">uniform interface</html:a></html:p></doc>
	<resources base="">
		<resource path="/{invoice-id}">
			<param name="invoice-id" style="template" type="xsd:NMTOKEN"/>
			<method name="GET">
				<doc><html:p>Retrieve invoice for invoice-id</html:p></doc>
				<response>
					<representation mediaType="application/"/>
					<representation mediaType="application/"/>
				</response>
			</method>
			<method name="PUT">
				<doc><html:p>Update Invoice for invoice-id to match specified state</html:p></doc>
				<request>
					<representation mediaType="application/"/>
				</request>
			</method>
		</resource>
		<resource path="/{invoice-id}/payment/">
			<param name="invoice-id" style="template" type="xsd:NMTOKEN"/>
			<method name="POST">
				<doc><html:p>Add a payment relating to this invoice</html:p></doc>
				<request>
					<representation mediaType="application/ebxml.transaction+xml"/>
				</request>
			</method>
		</resource>
		<resource path="/{invoice-id}/payment/{trans-id}">
			<param name="invoice-id" style="template" type="xsd:NMTOKEN"/>
			<param name="trans-id" style="template" type="xsd:NMTOKEN"/>
			<method name="GET">
				<doc><html:p>Fetch a specific transaction for this invoice</html:p></doc>
				<response>
					<representation mediaType="application/ebxml.transaction+xml"/>
				</response>
			</method>
		</resource>
		<resource path="/">
			<param name="date" style="query" required="false" type="xsd:dateTime"/>
			<param name="customer" style="query" required="false" type="xsd:anyURI"/>
			<param name="paid" style="query" required="false" type="xsd:boolean"/>
			<method name="GET">
				<doc><html:p>Query for invoices with specified properties</html:p></doc>
				<response>
					<representation mediaType="application/ebxml.transaction+xml"/>
				</response>
			</method>
		</resource>
	</resources>
	</application>

That actually wasn't too painful. It was easy enough to mimic the table structure. I think this makes the description of a resource more readable. It handled the various parameters to these URLs in a straightforward way. The only thing really missing from this description is the caching information from our earlier table.
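Being machine-readable is the point of WADL, so it is worth checking that the table really can be recovered mechanically. The following sketch extracts resource paths, methods, and media types from a WADL document using only the Python standard library; it assumes the 2009 WADL submission namespace and a small inline document for illustration:

```python
# Sketch: recover a resource/method/media-type table from a WADL document.
# Assumes the WADL 2009 member-submission namespace.
import xml.etree.ElementTree as ET

WADL = "{http://wadl.dev.java.net/2009/02}"

doc = """<resources xmlns="http://wadl.dev.java.net/2009/02" base="">
  <resource path="/{invoice-id}">
    <method name="GET">
      <response><representation mediaType="application/ebxml.transaction+xml"/></response>
    </method>
  </resource>
</resources>"""

def rows(wadl_text):
    """Yield (path, method, [media types]) tuples for each method."""
    root = ET.fromstring(wadl_text)
    for resource in root.iter(WADL + "resource"):
        path = resource.get("path")
        for method in resource.findall(WADL + "method"):
            types = [r.get("mediaType") for r in method.iter(WADL + "representation")]
            yield (path, method.get("name"), types)

table = list(rows(doc))
```

A governance tool could walk this table to check, for example, that every method uses only media types from the uniform contract.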


I think WADL is more or less suitable as a machine-readable media type for describing the set of resources exposed by a service. It could perhaps do with some extensions (and better extensibility), but it seems like a good starting point to me.

I have written about WADL from a slightly different perspective previously.


Sun, 2009-Aug-16

MIME types holding REST back

With the increasing focus on REST within enterprise circles, the practice of how REST gets done is becoming more important to get right outside of the context of the Web. A big part of this is the choice of application protocol to use, the "Uniform Contract" exposed by the resources in a given architecture. Part of this problem is simple familiarisation. Existing enterprise tooling is built around conventional RPC mechanisms, or layered on top of HTTP in SOAP messages. However, another part is a basic functional problem with HTTP that has not yet been solved by the standards community.

HTTP/1.1 is a great REST protocol. GET and PUT support content negotiation and redirection. They are stateless, and easy to keep stateless. They support layering. GET fits extremely well into caching infrastructure. These methods fit into effective communication patterns that solve the majority of simple distributed computing communication problems. HTTP works well with the URI specification, which remains best practice for identifying resources. HTTP also accommodates extension methods in support of its own evolution, and in support of specialisations that may be required in specific enterprise architectures or service inventories.

A significant weakness of HTTP in my view is its dependence on the MIME standard for media type identification and on the related IANA registry. This registry is a limited bottleneck that does not have the capacity to deal with the media type definition requirements of individual enterprises or domains. Machine-centric environments rely on a higher level of semantics than the human-centric environment of the Web. In order for machines to effectively exploit information, every unique schema of information needs to be standardised in a media type and for those media types to be individually identified. The number of media types grows as machines become more dominant in a distributed computing environment and as the number of distinct environments increases.

Media type identification is used in messages to support content negotiation and appropriate parser or processor selection. At the scale of the Web, only a small number of very general types can be accommodated. It is difficult to obtain universal consensus around concepts unless the concepts themselves are universal and agreeable. Smaller contexts, however, are able to support a higher degree of jargon in their communication. An individual enterprise, a particular supply chain, a particular problem domain is capable of supporting an increased number of media types over and above the base set provided by the Web. The ability to experimentally define and evolve these standards outside the Web is essential to a healthy adoption of the REST style and related styles beyond the Web.

An example of the capability to perform media type negotiation with HTTP can be found in the upgrade from RSS to ATOM feeds. While the largely human audience of feeds rarely required it in practice, HTTP makes it possible for a client to state which of these types it supports. The server is then able to respond with content in a form the client understands. In a machine-centric environment, this is even more important. Few content types used in the early days of most architectures will survive into maturity. Types will change and evolve, and many will be superseded. Machine-centric environments do not have the same capability to change URLs based on their upgrades, so content negotiation based on media type allows incremental upgrade of a system... one component at a time.
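The RSS-to-ATOM case can be sketched in a few lines of server-side code. This is a deliberately simplified reading of the Accept header that honours q-values but ignores wildcards and all media type parameters other than q:

```python
# Sketch of server-side content negotiation: pick the best representation
# the service offers, given the client's Accept header. Simplified: no
# wildcard handling, and only the q parameter is honoured.
def best_match(accept_header, offered):
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0
        for p in fields[1:]:
            name, _, value = p.strip().partition("=")
            if name.strip() == "q":
                q = float(value)
        prefs.append((q, mtype))
    for q, mtype in sorted(prefs, reverse=True):
        if q > 0 and mtype in offered:
            return mtype
    return None

# A client that prefers ATOM but still accepts RSS:
choice = best_match("application/rss+xml;q=0.5, application/atom+xml",
                    ["application/atom+xml", "application/rss+xml"])
```

The same mechanism lets old and new components coexist during a transition: each side simply advertises the types it understands.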

A URL-based Alternative

The main alternative for media type identification would be to use URIs. These already provide a decentralised registry, and can double as the URL of the related human-readable specification document. These seem like a simple answer. Existing IANA types can be grandfathered into the scheme with a prefix, which would already point as a URL to an appropriate specification document for a number of types.

However, URLs suffer three problems when compared to MIME identifiers. The first is simply that HTTP does not permit their use in the appropriate headers. The second is that URLs cannot be further interpreted when they are read. The third is that URLs cannot be parameterised as MIME types are.

MIME types are capable of identifying not only their specific type, but additional types they inherit from or are interpretable as. For example, most XML media types include an extension "+xml". This allows generic processors of XML to interpret the content based on broad-brushed generic mechanisms. One could extend this concept to support specialisations of higher-level media types such as atom. Storing a specific structure of data within an atom envelope does not prevent it from being interpreted as atom. Leaving this information in place within the media type identifier gives generic processors additional visibility into the messages being exchanged.
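The suffix convention can be read as a short inheritance list. The sketch below walks such a list for the hypothetical type `application/invoice+atom+xml`; note that the multi-suffix chain and the assumption that every broader type lives under `application/` are this article's speculation, not registered syntax:

```python
# Sketch: treat structured suffixes ("+xml", and speculatively "+atom+xml")
# as an inheritance chain a generic processor can walk. The multi-suffix
# form is a proposal, not a registered MIME convention.
def interpretable_as(media_type):
    """Yield the type itself, then each broader type implied by its suffixes."""
    base, _, suffixes = media_type.partition("+")
    yield media_type
    parts = suffixes.split("+") if suffixes else []
    for i in range(len(parts)):
        # Assumption: each suffix chain also names a type under application/.
        yield "application/" + "+".join(parts[i:])

chain = list(interpretable_as("application/invoice+atom+xml"))
```

A processor that only understands atom could scan the chain and stop at the first type it recognises.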

The use of parameters on media types would essentially not be possible within the URL itself. If parameters are still required, they could be carried in syntax around the media type. Typically, XML and binary types should no longer require them, so this may be of historical and plain-text importance only. Plain text types will often be able to use other HTTP fields to convey relevant information such as their content encoding.

The solution to these problems could be a revision of HTTP to include URI syntax support for media types, combined with a protocol whereby processors could determine whether one media type inherits from another. Whether HTTP can be revised is a difficult question to answer, but a protocol for discovering inheritance relationships is relatively easy to develop. One could either make use of HTTP headers in the GET response for the URI, or specify a new media type for media type description. The obvious approach with link headers would be to say Link: rel="inherits". However, this is a limited approach. An actual media type description media type could take the form of a short XML document or simple microformat for human-readability, and is perhaps more general and future-proof.
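The Link-header variant of that speculative discovery protocol is easy to sketch. Everything here is hypothetical: `rel="inherits"` is not a registered link relation, and the URL is illustrative only:

```python
# Sketch of the speculative inheritance-discovery protocol: given the Link
# header returned by a GET on a media type's URI, extract the URIs of its
# rel="inherits" relations. The relation name is hypothetical.
import re

def inherits_from(link_header):
    """Return target URIs of rel="inherits" links in a Link header value."""
    targets = []
    for uri, rel in re.findall(r'<([^>]+)>\s*;\s*rel="([^"]+)"', link_header):
        if rel == "inherits":
            targets.append(uri)
    return targets

parents = inherits_from('<https://example.com/mime/atom+xml>; rel="inherits"')
```

A client could follow these links recursively to build the full inheritance chain, at the cost of making network lookups part of type interpretation.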

Specific changes that would have to occur in HTTP would apply to the Content-Type and Accept headers. Content-Type would be relatively easy to extend to support the syntax, however problems may emerge in any deviation from the definition of MIME itself and the use of this header within SMTP and other contexts. Accept, as well, would be relatively easy to extend. Quote characters such as " (&quot;) would need to be included around URLs to avoid confusion when ";" (semicolon) and "," (comma) characters are encountered during parsing. This may impact backwards-compatibility.

Backwards-compatibility is a prime concern for HTTP. It may be worth doing some trials with common proxies and intermediaries to see how Content-Type and Accept header changes would impact them, to see just how big this problem would be in practice.

A decentralised registry in MIME syntax

An alternative to going all the way to URNs might be to make use of the DNS system alone as part of a media type identifier. For example, a "dns" sub-tree could be registered under MIME. The sub-tree would allow an example organization to safely maintain a set of identifiers beginning with application/ without IANA coordination. Any organization with a presence in the DNS tree could do the same.

The main upside in this approach is that consistent syntax could be maintained for new media type identifiers. HTTP could be used as-is, and individual organizations could create, develop, and market their standards as they see fit. The "+" syntax could continue to be used for inheritance, so we might still end up with hybrid types such as application/. If this got out of hand we could be talking about fairly long media type identifiers, but hopefully the social pressures around reasonable media type identification would work against that outcome.

Perhaps the strongest argument against this alternative is a loss of discovery of standards documentation. URLs can easily be dereferenced to locate a specification document. This hybrid of DNS and MIME would need additional help to make that so. It would be necessary to have a means of translating such a MIME identifier into a suitable URL, which quickly leads into the world of robots.txt and other URL construction. While this is not a desirable outcome, at least it doesn't leave the lookup of a URL as an integral part of the parsing process. The URL-based solution may do that.

As a strawman, one might suggest that any MIME type registered in this way would be required to have something human-readable at a /mime path under the DNS name: e.g. application/ would become the URL. This would be quite awkward.

A Hybrid Approach

A third alternative would be to define a way to encode URLs or a subset of possible URLs into media type identifiers. In this example the IANA subtree might be "uri" instead of "dns". The type name would have to be constructed so that the end of the dns part of the identifier could be found and further "." characters treated as URL path delimiters. For example, application/ could indicate that the type can be interpreted as both xml and atom+xml. In addition, the specific specification for this variant of atom+xml can be found at <>.

Specification Persistence

All decentralised options share a possible persistence problem. We can probably trust the IETF to hold onto standards documents for historical reference for a few hundred years. Can we trust a small start-up business to do the same? What happens when the domain name of an important standard is picked up by domain squatters? Most standards should not remain registered under an individual company's name once they reach a certain level of importance. They should fall under a reputable standards body with a fairly certain future ahead of it.


I'm torn on this one. I don't want to go to the IANA every time I publish a media type within my enterprise. I like URLs, but want a straightforward way to discover important inheritance relationships. I don't want to break backwards-compatibility with HTTP, and there is no better protocol available for the business of doing REST. What's a boy to do?

My preference is to go with a hybrid approach. This would yield compatible syntax with today's HTTP, but still support a highly decentralised approach. Over time, a significant number of standards should climb the ladder from early experimental standard through to enterprise and industry standards to eventually become part of a universally-acceptable set that describe the semantic scope of the machine-readable Web. Their identifiers may change as they make the transition from rung to rung so that the most important standards are always under the namespace of the most well-respected standards bodies.