Sound advice - blog

Tales from the homeworld


Sun, 2007-Jun-17

On ODBMS versus O/R mapping

Debate: ODBMS sometimes a better alternative to O/R Mapping?

Objects see databases as memento and object-graph storage. Databases see objects as data exposed in table rows. RDF databases see objects as data exposed in schema-constrained graphs. The private of one is the public of the other. The benefits of each conflict with the design goals of the other.

Perhaps REST is the middle ground that everyone can agree on. Objects interface easily using REST. They simply structure their mementos as standard document types. Now their state can easily be stored and retrieved. Databases interface easily using REST. They just map data to data. So the data in an object and the data in a database don't necessarily have precisely-matched schemas. They just map to the same set of document types and these document types define the O-R mapping. The document type pool can evolve over time based on Web and REST principles, meaning that tugs from one side of the interface don't necessarily pull the other side in exactly the same direction.
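
As a rough sketch of the idea, assuming a hypothetical orders service and plain JSON as the shared document type (the URL, resource path, and field names below are all invented for illustration), an object might publish its memento like this:

    import json
    import urllib.request

    class Order:
        """Domain object whose private state is exposed only as a document."""
        def __init__(self, order_id, customer, total):
            self.order_id = order_id
            self.customer = customer
            self.total = total

        def to_document(self):
            # The memento: a standard document type, not the object's own schema.
            return json.dumps({"id": self.order_id,
                               "customer": self.customer,
                               "total": self.total}).encode("utf-8")

    def store(order, base_url="http://example.com/orders/"):
        # PUT the document to a resource. The database behind the resource maps
        # the same document type onto its tables without ever seeing the class.
        request = urllib.request.Request(base_url + str(order.order_id),
                                         data=order.to_document(),
                                         method="PUT",
                                         headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(request)

Under this sketch the document type, rather than a class or a table schema, is the only thing the two sides need to agree on.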

If O-R mapping is the Vietnam of computer science, perhaps we should stop mapping between our object and our relational components. Perhaps we should start interfacing between them, instead.

Benjamin

Mon, 2007-Jun-11

The Web Application Description Language 20061109

The Web Application Description Language (WADL) has been touted as a possible description language for REST web services. So what is a REST Description Language, and does this one hit the mark?

The Uniform Interface Perspective

I have long been a fan, user, and producer of code generation tools. When I started with my current employer some seven or eight years ago, one of my first creations was a simple language for defining serialisable objects that was easier to process than C++. I'm not sure I would do the same thing now, but I have gone on to use code generation and little languages to define parsers and all manner of convenient structures. It can be a great way to reduce the amount of error-prone rote coding that developers do and replace it with a simplified declarative model.

I say that I wouldn't do the serialisation of an object the same way any more. That's because I think there is a tension between model-driven approaches such as code generation and a "less code" approach. Less code approaches use clever design or architecture to reduce the amount of rote code a developer writes. Instead of developing a little language and generating code, we can often replace the code that would have been generated by simpler code. In some cases we can eliminate a concept entirely. In general, I prefer a "less code" approach over a model-driven approach. In practice, both are used and both are useful.

One of the neat things about REST architecture is that a whole class of generated code disappears. SOA assumes that we will keep inventing new protocols instead of reusing the ones we have. To this end, it introduces a little language in the form of an IDL file definition and includes tools to generate both client and server code from IDL instances. In contrast, REST fixes the set of methods in its protocols. By using clever architecture, the code we would have generated for a client or server stub can be eliminated.
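
As a sketch of what that elimination looks like in practice, here is a hand-written client built only on the fixed methods; nothing in it is generated from an IDL, and nothing needs to be regenerated when a service adds resources (the content types shown are just examples):

    import urllib.request

    # No generated stub: the protocol is just the fixed set of HTTP methods.
    def get(url, accept="application/xml"):
        request = urllib.request.Request(url, headers={"Accept": accept})
        with urllib.request.urlopen(request) as response:
            return response.read()

    def put(url, body, content_type="application/xml"):
        request = urllib.request.Request(url, data=body, method="PUT",
                                         headers={"Content-Type": content_type})
        with urllib.request.urlopen(request) as response:
            return response.status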

In a true REST architecture, both the set of methods associated with the protocol (eg GET, PUT, DELETE) and the set of content types transferred (eg HTML, atom, jpeg) are slow-moving targets compared to the rate at which code is written to exploit these protocols. Instead of being generated, the code written to handle both content and content transfer interactions could be written by hand. Content types are the most likely targets to be fast-moving and are probably best handled using tools that map between native objects and well-defined schemas. Data mapping tools are an area of interest for the W3C.

So does this leave the WADL little language out in the cold? Is there simply no point to it?

I think the answer is tied to a number of sensitive variables, and depends on where you are on the curve from traditional SOA to REST nirvana. It is likely that within a single organisation you will have projects at various points. In particular, it is difficult to reach any kind of nirvana while facets of the uniform interface are in motion. This could be for a number of reasons, the most common of which is likely to be requirements changes. It is clear that the more change you are going through, the more tooling you will need to deal with it, and the more you will benefit from that tooling.

The main requirement on a description language that suits the uniform interface as a whole is that it be good at data mapping. However, the specification that suits the architecture as a whole may or may not be the same one that suits specific perspectives within it.

The Server Perspective

Even if you are right at the top of the nirvana curve with a well-defined uniform interface, you will need some kind of service description document. Interface control in the REST world does not end with the Uniform Interface. It is important to be able to concisely describe the set of URLs a service provides, the kinds of interactions that it is valid to have with them, and the types of content that are viable to transfer in these interactions. It is essential that this set be well-understood by developers and agreed at all appropriate levels in the development group management hierarchy.

Such a document doesn't work without being closely-knit to code. It should be trivial from a configuration management perspective to argue that the agreed interface has been implemented as specified. This is simplest when code generated from the interface is incorporated into the software to be built. The argument should run that the agreed version of the interface generates a class or set of classes that the developer wrote code against. The compiler checks that the developer implemented all of the functions, so the interface must be fully implemented.
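
A minimal sketch of that configuration-management argument, assuming the generator emits an abstract class such as the one below from the agreed interface document (Python reports the omission at instantiation time rather than compile time, but the effect is the same: an incomplete implementation cannot slip through unnoticed):

    from abc import ABC, abstractmethod

    class FanSpeedResource(ABC):
        """Hypothetical class generated from the agreed interface document."""

        @abstractmethod
        def get(self):
            """Return the current fan speed document."""

        @abstractmethod
        def put(self, document):
            """Set a new target fan speed from the supplied document."""

    class FanSpeedImpl(FanSpeedResource):
        def get(self):
            return b"1200"
        # put() is missing, so FanSpeedImpl() raises TypeError and the gap
        # between the agreed interface and the implementation is caught early.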

The tests on the specification should be:

  1. Does it capture the complete set and meaning of resources, including those that are constructed from dynamic or configuration data and including any query parts of URLs?
  2. Does it capture the set of interactions that can be had with those resources, eg GET, PUT and DELETE?
  3. Does it capture the high-level semantic meaning of each interaction, eg PUT to the fan speed sector resource sets the new target fan speed?
  4. Does it capture the set of content types that can be exchanged in interactions with the resource, eg text/plain and application/calendar+xml?
  5. Does it defer the actual definition of interactions and content types out to other documents, or does it try to take on the problem of defining the whole uniform interface in one file? The former is a valid and useful approach. The latter could easily mislead us into anti-architectural practice.

I admit this point is a struggle for me. If we make use of REST's inherent less-code capability we don't need to generate any code. We could just define a uniform interface class for each resource to implement, and allow it to register in a namespace map so that requests are routed correctly to each resource object. This would result in less code overall, but could also disperse the responsibility for implementing the specification. If we use generated code, the responsibility could be centralised at the cost of more code overall.
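
For contrast, here is a minimal sketch of the less-code approach under the same assumptions: each resource object implements one small uniform interface and registers itself in a namespace map, and no code is generated at all.

    class Resource:
        """The uniform interface that every resource object implements."""
        def get(self):
            raise NotImplementedError
        def put(self, document):
            raise NotImplementedError

    class FanSpeed(Resource):
        def __init__(self):
            self.speed = b"1200"
        def get(self):
            return self.speed
        def put(self, document):
            self.speed = document

    # The namespace map: requests are routed to resource objects by path.
    namespace = {}

    def register(path, resource):
        namespace[path] = resource

    def dispatch(method, path, document=None):
        resource = namespace[path]
        if method == "GET":
            return resource.get()
        if method == "PUT":
            return resource.put(document)
        raise ValueError("method outside the uniform interface: " + method)

    register("/fans/1/speed", FanSpeed())
    dispatch("PUT", "/fans/1/speed", b"1500")

The responsibility for honouring the specification is now spread across the resource classes, which is exactly the trade-off described above.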

The Client Code Perspective

To me, the client that only knows how to interact with one service is not a very interesting one. If the code in the client is tied solely to google, or to yahoo, or to ebay, or to amazon... well... there is nothing wrong with that. It just isn't a very interesting client. It doesn't leverage what REST's uniform interface provides for interoperability.

The interoperable client is much more interesting. It doesn't rely on the interface control document of a particular service, and certainly doesn't include code that might be generated from such a document. Instead, it is written to interact with a resource or a set of resources in particular ways. Exactly which resources it interacts with is a matter for configuration and online discovery.

An interoperable client might provide a trend graph for stock quotes. In this case it would expect to be given the URL of a resource that contains its source data in the form of a standard content type. Any resource that serves data in the standard way can be interacted with. If the graph is able to deal with a user-specified stock, that stock could either be specified as the URL of the underlying data or as a simple ticker code. In the former case the graph simply needs to fetch data from the specified URL and render it for display. In the latter case it needs to construct the query part of a URL and append it to the base URL it has been configured with. I have mentioned before that I think it is necessary to standardise query parts of URLs if we are to support real automated clients, so that no matter which web site the client is configured to point to, that site will interpret the URL correctly.
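
A sketch of the two configuration styles, with the host name, the content type, and the query parameter name all invented for illustration:

    import urllib.request
    from urllib.parse import urlencode

    def fetch_quotes(data_url):
        # Style 1: the client is configured with the URL of its source data.
        request = urllib.request.Request(data_url, headers={"Accept": "text/csv"})
        with urllib.request.urlopen(request) as response:
            return response.read()

    def fetch_quotes_for_ticker(base_url, ticker):
        # Style 2: the client constructs a standardised query part from the
        # ticker code and appends it to the base URL it was configured with.
        return fetch_quotes(base_url + "?" + urlencode({"symbol": ticker}))

    # fetch_quotes_for_ticker("http://quotes.example.com/history", "ACME")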

Again we could look at this from an interface control perspective. It would be nice if we could catalogue the types of resources out there in terms of the interactions they support and with which content types. If we could likewise catalogue clients in terms of the interactions and content types they support, we might be able to work out which clients will work with which resources in the system. This might allow us to predict whether a particular client and server will work together, or whether further work is required to align their communication.

Everywhere it is possible to configure a URL into a client we might attempt to classify this slot in terms of the interactions the client expects to be able to have with the resource. A configuration tool could validate that the slot is configured against a compatible resource.
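
The validation itself could be little more than a set comparison. In this hypothetical sketch a slot declares what the client expects of the resource it is pointed at, and a catalogue entry declares what the resource offers:

    def compatible(slot, resource):
        """True if the resource supports every interaction the slot needs and
        at least one content type the two have in common."""
        interactions_ok = slot["interactions"] <= resource["interactions"]
        content_ok = bool(slot["content_types"] & resource["content_types"])
        return interactions_ok and content_ok

    # Invented catalogue entries for illustration.
    trend_graph_slot = {"interactions": {"GET"}, "content_types": {"text/csv"}}
    quote_history = {"interactions": {"GET", "PUT"},
                     "content_types": {"text/csv", "application/xml"}}

    print(compatible(trend_graph_slot, quote_history))   # True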

I have no doubt that something of this nature will be required in some environments. However, it is also clear that above this basic level of interoperability there are more important high-level questions about which clients should be directed to interact with which resources. It doesn't make sense and could be harmful to connect the output of a calculation of mortgage amortization to a resource that sets the defence condition of a country's military. Semantics must match at both the high level, and at the uniform interface level.

Whether or not this kind of detailed ball and socket resource and client cataloguing makes sense for your environment will likely depend on the number of content types you have that mean essentially the same thing. If the number for each is "one" then the chances that both client and resource can engage in a compatible interaction are high whenever it is meaningful for such an interaction to take place. If you have five or ten different ways to say the same thing and most clients and resources implement only a small subset of these ways... well then you are more likely to need a catalogue. If you are considering a catalogue approach it may still be better to put your effort into rationalising your content types and interaction types instead.

The non-catalogue interoperable client doesn't impose any new requirements on a description language. It simply requires that it is possible to interact in standard ways with resources and map the retrieved data back into its own information space. A good data mapping language is all it needs.

The Client Configuration Perspective

While it should be possible to write an interoperable client without reference to a specific service's interface control document, the same cannot be said for its configuration. The configuration requires publication of relevant resources in a convenient form. This form at least needs to identify the set of resources offered by the service and the high-level semantics of interactions with the resource. If we head down the catalogue path, it may also be useful to know precisely what interactions and content types are supported by the resource.

The requirements of a mass-publication format differ from those of interface control. In particular, a mass-publication of resources is unable to refer to placeholder fields that might be supplied by server-side configuration. Only placeholders that refer to knowledge shared by the configuration tool and the server can be included in the publication.

WADL

Of all these different perspectives, WADL is targeted at the interface control of a service. I'm still thinking about whether or not I like it. I have had a couple of half-hearted stabs at seeing whether I could use it or not. If I were to use it, it would be to generate server-side code.

I have some specific problems with WADL. In particular, I think that it tries to take on too much. I think that the definition of a query part of a URL should be external to the main specification, as should the definition of any content type. These should be standard across the architecture, rather than be bound up solely in the WADL file. I note that content type definitions can be held in external files at least.

I'm still thinking about if and how I would do things differently. I guess I would try to start from the bottom:

  1. Define the interactions of the architecture. Version this file independently, or each interaction's file independently.
  2. Define the content types of the architecture. Version each file independently.
  3. Define the set of url query parts that can be filled out by clients independently of a server-provided form. Version each file independently.
  4. Specify a service Interface Control Document (ICD) that identifies each of the resources provided by the service. It should refer to the various aspects of the uniform interface that the resource implements, including their versions. I wouldn't try to specify request and response sections in the kind of freeform way that WADL currently allows. Version this file independently of other ICDs. A rough sketch of such an ICD follows this list.
  5. Specify a mass-publication format. It should fill a similar role to the ICD, but be more focused on communicating high-level semantics to configuration tools. For example, it might have tags attached to each resource for easy classification and filtering.
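
As a rough sketch of step 4, with every name and version number below invented for illustration, such an ICD might be little more than structured data that points at the independently versioned definitions from steps 1 to 3:

    # Hypothetical service ICD: each resource refers to separately versioned
    # interaction, content type and query part definitions rather than
    # redefining any part of the uniform interface inline.
    service_icd = {
        "service": "hvac",
        "icd_version": "1.2",
        "resources": [
            {
                "path": "/fans/{fan}/speed",
                "description": "Target fan speed; PUT sets a new target",
                "interactions": [{"name": "GET", "version": "1.0"},
                                 {"name": "PUT", "version": "1.0"}],
                "content_types": [{"name": "text/plain", "version": "1.0"}],
                "query_parts": [],
            },
        ],
    }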

Conclusion

I think that discussion in the REST description language area is useful, and could be heading in the right direction. However, I think that as with any content type it needs to be very clearly focused to be successful. We have to be clear as to what we want to do with a description language, and ensure that it isn't used in ways that are anti-architectural. I'm sure we have quite a way to go with this, and that there are benefits in having a good language in play.

Benjamin

Mon, 2007-Jun-11

Lessons of the Web

Many people have tried to come up with a definitive list of lessons from the Web. In this article I present my own list, which is firmly slanted towards the role of the software architect in managing competing demands over a large architecture.

One of the problems software architects face is how to scale their architectures up. I don't mean scaling a server array to handle a large number of simultaneous users. I don't mean scaling a network up to handle terabytes of data in constant motion. I mean creating a network of communicating machines that serves its users' needs at a reasonable price. The World-Wide Web is easy to overlook when scouting around for examples of big architectures that are effective in this way. At first, it hardly seems like a distributed software architecture. It transports pages for human consumption, rather than being a serious machine communication system. However, it is the most successful distributed object system today. I believe it is useful to examine its success and the reasons for that success. Here are my lessons:

You can't upgrade the whole Web

When your architecture reaches a large scale, you will no longer be able to upgrade the whole architecture at once. The number of machines you can upgrade will be dwarfed by the overall population of the architecture. As an architect of a large system it is imperative you have the tools to deal with this problem. These tools are evident in the Web as separate lessons.

Protocols must evolve

The demands on a large architecture are constantly evolving. With that evolution comes a constant cycling of parts, but as we have already said: You can't upgrade the whole Web. New parts must work with old parts, and old parts must work with new. The old Object-Oriented abstractions for dealing with protocol evolution don't stack up at this scale. It isn't sufficient to just keep adding new methods to your base-classes whenever you want to add an address line to your purchase order. A different approach to evolution is required.

Protocols must be decoupled to evolve

A key feature of the Web is that it decouples protocol into three separately-evolving facets. The first facet is identification through the Uniform Resource Identifier/Locator. The second facet is what we might traditionally view as protocol: HTTP. The definition of HTTP is focused on the transfer of data from one place to another through standard interactions. The third facet is the actual data content that is transferred, such as HTML.

Decoupling these facets ensures that it is possible to add new kinds of interactions to the messaging system while leveraging existing identification and content types. Likewise, new content types can be deployed, or existing ones upgraded, without compromising the integrity of software built to engage in existing HTTP interactions.

In a traditional Object-Oriented definition of the protocol these facets are not decoupled. This means that the base-class for the protocol has to keep expanding when new content types are added or entire new base-classes must be added. The configuration management of this kind of protocol as new components are added to the architecture over time is a potential nightmare. In contrast, the Web's approach would mean that the base-class that defines the protocol would include an "Any" slot for data. The actual set of data types can be defined separately.
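
A sketch of that contrast in code, assuming nothing beyond the description above: the Web-style message carries an opaque payload labelled with its content type, so the class that defines the protocol never changes when a new data type enters the architecture.

    class Message:
        """Web-style protocol message with an "Any" slot for data.

        New content types can be deployed without touching this class;
        they are named by the content_type field, not enumerated here."""
        def __init__(self, method, uri, content_type=None, body=b""):
            self.method = method               # interaction facet (slow-moving)
            self.uri = uri                     # identification facet
            self.content_type = content_type   # names the data, doesn't define it
            self.body = body                   # opaque payload: the "Any" slot

    request = Message("PUT", "/fans/1/speed",
                      content_type="text/plain", body=b"1500")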

Object identification must be free to evolve

Object identification evolves on the Web primarily through redirection, allowing services to restructure their object space as needed. It is an important principle that this be allowed to occur occasionally, though obviously it is best to keep it to a minimum.

New object interactions must be able to be added over time

The HTTP protocol allows for new methods to be added, as well as new headers to communicate specific interaction semantics. This can be used to add new ways to transfer data over time. For example, it allows for subscription mechanisms or other special kinds of interactions to be added.

New architecture components can't assume new interactions are supported by all components.

Prefer low-semantic-precision document types over newly-invented document types

I think this is one of the most interesting lessons of the Web. The reason for the success of the Web is that a host of applications can be added to the network, and add value to it, using a single basic content type. HTML is used for every purpose under the sun. If each industry or service on the Web defined its own content types for communicating with its clients we would have a much more fragmented and less valuable World-Wide-Web.

Consider this: If you needed a separate browser application or special browser code to access your banking details and your shopping, or your movie tickets and your city's traffic reports... would you really install all of those applications? Would google really bother to index all of that content?

Contrary to received wisdom, the Web has thrived exactly because of the low semantic precision of its content. Adding special content types would actually work against its success. Would you rather define a machine-to-machine interface with special content types out to a supplier, or just hyperlink to their portal page? With a web browser in hand, a user can often integrate data much more effectively than you can behind the scenes with more structured documents.

On the other hand, machines are not as good as humans at interpreting the kinds of free-form data that appear on the Web. Where humans and machines share a common subset of the information they need, the answer appears to be microformats: use a low-semantic file format, but dress up the high-semantic-value parts so that machines can read them too. In pure machine-to-machine environments XML formats are the obvious way to go.

In either the microformat or XML approaches it is important to attack a very specific and well-understood problem in order to future-proof your special document type.

Ignore parts of content that are not understood

The must-ignore semantics of Web content types allow them to evolve. As new components include special or new information in their documents, old components must know to filter that information out. Likewise, new components must be clear that new information will not always be understood.
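
In code, must-ignore amounts to reading the fields you understand and silently skipping the rest; a minimal sketch with invented field names:

    def read_fan_speed(document):
        """Old component: extracts only the fields it understands.

        A newer producer might add fields such as "rampRate"; they are
        ignored rather than treated as an error."""
        known = {"targetSpeed", "units"}
        return {key: value for key, value in document.items() if key in known}

    newer_document = {"targetSpeed": 1500, "units": "rpm", "rampRate": 50}
    print(read_fan_speed(newer_document))   # {'targetSpeed': 1500, 'units': 'rpm'}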

If it is essential that a particular piece of new information is included and understood in a particular document type, it is time to define a new document type that includes that information. If you find yourself inventing document type after document type to support the evolution of your data model, chances are you are not attacking the right problem in the right way.

Be cautious about the use of namespaces in documents

I take Mark Nottingham's observation about Microsoft, Mozilla, and HTML very seriously:

What I found interesting about HTML extensibility was that namespaces weren’t necessary; Netscape added blink, MSFT added marquee, and so forth.

I’d put forth that having namespaces in HTML from the start would have had the effect of legitimising and institutionalising the differences between different browsers, instead of (eventually) converging on the same solution, as we (mostly) see today, at least at the element/attribute level.

Be careful about how you use namespaces in documents. Consider only using them in the context of a true sub-document with a separately-controlled definition. For example, an atom document that includes some html content should identify the html as such. However, an extension to the atom document schema should not use a separate namespace. Even better: Make this sub-document a real external link and let the architecture's main evolution mechanisms work to keep things decoupled. Content-type definition is deeply community-driven. What we think of as an extension may one day be part of the main specification. Perhaps the worst thing we can do is to try and force in things that shouldn't be part of the main specification. Removing a feature is always hard.

New content types must be able to be added over time

HTTP includes the concept of an "Accept" header, which allows a client to indicate which kinds of document it supports. This is sometimes seen as a way to return different information to different kinds of clients, but should more correctly be seen as an evolution mechanism. It is a way of concurrently supporting clients that only understand a superseded document type and those that understand the current document type. This is an important feature of any architecture which still has an evolving content-type pool.
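
A sketch of the server side of that mechanism, choosing between a superseded document type and its successor based on the client's Accept header (the type names are invented, and q-values and wildcards are ignored for brevity):

    def select_representation(accept_header, representations):
        """Return the first type the client lists that we can produce.

        representations maps content type -> render function; keeping the
        superseded type available alongside the current one is what lets
        old and new clients coexist while the content-type pool evolves."""
        for offered in (part.strip().split(";")[0]
                        for part in accept_header.split(",")):
            if offered in representations:
                return offered, representations[offered]()
        raise ValueError("no acceptable representation")

    representations = {
        "application/vnd.example.report+xml": lambda: b"<report version='2'/>",
        "text/plain": lambda: b"report, version 1",
    }
    print(select_representation("text/plain, */*", representations))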

Keep It Simple

This is the common-sense end of my list. Keep it simple. What you are trying to do is produce the simplest evolving uniform messaging system you possibly can. Each architecture and sub-architecture can probably support half a dozen content types and fewer interactions through its information transport protocol. You aren't setting out to create thousands of classes interacting in crinkly, neat, orderly patterns. You are trying to keep the fundamental communication patterns in the architecture working.

Conclusion

The Web is already an always-on architecture. I suspect that always-on architectures will increasingly become the norm for architects out there. There will simply come a point where your system is connected to six or seven other systems out there that you have to keep working with. The architecture is no longer completely in your hands. It is the property of the departments of your organisation, partner organisations, and even competitors. You need to understand the role you play in this universal architecture.

The Web is already encroaching. Give it ten more years. "Distributed Software Architecture" and "Web Architecture" will soon be synonyms. Just keep your head through the changes and keep breathing. You'll get through. Just keep asking yourself: "What would HTML do?", "What would HTTP do?".

Benjamin