Sound advice - blog

Tales from the homeworld


Mon, 2006-May-29

Moving Towards REST - a case study

I mentioned in my last entry that I developed my own object-oriented IPC system some years ago, and have been paying my penance since. The system had some important properties for my industry, and a great deal of code was developed around it. It isn't something you can just switch off, and isn't something you can easily replace. So how is that going? What am I doing with it?

I am lucky in my system to be working with an HMI that is quite decoupled from the server-side processes. The HMI is defined in terms of a set of paths that refer to back-end data, and that data is delivered to the HMI and updated as it changes. To service this HMI I developed two main interfaces: a query/subscribe interface and a command interface. Both work based on the path structure, so in a way I was already half-way to REST when I started to understand the importance of the approach. Now, I can't just introduce HTTP as a means of getting data around. HTTP is something the organisation has not yet had a lot of experience with, and concerns over how it will perform are a major factor. The main concern, though, is integration with our communications management system. This system informs clients of who they should be communicating with, and when. It tells them which of their redundant networks to use, and it tells them how long to keep trying.

A factor we consider very carefully in the communications architecture of our systems is how they will behave under stress. We need clients to stop communicating with dead equipment within a short period of time, yet we also expect a horrendously loaded system to continue performing basic functions. If you have been following the theoretical discussions I have had on this blog over the last few years, you'll understand that these requirements are in conflict. If A sends a message to B, and B has not responded within time t, is B dead or just loaded? Should A fail over to B's backup, or should it keep waiting?

We solve this problem by periodically testing the state of B via a management port. If the management port fails to respond, failover is initiated. If the port continues to operate, A keeps waiting. We make sure that throughout the network no more pings are sent than are absolutely required, and we ensure that the management port always responds quickly irrespective of loading. Overall this leads to a simple picture, at least until you try to extend your service guarantees to some other system.
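
As a toy rendering of that decision logic (the function names, port, and timeout here are illustrative; the real mechanism is ours and not shown):

    import socket

    MGMT_TIMEOUT = 2.0   # the management port must answer within this, even under load

    def management_alive(host, mgmt_port):
        """Probe B's management port, which is designed to respond quickly
        irrespective of how loaded the main service is."""
        try:
            with socket.create_connection((host, mgmt_port), timeout=MGMT_TIMEOUT):
                return True
        except OSError:
            return False

    def decide(reply_received, host, mgmt_port):
        """When a request to B has gone unanswered past its expected time,
        choose between waiting longer and failing over to B's backup."""
        if reply_received:
            return "done"
        if management_alive(host, mgmt_port):
            return "keep waiting"    # B is loaded, not dead
        return "fail over"           # B, or the path to it, is dead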

So, for starters, they don't understand your protocols. If they did understand them (say you offered an HTTP interface), you would also have to add support for accessing your management interfaces. Their HTTP libraries probably won't support that. So you pretty much have to live with request timeouts. Loaded systems could lead to the timeouts expiring and to failovers increasing system load. Oh well.

So the first step is definitely not to jump to HTTP. Step number one is to create a model of HTTP within the type system we have drawn up. We define an interface with a call "request". It accepts "method", "headers", and "body" parameters with identical semantics to those of HTTP. Thus we can achieve the first decoupling benefit of actual HTTP: we can decouple protocol and document type, and begin to define new document types without changing protocol.
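
In sketch form, with Python standing in for our actual type system and the class name being mine rather than anything in our code, the interface amounts to little more than this:

    class Resource:
        """A server-side object reachable at one of the paths described above."""

        def request(self, method, headers, body):
            """Same semantics as an HTTP request: 'method' is GET, PUT, POST and
            so on, 'headers' is a dictionary carrying things like Content-Type,
            and 'body' is the document being transferred, if any. Returns a
            status code, response headers, and a response body."""
            raise NotImplementedError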

I changed requests over our command interface to go over our mock HTTP. This means it will be straightforward in the future to plug actual HTTP into our applications, either directly or as a front-end process for other systems to access. I added an extended interface to the objects that receive commands, so that they can have full access to the underlying mock or actual HTTP request if they so choose. They will be able to handle multiple content types by checking the Content-Type header. Since the change, our objects are not tied to our protocol. Their main concern is document type and content, as well as the request method that indicates what a client wants done with the document. We can change protocol as needed to support our ongoing requirements.
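
Building on the Resource sketch above, a command-receiving object might dispatch on the Content-Type header roughly like this; the setpoint document types and back-end calls are invented purely for illustration:

    def parse_setpoint_xml(body):
        ...   # hypothetical parser for an XML setpoint document

    def apply_setpoint(value):
        ...   # hypothetical call into the back-end

    class SetpointResource(Resource):
        """Receives commands over the mock (or, one day, real) HTTP."""

        def request(self, method, headers, body):
            if method != "PUT":
                return 405, {}, b""                                # method not allowed
            content_type = headers.get("Content-Type", "")
            if content_type == "text/plain":
                value = float(body.decode("ascii"))
            elif content_type == "application/x-setpoint+xml":    # hypothetical type
                value = parse_setpoint_xml(body)
            else:
                return 415, {}, b""                                # unsupported media type
            apply_setpoint(value)
            return 200, {}, b""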

Step two is to decouple name resolution from protocol. We had already done that effectively in our system. Messages are not routed through a central process. Connections are made from point to point. Any routing is done at the IP level only. Easy. So we connect our name system to DNS and other standard name resolution mechanisms. We start providing information through our management system not only about services under our management, but also about services under DNS management only. The intention is that over time the two systems are brought closer and closer together. One day we will have only one domain name system, and we have a little while between then and now to think about how that unified system will relate to our current communications management models.

Alongside these changes we begin bringing in URL capabilities, and mapping our paths onto the URL space. We look up the authority through our management system, and pass the path on to whomever we are directed to connect to. Great! We can even put DNS names in, which is especially useful when we want to direct a client to speak to localhost. Localhost does not require a management system, which is what makes IPC simpler than a distributed comms system. There is no hardware to fail that doesn't affect us both. We can direct our clients to look at a service called "foo.bar", or use the same configuration file to direct our client to "localhost:1234". The extended support comes for free on the client side.
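
Client-side, the lookup goes roughly like this; the scheme name, default port, and management-system calls are stand-ins for our real ones:

    from urllib.parse import urlsplit
    import socket

    DEFAULT_PORT = 8080   # hypothetical default port for our mock-HTTP scheme

    def comms_management_knows(name):
        ...   # hypothetical: is this name under our communications management?

    def comms_management_resolve(name):
        ...   # hypothetical: (host, port) of the currently active instance

    def resolve_authority(authority):
        """Turn the authority part of a URL into a host/port pair. Names the
        communications management system knows about are resolved there (it
        picks the live instance on the preferred network); anything else,
        including localhost, falls through to plain DNS."""
        host, _, port = authority.partition(":")
        if comms_management_knows(host):
            return comms_management_resolve(host)
        return host, int(port) if port else DEFAULT_PORT

    def open_resource(url):
        """e.g. open_resource("mock-http://foo.bar/plant/pump1/state") or, from
        the same configuration, open_resource("mock-http://localhost:1234/plant/pump1/state")."""
        parts = urlsplit(url)
        sock = socket.create_connection(resolve_authority(parts.netloc))
        return sock, parts.path    # the path travels to the server verbatim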

As the cumulative benefits of working within a RESTful model start to pile up, we are moving the functionality of other interfaces onto the command interface. As more functionality is exposed, more processes can get at that functionality easily without having to write extra code to do so. That is lesscode at its finest. Instead of building complex type-strict libraries for dealing with network communications, we just agree on something simple everywhere. We don't need to define a lot of types and interfaces. We just need the one. Based on an architectural decision, we have been able to get more cross-domain functionality for less work.

So, what is next? I am not a believer in making architectural changes for the sake of making them. I do not think that polishing the bed knobs is a valuable way for a software developer to spend his or her time. We must deliver functionality, and where the price of doing things right is the same or cheaper than the price of doing things easily, we take the opportunity to make things better. We take the opportunity to make the next piece of work cheaper and easier too. Over time I hope to move more and more functionality to the command interface. I hope to add that HTTP front-end, and perhaps integrate it into the core in the near to medium term. I especially hope to provide simple mechanisms for our system to communicate with other systems using an architecture based on document transfer. Subscription will be in there somewhere, too.

The challenge going forward will be striking the balance between maintaining our service obligations and making things simple and standard to work with. The obligations offered in my industry are quite different to those offered on the web, so hard decisions will need to be made. Overall, the change from proprietary to open and from object-oriented to RESTful will make those challenges worth overcoming.

Benjamin

Sun, 2006-May-28

Communication on the Local Scale (DBUS)

There is a divide in today's computing world between the small scale and the large scale. The technologies of the internet and the desktop are different. Perhaps the problem domains themselves are different, but I don't think so. I think that the desktop has failed to learn the lessons of the web. SOAP is an example of that desktop mindset trying to overcome and overtake the web. One example of the desktop mindset overcoming and overtaking the open source desktop is the emerging technology choice of DBUS.

DBUS is an IPC technology. Its function is to allow processes on the same machine to communicate. Its approach is to expose object-oriented interfaces through a centralised daemon process that performs message routing. The approach is modelled after some preexisting IPC mechanisms, and like those it is modelled after, it gets things wrong on several fronts:

  1. DBUS does not separate its document format from its protocol
  2. DBUS pushes the object-oriented model into the interprocess-compatibility space
  3. DBUS does not have a mapping onto the URL space
  4. DBUS does not separate name resolution from routing

From a RESTful perspective, DBUS is a potential disaster. I know it was (initially at least) targeted at a pretty small problem domain shared by KDE and GNOME applications, but the reason I feel strongly about this is that I have gone down this road myself before. I'm concerned that DBUS will come to be considered a kind of standard interprocess communications system, and that it will lock open source into an inappropriate technology choice for the next five or ten years. I'll get to my experiences further down the page. In the meantime, let's take those criticisms on one by one. To someone from the object-oriented world the approach appears to be pretty near optimal, so why does a REST practitioner see things so differently?

Decoupling of Document Format from Protocol

Protocols come and go. Documents come and go. When you tie the two together, you harm the lifecycle of both. DBUS defines a document format around method calls to remote objects. There have been protocols in the past that handled this, and there will be in the future. There are probably reasons that DBUS chose to produce its own protocol for function parameter transmission. Maybe they were even good ones. The important thing for the long-term durability of DBUS is that there should be some consideration for future formats and how they should be communicated.

Objects for interprocess communication

The objects in an object-oriented program work for the same reason that the tables within an SQL database work: they make up a consistent whole. They do so at a single point in time, and with a single requirements baseline. When requirements change, the object system changes in step. New classes are created. Old classes are retooled or retired. The meaning of a particular type within the object system is unambiguous. It has to be neither forwards nor backwards compatible. It must simply be compatible with the rest of the objects in the program.

Cracks start to emerge when objects are used to build a program from parts. Previous technologies such as Windows DLLs and COM demonstrate that it is hard to use the object and type abstraction for compatibility. A new version of a COM object can add new capabilities, but must still support the operations that the previous version supported. It indicates this compatibility by actually defining two types. The old type remains for backwards compatibility, and a new one inherits from the old. A sequence of functionality advances results in a sequence of types, each inheriting from the previous one.
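
Sketched in Python rather than IDL, and with an invented editor interface, the pattern looks like this:

    from abc import ABC, abstractmethod

    class ITextEditor(ABC):
        """Version 1 of the interface: frozen once published, so existing
        clients keep working."""
        @abstractmethod
        def open(self, path): ...

        @abstractmethod
        def save(self): ...

    class ITextEditor2(ITextEditor):
        """Version 2 inherits everything from version 1 and adds a capability."""
        @abstractmethod
        def save_as(self, path): ...

    # Each advance adds another type to the chain; a client written against
    # ITextEditor still works when handed an ITextEditor2 implementation.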

This in and of itself is not a bad thing. Different object systems are likely to intersect with only one of the interface versions at a time. The problem perhaps lies deeper within object-orientation itself. Objects are fundamentally an abstraction away from data structures. Instead of dealing with complex data structures everywhere in your program, you define all of the things you would like to do with the data structure and call it a type. The type and data structure can vary independently, thus decoupling different parts of the application from each other. The trouble is that when we talk about an object-oriented type, we must conceive of every possible use of our data. We must anticipate all of the things people may want to do with it.

Within a single program we can anticipate it. Between programs with the same requirements baseline, we can anticipate it. Between different programs from different organisations and conflicting or divergent interests, the ability to anticipate all possible uses becomes a god-like requirement. Instead, we must provide data that is retoolable. Object-orientation is built around the wrong design principle for this environment. The proven model of today is that of the web server and the web browser. Data should be transmitted in a structure that is canonical and easy to work with. If it is parsable, then it can be parsed into the data structures of the remote side of your socket connection and reused for any appropriate purpose.

Mapping to URL

DBUS addresses are made up of many moving parts. A client has to individually supply a bus name (which it seems can usually be omitted), a path to an object, a valid type name that the object implements, and a method to call. These end up coded as individual strings passed one by one to DBUS methods by client code. The actual decomposition of these items is really a matter for the server. The client should be able to refer unambiguously to a single string to get to a single object. The web does this nicely. You still supply a separate method, but identifying a resource is simple. https://soundadvice.id.au/ refers to the root object at my web server. All resources have the same type, so you know that you can issue a GET to this object. The worst thing that can happen is that my resource tells you it doesn't know what you are talking about: that it doesn't support the GET method.

Let's take an example DBUS address: (org.freedesktop.TextEditor, /org/freedesktop/TextEditor, org.freedesktop.TextEditor). We could represent that as a URL something like <dbus://TextEditor.freedesktop.org/org/freedesktop/TextEditor;org.freedesktop.TextEditor>. It's a mouthful, but it is a single mouthful that can be read from configuration and passed through verbatim to the DBUS system. If you only dealt with the org.freedesktop.TextEditor interface, you might be able to shorten that to <dbus://TextEditor.freedesktop.org/org/freedesktop/TextEditor>.
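
A sketch of splitting such a URL back into the pieces DBUS wants; the scheme and layout are the ones proposed above, not anything DBUS itself defines:

    from urllib.parse import urlsplit

    def split_dbus_url(url):
        """Break a dbus:// URL of the form above into (bus name, object path,
        interface). The interface part is optional."""
        parts = urlsplit(url)
        # The authority carries the bus name written backwards, hostname style.
        bus_name = ".".join(reversed(parts.netloc.split(".")))
        # An optional interface name rides after ';' on the path.
        path, _, interface = parts.path.partition(";")
        return bus_name, path, interface or None

    bus, path, iface = split_dbus_url(
        "dbus://TextEditor.freedesktop.org/org/freedesktop/TextEditor;org.freedesktop.TextEditor")
    # bus   == "org.freedesktop.TextEditor"
    # path  == "/org/freedesktop/TextEditor"
    # iface == "org.freedesktop.TextEditor"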

There are still a couple of issues with that URL structure. The first is the redundancy in the path segment. That is obviously a design decision to allow different libraries within an application to register paths independently. A more appropriate mechanism might have been to pass base paths into those libraries for use when registering, but that is really neither here nor there. The other problem is with the authority.

Earlier versions of the URI specification allowed any naming authority to be used in the authority segment of the URL. These days we hold to DNS being something of the one true namespace. As such, we obviously don't want to go to the TextEditor at freedesktop.org. It is more likely that we want to visit something attached to localhost, and something we own. One way to write it that permits free identification of remote DBUS destinations might be: <dbus://TextEditor.freedesktop.org.benc.localhost/org/freedesktop/TextEditor>. That URL identifies that it is a local process of some kind, and one within my own personal domain. What is still missing here is a name resolution mechanism to work with. We could route things through DBUS, but an alternative would be to make direct connections. For that we would need to be able to resolve an IP address and port from the authority, and that leads into the routing vs name resolution issue.

Routing vs Name Resolution

The easiest way to control name resolution is to route messages. Clients send messages to you, and you deliver them as appropriate. This only works, of course, if you speak all of the protocols your clients want to speak. What if a client wanted to replace DBUS with the commodity interface of HTTP? If we decoupled name resolution and routing, clients that know how to resolve the name could speak any appropriate protocol to that name. The DBUS resolution system could be reused, even though we had stopped using the DBUS protocol.

Consider an implementation of getaddrinfo(3) that resolved DBUS names to a list of IP address and port number pairs. There would be no need to route messages through the DBUS daemon. Broadcasts could be explicitly transmitted to a broadcast service, and would need no special identification. They could simply be a standard call repeated to a set of registered listeners.
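
As a toy version of that idea, with an ordinary lookup table standing in for whatever registry a real getaddrinfo(3) backend would consult:

    # Hypothetical registry, populated as services register their names.
    REGISTRY = {
        "org.freedesktop.TextEditor": [("127.0.0.1", 40001)],
        "org.example.Notifications":  [("127.0.0.1", 40010), ("127.0.0.1", 40011)],
    }

    def resolve_dbus_name(name):
        """Resolve a DBUS-style name to (address, port) pairs, the way a
        getaddrinfo(3) plugin might, so callers can connect directly rather
        than routing every message through a daemon."""
        try:
            return REGISTRY[name]
        except KeyError:
            raise OSError("name not known: " + name)

    def broadcast(name, send):
        """A broadcast is just the same call repeated to every registered listener."""
        for address in resolve_dbus_name(name):
            send(address)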

Separating name resolution from routing would permit the name resolution system to survive beyond the lifetime of any individual protocol or document type. Consider DNS. It has seen many protocols come and go. We have started to settle on a few we really like, but DNS has kept us going through the whole set.

Coupling and Decoupling

There are some things that should be coupled, and some that should not. In software we often find the balance only after a great deal of trial and error. The current state of the web indicates that name resolution, document transfer protocol, and document types should all be decoupled from each other. They should be loosely tied back to each other through a uniform resource locator. The web is a proven, successful model; however, experimental technologies like CORBA, DBUS, and SOAP have not yet settled on the same system. They couple name resolution to protocol to document type, and then throw on an inappropriate object-oriented type model that ensures compatibility is not maintainable in the long term or on the large scale. It's not stupid. I made the same mistakes when I was fresh out of university at the height of the object-oriented frenzies of the late '90s.

I developed a system of tightly-coupled name resolution, protocol, and document type for my employer. It was tailored to my industry, so it had, and continues to have, important properties that allow clients to achieve their service obligations across all kinds of process, host, and network failures. What I thought was important in such a system back then was the ability to define new interfaces (new O-O types) easily within that environment.

As the number of interfaces grew, I found myself implementing adaptor after adaptor back into a single interfacing system for the use of our HMI. It had a simpler philosophy. You provide a path. That path selects some data. The data gets put onto the HMI where the path was configured, and it keeps updating as new changes come in.

What I discovered over the years, and I suppose always knew, is that the simple system of paths and universal identifiers was what produced the most value across applications. The more that could be put onto the interfaces that system could access, the easier everything was to maintain and to "mash up". What I started out thinking of as a legacy component of our system written in outdated C code turned out to be a valuable architectural pointer to what had made previous revisions of the system work over several decades of significant back-end change.

It turns out that what you really want to do most of the time is this (a rough sketch follows the list):

  1. Identify a piece of data
  2. GET or SUBSCRIBE to that data
  3. Put it onto the screen, or otherwise process it
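
A minimal sketch of that loop, assuming an imagined client library with get and subscribe calls keyed by path:

    # Identity is configuration, not code: paths name the data each widget shows.
    DISPLAY_POINTS = {
        "pump1_pressure": "plant/pump1/pressure",
        "pump1_status":   "plant/pump1/status",
    }

    def show(widget, document):
        ...   # put the document onto the screen, or otherwise process it (stub)

    def run_display(client):
        """client.get and client.subscribe are hypothetical; any uniform pair of
        'fetch' and 'keep me updated' operations would do."""
        for widget, path in DISPLAY_POINTS.items():
            show(widget, client.get(path))                              # GET the data
            client.subscribe(path, lambda doc, w=widget: show(w, doc))  # follow changes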

The identity of different pieces of data is part of your configuration. You don't want to change code every time you identify a new piece or a new type of data. You don't want to define too many ways of getting at or subscribing to data. You may still want to provide a number of ways to operate on data, and you can do that with various methods and document types that communicate different intents for transforming server-side state.

What the web has shown and continues to show is that the number one thing you want to do in any complex system is get at data and keep getting at changes to that data. You only occasionally want to change the state of some other distributed object, and when you do you are prepared to pay a slightly higher cost to achieve that transformation.

Benjamin