Sound advice - blog

Tales from the homeworld


Sat, 2006-Jul-22

Defining Object-Orientation (and REST)

Andrae muses about what object-oriented programming is, and comes to a language theorist's conclusion:

Object Oriented Programming is any programming based on a combination of subtype polymorphism and open recursion

I'll take a RESTafarian stab at it:

Object-Oriented Programming divides application state into objects. Each object understands a set of functions and corresponding parameter lists (its interface type) that can be used to access and manipulate the subset of application state it selects.

Objects with similar functions can be accessed without knowing the precise object type, either through knowledge of an inherited interface type or by direct sampling of the set of functions the object understands.
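Both halves of that definition can be sketched in a few lines of code. Here is a hypothetical Python illustration (the class and function names are mine, not Andrae's): objects are accessed through an inherited interface type, or by directly sampling the set of functions an object understands.

```python
# Access via an inherited interface type: the precise types are unknown
# to the client code, which relies only on the shared "Shape" interface.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

def total_area(shapes):
    # Subtype polymorphism: works on any mix of Shape subtypes.
    return sum(shape.area() for shape in shapes)

def describe(obj):
    # "Direct sampling": ask the object whether it understands a function,
    # rather than relying on a declared interface type.
    if hasattr(obj, "area"):
        return "has area %s" % obj.area()
    return "no area"
```

The first style depends on a shared type declaration; the second works on any object that happens to understand the right functions.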

Here is my corresponding definition of REST programming:

REST Programming divides application state into resources. Each resource understands a set of representations of the state it selects, and a standard set of methods that can be used to access and manipulate its state using those representations. The representation types themselves are selected from a constrained set.

All resources have similar functions, and in pure REST all have similar representation types (content types). This means that all resources can be accessed without knowing the precise resource type.
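The uniform interface can be sketched the same way. In this hypothetical Python sketch (all names are mine, for illustration), every resource shares the one interface type and the one standard set of methods, so generic client code can move state between resources without knowing their precise types:

```python
# The one interface type shared by all resources: a standard set of
# methods operating on representations of state.
class Resource:
    def get(self):
        raise NotImplementedError
    def put(self, representation):
        raise NotImplementedError

class Counter(Resource):
    def __init__(self):
        self._value = 0
    def get(self):
        return str(self._value)          # a plain-text representation
    def put(self, representation):
        self._value = int(representation)

class Note(Resource):
    def __init__(self):
        self._text = ""
    def get(self):
        return self._text
    def put(self, representation):
        self._text = representation

def copy_state(source, target):
    # Generic client code: transfers state between any two resources
    # through the uniform interface alone.
    target.put(source.get())
```

The client-side `copy_state` function never needs to be rewritten for a new resource type; that is the point of the uniform interface.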

Object-Orientation defines different types for different objects, and must then consider mechanisms such as introspection to discover type information and interact with an unknown object. REST defines one type for all objects. That one type is used regardless of application, industry sector, or other differentiating factor. The goal of the uniform interface is that no client-side code needs to be written to support any particular new application or application type. New applications are accessed through the existing uniform interface using existing tools.

In this pure model, a single browser program can access banking sites or sports results. It can access search engines, or browse message group archives. The principle is that information is pushed around in forms that everyone understands. No new methods are required to access or manipulate the information. No new content types are introduced to deal with new data.


Sun, 2006-Jul-09

URI Reference Templating

I think Mark has got it partly wrong. I think that his Link-Template header needs to be collapsed back into the HTTP Link header, and I think that his URI templating should be collapsed back into the URI Reference. Let me explain:

It is rarely OK for a client to infer the existence of one resource from the fact that another exists. A client should only look up resources it is configured with, resources it has seen hyperlinks to, and resources that have come out of some form of server-provided forms or templating facility. At the moment, our templating facilities centre around HTML forms and XForms. Is this in need of some tweaking?

We already have a separation between a URI and a URI reference. A URI is what you send over the wire to your HTTP server. A URI reference tells you how to construct the URI, and what to do with the result returned from the server. Consider the URI reference <>. This tells us to construct and send a request with url <> to the server, and look for the "heading1" tag in the returned document. The exact processing of the "heading1" fragment depends on the kind of data returned from the server, and the kind of application the client is. A browser will show the whole document, but navigate to the identified tag. Another client might snip out the "heading1" tag and its descendents for further processing.
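The split between what goes over the wire and what the client keeps for itself can be shown with Python's standard library. The URI here is invented, since the original examples above are elided:

```python
# A URI reference carries both the URI to send to the server and the
# fragment the client processes locally. urldefrag separates the two.
from urllib.parse import urldefrag

uri, fragment = urldefrag("http://example.com/doc.html#heading1")
# uri      -> "http://example.com/doc.html"   (sent to the server)
# fragment -> "heading1"                      (interpreted by the client)
```

The server never sees "heading1"; what the client does with it depends on the content type returned and the kind of application the client is.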

Mark Nottingham has proposed a reintroduction of and enhancement to HTTP's link header. In his draft, he suggests a templating mechanism for relationships between documents. He proposes a Link header for URI references, and a separate Link-Template header that allows for templating. He defers definition of how the template is expanded out to the definition of particular link types.

Danny is unsure of the use cases for link templating. I'm not sure about templating of the HTTP link header, although I'm sure Mark has some specific cases in mind. I have at least one use case for a broader definition of URI templating, and I am not sure that the link header is the right place to specify it or the link type the place to specify how it is expanded. I wrote to Mark last week commenting on how useful it would be if his hinclude effort could be coupled with a templating mechanism:

Consider a commerce website built around the shopping cart model. If I am a user of that site, I may have urls that only relate to me. I may have my shopping cart at <>. My name would form part of the url for the purpose of context. Urls specific to me are not the only ones that need to return me personalised content, however. Consider a product url such as <>. That page may contain a link to my shopping cart or even include a mini-checkout facility as part of the page, and may include useful customer-specific information for my convenience.

The server could return a static product page to the user. The client side could rewrite the hinclude src attribute before issuing the include request. This rewrite could take into account contextual information such as a username to ensure that the static page ended up being rendered with my own customisations included.

I think that perhaps the right technical place to specify such a mechanism is as part of the URI reference. It would remain illegal to include "{" or "}" characters in a URL; however, a URI reference would allow their inclusion. All substitutions must be completed as part of the transformation from a URI reference to a URI. This process would use context-specific information to be defined by individual specifications, such as HTML or HTTP. HTML would likely have a means of transferring information from javascript or directly from cookies into the templating facility. Other contexts may have other means of providing the templating. If no specification were available for a particular context on how to perform the transformation, the use of curly braces in a URI reference would effectively be illegal.
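A minimal sketch of the expansion step I have in mind, assuming a simple {name} syntax and a context dictionary; neither the syntax nor the context mechanism is part of any current specification, and the URI and username are invented:

```python
# Expand a templated URI reference into a URI. Every substitution must
# complete, drawing values from context-specific information; unexpanded
# curly braces would leave the reference effectively illegal.
import re

def expand(uri_reference, context):
    def substitute(match):
        name = match.group(1)
        if name not in context:
            raise ValueError("no context value for {%s}" % name)
        return context[name]
    return re.sub(r"\{([^{}]+)\}", substitute, uri_reference)

cart = expand("http://example.com/cart/{username}", {"username": "alice"})
# -> "http://example.com/cart/alice"
```

In the shopping cart example above, the context would supply the username from a cookie or from javascript, and the static product page could be rendered with my own cart included.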

As for actually including the curlies in a URI RFC, I understand that might be taking the bull very firmly by the horns. Perhaps the notion of a templated URI reference, to eventually be merged with the general URI reference, would be the right track. One thing I don't think would be ideal is the specification of a separate Link-Template header to deal with these special URI references. There may be technical reasons beyond my current comprehension, but I think it would lead to a long-term maintainability problem with respect to URIs and their references.


Sat, 2006-Jul-08

Platform vs Architecture

One of the challenges of working with older software is obsolescence. It is a challenge that I face in my professional life, and it appears to be something that is affecting the open source world. I write software in C++. Core GNOME developers write their software in C. We would all love to offer our platform in languages programmers commonly work with today. GNOME offers bindings to some of its libraries for languages other than C. This can be useful, but for some technically good reasons developers in Java like "pure Java" code. The same can be true of other languages. Language bindings themselves can be a problem: maintaining an interface to a C library in a way that makes sense in python is a full-time job in itself.

So what do we do with ourselves when the software we write doesn't fit into how people want to use it? What options do we have, and how do we maintain a useful software base as languages and technologies come and go?

To get the game into full swing, I would like to separate the notions of platform and architecture. For the purposes of this entry I'll define platform as the software you link into your process to make your process do what it should. I'll define architecture as the way different processes interact to form a cohesive whole. Within a process you need the platform to integrate pretty naturally with the developer's code. Defined protocols can be used between processes to reduce coupling, and reduce the need for direct language bindings. From those base assumptions and definitions, whatever software we can keep out of the process is not going to have to sway with the breeze of how that process is implemented. This extricated software is a part of the architecture we can keep, no matter what platform we use to implement it.

The closest link I can find for the moment is this one, but discussion has cropped up from time to time over the last few years. It centres on whether Gnome is a software platform, or simply a set of specifications. Are the parts of Gnome going to be reimplemented in various languages, each with their own bugs and quirks? Will that be good for the platform or bad? Should these implementations be considered competing products, or parts of the same product?

The simplest answer for now is to sidestep the question. There are two approaches that allow us to do this, which I would characterise as model-driven or lesscode approaches. The model-driven approach involves taking what is common between the various implementations, and defining it in a platform-neutral way. This can often be done easily by reading configuration files or network input. You define this model once, and provide individual mappings into the different platforms. These mappings may still be expensive to maintain, but it would allow developers to keep working on "common code" when it comes to real applications. A working example of this in the gnome platform is glade. Various implementations or language bindings for "libglade" can be created, or the widget hierarchy model can be transformed into platform-specific code directly.
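A toy sketch of the model-driven idea, with an invented model format rather than glade's actual one: the widget hierarchy is defined once, platform-neutrally, and any number of mappings can transform it into platform-specific code.

```python
# The platform-neutral model: defined once, mapped into each platform.
# This dictionary format is invented for illustration.
model = {
    "type": "window",
    "title": "Example",
    "children": [
        {"type": "button", "label": "OK"},
        {"type": "button", "label": "Cancel"},
    ],
}

def to_pseudo_code(node, indent=0):
    # One of potentially many mappings from the common model to a
    # particular platform; here, lines of imaginary constructor calls.
    pad = "    " * indent
    name = node.get("title") or node.get("label")
    lines = [pad + "create_%s(%r)" % (node["type"], name)]
    for child in node.get("children", []):
        lines.extend(to_pseudo_code(child, indent + 1))
    return lines
```

The expensive part, as noted above, is maintaining each mapping; the saving is that application developers keep working against the one common model.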

Lesscode is an approach where we make architectural decisions that reduce the amount of platform-specific code we need to implement. Instead of trying to map a library that implements a particular feature into your process, split it out into another process. Do it in a way that is easy to interact with without having to write a lot of code on your side of the fence. The goal is to write less code overall, include less platform code, and implement more functions while we are at it.

While lesscode is something of an ideal, the tools are already with us. Instead of using an object-oriented interfacing technology, consider using REST. Map every object in the architecture to a URI. Now you only have to implement identifier handling once. Access every object in the architecture using a resource abstraction, such as a pure virtual C++ class or Java Interface. Find these resources through a resource finder abstraction.
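These abstractions might look something like the following Python sketch; all names and URI schemes are assumptions for illustration, and in C++ or Java the base class would be the pure virtual class or Interface mentioned above:

```python
# The one resource abstraction every object in the architecture exposes.
class Resource:
    def get(self):
        raise NotImplementedError

class ResourceFinder:
    """Resolves a URI to a resource. Identifier handling lives here, once,
    rather than once per library or per interfacing technology."""
    def __init__(self):
        self._resources = {}
    def bind(self, uri, resource):
        self._resources[uri] = resource
    def find(self, uri):
        return self._resources[uri]

class Clock(Resource):
    def get(self):
        return "12:00"

finder = ResourceFinder()
finder.bind("app://clock", Clock())
status = finder.find("app://clock").get()   # client sees only Resource
```

Client code interfaces to the one base class and the one finder; which process or protocol sits behind a given URI is the platform's problem, not the client's.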

What this does is put everyone on a common simple playing field. You no longer have to worry about which protocol is spoken at the application-level. Your platform reads the url scheme and maps your requests appropriately. The uniform interface means you only have to interface to one baseclass, not multiple libraries and baseclasses. The platform concept is transformed from an implementation technology into an interfacing technology.

Implementing REST in your system is not sufficient. GNOME is composed of a number of important libraries, not the least of which is gtk+ itself. Perhaps it is time to rearchitect, taking a leaf out of the web browser's book. Perhaps we should have a separate program dealing with the actual user interface. That handling could be based on a model just a little more expressive than that of glade's widget hierarchy. Desired widget content and attributes could be derived from back-end processes written in whatever way is most appropriate at the time. Widget interactions could be transmitted back to back-end processes over a defined protocol. Perhaps Model-View-Controller isn't enough when expressed as three objects. Perhaps what is needed is two or more processes.

If a special interface is developed for speaking to this front-end process, nothing has been gained. It would be equivalent to providing the language bindings of today. What would be required is a general interfacing approach based around REST. The widget hierarchy model would specify where to get information from as URIs, and where to send notifications to as URIs. Alternatively, the model could simply leave its data open for subscription and leave it up to the other side to query and react to. The same RESTful signals and slots implementation could be used for interaction between all processes in the architecture.
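One possible shape for such a widget model, with invented URI schemes and field names; no such protocol is specified anywhere, so this is purely a sketch of the idea:

```python
# A widget model entry naming its data source and notification sink as
# URIs, rather than binding to any particular library's API.
widget_model = {
    "type": "label",
    "source": "app://backend/status",   # where the front-end GETs content
    "notify": "app://backend/clicks",   # where interactions are sent
}

def render(widget, fetch):
    # `fetch` stands in for the architecture's one uniform access
    # function: URI in, representation out.
    return "%s: %s" % (widget["type"], fetch(widget["source"]))

text = render(widget_model, lambda uri: {"app://backend/status": "OK"}[uri])
```

The front-end process needs no knowledge of the back-end's language or platform; it follows URIs through the uniform interface, exactly as a browser does.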

My architectural vision is that each process incorporates a featherweight platform defined around RESTful communications. Which platform is chosen is irrelevant to the architecture. The fact that each platform implementation would be specific to the language or environment most suitable at the time would not be considered a problem. The features the platform implements are simply the essentials of writing all software. Specialty behaviours such as user interaction should be directed through processes that are designed to perform those functions. Linking in libraries to perform those interactions is something only a small number of processes in the system should be doing.

Web browsing is built around exactly this combination of lesscode and model-driven approaches. I think it is a template for the desktop as well.