Sound advice - blog

Tales from the homeworld

Tue, 2006-Apr-11

Namespaces and Community-driven Protocol Development

We have heard an anti-namespace buzz on the internet for years, especially regarding namespaces in XML. Namespaces make processing of documents more complicated. If you are working in a modular language you will find yourself inevitably trapped between long names and qnames, having to preserve both. If you use something like XSLT you will find yourself having to be extra careful to select elements from the right namespace, especially in XPath expressions. It isn't possible in XPath to refer to an element that lives in the default namespace of the XSLT document; it must be given an explicit qname.
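
This XPath limitation is easy to demonstrate outside of XSLT as well. Here is a minimal sketch using Python and the lxml library (my choice for illustration, not anything mandated by the XSLT spec); the namespace URI and element names are invented:

    from lxml import etree

    # A namespaced document: <item> lives in the default namespace.
    doc = etree.fromstring(
        '<doc xmlns="http://example.org/ns"><item>hello</item></doc>'
    )

    # Matches nothing: an unprefixed name in XPath 1.0 refers to elements
    # in no namespace, never to the document's default namespace.
    print(doc.xpath('//item'))        # []

    # Works: the namespace must be bound to an explicit prefix (a qname).
    print(doc.xpath('//x:item', namespaces={'x': 'http://example.org/ns'}))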

Another hiccup comes about when working with RDF. It would be easy to produce compact RDF documents if one could conveniently use XML element attributes to convey simple literals. One thing that makes this more difficult is that while XML document elements automatically inherit a default namespace, attributes get the null namespace. RDF uses namespaces extensively, so you will always find yourself filling out duplicate prefixes for attributes in what would otherwise be quite straightforward documents. This makes it difficult both to define a sensible XML format and to make it "RDF-compatible".
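
The asymmetry is easy to see with the same toolkit as above; the vocabulary here is again made up purely for illustration:

    from lxml import etree

    el = etree.fromstring(
        '<name xmlns="http://example.org/vocab" role="author">Ben</name>'
    )

    # The element picks up the default namespace...
    print(el.tag)            # {http://example.org/vocab}name

    # ...but the unprefixed attribute sits in the null namespace, so an
    # RDF-aware consumer cannot tell which vocabulary "role" belongs to.
    print(dict(el.attrib))   # {'role': 'author'}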

A new argument for me against the use of namespaces in some circumstances comes from Mark Nottingham's recent article on protocol extensibility. He argues that the use of namespaces in protocols has a social effect, and that this effect leads to incompatibility in the long term. He combines this discussion with what he paints as the inevitable futility of "must understand" semantics.

Protocol development is fundamentally a social rather than a technical problem. In a protocol situation all parties must agree on a basic message structure, as well as on the meaning of a large enough subset of the terms and features included to get useful things done. A server and client must broadly agree on what HTTP's GET method means, and intermediaries must also have a good idea. In HTML we need to agree that <p> is a paragraph marker rather than a punctuation mark. These decisions can be made top-down, but without the user community's support such decisions will be ignored. Decisions can be made from the bottom up, but at some stage coordinated agreement will be required. Namespaces provide a technical solution to a social problem by allowing multiple definitions of the same term to be differentiated and thus to interoperate. Mark writes:

What I found interesting about HTML extensibility was that namespaces weren't necessary; Netscape added blink, MSFT added marquee, and so forth.

I'd put forth that having namespaces in HTML from the start would have had the effect of legitimising and institutionalising the differences between different browsers, instead of (eventually) converging on the same solution, as we (mostly) see today, at least at the element/attribute level.

HTML does have a scarce resource, in that the space of possible element and attribute names is flat; that requires some level of coordination within the community, if only to avoid conflicts.

Dan Connolly writes obliquely on the same subject. He is also concerned about a universe without namespaces, but his main concern is that protocol development decisions get adequate oversight before deployment. Dan writes:

We particularly encourage [uri-based namespaces] for XML vocabularies... But while making up a URI is pretty straightforward, it's more trouble than not bothering at all. And people usually don't do any more work than they have to.

There is a time and a place for just using short strings, but since short strings are scarce resources shared by the global community, fair and open processes should be used to manage them. Witness TCP/IP ports, HTML element names, Unicode characters, and domain names and trademarks -- different processes, with different escalation and enforcement mechanisms, but all accepted as fair by the global community, more or less, I think.

Both Dan and Mark end up covering the IETF convention of snubbing namespaces but using an "x-" prefix to indicate that a particular protocol term is experimental rather than standard. It is Dan who comes down hardest on this approach, citing the "application/x-www-form-urlencoded" MIME type as a term that became entrenched in working code before it stopped being experimental. It can't be fixed without breaking backwards compatibility, and there doesn't seem to be a good reason to go about fixing it.

Both Mark and Dan have good credentials and are backed up by good sources, so who is right? I think they both are, but at different stages in the protocol development cycle.

So let's say that the centralised committee-based protocol development model is a historical dinosaur. We no longer try to make top-down decisions and produce thousands of pages of unused technical documentation. So how do new terms and new features get adopted into protocols and into document types? It seems that the right way is something like the following process:

1. Individuals or vendors experiment, coining new terms to meet their own immediate needs.
2. Terms that prove useful are copied and gain wider adoption.
3. The community engages to reconcile competing terms and converge on a single agreed definition.

Mark suggests that using namespaces within a protocol may unhelpfully encourage communities to avoid that third step. The constraints of a short-string world force them to interoperate and to engage one another on one level or another, rather than leaving "microsoft-this" and "netscape-that" littered throughout the final HTML document. Using short strings produced a cleaner protocol definition in the end for both HTTP and HTML, and forced compromises onto everyone in the interests of interoperability. If opposing camps are given infinite namespaces to work with they may tend towards divergent competing protocols (e.g. RSS and Atom) rather than coming back to the fold and working for a wider common good (HTML).

Dan criticises Google's rel-nofollow in his article, saying:

Google is sufficiently influential that they form a critical mass for deploying these things all by themselves. While Google enjoys a good reputation these days, and the community isn't complaining much, I don't think what they're doing is fair. Other companies with similarly influential positions used to play this game with HTML element names, and I think the community is decided that it's not fair or even much fun.

I think that Google is probably taking a less community-minded approach than it might have done. Technorati is also criticised, for rel-tag. Both relationship types started with a single company wanting a new feature, and there is foundation for criticism on both fronts. Both incidents appear to have developed in a dictatorial fashion rather than by engaging a community of existing expertise. Technorati's penance was to blossom into the microformats community, a consensus-based approach with reasonable process for ensuring work is not wasted.

HTML classes are a limited community resource, just as HTML tags are. This resource has traditionally been defined within a single web site without wider consideration. Context disambiguated the class names, as only the CSS and JavaScript files associated with a single site would use the definitions from that site. Microformats and the wider semantic HTML world have recently taken up this slack in the HTML specification and are busy defining meanings that can be used across sites. The HTML elements list is not expanding, because that is primarily about document structure. HTML classes are treated differently: they are given semantic importance. Communities like microformats will spend the next five years or so coming up with standard HTML class names, and will do the same with link types. These will be based on existing implementation and implied schema, and will attempt not to splinter themselves into namespaces. Other communities will develop, and may collide with the microformats world. At those times there will be a need for compromise.
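
A sketch of what cross-site class semantics look like in practice, using Python's standard html.parser and hCard's published "vcard" root class; the snippet and its input are invented for illustration:

    from html.parser import HTMLParser

    class MicroformatFinder(HTMLParser):
        """Count elements carrying a community-agreed class name."""

        def __init__(self, wanted_class):
            super().__init__()
            self.wanted = wanted_class
            self.hits = 0

        def handle_starttag(self, tag, attrs):
            classes = (dict(attrs).get('class') or '').split()
            if self.wanted in classes:
                self.hits += 1

    # The same class name means the same thing on any site that adopts it.
    finder = MicroformatFinder('vcard')
    finder.feed('<div class="vcard"><span class="fn">Benjamin</span></div>')
    print(finder.hits)   # 1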

We are headed into a world of increasingly rich semantics on the web, and the right way to do it seems to be without namespaces. Individuals, groups and organisations will continue to develop their own semantics where appropriate. Collisions and mergers will happen in natural ways. The role of standards bodies will be to oversee and shape emerging spheres of influence in as organic a way as possible, and to document the results of pushing them through their paces.

Benjamin

Tue, 2006-Apr-04

Low and High REST

There has been a bit of chatter of late about low and high REST variants. Lesscode blames Nelson Minar's ETech 2005 presentation for the distinction between REST styles. It pretty much amounts to the read-only web versus the read-write web, or possibly the web we know works versus the web as it was meant to work (and may still do so in the future).

The idea is that using GET consistently and correctly can be called "low" REST. It fits the REST model and works pretty well with the way information is produced and consumed on the web of today. Using the other verbs correctly, especially the other formally-defined HTTP verbs, is "high" REST. The meme has been spreading like wildfire, and lesscode has carried some interesting discussion on the concept.
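
For concreteness, here is a minimal sketch of the two styles using Python's requests library; the URIs and payloads are invented:

    import requests

    # "Low" REST: GET for reads, and a single POST endpoint for every write.
    requests.get('http://example.com/orders/42')
    requests.post('http://example.com/orders/42', data={'action': 'cancel'})

    # "High" REST: the wider verb set, with the kind of operation visible
    # to proxies and other intermediaries at the HTTP level.
    requests.put('http://example.com/orders/42',
                 data='<order state="cancelled"/>')
    requests.delete('http://example.com/orders/42')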

Robert Sayre notes that the GET/POST/PUT/DELETE verbs aren't used in any real-world applications. He says that low REST might be standardising what is known to work, but high REST is still an untested model. Ian Bicking calls the emphasis on using verbs other than POST to modify server-side state a cargo cult.

It is useful to look back at Fielding's Dissertation, in which he doesn't talk about any HTTP method except for GET. He assumes the existence of other "standard" methods, but does not go into detail about them.

I think Ian is hitting on an uncomfortable truth, or at least a half-truth. Intermediaries don't much care whether you use POST, DELETE, or PUT to mutate server state. They treat the requests in similar ways. If you were to use WebDAV operations you would probably find the proxies again treating the operations the same way as if you had used POST. Architecturally speaking, it does not matter which method you use to perform the mutation. It only matters that the client, intermediaries, and the server all understand that mutation is occurring.

Even that constraint needs some defence. Resource state can overlap, so mutating a single resource state in a single operation can in fact alter several resources. Neither client nor intermediary is aware of this knock-on effect. The only reason clients really need to know whether mutation is happening is so that machines can determine whether they can safely make a request without their user's permission. Can a link be followed for precaching purposes? Can a request be retried without changing its meaning?
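
Those two questions are exactly the safe and idempotent distinctions HTTP already defines. A small sketch of the decision a generic client or prefetcher might make, written in Python for illustration:

    # Safe methods promise no mutation; idempotent methods can be repeated
    # without changing the result beyond the first application.
    SAFE_METHODS = {'GET', 'HEAD'}
    IDEMPOTENT_METHODS = SAFE_METHODS | {'PUT', 'DELETE'}

    def may_prefetch(method):
        """May a machine issue this request without the user's say-so?"""
        return method.upper() in SAFE_METHODS

    def may_retry(method):
        """May this request be reissued without changing its meaning?"""
        return method.upper() in IDEMPOTENT_METHODS

    assert may_prefetch('GET') and not may_prefetch('POST')
    assert may_retry('DELETE') and not may_retry('POST')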

Personally I am a fan of mapping DELETE to cut, GET to copy, PUT to paste over, and POST to paste after. I know that others like to map the operations to the database CRUD model: POST to create, GET to retrieve, PUT to update, and DELETE to delete. It amounts to the same thing, except that the cut-and-paste view steers us more firmly away from record-based updates and into the world of freeform stuff-to-stuff and this-to-that data flows. Viewing the web as a document transfer system makes other architectures simpler, and makes them possible.
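
For reference, the two mappings written out side by side as a small Python sketch:

    # Clipboard view of the uniform interface...
    CLIPBOARD_VIEW = {'DELETE': 'cut', 'GET': 'copy',
                      'PUT': 'paste over', 'POST': 'paste after'}

    # ...and the database CRUD view of the same four methods.
    CRUD_VIEW = {'POST': 'create', 'GET': 'retrieve',
                 'PUT': 'update', 'DELETE': 'delete'}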

I have mentioned before that I don't think the methods should end there. There are specialty domains, such as subscription over HTTP, that seem to demand a wider set of verbs. Mapping to an object-oriented world can also indicate that more verbs should be used, at least until the underlying objects can be retooled for easier access through HTTP. Robert Sayre points at this too, but I think he is a little off the mark in his thinking. I think that limiting the methods in operation on the internet is a bad thing; however, limiting the methods a particular service demands its clients use is a good thing. Every corner will have its quirks. Every corner will start from a position of many unnecessary SOA-style methods before settling into the way the web really handles things. It is important for the internet to tolerate the variety while encouraging a gradual approach to uniformity.

We should have some kind of awareness of what methods we are using, because it helps us exercise the principle of least power. It helps us decouple client from server by reducing client requests to things like "store this document at this location" or "update that document you have with the one I have". By moving towards less powerful and less specific methods, as well as less powerful and less specific document types, we reduce the specific expectations a client has of its server. Sometimes it is necessary to be specific, and that should be supported. However, it is a useful exercise to see how general a request could be while still fulfilling the same role.

My issue with using POST for everything is that what we often really mean is that we are tunnelling everything through POST. I see it as important that the operations we perform are visible at the HTTP protocol level, so that they can be handled in a uniform way by firewalls, toolkits, and intermediaries of all kinds. Information about what the request is has to be encoded into either the method or the URI itself, or we are just forcing our intermediaries to interrogate another level of abstraction.

You could take this discussion and use it to support making POST a general "mutate" method. If one mutation operation applies to a single URI then it makes sense to use a very general mutation method. In this case we are encoding information about the operation into the URI itself rather than selecting the mutation by the method of our request. Instead of tunnelling a variety of possible operations through POST, it is the URI that carries the information. Since the URI is managed by the server side of the request, that is really the best possible outcome. It is only when multiple methods apply to a single URI that we need to carefully consider methods other than POST and ensure that appropriate methods can be used even if they haven't been standardised. Future-proofing of the URI space may dictate the use of the most appropriate method available. Unfortunately, existing toolkits and standards push POST as the only method available.
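
A sketch of the contrast, again with requests and invented URIs: the operation can be named by the method, or by the URI the server hands out with a general mutation method applied to it.

    import requests

    # Operation selected by method: several methods apply to the one URI.
    requests.put('http://example.com/queue/42/priority', data='high')
    requests.delete('http://example.com/queue/42')

    # Operation selected by URI: one general mutation method (POST), with
    # the server minting a URI per operation it is willing to accept.
    requests.post('http://example.com/queue/42/set-priority', data='high')
    requests.post('http://example.com/queue/42/remove')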

In my view a client or intermediary that doesn't understand a method it is given to work with should always treat it as if it were POST. That is a safe assumption about how much damage the request could do and what to expect of its results. That assumption would allow experimentation with new methods through HTTP without toolkit confusion. I am not a supporter of POST tunnelling, and believe generally that it is the lack of support for unknown methods in specifications and in toolkits that makes tunnelling necessary, and thus successful, on the internet of today.
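
As a sketch, the fallback rule might look like this inside a generic toolkit or intermediary (the method name beyond the standard ones is hypothetical):

    KNOWN_METHODS = {'GET', 'HEAD', 'PUT', 'DELETE', 'POST'}

    def effective_method(method):
        """Treat anything unrecognised as POST: unsafe, non-idempotent,
        uncacheable, so the worst-case handling is always applied."""
        m = method.upper()
        return m if m in KNOWN_METHODS else 'POST'

    assert effective_method('SUBSCRIBE') == 'POST'   # hypothetical extension
    assert effective_method('GET') == 'GET'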

Benjamin

Mon, 2006-Apr-03

Linux Australia Update Episode #16

As many of you will know, I participated in a telephone interview some weeks back for James Purser's Linux Australia podcast. With his permission, here is a transcript of that interview. Any errors or inaccuracies are my own. If you would like to refer to the original interview please see James' podcasts page. Questions asked by James are prefixed with "JP". All other text is me speaking.

The major change that has occurred in the project since this podcast is that Efficient Software has dropped the concept of a central clearing house for open source software development funding. We now encourage individual projects to follow an efficient software model by accepting funds directly from users. See the Efficient Software project page for more details about what Efficient Software is, and where it is going.

JP: Ok, getting right into our interview with Benjamin here he is explaining what Efficient Software is:

Well, Efficient Software is a project idea that I have had for a few years, but it has developed a lot since the AJ Market came online.

JP: Just before we go on, I'm sorry. For those who are listening who don't know what the AJ Market is, can you tell us what AJ has been doing there?

Anthony Towns is a Debian developer. He has opened up a market on his own website where you can submit money to him and it will drive (theoretically) the development he is doing in that week. So it is a motivating factor for him to get some money in his hands, and is also a way for the community to drive him in particular directions as to how he spends his time. He publicised that on his blog and on Planet Linux Australia, so that spurred things along a bit. So this project is similar, a similar mindset. I would like to iron out some of the reasons why users would contribute and look forward a bit as to where we will be in ten years time or twenty years time when closed source software is possibly a less dominant force in the market.

So at the moment it is very little. It is a mailing list. It is an irc channel. We are in a phase of the project where we are just discussing things. We are talking about what this world will look like twenty years out from now.

JP: Ok, well let us in. What is this kind of world view you have got that is behind Efficient Software?

If we imagine a world where open source software has "won the battle", that it's freedom everywhere and everybody can get at the source code and free software is developed. You have to ask the question: Who is it developed by? We have pretty good answers to that at the moment. We have people who are employed by open source friendly companies. We have people who have a day job and they spend their weekends and free time doing software development for free software projects. They have the motivations and the itches to scratch to contribute. But there is a conflict, a fundamental conflict when you have people who are working part time, especially on software development. They have time and money in conflict. They need to earn money, they need to have a day job in order to have the free time to spend on open source. The idea is broadly that we want to be able to fund people as much as we can and we want to fund them as directly as we can potentially from the community itself. When you take out the closed software world a big segment of the actual day job marketplace disappears for software developers.

JP: Yeah. That would be say 80-90% of the employers of software developers, wouldn't it?

Yeah. So we can look forward and see this potential conflict approaching where open source adoption slows down because nobody is willing to give up their day job. They are afraid of contributing because they may want to keep their jobs. What I'm really looking to is how to solve that conflict between the time you want to be able to spend on open source and the money by aligning those two things. Being able to get a direct funding of the developers.

JP: Cool. So what would you be setting up with Efficient Software? What is the current sort of model you are looking at?

Well, I have a strawman and this is preliminary and this is mostly my thinking. I am eager to take on board comments and consider even radical alternatives to this. What I'm currently laying out is a kind of eBay-style website that essentially becomes a marketplace, a central clearing house for enhancements to open source software. The idea is that customers, or users, however you want to think of them... investors, because investment and contributing are the same thing in open source really. The customers will find particular enhancements they want, and these will be modelled as bugs in a bugzilla database. They will have a definite start and conclusion and a close-out and verification process associated with them. The idea is that the community (that could include individuals and businesses or whatever interests there are that support that particular project) can contribute money to a pool that will be paid to the project once the bug is verified or once that release cycle is complete. So there is a clear motivation there to contribute, hopefully, to the project. You are going to get your bug fixed at a low cost. You can put in a small amount of money and hopefully other people will put in small amounts of money also, and it will build into a big enough sum to make it appetising to go and fix the bug. Then maybe the developer can still pay their bills that week. There is a motivation as well from the developer's side to go for the biggest-paying bugs and to try and meet the market expectations of the software.

JP: Have you considered the other systems that we have currently got for paying open source developers, which is say Redhat's and any of the corporate Linuxes where you pay for support, or Sourceforge's donation system where you can go and actually donate to any of the projects?

I think that is a very interesting sort of case study. If you look at a business that is selling support, one of the interesting things I have found in open source development is that often the support you can get for a fee is not as good as the support you can get for free. It doesn't have the same kind of wide customer base that a genuine open source project has. In an open source world people contribute not only a bit of code, but they will also contribute a bit of time by helping other people out. The reason they do that is that they get a lot of support in response to that. They can put a small amount of investment in and get a great yield off it. Commercial support at the moment is good for making business feel very comfortable about their Linux investments. You can buy a Redhat CD and install it on your machine and you have your support for that particular machine, and if you want ten machines you buy ten support agreements. It is very much the closed source software development model in that costs in developing the CD are returned after the CD is produced. They also have the support mechanisms in there, which is useful and will probably still be an important part of the business, the economy of open source, going forwards.

Sourceforge is another interesting one where they have opened it up for donations, and that has happened fairly recently. Over the last twelve months, I think. Any community member can contribute to a particular project that they like. My fundamental concern with that is that there is no real economic incentive to do so. There are two reasons I can think of economically to contribute to an open source project through that sort of model. One is that you think it is on the ropes and you want to keep it ticking along so that the investments you have made already will continue to bear fruit as more people put contributions into that particular product. Also there is a sort of "feel good" factor. You might like the product and want to reward the developers. In that sort of situation it is very difficult to determine exactly how much you should actually put towards the project. It goes back to recouping costs after the development has taken place, and ideally we would like to be able to pay the developer as they develop the code rather than come along several weeks or months later and say "I like what you have done, here is a thousand bucks". I am interested in trying to find an economic basis for that relationship to exist between the customer and the producers of the software.

JP: As you mentioned before you have blogged a fair bit about Efficient Software. Including a discussion you had at HUMBUG at the last meeting. What has the response been like?

It has been very interesting. So far it has been fairly minimal, but at HUMBUG we had a really good discussion about basically the preliminary scoping of the exercise, of the whole project. We got to talk through the issues of how you get from some other business, some greengrocer or a certain internet search engine, to money feeding through to an open source software developer to pay for their day job. We just started to map out the whole industry and work out where the money is coming from and how we can get as direct and as efficient a flow of money to the developer as we can, one that will reward the developer for meeting real customer expectations. We discussed a lot of other issues as well, and I blogged about them; you can read that on https://planet.linux.org.au/ or my blog at https://soundadvice.id.au/. We have just gone through some preliminary scoping and we are still very much in a discussion phase about Efficient Software. What I put forward is a strawman, and it is not really intended to be the final model. I think there are some pros and cons which we should really work through and compare to other businesses in a much more detailed way.

JP: So if this were a software project you would really say you were in the Alpha stage of development?

Yeah, absolutely. It is all new. It is all fresh. I don't know if it will fly. I think there are reasons to believe it will succeed. I think there are economic reasons to think that open source software will always be more efficient and cost-effective than closed source software. Particularly due to the forking capability of open source there is a very low barrier to entry. If I want to provide a whole new infrastructure for running the project I can just take your source code and I can run with it. If I am more efficient, if I am better at it than you, then I will be the one left standing at the end. Most of the time what happens is that projects collapse back together because there isn't enough of a customer base to really draw from. That includes the time and skills of developers, and of people supporting other people who are using the product. Ultimately the Efficient Software goal is just to extend that so that money is another thing that your community can provide in an open source sort of way. As they have an itch they can provide either their time, their skills, or some portion of money. I think as we move to less technical arenas in open source... open source really started in operating systems and tools and things. It has expanded a long way from that. We are getting to things like Gnome, which is really meant for the average desktop user. The average desktop user doesn't necessarily have the time or the skills to really put towards code development. I think there is an untapped supply of funding for developers who are willing to take on that relationship with that sort of community: a community which is less technical and is more interested in its own way of life and its own day jobs.

JP: Do you see Efficient Software being able to benefit the whole range of projects, from your single man developer to your Apache, Gnome, or Linux kernel?

That's really my picture of where this will go. I think there is a barrier as to where this can penetrate, but the barrier is really about whether the software is mass-market and whether it has mass appeal. Those are the same sorts of barriers as open source hits anyway. I think this will most benefit non-technical or less-technical community bases and will probably have a lesser impact on things like the Linux kernel, where no desktop user will necessarily have a specific bug they want addressed. There may be some flow-through. Say Gnome took on this model and acquired a reasonable funding base from its community, which is a more non-technical community; they might then have a reason to reinvest in the Linux kernel itself, whether that be reinvesting of developer time and resources, or whether that indeed went upstream as money. As we reach out to less technical fields you will see a more money-oriented open source development, and as we move to the more technical areas it will be people who have the time and skills and are pushing those time and skills upstream.

Benjamin