Thanks to Robert Collins for your input on my previous blog entry about using the method name in an HTTP request as effectively a function call name. Robert, I would have contacted you directly to have a chat before publicly answering your comments, but I'm afraid I wasn't able to rustle up an email address I was confident you were currently prepared to accept mail on. Robert is right that using arbitrary methods won't help you get to where you want to go on the Internet. Expecting clients to learn more than a vocabulary of "GET" is already asking a lot, so as soon as you move past the POST functions available in web forms you are pretty much writing a custom client to consume your web service. The approach is not RESTful, doesn't fit into large-scale web architecture, and doesn't play nicely with firewalls that don't expect these oddball methods.
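For readers who missed the earlier entry, here is a minimal sketch in Python of the approach under discussion. The host, method, and path are all made up; the point is only that the "function name" travels in the request line itself:

    import http.client

    # http.client places no restriction on the method token, so an
    # arbitrary verb can stand in as the function call name.
    conn = http.client.HTTPConnection("intranet.example.com")
    conn.request("CALIBRATE", "/sensors/7", headers={"Accept": "text/plain"})
    response = conn.getresponse()
    print(response.status, response.reason)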
My angle of attack really comes from a controlled environment, such as a corporate intranet or a control system in some kind of industrial infrastructure. The problems of large-scale web architecture and firewalls are easier to manage in this environment, which is why CORBA has seen some level of success in the past and SOAP may fill a gap in the future. I'm not much of a fan of SOAP, but the opportunities of treating a function call as (method, resource, headers, body), or to my mind as (function call, object, meta, parameter data), are intriguing to me. Of particular interest is how to deal with backwards- and forwards-compatibility of services through a unified name and method space, and the ability to transmit parameter data and return "return" data in various representations depending on the needs and age of the client software.
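A rough sketch of that mapping, with every name hypothetical: the method selects the function, the URI selects the object, the headers travel as metadata, and the body carries the parameter data.

    # All names here are illustrative; this is the shape of the idea,
    # not a real service.
    class Valve:
        def open(self, body, meta):
            # meta (the HTTP headers) can drive the choice of
            # representation, which is where version and content
            # negotiation could hang.
            return "202 Accepted"

    objects = {"/plant/valve/1": Valve()}  # resource -> object

    def dispatch(method, resource, headers, body):
        obj = objects[resource]              # resource names the object
        func = getattr(obj, method.lower())  # method names the function
        return func(body, meta=headers)      # body is the parameter data

    print(dispatch("OPEN", "/plant/valve/1",
                   {"Content-Type": "text/plain"}, b"half"))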
I'm also interested in whether the REST approach (or variants of it) can be scaled down to less-than-Internet scale, and indeed less-than-distributed scale. I'm curious as to what can happen when you push the traditional boundaries between these domains around a little. I think it's clear that the traditional object model doesn't work at Internet scale, so to my mind if we are to have a unified model it will have to come back down from that scale and meet up with the rest of us somewhere in the middle. I think the corporate scale is probably where that meeting first has to take place.
My suggestion is therefore that at the corporate scale a mix of RESTful and non-RESTful services could coexist more freely if they could use HTTP directly as their RPC mechanism. Just a step to the left is actual REST, so it is possible to use it wherever it works. A step to the right is traditional Object-Orientation, and maybe that helps develop some forms of quick and dirty software. More importantly from my viewpoint, it might force the two world views to acknowledge each other, in particular the strengths and weaknesses of both. I like the idea that on both sides of the fence clients and servers would be fully engaged with HTTP headers and content types.
I'm somewhat reluctant to use a two-method approach (GET and POST only). I don't like POST. As a non-cacheable "do something" method I think it too often turns into a tunnelling activity rather than a genuine use of the HTTP protocol. When POST contains SOAP the tunnelling effect is clear. Other protocols have both preceded and followed SOAP's lead by allowing a single URI to do different things when posted to, based on a short string in the payload. I am moderately comfortable with POST as a DOIT method when the same URI always does the same thing. This is effectively what Python does when it makes an object callable: it consistently represents a single behaviour. When POST becomes a tunnelling activity, however, I'm less comfortable.
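For readers unfamiliar with the Python feature I'm alluding to, a tiny illustration (the class name is my own invention):

    class Reboot:
        # Making the object callable gives it exactly one behaviour
        # when invoked, just as a well-behaved POST-only resource
        # should do the same thing every time it is posted to.
        def __call__(self):
            return "rebooting"

    doit = Reboot()
    print(doit())  # the call is the whole interface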
Robert, you mention the role of firewalls in preventing unknown methods from passing through them. To be honest, I'm not sure this is a bad thing. In fact, I think that hiding the function name in the payload is counter-productive, as the next thing you'll see is firewalls that understand SOAP and still refuse to let unknown function names pass through. You might as well be up-front about these things and let IT policy be dictated by what functionality is required. I don't think that actively trying to bypass firewalling capabilities should be the primary force in how a protocol develops, although I understand that in some environments it can have pretty earth-shattering effects.
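To make the contrast concrete, consider two hypothetical requests for the same operation. With the method-as-function-name approach the firewall can make its decision from the request line alone:

    CALIBRATE /sensors/7 HTTP/1.1
    Host: intranet.example.com

Tunnelled through SOAP (abbreviated here; a real envelope carries namespaces and more), the same intent is buried in the XML payload, so the firewall must parse the body to apply the same policy:

    POST /soap/endpoint HTTP/1.1
    Host: intranet.example.com
    Content-Type: text/xml

    <Envelope><Body><Calibrate sensor="7"/></Body></Envelope>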
In the longer term my own projects should end up with very few remaining examples of non-standard methods. As I mentioned in the earlier post, I would only expect to use this approach where I'm sending requests to a gateway onto an unfixably non-RESTful protocol. REST is the future as far as I am concerned, and I will be actively working towards that future. This is a stepping-off point, and I think a potentially valuable one. The old protocols that HTTP may replace couldn't be interrogated by firewalls, couldn't be diverted by proxies, and couldn't support generic caching.
Thanks also to Peter Hardy for your kind words. I'll be interested to hear your thoughts on publish/subscribe. Anyone who can come up with an effective means of starting with a URI you want to subscribe to and ending up with bytes dribbling down a TCP/IP connection will get my attention, especially if they can do it without opening a listening socket.
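One conceivable shape for such a thing, sketched with entirely hypothetical names: the subscriber GETs the URI and the server simply never finishes the response, writing one line per event, so notifications dribble down the client-initiated TCP connection.

    import http.client

    # Hypothetical feed URI; assumes a server that holds the response
    # open and emits one line per event as it occurs.
    conn = http.client.HTTPConnection("events.example.com")
    conn.request("GET", "/temperature/feed")
    response = conn.getresponse()

    # Events arrive over the connection the client itself opened, so
    # the subscriber never has to accept an inbound connection.
    for line in response:
        print(line.decode().rstrip())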
Benjamin