Sharing libraries between applications so they can access common functionality is the old black. Talking to an application that embodies that functionality is the new black. It's failed before with ideas like CORBA, and looks to fail again with SOAP, but REST is so simple it almost has to work.
REST can work on the web in well-understood ways, but what about the desktop? I've pondered this before, but I have a few more concrete points this time around.
The http URI scheme describes clearly how to interact with network services given a particular URL. The file scheme is similarly obvious, assuming you simply want to operate on the file you find. HTTP is currently more capable, because you can extend it with CGI or servlets to serve non-file data in GET operations and to accept and respond to other HTTP commands in useful ways. The file scheme is not so extensible.
This is silly, really, because files are actually much more capable than http interfaces to other machines. You can decide locally which application to use to operate on a file. You can perform arbitrary PUT and GET operations with the file URI, but you know you'll always be dealing with static data rather than interacting with a program behind the URI.
How's this for a thought? Use the mime system to tell you which application to run for a particular URI pointing to the local file system. Drop the "net" part of an http URI for this purpose, so http://home/benc/foo.zip becomes our URL. To look it up, we start the program we associate with speaking HTTP for zip files and begin an HTTP conversation with it. Using REST principles, this URI probably tells us something about the zip file, perhaps its content.
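To make that lookup step concrete, here's a rough Python sketch. The mimetypes module is real; the handler table and the zip-httpd program are pure invention, standing in for whatever per-platform registry ends up doing this job:

```python
import mimetypes
import subprocess

# Hypothetical registry mapping mime types to handler programs; a real
# platform would get this from its mime/desktop configuration.
HANDLERS = {
    "application/zip": "/usr/lib/rest-handlers/zip-httpd",
}

def start_handler(path):
    """Guess a local file's mime type and start the program that speaks
    HTTP for that type, handing it the file to serve."""
    mime_type, _ = mimetypes.guess_type(path)
    program = HANDLERS.get(mime_type)
    if program is None:
        raise LookupError("no handler registered for %s" % mime_type)
    # Assume the handler speaks HTTP over its stdin/stdout pipes.
    return subprocess.Popen([program, path],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)

handler = start_handler("/home/benc/foo.zip")
```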
Now comes the interesting part. Let's extend the URI so the HTTP zip program can serve results from within the file. Let http://home/benc/foo.zip/bar.txt refer to the bar.txt inside foo.zip's root directory. We download it via HTTP.
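To show what the zip program might look like from the inside, here's a sketch that serves archive members as an ordinary local HTTP server. The port, and the use of a plain TCP socket at all, are assumptions; how the connection really gets made is one of the tricky bits below:

```python
import zipfile
from http.server import BaseHTTPRequestHandler, HTTPServer

ARCHIVE = "/home/benc/foo.zip"  # the file this handler was started for

class ZipHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        member = self.path.lstrip("/")          # "" for the archive itself
        with zipfile.ZipFile(ARCHIVE) as zf:
            if member == "":
                # GET on the archive: report its content as a name listing.
                body = "\n".join(zf.namelist()).encode()
            elif member in zf.namelist():
                body = zf.read(member)          # GET on a member: its bytes
            else:
                self.send_error(404, "no such member in archive")
                return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Port 8080 is an arbitrary stand-in for the real connection mechanism.
HTTPServer(("localhost", 8080), ZipHandler).serve_forever()
```

With this running, GET /bar.txt plays the role of http://home/benc/foo.zip/bar.txt, and GET / stands in for asking the archive about its content.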
Now everyone who can speak this "local" dialect of HTTP can read zip files. I can read zip files from my web browser or my email client, so long as they all know how to look up the mime type, start the responsible application, and connect to that application. A completely independent server-side application that offers the same interface for arj files can be written without having to share code with the zip implementation, and the two can be plugged in without reference to each other to increase the functionality of programs they've never heard of.
The tricky bits are:

- Deciding on a URI scheme. The use of http may cause unforeseen problems.
- Deciding how to connect to the program. Hopefully looking up the mime rules will be simple and standard for a given platform, but you also need to connect to the app you start. It could be done with pipes (in which case you would need a read pipe and a write pipe rather than the traditional bi-directional network socket), or via some other local IPC mechanism; a sketch of the pipe option follows this list.
- Getting the mime handling right. Maybe the actual mime side of things is more controversial than I'm making out. To be honest, my suggestion is not 100% in line with how web servers themselves would handle the situation. There are probably suggestions to be made and incorporated into the model if it is to be quite as functional as these beasties.
- Implementing it everywhere.
If the functionality provided by this interface proved killer compared to what you could achieve across the board by reusing libraries, I think it would catch on pretty quickly. Just think about the hours wasted building language bindings for C libraries these days. Microsoft want to get rid of the problem by having everyone agree on a common virtual machine and calling convention. Others see a problem with using Microsoft's common virtual machine ;) Perhaps HTTP is so universal that it would see adoption on the desktop, just as it has seen universal adoption on the Internet itself.
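For what it's worth, here's what the pipe option from the second tricky bit might look like from the client side. The handler program is the same invented zip-httpd as before, and I'm assuming it writes one complete HTTP response and then closes its end of the pipe:

```python
import subprocess

# Start the (invented) zip handler with a pipe in each direction.
proc = subprocess.Popen(
    ["/usr/lib/rest-handlers/zip-httpd", "/home/benc/foo.zip"],
    stdin=subprocess.PIPE,    # our "write pipe": the HTTP request goes here
    stdout=subprocess.PIPE,   # our "read pipe": the HTTP response comes back
)

request = b"GET /bar.txt HTTP/1.0\r\n\r\n"
proc.stdin.write(request)
proc.stdin.close()             # end of request
response = proc.stdout.read()  # status line, headers, then bar.txt's bytes
print(response.decode(errors="replace"))
```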
Oh, and don't forget to include HTTP subscription in the implementation ;)
Benjamin