Sun, 2004-Jul-04

Thin client, Thick client

A humbug post has, kind of by proxy, raised the issue of thin and thick clients for me.

After reading that the project mentioned relies on PHP and JavaScript on a web server for its workings, I began drafting a response asking humbug members which platforms they think are worth developing software for in the modern age.

It's not really something anyone can answer. I'm sure that at present there is no good language, API, or environment for GUI software that is well supported across multiple operating systems and hardware configurations. Instead of replying to humbug general, I thought I'd just vent a little here.

The web server thing bugs me at the moment. You can write a little JavaScript to execute on the client, but you're basically restricted to the capabilities of HTML. Unless you dive into the land of XUL and friends, or SVG and Flash, there is not really any way to create complex, responsive, and engaging interfaces for users to interact with. Worse, it's very difficult to work with concepts the user is already familiar with from their desktop environment:

As soon as you put both a web browser and a web server between your software and your user, it's very difficult to accomplish anything the user can understand. If you save or operate on files, they tend to live on the server side... and what the hell kind of user understands what that is, where it is, and how to back the damn thing up?

You might have guessed that I'm not a fan of thin clients... but that's not exactly true. I think it's fundamentally a problem of layering, and that's where things get complicated and go wrong.

I think you should be able to access your data from anywhere. Preferably this remote interface looks just like your local one. Secondly, you should be able to take your data anywhere and use it in a disconnected fashion without a lot of setup hassle. Thirdly, you need to be able to see where your files are and know how to get them from where they are to where you want them to be.

What starts to develop is not a simple model of client and server, but of data and the ways of getting at it. If we think about it in terms of objects, it's data and the operations you can perform on it. Like a database, the filesystem consists of these funny little objects that have rules about how you can operate on them. POSIX defines a whole bunch of interesting rules for dealing with the content of files, while user-level commands such as cp and mv define high-level operations on the file objects themselves. Other user-level applications such as grep can pull data apart and put it back together in ways the original developer never expected. Just as with databases, it's important that everyone has the same perspective on the content and structure of files, and that ad hoc things can be done with them.
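
To make that concrete, here's a quick Python sketch (the file name notes.txt is invented) of the same file seen at all three levels: byte-level POSIX access, cp/mv-style whole-object operations, and grep-style ad hoc processing. Each level works against the same shared view of the file's content:

    import os
    import shutil

    # Byte-level, POSIX-style access: the file is a stream of bytes
    # governed by rules about how reads, writes, and seeks behave.
    fd = os.open("notes.txt", os.O_RDONLY)
    first_bytes = os.read(fd, 16)
    os.close(fd)

    # Whole-object operations, analogous to cp and mv: the file is
    # a single object to be copied or renamed.
    shutil.copy("notes.txt", "notes-copy.txt")
    os.rename("notes-copy.txt", "notes.old")

    # Ad hoc processing, analogous to grep: a tool the original
    # author never anticipated can still pull the content apart,
    # because it shares the same view of what the file contains.
    with open("notes.txt") as f:
        matching = [line for line in f if "client" in line]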

To me, it is the files more than the client programs that need to be operated on remotely. You need to be able to put a file pretty much anywhere, and operate on it in the ways it defines for itself.

To start on the small scale, I think it's important that you can see all your data files in your standard file browser. This means they shouldn't be hidden away inside database servers like PostgreSQL or MySQL. On the other hand, you should be able to do all the normal filesystem operations on them as well, so SQLite is a little deficient too. You can't safely copy an SQLite file unless you first take a read lock in exactly the appropriate place. Ideally, you should be able to use cp to copy your database and be sure that neither the source database nor the copy is corrupt when you're done.
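
As a sketch of the difference, using Python's sqlite3 module (the file names here are invented): the naive copy is what cp gives you today, while the backup call takes the locks it needs and copies the database page by page, so both the source and the copy stay consistent:

    import shutil
    import sqlite3

    # Naive copy: if another process writes mid-copy, the duplicate
    # can end up a corrupt mixture of old and new pages.
    shutil.copy("app.db", "unsafe-copy.db")

    # Careful copy: the backup call takes the locks it needs and
    # copies the database page by page, so source and destination
    # both stay consistent.
    src = sqlite3.connect("app.db")
    dst = sqlite3.connect("safe-copy.db")
    src.backup(dst)
    dst.close()
    src.close()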

I feel there is a great wilderness here, where we're a step away from having a way of ensuring files are operated on in a consistent manner. It's as if every file has an API of its own and needs hidden code behind it to ensure appropriate action is always taken. Standard APIs, like the ones used to copy files or to issue SQL queries or greps against them, must exist, and there must be some fundamental APIs every file implements... but I keep feeling that we need more. Everyone needs to know how to operate on your file, both locally and remotely. That includes nautilus. That includes your RDF analysis tool.
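
I don't know what that architecture looks like yet, but a toy sketch of the idea might be a registry of per-type handlers that every tool dispatches through, so a copy always respects the rules the file type defines for itself. Everything below is invented for illustration, in Python:

    import os
    import shutil
    import sqlite3

    HANDLERS = {}

    def handler(suffix):
        # Register a per-type handler keyed on file suffix.
        def register(cls):
            HANDLERS[suffix] = cls()
            return cls
        return register

    @handler(".txt")
    class PlainFile:
        def copy(self, src, dst):
            shutil.copy(src, dst)        # byte-for-byte is safe here

    @handler(".db")
    class SqliteFile:
        def copy(self, src, dst):
            conn = sqlite3.connect(src)  # page by page, under locks
            out = sqlite3.connect(dst)
            conn.backup(out)
            out.close()
            conn.close()

    def copy_file(src, dst):
        # Any tool (nautilus, an RDF analyser, a remote client)
        # dispatches through the registry instead of copying bytes
        # blindly.
        HANDLERS[os.path.splitext(src)[1]].copy(src, dst)

With something like this in place, it wouldn't matter whether the caller was a file manager or a command-line tool: the file's own rules travel with its type, not with any particular client.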

To my mind, the fundamental layer we need to put in place is this architecture of file classification and APIs to access the files. Past that, thick and thin clients can abound in whatever ways are appropriate. I think thick clients will still win in the end, but they will resemble thin clients so closely in the clarity of definition of their environment that we won't really be able to tell the difference.