On the internet, services are long-lived and live at stable addresses. In the world of IPC on a single machine, processes come and go. They also often compete for scarce address space, such as the TCP/IP or UDP/IP port ranges. We have to consider a number of special issues at the small scale, but the scale of a single machine and the scale of a small network are not that different. Ideally, a solution would be built to support the latter and thus automatically support the former. Solutions built for a single machine often don't scale up in the way solutions built for the network scale down.
So where is this tipping point between stable internet services and ad hoc IPC services? The main difference seems to be that IPC is typically run as a particular user on a host, in competition or cooperation with other users. Larger-scale systems resemble carefully constructed fortresses against intrusion and dodgy dealings. Smaller-scale systems resemble the family home, where social rather than technical measures protect one individual's services from another. These measures sometimes break down, so technical measures are still important to consider. The problem is that the same sorts of solutions that work on the large fortified scale don't work when paper-thin walls are drawn within the host. Where do you put the firewall when you are arranging protection from a user who shares your keyboard? How do you protect against spoofing when the service you don't trust runs on the same physical machine?
For the moment let's keep things simple: I have a service. You want the service. My service registers with a well-known authority. Your client process queries the authority to find my service, then connects directly. The authority should provide a best-effort mapping from a service name to a specific IP address and port (or to multiple IP address and port pairs if redundancy is desirable). A minimal sketch of this interface follows the list below.
- The authority should allow for dynamic secure updates to track topology changes
- The authority should not seek to guarantee freshness of data (there is always some race condition)
- The service should balance things out by trying to remain stable itself
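To make the model concrete, here is a minimal sketch of what such an authority's interface might look like. The names (NamingAuthority, Endpoint, register, resolve) are hypothetical and chosen for illustration; a real authority would persist its records and verify update keys rather than trusting callers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    host: str   # IP address or hostname
    port: int

class NamingAuthority:
    """Best-effort mapping from a service name to one or more endpoints."""

    def __init__(self):
        self._table = {}   # service name -> list of Endpoint

    def register(self, name, endpoint, key):
        # A real authority would authenticate the update using the key
        # before accepting it; here we simply record the endpoint.
        self._table.setdefault(name, []).append(endpoint)

    def resolve(self, name):
        # Best effort only: records may be stale, so clients must be
        # prepared to retry if a returned endpoint no longer answers.
        return list(self._table.get(name, []))
```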
The local security problems emerge when the client and the service have different lifetimes. Consider the case where a service terminates before the client software that uses it. An attacker on the local machine can attempt to snatch the just-closed port and listen for requests that are still being sent to it. Those requests could then be inspected for secret information or nefariously mishandled. A client that holds stale name resolution data is at risk. Possible solutions:
- Never let a service that has clients terminate
- Use kernel-level mechanisms to reserve ports for specific non-root users
- Authenticate the service separately from name resolution
Despite advances in the notion of secure DNS, it is the last option that is used on the internet for operations requiring any sort of trust relationship. In practice the internet is usually pretty wide open when it comes to trust. Most query operations are not authenticated. Does it matter that I might be getting information from a dodgy source? Probably not, in the context of a small network or a single host's services. The chance that I will make a damaging decision based on data from a source I already know to be insecure is usually low enough that the issue need not be considered further. Where real risks exist, it should be straightforward to provide the information over a secure protocol that provides two-way authentication.
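As a sketch of what such two-way authentication might look like, here is TLS with client certificates using Python's ssl module; the certificate and key file names, hostnames, and port are placeholders:

```python
import socket
import ssl

# Service side: present our own certificate and require one from the client.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain("service.crt", "service.key")    # the service's identity
server_ctx.load_verify_locations("trusted-clients.pem")     # CAs we trust for clients
server_ctx.verify_mode = ssl.CERT_REQUIRED

listener = socket.create_server(("0.0.0.0", 8443))
with server_ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()   # handshake verifies the client's certificate

# Client side: verify the service's certificate and present our own.
client_ctx = ssl.create_default_context(cafile="trusted-services.pem")
client_ctx.load_cert_chain("client.crt", "client.key")
raw = socket.create_connection(("service.example.com", 8443))
with client_ctx.wrap_socket(raw, server_hostname="service.example.com") as tls:
    tls.sendall(b"hello")
```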
So, let us for the moment assume that name resolution for small-network or ad hoc services is not a vector for direct attacks. We still need to consider denial of service. If another user is permitted to diddle the resolution tables while our services are operating normally, they can still make our life difficult. On shared hosting arrangements where we can't rule this sort of thing out, we should still ensure that only our processes can register under our names. For this, we need to provide each of our processes with a key, and the authority must require a matching key for any addition or update to our names.
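One simple way to realise such a key, continuing the hypothetical registry sketch above, is an HMAC secret shared between our processes and the authority: a process signs its registration, and the authority rejects any update for our names whose signature does not verify.

```python
import hashlib
import hmac

def sign_registration(key: bytes, name: str, host: str, port: int) -> str:
    """Signature a process attaches to its registration request."""
    message = f"{name}|{host}|{port}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_registration(key: bytes, name: str, host: str, port: int, signature: str) -> bool:
    """Check performed by the authority before accepting an addition or update."""
    expected = sign_registration(key, name, host, port)
    return hmac.compare_digest(expected, signature)
```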
Applications themselves can take steps to reduce the amount of stale data floating around in the naming authority. When malice is not a problem, services should be able to look up their own name on startup. If they find records indicating they are already registered, they can attempt to listen on the port already assigned. No name update is required.
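A sketch of that startup behaviour, in terms of the hypothetical NamingAuthority and Endpoint types above: the service resolves its own name, tries to bind the port it finds, and only registers a new port if that fails.

```python
import socket

def listen_for_service(authority, name, key, host="127.0.0.1"):
    # Prefer the port we were previously registered under, if any.
    for endpoint in authority.resolve(name):
        try:
            return socket.create_server((host, endpoint.port))
            # Existing record is still accurate; no name update required.
        except OSError:
            continue   # port taken or otherwise unavailable; try the next record

    # No usable record: bind an ephemeral port and register it.
    sock = socket.create_server((host, 0))
    port = sock.getsockname()[1]
    authority.register(name, Endpoint(host, port), key)
    return sock
```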
A dbus-style message router can be shown to solve the secure update and stale data issues well enough for the desktop; however, DNS also fits my criteria. DNS provides the appropriate mapping from service name to IP address and port through SRV records. These records can also be used to manage client access to a machine cluster distributed across any network topology you like. Some clients will have to be upgraded to fit the SRV model. That is somewhat chicken and egg, I am afraid. DNS also supports secure Dynamic DNS Updates to keep track of changing network topologies. This feature is often coupled with DHCP servers, but it is general and standardised. If a DNS server were set up for each domain in which services can register themselves, clients should be able to refer to that DNS server to locate services.
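As a sketch of how this looks with real DNS machinery, here is a TSIG-signed dynamic update publishing an SRV record and a client-side lookup, using the dnspython library (2.x); the zone, key material, hostnames, and server address are placeholders:

```python
import dns.query
import dns.resolver
import dns.tsigkeyring
import dns.update

# Service side: publish an SRV record, roughly
#   _myservice._tcp.example.com. 60 IN SRV 0 0 8080 myhost.example.com.
# signed with a TSIG key so that only holders of the key can change it.
keyring = dns.tsigkeyring.from_text({"myservice-key": "c2VjcmV0IGtleSBtYXRlcmlhbA=="})
update = dns.update.Update("example.com", keyring=keyring)
update.replace("_myservice._tcp", 60, "SRV", "0 0 8080 myhost.example.com.")
dns.query.tcp(update, "192.0.2.53")   # address of the authoritative server

# Client side: resolve the SRV record to find candidate hosts and ports.
answers = dns.resolver.resolve("_myservice._tcp.example.com", "SRV")
for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(record.target.to_text(), record.port)
```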
DNS scales up from a single host to multiple hosts, and to the size of the internet. Using the same underlying technology, it is possible to scale your system up incrementally to meet changing demands. The router-based solution cannot achieve this, and it also ends up coupling name resolution to the message protocol. Ultimately, the router-based solution is neat and tidy within a particular technology sweet spot but doesn't meet the needs of more complex systems. I believe that DNS can meet those needs.
One problem that DNS in and of itself doesn't solve is activation. Activation is the concept of starting a service only when its clients start to use it. DBUS supports this, as do some related technologies. That problem can be solved in a different way, however. Consider a service whose only role is to start other services. It is run in place of any actual service with an activation requirement. When connections start to come in, it can start the real process to accept them. Granted, this means a file-descriptor handover mechanism is likely to be required. That is not a major inhibitor to the solution. Various services of this kind can be developed independently to match specific activation requirements.
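A sketch of such an activator on a Unix-like system: it holds the listening socket, and when the first connection becomes pending it launches the real service (a hypothetical path here), handing over the listening descriptor so no connections are lost.

```python
import os
import select
import socket
import subprocess

REAL_SERVICE = ["/usr/libexec/myservice"]   # hypothetical path to the real binary

# Bind (and register) the port ourselves, then wait for the first client.
listener = socket.create_server(("127.0.0.1", 8080))
select.select([listener], [], [])           # block until a connection is pending

# Hand the listening descriptor to the real service and get out of the way.
# The child learns which descriptor to accept() on via an environment variable.
env = dict(os.environ, LISTEN_FD=str(listener.fileno()))
subprocess.Popen(REAL_SERVICE, env=env, pass_fds=(listener.fileno(),))
listener.close()

# In the real service, the socket can be re-adopted with:
#   listener = socket.socket(fileno=int(os.environ["LISTEN_FD"]))
```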
Ultimately, I think DNS is the right way to find services you want to communicate with, both on the same machine and on the local network. Each application should be configured with one or more names it should register, a DNS server to register with, and keys to permit secure updates. If the process is already registered, it should attempt to open the same port again. If it isn't registered or is unable to open that port, it should open a new one and register it. Clients should usually look up name resolution data before each attempt to connect to the service. They should be aware that their information may occasionally be stale and be prepared to retry periodically until they succeed. Clients and services should also be ready to operate over secure protocols with two-way authentication when sensitive data or operations are being exchanged.
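A client-side sketch of that behaviour, again using dnspython with placeholder names: re-resolve the SRV record before each connection attempt and back off briefly when the data turns out to be stale.

```python
import socket
import time

import dns.resolver

def connect_to_service(srv_name, attempts=5, backoff=1.0):
    """Resolve fresh SRV data before every attempt; retry if it proves stale."""
    for _ in range(attempts):
        try:
            answers = dns.resolver.resolve(srv_name, "SRV")
            for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
                try:
                    host = record.target.to_text().rstrip(".")
                    return socket.create_connection((host, record.port), timeout=5)
                except OSError:
                    continue      # endpoint stale or unreachable; try the next record
        except dns.resolver.NXDOMAIN:
            pass                  # service not registered (yet); fall through and retry
        time.sleep(backoff)       # give the service time to re-register
    raise ConnectionError(f"could not reach {srv_name}")

# Example: conn = connect_to_service("_myservice._tcp.example.com")
```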
Benjamin