Sound advice - blog

Tales from the homeworld

My current feeds

Sun, 2008-Dec-14

Reuse of services in a Service-oriented Architecture

I have been reading up a bit on Service-Orientation lately. Notably, I have been reading SOA: Principles of Service Design by Thomas Erl. I have not finished this book as yet, but have found it quite interesting and enlightening. I think I could reuse about 80% of it to produce an excellent primer for REST architecture ;) The objectives in my view are very similar.

Object-oriented and functional decomposition

One aspect that struck me, however, was the suggestion of layering services upon each other to abstract common functionality. This book in particular describes task, entity, and utility layers. These layers describe a "uses" relationship, in that one service or layer cannot operate unless the next layer down is available. A task service that implements a particular business process will not be able to operate unless the entity services on which it depends are available. The task and entity services will not be able to operate unless all of the utility services on which they depend are available. Service authors are encouraged to reuse capabilities present in lower-layered services in order to avoid duplication of effort.
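A minimal sketch of these layers and their "uses" relationships, with all service names invented purely for illustration:

```python
# Hypothetical sketch of Erl-style service layers. Each layer "uses" the one
# below it: the task layer cannot operate without its entity services, and
# the entity layer cannot operate without its utility services.

class NotificationUtility:
    """Utility layer: a generic, business-agnostic capability."""
    def send(self, recipient: str, message: str) -> str:
        return f"sent '{message}' to {recipient}"

class InvoiceEntity:
    """Entity layer: exposes a business entity, reusing utility services."""
    def __init__(self, notifier: NotificationUtility):
        self.notifier = notifier
        self.invoices: dict[str, float] = {}

    def raise_invoice(self, customer: str, amount: float) -> str:
        self.invoices[customer] = amount
        return self.notifier.send(customer, f"invoice for {amount}")

class BillingTask:
    """Task layer: a specific business process built on entity services."""
    def __init__(self, invoices: InvoiceEntity):
        self.invoices = invoices

    def run_monthly_billing(self, accounts: dict[str, float]) -> list[str]:
        return [self.invoices.raise_invoice(c, amt)
                for c, amt in accounts.items()]

# Wiring the layers together: BillingTask uses InvoiceEntity uses
# NotificationUtility. If any lower layer is missing, nothing above it works.
billing = BillingTask(InvoiceEntity(NotificationUtility()))
results = billing.run_monthly_billing({"acme": 100.0})
```

The point of the sketch is the dependency chain itself: the task service is only as available as every entity and utility service beneath it.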

This is a somewhat classical object-oriented approach, and one that is mirrored in the object-oriented view of systems engineering (OOSEM): you talk to your stakeholders and analyse their requirements to derive an ontology. This ontology defines classes and the capabilities of those classes, alongside tasks and other useful entities. These become the components of your system.

However classical this may be in the software world, I understand the more classical model of systems engineering to be "function"-based. This functional model is again derived from conversations with your stakeholders and your analysis of their requirements. However, it follows a common (inputs, outputs, processing) structure for each function. The main logical diagram for this kind of decomposition is a data-flow diagram. Functions are then allocated to system components as the technology expert sees fit, and interfaces are elaborated from the flows of data.
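As a concrete (and entirely made-up) example of that (inputs, outputs, processing) structure, each function below is characterised only by the data it consumes and produces, and the system is just the wiring between them:

```python
# Hypothetical data-flow decomposition: each function is described purely by
# its inputs, outputs, and processing. The "architecture" is the flow of data
# between functions, not a class hierarchy.

def parse_readings(raw: str) -> list[float]:
    # input: raw text; processing: parse; output: numeric readings
    return [float(field) for field in raw.split(",")]

def filter_valid(readings: list[float]) -> list[float]:
    # input: readings; processing: range check; output: valid readings
    return [r for r in readings if 0.0 <= r <= 100.0]

def summarise(readings: list[float]) -> float:
    # input: valid readings; processing: average; output: mean value
    return sum(readings) / len(readings)

# The corresponding data-flow diagram is simply:
#   raw text -> parse_readings -> filter_valid -> summarise -> mean
mean = summarise(filter_valid(parse_readings("10.0,50.0,120.0")))
```

Allocating these functions to components is then a separate decision: all three could live in one service, or each could be a service of its own, with the interfaces falling out of the data flows.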

Perhaps it is true to say that the OOSEM approach focuses on the static information within the system and its capabilities, while the functional model focuses on the pathways of information around the logical system. In this way, perhaps they are complementary styles. However, my instinctive preference is for the primary logical system model to be based on data flow.

I think that the REST style is amenable to this kind of system design. The emphasis of a REST architecture is also on the transfer of information (ie state) around a system. Functions in the logical description of the system should map cleanly to REST services that consume data and in turn produce data for onward consumption.
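One way to picture that mapping, with resource names and the in-memory "web" invented for illustration, is a function from the data-flow model realised as a service that GETs its input state from one resource and PUTs its output state to another:

```python
# Hypothetical sketch: a logical data-flow function realised in REST style.
# The service transfers state (documents) between resources rather than
# invoking remote operations on other components.

# A stand-in for the web: a map from URI to current resource state.
resources: dict[str, dict] = {
    "/sensors/latest": {"readings": [10.0, 50.0]},
}

def get(uri: str) -> dict:
    return resources[uri]

def put(uri: str, document: dict) -> None:
    resources[uri] = document

def summarise_service() -> None:
    # Consume input state, process it, and produce output state for
    # onward consumption by whichever service reads /reports/mean next.
    readings = get("/sensors/latest")["readings"]
    put("/reports/mean", {"mean": sum(readings) / len(readings)})

summarise_service()
```

The service's interface is just the documents it reads and writes, which is what makes the data-flow view and the REST view line up so naturally.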

An ontological decomposition of the system should also be taking place, of course. These mappings of domain entity relationships will shape the set of URLs for a given service, the set of domain-specific document types, and the internal models of individual services.
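For instance, and purely as an assumed illustration, a "customer owns invoice" relationship in the ontology might surface both in a service's URL hierarchy and in its document types:

```python
# Hypothetical illustration of an ontology shaping a service's URL-space
# and document types. All names and the media type are invented.

def invoice_url(customer_id: str, invoice_id: str) -> str:
    # The "customer owns invoice" relationship appears directly in the
    # URL hierarchy of the service.
    return f"/customers/{customer_id}/invoices/{invoice_id}"

# A domain-specific document type shaped by the same ontology: the invoice
# document carries a link back to its owning customer.
invoice_document = {
    "type": "application/vnd.example.invoice+json",  # assumed media type
    "customer": "/customers/42",
    "amount": 100.0,
}

url = invoice_url("42", "7")
```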

Reuse of services and products

I think there is more life in the function-based or modular approach than software engineers might give it credit for. It explicitly discourages dependencies within the architecture. Architects are encouraged to allocate functions cleanly to components, so that cohesive functionality is co-located and more easily decoupled functionality is allocated to different services or components. I think it is reasonable to look at this as the highest-level view of the architecture, or at least a useful view at the same level as other architectural views. A term that might click with SOA architects is "application". I think it is important to be clear about which applications your architecture is offering to its customers, even if these are not directly embodied in specific services.

I think it is also worth talking about a separation between product and solution spaces when it comes to discussing reuse. We obviously do want to reuse as much code and as many components as we can. However, we do not want to do this at the expense of the future evolution of the architecture. Let's assume that part of the system is owned by organisation A, and another by organisation B. The solution maps to the services operated by the two organisations, while products support these services. Different services may be based on the same technology or different technologies, and may have matching or mismatched versions. If there are dependencies between these two groups, we need to be careful about how those dependencies are managed.

I think there is an argument for avoiding this kind of inter-organisation dependency, and for controlling inter-organisation interfaces with reasonable care. Code or component reuse between organisations can be expressed as products commonly used by both, rather than requiring a uses relationship to exist between organisations in the solution.

A product picked up and used as a service within multiple organisations will still need a development path that deals with customer enhancement requests. However, each organisation will at least be able to manage the rate at which it picks up new releases. The organisation will also be able to manage the availability aspects of the service independently of the needs of other organisations using the same product.

I guess this is the fundamental source of my discomfort with the kind of reuse being described. When some people talk about REST scaling, they mean caching and the performance of a given service. I think more in terms of multiple "agencies", ie individuals or organisations who own different parts of a large architecture and deploy their components at times of their choosing, based on their own individual requirements. A REST architecture is centrally governed in terms of available communication patterns and document types, but does not require central control over the services or components themselves, or over their detailed interfaces and URL-spaces. This can be delegated to individual organisations or sets of organisations within the domain.


At the technical level, at any given point in time, an optimum architecture will contain little to no duplication of functionality. However, we have to consider how the architecture will evolve over time, and also the social and organisational context of an architecture. Reuse may not always match the needs of an architecture over time, and reuse of services would seem to exacerbate the coupling problems that reuse already introduces. Individual application and service owners should at least be in a position to control their set of dependencies, and be permitted to develop or use alternatives when a supplier organisation is not keeping up with their needs.

Considerations of reuse should be balanced with the need to minimise dependencies and control interfaces between organisations. Reuse of underlying products should be considered as an alternative to direct reuse of services across current or potential future organisational boundaries.