Sound advice - blog

Tales from the homeworld

My current feeds

Sun, 2008-Dec-14

Reuse of services in a Service-oriented Architecture

I have been reading up a bit on Service-Orientation, lately. Notably, I have been reading SOA: Principles of Service Design by Thomas Erl. I have not finished this book as yet, but have found it quite interesting and enlightening. I think I could reuse about 80% of it to produce an excellent primer for REST architecture ;) The objectives in my view are very similar.

Object-oriented and functional decomposition

One aspect that struck me, however, was the suggestion of layering services upon each other to effectively abstract common functionality. This book in particular describes a task, an entity, and a utility layer. These layers describe a "uses" relationship, in that one service or layer cannot operate without the next layer down being available. A task layer service that implements a particular business process will not be able to operate unless the entity services on which it depends are available. The task and entity services will not be able to operate unless all of the utility services on which they depend are available. Service authors are encouraged to reuse capabilities present in lower-layered services in order to avoid duplication of effort.
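As a rough sketch of that layering, consider the following Python fragment. The service names are entirely hypothetical, and this is only an illustration of the "uses" relationship between layers, not an implementation from the book:

    # Hypothetical services illustrating Erl-style layering. Each layer
    # "uses" the one below it: if a lower service is unavailable, every
    # service above it fails too.

    class UtilityService:
        """Utility layer, e.g. a generic notification capability."""
        def notify(self, message: str) -> None:
            print(f"notify: {message}")

    class CustomerService:
        """Entity layer: operations on the Customer business entity."""
        def __init__(self, utility: UtilityService) -> None:
            self.utility = utility
        def update(self, customer_id: str, data: dict) -> None:
            # ... persist the entity, then use the utility layer ...
            self.utility.notify(f"customer {customer_id} updated")

    class ProcessOrderService:
        """Task layer: a business process composed from entity services."""
        def __init__(self, customers: CustomerService) -> None:
            self.customers = customers
        def process(self, customer_id: str, order: dict) -> None:
            # The task cannot complete unless the entity layer (and,
            # transitively, the utility layer) is available.
            self.customers.update(customer_id, {"last_order": order})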

This layering is somewhat of a classical Object-Oriented approach, and one that is mirrored in the object-oriented view of systems engineering (OOSEM): You talk to your stakeholders and analyse their requirements to derive an ontology. This ontology defines classes and the capabilities of those classes, alongside tasks and other useful entities. These become the components of your system.

However classical this may be in the software world, I understand the more classical model of systems engineering to be "function"-based. This functional model is again derived from conversations with your stakeholders and your analysis of their requirements. However, it follows a common (inputs, outputs, processing) structure for each function. The main logical diagram for this kind of decomposition is a data-flow diagram. Functions are then allocated to system components as is seen fit by the technology expert, and interfaces elaborated from the flows of data.

Perhaps it is true to say that the OOSEM approach focuses on the static information within the system and its capabilities, while the functional model focuses on the pathways of information around the logical system. In this way, perhaps they are complementary styles. However, my instinctive preference is for the primary logical system model to be based on data flow.

I think that the REST style is amenable to this kind of system design. The emphasis of a REST architecture is also on the transfer of information (ie state) around a system. Functions in the logical description of the system should map cleanly to REST services that consume data and in their turn produce data for onward consumption.
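As a minimal illustration, a logical function can boil down to a transformation like the following Python sketch, with all names and document shapes hypothetical. In a deployed service the input would arrive via a REST operation and the output would be published for the next function along:

    def sales_to_ledger(sales_record: dict) -> dict:
        """Consume a Sales Record document, produce a Financial Transaction."""
        return {
            "account": "sales",
            "amount": sales_record["total"],
            "reference": sales_record["id"],
        }

    # State in, state out; onward consumption is just another transfer.
    print(sales_to_ledger({"id": "s-42", "total": 99.95}))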

An ontological decomposition of the system should also be taking place, of course. These domain entity relationships will shape the set of URLs for a given service, the set of domain-specific document types, and the internal models of individual services.

Reuse of services and products

I think there is more life in the function-based or modular approach than software engineers might give it credit for. It explicitly doesn't encourage dependencies within the architecture. Architects are encouraged to allocate functions cleanly to components, encouraging cohesive functionality to be co-located and more easily decoupled functionality to be allocated to different services or components. I think it is reasonable to look at this as a highest-level view of the architecture, or at least a useful view at the same level as other architectural views. A term that might click with SOA architects is "application". I think that it is important to be clear what applications your architecture is offering to its customers, even if these are not directly embodied in specific services.

I think it is also worth talking about a separation between product and solution spaces when it comes to discussing reuse. We obviously do want to reuse as much code and as many components as we can. However we do not want to do this at the expense of the future evolution of the architecture. Let's assume that part of the system is owned by organisation A, and another by organisation B. The solution maps to the services operated by the two organisations, while products support these services. Different services may be based on the same technology or different, and may have matching or mismatched versions. If there are dependencies between these two groups we need to manage them with particular care.

I think there is an argument for avoiding these kinds of inter-organisation dependencies, and for controlling inter-organisation interfaces with reasonable care. Code or component reuse between organisations can be expressed as products commonly used by both, rather than requiring a uses relationship to exist between organisations in the solution.

A product picked up and used as a service within multiple organisations will still need a development path that deals with customer enhancement requests. However, each organisation will at least be able to manage the rate at which it picks up new releases. The organisation will also be able to manage the availability aspects of the service independently of the needs of other organisations using the same product.

I guess this is the fundamental source of my discomfort with the kind of reuse being described. When some people talk about REST scaling, they mean caching and performance of a given service. I think more in terms of multiple "agencies", ie individuals or organisations who own different parts of a large architecture and deploy their components at times of their choosing based on their own individual requirements. A REST architecture is centrally governed in terms of available communication patterns and document types, but does not require central control over the services or components themselves or of their detailed interfaces and URL-spaces. This can be delegated to individual organisations or sets of organisations within the domain.

Conclusion

At the technical level at any given point in time, an optimum architecture will contain little to no duplication of functionality. However, we have to consider how the architecture will evolve over time and also consider the social and organisational context of an architecture. Reuse may not always match the needs of an architecture over time, and reuse of services in particular would seem to heighten the barriers associated with achieving reuse. Individual application and service owners should at least be in a position to control their set of dependencies, and be permitted to develop or use alternatives when a supplier organisation is not keeping up with their needs.

Considerations of reuse should be balanced with the need to minimise dependencies and control interfaces between organisations. Reuse of underlying products should be considered as an alternative to direct reuse of services across current or potential future organisational boundaries.

Benjamin

Sun, 2008-Jun-08

4+1 View Scenarios, and the rest

This is the final chapter in a series I have been running on my evolving understanding of 4+1 view architectural descriptions. This time around I am covering scenarios, and other things that might end up in an architectural description. We have already established the set of components in our architecture, and the functional interfaces to external systems. We have drawn links between components, but not really elaborated on where responsibilities lie between a given pairing of linked components.

Scenarios

The scenarios view seems to be the most flexible of the views, and seems to capture "other". I think I am sticking fairly closely to the Architectural Blueprints definition when I use this view primarily to convey behaviours of components and internal interfaces. I have been using collaboration diagrams showing component interactions in arrangements that roughly correlate to the Process View presentation. As with other views I have generally tried to align things to the Logical View classification, and attempted to come up with one or more scenarios for each logical function.

Sales Scenario: Record Sale

Recording of sales is a fairly straightforward transfer of Sales Records from Web Browser to the Sales Manager Component. Financial Transactions pertaining to these records are submitted to the General Ledger for further analysis.

It is possible that we could talk more about the user interface than I have in this diagram. We could talk about how the user navigates to the page that lets them enter the Sales Records. I have deliberately kept things fairly simple here, trying to focus on clarifying roles within the architecture rather than getting too tied up in the detail of the various interfaces involved. Even my method invocations don't look right from a UML perspective. They are really more along the lines of data flows, and I don't expect a literal analogue to exist for any of these method invocations.

Another way to approach the Scenarios View would be to take things back up a notch. Use cases may better capture the end user's intent, whether they be UML-style use cases or more general Systems Engineering-style use cases. I think there is a lot of flexibility in this area, which makes things difficult to nail down. I am sharing my experience of approaching my specific problem domain, and hope that it leads to some useful light-bulbs going off in someone else's head.

Inventory Scenario: Stocktake

The Stocktake scenario is instructive because it tells us that Inventory Manager is directly involved in the stocktake procedure. It is not an idle bystander that accepts a bulk update at the end of the stocktake. The Stocktake GUI starts the process by marking all stock as outstanding. One by one items are ticked off from the list until the remainder must be considered shrinkage. This means that only one stocktake can be going on at a time, and that the work of stocktake might be undone if someone starts the procedure again. On the other hand, it means that a crash in the stocktake GUI won't lose us our current stocktake state. Does this division of responsibility strike the right balance? That's what this design process is intended to make us ask ourselves.
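A small sketch in Python may make the division of responsibility concrete. All names here are hypothetical; the point is that the Inventory Manager, not the GUI, owns the stocktake state:

    # Sketch of the stocktake split described above. The Inventory
    # Manager holds the stocktake state, so a GUI crash does not lose
    # progress; restarting the procedure undoes any stocktake already
    # in progress, and only one can run at a time.

    class InventoryManager:
        def __init__(self, stock_ids):
            self.stock_ids = set(stock_ids)
            self.outstanding = set()

        def start_stocktake(self):
            # Restarting marks all stock as outstanding again.
            self.outstanding = set(self.stock_ids)

        def tick_off(self, stock_id):
            self.outstanding.discard(stock_id)

        def finish_stocktake(self):
            # Whatever was never scanned is treated as shrinkage.
            shrinkage = self.outstanding
            self.stock_ids -= shrinkage
            self.outstanding = set()
            return shrinkage

    manager = InventoryManager(["item-1", "item-2", "item-3"])
    manager.start_stocktake()
    manager.tick_off("item-1")
    manager.tick_off("item-3")
    print(manager.finish_stocktake())   # {'item-2'} is shrinkage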

Shares Scenario: Enter buy or sell

We decomposed the Shares function more than most others, and this gives us an opportunity to see how a diagram with many components might work. Note that I have more arrows now than I have labels against those arrows. I have been using a convention that continuing arrows carry the same data as the last one unless indicated otherwise. In this diagram we have the Buy or Sell Record moving about essentially untouched until it gets to General Ledger Export. Only then is it converted into a Financial Transaction for recording in the General Ledger.

I find it interesting to pinpoint components that have little to do with altering the content that passes through them. From a REST/web perspective we might apply the End to End principle here. Not only should the interim components not modify the data passing through them, they should generally not try to understand the data more than necessary either. If communication is in the form of documents, they should ignore but pass on attributes and elements even if they don't understand them. Take care to clean documents up at system boundaries in order to be able to assign blame, but otherwise try to keep things free and easy. Even the document clean-up should ideally be done by free-standing document firewalls that are easy to reconfigure as the architecture evolves.
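A minimal sketch of such an interim component, assuming hypothetical JSON documents and field names, might look like this. It inspects only the one field it needs and forwards the original document untouched, unknown elements and all:

    import json

    def forward(document_text: str, send):
        document = json.loads(document_text)
        destination = document.get("destination", "default")  # the only field we inspect
        # Forward the original text, not a re-serialisation of the parts
        # we happened to understand; unknown fields survive intact.
        send(destination, document_text)

    forward('{"destination": "ledger", "amount": 10, "future-field": true}',
            lambda dest, doc: print(dest, doc))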

I haven't included all of the images this time around. If you would like to look at the whole (but still limited) set of diagrams I put together, point your StarUML here.

The Rest

Clearly 4+1 is not the whole story. There are levels of detail beyond what I have covered in my examples this time around, including interface control documentation and module design specifications. There is also a great deal of additional detail that may accompany an architectural description to support design decisions made or to provide more depth.

Perhaps the most obvious gap in the 4+1 as applied at this level is a lack of data models for logical entities, for configuration data, and for interfaces. These data models can be extremely useful, and the earlier you have a stab at them the less likely you'll have multiple teams misunderstanding each other.
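As a hedged illustration of what such an early data model might look like, here is a hypothetical Python definition of a single logical entity. Every field here is an assumption; the value is in agreeing names, types, and units before two teams diverge:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SalesRecord:
        record_id: str
        sale_date: date
        total: float        # dollars; agreeing the unit early avoids a classic mismatch
        currency: str = "AUD"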

Other gaps include objectives for the decomposition, rationale for choices made, descriptions of how the architecture would change if current assumptions and constraints were to change, and a host of other detail. Any one of these could lead you down the path of discussing something to the point where you stop and say to yourself "Hang on. That doesn't make sense. What if...". Those moments are half the reason to put an architectural description together, and some of the value of the document can be measured in how many "oops"es are caught as compared to the effort you put into the document.

The other side of the document is obviously to steer the ship and record consensus decisions. Communicating the reasons for the current set of design decisions and the set of alternatives considered can therefore be crucial in the maintenance of the document over the project lifetime.

In the end, everything on the project is linked. From schedule to team structure to components to the list of change requests against those components. If you start modelling you might just end up modelling the whole world. Depending on the scale of the project it might be worth going that far. If you are working on something smaller you should be doing the minimum and assessing the rewards for taking any further steps with the document set.

Conclusion

I come from a fairly heavy-weight process end of the industry. It involves a Systems Engineering approach that has foundations in things like defense projects. The kinds of descriptions I have been talking about may not suit everyone, or perhaps even anyone. However, I hope they will make some of us take a step back and be aware of what we as a whole are building from time to time.

Again I am not an expert, and I hope my lack of expertise is not a hindrance or false signpost to the gentle reader. I am a software guy who has been walking a path lately between traditional software experience and the experience of those around me working on various projects.

I'll now return you to your usual programming of REST Architecture, and everything that goes with it.

Benjamin

Mon, 2008-Jun-02

The 4+1 Development and Deployment Views

This is part four of my amateur coverage of the 4+1 view model. I left my coverage last time with the logical and process views complete. The logical view is a systems engineering treatment of the system's functional architecture. My Process View closed out most of the design aspects of the system. We now have a set of software components and functional interfaces which we may further refine.

The Development View and Deployment (Physical) View close out the main set of views by moving software components out of their final assembled forms and into other contexts. The Development View captures components "in the factory". In other words, it shows build-time relationships between components. The Deployment View captures components being deployed to the operational system. If we were to continue clockwise around the views through Development and Deployment we would naturally reach the Process View again. In other words, components once built and deployed are assembled into processes.

The Development View

The Development View is where I like to travel next in telling my architectural story, after the Process View. We have seen the final assembly of processes; now let's go back to where it all began. If your architecture has a strong distributed object focus this view will be relatively simple, but it does serve an important function.

The Development View is where we make a break from whole Logical Functions, and narrow ourselves down to their core for deployment. Let's follow our example through:

Development Packages

The packages view isn't very interesting. It would perhaps be more so if I had included actual database software or other basic components to get the job done. Those would very likely appear at this level, alongside packages we discovered in the Process View. There are no build-time relationships in this set, only run-time relationships.

Sales (Development View)

The Development View for Sales includes all the components that made up the Process View diagram of the same name. These components are everything that is required to put together the Logical View Sales function.

Something has changed, though. This time around our packages are in the diagram itself. We can see a Sales package, and it clearly doesn't contain all of the software components involved. When we deploy "Sales" it will be this core set of components (only one in this case) that will be deployed. The Sales Manager component will be deployed as part of General Ledger. The Web Browser will be deployed as part of its own package.

Inventory (Development View)

Inventory shows multiple components within a functional core. Inventory contains both its client- and server-side components.

Shares (Development View)

Shares shows that some of the run-time relationships found in the Process View are linked together at build-time. The Portfolio Manager controller component depends on and links together all of the other constituent parts.

Tax Reporting (Development View)

General Ledger (Development View)

Web Browser (Development View)

The remainder of these diagrams are fairly boring. However, they do ask us to consider what libraries or other components we might have missed in our Process View analysis. These will be important contributors to the final set of software components and may significantly influence planning and estimation on a large monolithic project.

The Deployment (Physical) View

The Deployment View is about the final installation of components into the operational system. It is theoretically concerned with the mapping of both software components and processes to physical devices. However, again mine didn't quite turn out that way. I ended up focusing on packaging. Here is my top-level diagram:

Deployment (Physical) Packages

This image shows the physical structure of the final Accounts system. We have a Stock Handling workstation with a Scanning device attached. It contains the thick inventory client.

On the right we see a single Accounts server on which we plan to deploy the entire server workload. A generic client machine carries a Web Browser to render the various HMIs.

We are limited in what we can do with a UML deployment diagram like this. We obviously can't dive too far into the nature of the hardware components in what is really a software diagram. We can't draw the racks or get too far into the detail of the network architecture. These facets seem to fit outside of the 4+1 views.

Sales (Deployment View)

The Deployment View for Sales is significantly slimmed down from previous logical-aligned views. For the first time we see the non-core components such as the Web Browser stripped away and absent. What is left is a pure server-side component, so we put it in a package of that name and deploy it onto the Accounts Server class of machine.

Inventory (Deployment View)

Inventory is more interesting. We have a client and server side to this functional core. They must be deployed separately. Even at this late stage we can see potential holes in our design: Did we think about the client/server split? Can we see components that need extreme bandwidth or latency settings, and should be co-located?

Shares (Deployment View)

I haven't shown all the dependent components for Portfolio Manager under the Shares function core. I assume they are all included at build time and do not require special attention when it comes to this level of packaging.

Tax Reporting (Deployment View)

General Ledger (Deployment View)

Web Browser (Deployment View)

Again, the rest of these diagrams are pretty boring due to lack of intelligent detail.

Conclusion

That concludes the main four views. I hope this coverage has been useful, and expect to see a number of "no, this is the way it is done!" posts in response. My approach has not been entirely kosher, and is not based on a broad enough base of experience. However, it does what I set out for it to do. It identified a set of software components and functional interfaces. It aligned the set through multiple views to the end user's functional requirements. It examined the set from multiple angles to see how they would play out.

It did not decompose components or interfaces any more than necessary, hopefully making the necessary high level decisions without boxing individual component authors in unnecessarily.

This approach is suitable for architectures of size, both REST and non-REST. Again, it might surprise some that REST has not specifically appeared at this level.

The way I view it is that this is an approach to selecting components to build and identifying the demands on interfaces between them. Each component will still be built according to best practice and will conform to appropriate standards. If I were to build a circuit breaker for the electric power industry, or an impulse fan for environmental control, I would supply digital inputs and outputs of a standard type for connections to RTUs, PLCs, or other field devices. If I am to build a software component, I will supply URLs to allow it to be connected freely and generally also. I will follow industry best practice for loose coupling, maximum scalability, and maximum evolvability. These views told me what to build. REST tells me how to build it.

To some extent I want REST advocates to see this other side of things. Whether you do this formally or informally and by what ever technique: There is a process of deciding what to build and what interfaces are required in software engineering. There is an architecture role that is about the specifics of what is needed in a particular system. REST is a technique that informs this level of architecture, but is subservient to it. I sometimes wonder if this is the main reason for disconnect between the REST and SOA camps. One is talking about which components to build, while the other is talking about how to connect them together. Little wonder that we can't agree on how to do "it".

I plan to round out this series with a discussion of the Scenarios View, and about other supporting documentation that might be needed to support your architectural decisions.

Just a reminder: You can find the StarUML model I used in this example here.

Benjamin

Sat, 2008-May-31

The Process View

This is part three in a series that checkpoints my evolving understanding of the 4+1 view model for architectural descriptions. I have already provided a description of the Logical View. This view captured End-user Functionality in a process grounded more in Systems Engineering than in Software Engineering. Today I cover the Process View and its links into the Development View.

Let's revisit the set of views as a whole:

4+1 Views Diagram

We can see the Logical View tracing independently to the Process and Development views. Each of these views then traces independently to Deployment. I had a lot of trouble figuring out how to make this work for me, from a UML purist and a tooling perspective. I finally settled on a slightly different approach based on the component concept from the UML Superstructure specification.

I have ended up including the final set of components in each of the "design" views. The same set of components appear in the Process, Development, and Deployment (Physical) Views. This achieves the goal of relating different design views together. It also fits with the ideas in the UML Superstructure specification:

The component concept addresses the area of component-based development and component-based system structuring, where a component is modeled throughout the development life cycle and successively refined into deployment and run-time.

The process view I came up with is based around UML component diagrams, where components from the upcoming development view are shown assembled into running processes. This suits the kinds of designs I am doing. It may need tweaking to suit your own needs.

Let's dive into an example with a quick review of the logical view functions:

Logical View Functions

I have elaborated on the logical view functions based on the process decomposition as follows:

Process View Main

The logical functions of Sales, Inventory, Shares and Tax Reporting are still present. New are the Web Browser and General Ledger packages. These packages are the result of design decisions in implementing each function. Let's look at Sales:

Sales

This diagram captures the whole Sales function, a theme I will follow throughout this and subsequent views. The constant relationship to the Logical View helps establish the link between requirements and all aspects of design. The Sales Ledger Updates interface is still present from the Logical View. This is served by the Sales Manager component in the identically named process. A Web Browser is used as part of the HMI, and General Ledger is used as part of the Historical Sales Register.

The Sales Ledger Updates interface is confronting right up front. Am I not a REST advocate? Isn't defining an interface like this against the principles of REST? Well, no. I fully expect this to be a REST interface. However, I am approaching this design from the customer-focused logical perspective. From this angle, the most important thing to know is what information is transferred. An Interface Control Document will be required to identify the set of URLs that Sales Manager provides, and to specify exactly which REST operations are necessary for each URL.

On the other hand, we could easily make the design decision at this point that the interface will not be based on REST. It could be a less constrained SOA. It is for this reason that I feel I can talk about REST being a constrained subset of SOA, part of the SOA family, or "SOA done right". At this point in the design process SOA and REST are indistinguishable.
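If the REST route is taken, the Interface Control Document mentioned above might reduce to something like the following sketch. The URLs and operations are purely hypothetical placeholders for what the real document would pin down:

    # Hypothetical outline of an ICD for the Sales Manager component:
    # which URLs exist, and which operations each must support.
    SALES_MANAGER_ICD = {
        "/sales/records":        {"GET": "list sales records",
                                  "POST": "submit a new Sales Record"},
        "/sales/records/{id}":   {"GET": "fetch one Sales Record",
                                  "PUT": "replace a Sales Record"},
        "/sales/ledger-updates": {"GET": "poll Financial Transactions destined for the General Ledger"},
    }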

The General Ledger is the next thing that jumps out. That wasn't in the logical view. There was no requirement for a General Ledger. Instead, Sales was supposed to make data available to Tax Reporting for processing. Here I have used my design experience to say that we should have a General Ledger (a summary record of any type of financial transaction). There is no point requiring the processes of Sales, Inventory and Shares to appear as part of the Tax Reporting function. The General Ledger allows us to put these functions at arm's length from each other.

The Web Browser is fairly obvious, but looking at Sales Manager: Why haven't we decomposed it further?

The answer is that in this case I want to give the developer of this service maximum flexibility in how they implement it. If there were a library that is common to other functions it would need to appear (which begs the question of why there are no such libraries... have we missed something?). Internal structure of a process that has no need for Intellectual Property separation or for other forms of separation might as well stay part of the same component.

Inventory

Inventory follows a similar pattern to Sales. However, we do have two distinct GUIs identified in the logical view. It makes sense to keep these separate, because they have quite different usage profiles. I have decided to use a thick client approach here, rather than a Web Browser. Part of the overall Inventory HMI is a scanner, and the thick client grants me better access to the scanner's feature set than the browser would afford.

The obvious design question arising from this diagram is how Stocktake and Inventory Scanning GUIs coexist within the same process without any apparent coordination whatsoever. Have we missed a navigation GUI that allows us to select one mode or another? Should they simply be in different processes? Do they need a common library of any kind to communicate with Inventory Manager or drive the scanner?

Shares

Here I show the interior detail of a process. The Portfolio Manager is constrained by this design to follow a Model-View-Controller approach internally. The Views are fed to a Web Browser, or to the General Ledger by export. The Model is updated by its HMI, or from the Stock Quote Client driven by Periodic Execution. All of this is coordinated through the Portfolio Manager.
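A minimal sketch of that internal constraint, with all names hypothetical, might look like the following. The Portfolio Manager is the controller coordinating updates to the model and the rendering of views:

    class PortfolioModel:
        def __init__(self):
            self.holdings = {}
        def record_trade(self, symbol, quantity):
            self.holdings[symbol] = self.holdings.get(symbol, 0) + quantity

    def browser_view(model):                     # a View fed to the Web Browser
        return "<ul>" + "".join(f"<li>{s}: {q}</li>"
                                for s, q in model.holdings.items()) + "</ul>"

    class PortfolioManager:                      # the Controller
        def __init__(self):
            self.model = PortfolioModel()
        def on_hmi_trade(self, symbol, quantity):    # Model updated by the HMI
            self.model.record_trade(symbol, quantity)
        def on_stock_quote(self, symbol, price):     # ... or by the Stock Quote Client
            pass  # e.g. record the quote against Historical Stock Prices
        def render_for_browser(self):
            return browser_view(self.model)

    manager = PortfolioManager()
    manager.on_hmi_trade("ABC", 100)
    print(manager.render_for_browser())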

Tax Reporting

Tax Reporting extracts data from the General Ledger using a Reports Engine, and again uses the Web Browser for final display. The Tax Reports give me the opportunity to show a software component that is really simple configuration data. It is clearly separate from the Reports Engine, and has a dependency relationship on the engine. The dependency could be drawn in either direction, however it is easier to trace requirements to a lower level with the arrow as stated. This diagram says that the Tax Reports use the Reports Engine to do their job, as opposed to the other way around. That means that we can talk about the Tax Reports as having to achieve certain GAAP requirements while leaving the Reports Engine itself with fairly generic requirements such as "must be able to query SQL".
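As a hedged sketch of that dependency direction, consider the following. The report definitions and table layout are hypothetical; the point is that the tax-specific knowledge lives in configuration data while the engine stays generic:

    import sqlite3

    class ReportsEngine:
        """Generic: its only requirement is 'must be able to query SQL'."""
        def __init__(self, connection):
            self.connection = connection
        def run(self, sql):
            return self.connection.execute(sql).fetchall()

    # The tax-specific knowledge (and any GAAP requirements) lives here,
    # as data, not in the engine.
    TAX_REPORTS = {
        "gst-summary": "SELECT SUM(amount) FROM ledger WHERE category = 'GST'",
        "income":      "SELECT SUM(amount) FROM ledger WHERE amount > 0",
    }

    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE ledger (category TEXT, amount REAL)")
    connection.executemany("INSERT INTO ledger VALUES (?, ?)",
                           [("GST", 10.0), ("Sales", 100.0)])
    engine = ReportsEngine(connection)
    print(engine.run(TAX_REPORTS["gst-summary"]))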

General Ledger

Web Browser

The final images are fairly boring, just to show that the packages we discovered in our journey through the Process View will generally get their own diagrams and description. You could show the processes of components they connect to, but that information should already be present elsewhere. These non-functional packages will be flowed through the Development and Deployment (Physical) Views in due course.

You could argue that the inclusion of components in the above Process View means that there is really no design left for the other views. You would be right. The main objectives of this architectural description are met: To define the set of components and interfaces in the architecture. The subsequent views are relatively boring, compared to the exciting leap from systems-based logical to software-based process views. However, they each bring their own charm and provide useful checkpoints to discover flaws in your architecture.

I suppose another question-mark at this point is the detail of the internal interfaces between software components. I have identified links, but not tried to establish a functional baseline for these links. For this we would likely need to go through a round of requirements allocation and decomposition and follow the process of the logical view again. I defer the specific work on functional interfaces to the next level of design document down.


Benjamin

Thu, 2008-May-22

4+1 Logical View

I want to make clear at the outset that I am not an expert in 4+1. I have spent the last few months working with systems engineers on an Australian rail project, and this is the sum total of my systems engineering experience. I am reasonably happy with my understanding of both concepts as they apply to my specific situation, but this approach may not apply to your problem domain. I am using StarUML for my modelling, and this pushes the solution space in certain directions.

2008-09-08: So that was then, and this is now. What would I change after having a few months to reflect? Well, I think it depends on what you are trying to get out of your modelling. The approach I initially outlined is probably OK if you are trying to cover all bases and get a lot of detail. However, now we might step back into the product space for a while. What we want to do is get away from the detail. We want to simply communicate the high level concepts clearly.

Overview

The first view of the 4+1 view model for architectural descriptions is the Logical View. A classical computer science or software engineering background may not be a good preparation for work on this view. It has its roots more in a systems engineering approach. The purpose of this view is to elaborate on the set of requirements in a way that encourages clear identification of logical functions, and also identifies interfaces to other systems or subsystems.

The general approach that I have been using for this view is a combination of robustness diagrams and data flow diagrams. The elements in these diagramming techniques are similar, but both are constrained in my approach to serve separate specific purposes. Robustness diagrams are drawn by working through an input requirements specification. Out of this process "functions" are discovered and added to a functional data flow diagram. Finally, a context diagram is constructed as an extraction from and elaboration on the function data flow diagram, with external systems identified.

Interfaces are identified at every stage through this process, including Human-Machine Interfaces. The structure of robustness diagrams makes it relatively simple to identify missing interfaces and other gaps in the analysis. The interfaces identified are reflected in the following Process View.

The first thing I would change here is that I would split the concept of a 4+1 Logical View from the systems engineering-style functions diagram. The second thing I have been doing has been to try and limit my content to a single diagram. I'm trying to contain the urge to reach for crinkly goodness in favour of saying "just enough".

Context and Function Data Flow Diagrams

Data Flow Diagrams are very simple beasts, made simpler by the constraints I apply. For the purpose of identifying functions we use only circles for functions, and directed associations for data flows. Other features such as data sets or identified interfaces are banned in the function and context data flow diagrams. Data flows are labelled with the kind of data being transferred, and functions or systems are named.

Some guidelines for the definition of functions:

The set of functions should describe the system as the user sees it, rather than how it is built. The data flow diagrams describe relationships between systems or functions, not between software or hardware components. As additional constraints on these data flow diagrams I do not allow the expression of data stores at this level. They are described in the Robustness diagrams where necessary. This level is purely about flows of data between units of behaviour. The flows do not discriminate between REST-style, database-style, API-style, or any other style of interface. How the interface itself is expressed is a design decision.

Unfortunately, I was on crack when I wrote some of this. In particular, I have the concept of functions almost completely backwards. What I ended up calling functions are more like subsystems and... well... you get it. I have since been corrected in a number of areas.

Here is how I would draw the robustness diagram today:

Updated Context Diagram

The context diagram remains the same, and I would continue to include it. I have started to prefer describing interfaces in terms of the information they communicate rather than trying to preemptively name and collate them based on other factors.

Updated Functions

Here is a functions diagram, where I have collected the functions back down into a single figure. The key interfaces are still present, and I have two basic sets of data in the system. I have the ledger data, which is based on pure accounting principles. The other data is the historic share prices.

The share prices are acquired and stored based on live stock quotes. This information is combined with ledger-based accounts data for the shares investments in order to generate appropriate reports.

Other sources of data for accounts include the daily sales ledger acquired from the Point of Sale system, internal movement of warehouse stock, and input from stocktake operations.

Per-function Robustness Diagrams

Robustness diagrams consist of boundaries, entities and controls. Boundaries are interfaces to your system, including Human-Machine Interfaces. These are functional interfaces, so typically identify broadly the kind of information required or the kind of control exerted through the boundary. The set of boundaries may be refined over subsequent iterations of the logical view, but do not be concerned if they do not line up exactly with the set you expected. Multiple functional HMI boundaries may be combined into one user interface to provide the best user experience. Multiple software or protocol interface boundaries may be described in a single Interface Control Document.

The process of building up the functions involves reading through requirements one at a time. Look for vocabulary in the requirements that suggests a system boundary or entity. A single requirement may appear as a control in your diagram, or many requirements may map to a single control. Only a poorly-written requirement should require multiple controls to be associated with it. Entities are identified when one requirement talks about producing some kind of data to be used by another. An entity is a set of data, such as a database. We are not trying to build an Entity-Relationship model (I draw entity relationships outside of the main 4+1 views). Once the data set is identified it is not elaborated further in this view. An entity may be temporary or permanent data.

Some guidelines for the Robustness Diagram

This is where I would really start to split from my original description of the Logical View, and move closer to what Philippe originally suggested. That is, an object-oriented approach. We design a set of classes that we would implement if we didn't have to worry about any of the details. We wouldn't worry about redundancy. We wouldn't worry about performance. We wouldn't worry about client/server separations that might be present in the real system. This is an idealised software-centric view of the solution.

Example

The example I am using is of a very simple accounting system. I haven't written up formal requirements, but let us assume that the following model is based on a reasonable set. I will work through this example top-down, generally the opposite of the direction in which the model would have been constructed. The source model can be found here (gunzip before opening in StarUML).

Context Diagram

The context diagram shows the Accounts system in context with an adjacent Point Of Sale System to acquire a daily sales ledger, and a broker system to provide live stock quotes. If I were presenting this in a formal document I would re-draw it in Visio. Packages would be drawn as "function" circles, and interfaces would be removed (leaving only data flows). Since we are all friends, however, I'll leave it in the model form for the purpose of this entry.

Already we can see potential gaps. We get live stock quotes from the broker's system, but no buy or sell orders. Is this intentional, and the buy/sell orders are handled entirely within the Broker System or have we missed it from our analysis?

Functions Diagram

The set of identified functions tracks these external interfaces down to a lower level. The stock quotes from that broker system are being fed into a shares investing function. Sales receives the daily sales ledgers. Inventory is a self-contained function within the system. Tax Reporting uses data from all of these functions.

It is possible that a general ledger function could appear at this level, but so far in this analysis we have not determined the need for it. The system is used for shares investing, sales and inventory tracking. Any general ledger containing the combined records of all of these activities is a design decision at this stage. Tax reporting requires that we bring the data together in some form, but whether we mine it directly or convert for easy access remains unconstrained by this logical systems engineering process.

Sales Function Robustness Diagram

The Sales function has two main controls: To import the daily sales ledger from the POS System, and to generate reports. A historical sales ledger entity naturally appears out of this interplay, and becomes the boundary between this function and the Tax Reporting function. We discover a HMI is required to request report generation. Does this HMI need to do anything else? Is printing required? If so, do we need to be able to cancel print jobs? Are these functions of the system?

Inventory Function Robustness Diagram

The Inventory Function has four controls. Two relate to scanning stock in and out of stores. Another two relate to stocktake. Stock is re-scanned and compared against Inventory. Once all stock is scanned, the Stocktake HMI accounts for any shrinkage. Both the Inventory Scanning and Stocktake HMIs can take a number of forms. There might be keyboard and VDU entry. With any luck there will be a bar code or RFID scanner in the mix. The same scanner might be used as part of both HMIs, and both HMIs may be combined into the one end user HMI.

Shares Investing Function Robustness Diagram

Investment Tracking involves recording of purchases and sales of stocks. We can run reports based on the book value of these stocks (ie, how much we paid for them), or based on their current market value. In order to do the latter we need to keep a historical record of stock prices. Maintenance of this record has a clear entity boundary at Historical Stock Prices that makes it a candidate for separation into a new function. We still might do that if this function becomes unruly. I have placed a kind of stub interface in here for periodic execution of stock quote updates. However, this obviously needs more thought. Where is the HMI to control how often these updates occur? Can they be triggered manually?
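A small sketch of the two report bases, with hypothetical holdings and prices, shows why the Historical Stock Prices entity is needed for one report but not the other:

    holdings = [  # recorded buys: (symbol, quantity, price paid per share)
        ("ABC", 100, 2.50),
        ("XYZ", 40, 10.00),
    ]
    historical_prices = {  # latest entries in the Historical Stock Prices record
        "ABC": 3.10,
        "XYZ": 9.20,
    }

    # Book value needs only the recorded purchases...
    book_value = sum(qty * paid for _, qty, paid in holdings)
    # ...while market value also needs the price history.
    market_value = sum(qty * historical_prices[sym] for sym, qty, _ in holdings)
    print(book_value, market_value)   # 650.0 vs 678.0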

Tax Reporting Function Robustness Diagram

I foreshadowed that the entities from other functions are referred to directly in the Tax Reporting robustness diagram, and here it is so. This does not directly imply that the Tax Reporting function has access to the databases of the other functions. It simply means that the set of information (or relevant parts of that set) is transferred to the tax reporting function at appropriate times. This could be by direct database access, by an ETL into the tax reporting function's own database, by a RESTful data exchange, or by some other means.

And this is how I would do it, now. This is a very basic diagram, but you can see that it is software centric in that it balances data with functionality, and views the world as a set of classes and objects. I would generally start with an entity-relationship or a class diagram relating to domain-specific vocabulary from the requirements specification, and work from there.

Updated Logical Diagram

To some extent I find this diagramming technique freeing. I don't have to worry about the borderline between software and systems engineering. I don't have to worry about components. I can just draw as I might have in my youth. It will feel familiar to software developers, and a software developer should be able to judge whether or not it works to convey the appropriate information.

Conclusion

The Logical View is somewhat airy-fairy, and the temptation is always there to introduce elements of the design. Resist this temptation and you should end up with a description that clearly separates the constraints on your design from the design decisions you have made. The set of interfaces your system has with other systems is a real constraint, so all of these interfaces appear here in some form and the behaviour required of those interfaces is identified. You may need to round trip with the process view (and other views) in order to fully finalise this view.

It is likely that the Logical View (especially its robustness diagrams) will identify some inconsistency in vocabulary in your source requirements. It is worth putting at least a draft of the design together before finalising requirements for any development cycle.

I think that the context and function diagrams are quite useful in helping flesh out the scope of a system as part of a software architecture. The Object-Oriented nature of the real Logical View is a help to software developers, and the description of the problem domain vocabulary in this context should help stakeholders get a feel for how the system will address their problems.


Benjamin

Fri, 2008-May-02

4+1 View Architectural Descriptions

I have been working lately on a number of architectural descriptions, and have been using the 4+1 view model as part of these descriptions. This has been an interesting experience for me, because I have previously worked more in the technical sphere. These descriptions remain quite abstract, to the point that they end up exposing a level of service-orientation (or perhaps just component-orientation) but do not reveal that REST is the architectural style used for implementation.

The 4+1 style was originally proposed by Philippe Kruchten of Rational (and later IBM) fame. 4+1 is consistent with IEEE 1471's advice that architectural descriptions be broken up into multiple views. The principle is that it is impractical and confusing to use one diagramming technique or one style of description in summarising an architecture. Multiple views are required to meet the needs of different stakeholders.

Philippe's 4+1 main views are:

4+1

These views all typically describe static arrangements of elements. The +1 view is Scenarios, allowing for the demonstration of example modes or uses of the architecture in a more dynamic view. The Logical View is all about requirements analysis. This is the view for exploring and expanding on this design's requirements baseline. The other views provide a narrative that relates this requirements analysis to the final design, and views that design from a number of different perspectives.

The main goal of my architectural descriptions has been to come up with a well-formed list of components and interfaces for further expansion in other documents. In my case components are either configuration data, libraries, or application software. Interfaces could be internal, external, or human/machine. To this end, I have defined a set of components across the design views to show how they appear in terms of their runtime, build-time, and deployment-time relationships. I have used the functional decomposition of the Logical View to guide and partition the design through the other views.

The Logical View covers the functional requirements of the system, and is closely related to the approach of Systems Engineers to the problem of design. The Process View is intended to show how threading and multiplicity work to achieve non-functional requirements. I use this view as the "main" design, showing run-time relationships. Build-time relationships appear in the development view, and essentially model the Configuration Management environment in which the software is developed. I have used the final Deployment View to plan how components are packaged up for deployment onto particular classes of machine. This view can also be used to show a final (but abstract) view of how target hardware is connected together in service.

There isn't a great deal of information on 4+1 out there on the Web, so I plan to produce a series of articles covering the different views and diagramming techniques I have personally been employing. I have adapted 4+1 slightly to my own needs, and where I have knowingly done this I will attempt to distinguish between the "real" 4+1 and my own tweaks.

Benjamin