Hitchhiker's Guide to Software Architecture and Everything Else - by Michael Stal

Friday, October 24, 2008

Integration

Integration represents one of the difficult issues when creating a software architecture. You need to integrate your software into an existing environment, you typically also integrate external components into your software system, and, last but not least, you often even integrate different home-built parts to form a consistent whole. Thus, integration basically means plugging two or more pieces together, where the pieces were built independently and the act of plugging requires some prior effort to make the pieces fit: the provided interface of component A must conform to the required interface of component B. If these components run remotely, interoperability and integration are close neighbors.
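To make that conformance concrete, here is a minimal sketch with hypothetical names (ReportSink, LegacyReportService), each type in its own source file: an adapter lets A's provided interface satisfy B's required interface.

    // Required interface of component B (hypothetical).
    public interface ReportSink {
        void submit(String reportXml);
    }

    // Provided interface of component A (hypothetical) -- it does not fit B's needs directly.
    public class LegacyReportService {
        public void send(byte[] payload) {
            // transmits the report somewhere
        }
    }

    // The adapter does the fitting work so that A can be plugged into B.
    public class LegacyReportAdapter implements ReportSink {
        private final LegacyReportService legacy;

        public LegacyReportAdapter(LegacyReportService legacy) {
            this.legacy = legacy;
        }

        @Override
        public void submit(String reportXml) {
            legacy.send(reportXml.getBytes(java.nio.charset.StandardCharsets.UTF_8));
        }
    }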

For complex or large systems, especially those distributed across different network nodes, integration becomes one of the core issues.

The need to integrate external components or environments should be treated as a set of high-priority requirements, while internal integration is often a consequence of the top-down architecture creation process. Examples of external component integration comprise using a specific UI control, running on a specific operating system, requiring a concrete database, or prescribing the usage of a specific SOA service. Internal integration comes into play when an architect partitions the software system into different subsystems that eventually need to be integrated with each other.

Unfortunately, integration is a multi-dimensional problem. It is necessary to integrate services, but also to integrate all the heterogeneous document and data formats. We need to integrate vertically, for example connecting the application layer with the enterprise layer and the enterprise layer with the EIS layer. Likewise, we need to integrate horizontally, for example connecting different application silos using middleware stacks such as SOAP. In addition, shallow integration is not always the best solution. What is shallow integration in this context? Suppose you have developed a graph structure such as a directory tree. Should you rather create one fat integration interface with complex navigation functionality, or better provide many simple integration interfaces for all nodes in the tree?
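The two options might look like this (hypothetical interfaces; the trade-off is between a coarse-grained facade that hides navigation and many fine-grained interfaces that expose it):

    // Option 1: one coarse-grained ("fat") integration interface with navigation built in.
    public interface DirectoryFacade {
        java.util.List<String> listChildren(String path);   // e.g. "/usr/local"
        byte[] readEntry(String path);
    }

    // Option 2: many simple integration interfaces -- every node of the tree is
    // integrable on its own, and clients do the navigation themselves.
    public interface TreeNode {
        String name();
        java.util.List<TreeNode> children();
        byte[] content();
    }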

What about UI integration where you need to embed a control into a layout?
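Even this seemingly simple case means the external control must conform to the host's layout and threading contract. A minimal Swing sketch (the external control is just a placeholder label here):

    import javax.swing.*;
    import java.awt.BorderLayout;

    public class EmbeddingDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("UI integration sketch");
                // Placeholder for an externally built control; any JComponent
                // can be dropped into the host layout.
                JComponent externalControl = new JLabel("external control goes here");
                frame.setLayout(new BorderLayout());
                frame.add(new JToolBar(), BorderLayout.NORTH);
                frame.add(externalControl, BorderLayout.CENTER);
                frame.setSize(400, 300);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }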

What about process integration where you integrate different workflows such as a clinical workflow that combines HIS functionality (Hospital Information System) with RIS functionality (Radiology Information System)? Or workflows that combine machines with humans? 

What if you have built a management infrastructure for a large-scale system and now try to integrate an additional component that wasn't built with this management infrastructure in mind?

Think about data integration where multiple components or applications need to access the same data, such as accessing a common RDBMS. How should you structure the data to meet their requirements? And how should you combine data from different sources to glue data pieces into a complete transfer object?
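A minimal sketch of the latter: two hypothetical repositories (HisRepository, RisRepository) over different schemas, glued into one transfer object by an assembler.

    // Hypothetical data access interfaces over two different sources.
    interface HisRepository { String findDemographics(String patientId); }
    interface RisRepository { String findLatestReport(String patientId); }

    // The transfer object combines data pieces from both sources.
    class PatientTO {
        final String patientId;
        final String demographics;           // from the HIS schema
        final String latestRadiologyReport;  // from the RIS schema

        PatientTO(String patientId, String demographics, String latestRadiologyReport) {
            this.patientId = patientId;
            this.demographics = demographics;
            this.latestRadiologyReport = latestRadiologyReport;
        }
    }

    // The assembler is where the gluing happens.
    class PatientAssembler {
        private final HisRepository his;
        private final RisRepository ris;

        PatientAssembler(HisRepository his, RisRepository ris) {
            this.his = his;
            this.ris = ris;
        }

        PatientTO load(String patientId) {
            return new PatientTO(patientId,
                                 his.findDemographics(patientId),
                                 ris.findLatestReport(patientId));
        }
    }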

Semantic integration is another dimension, where semantic information is used to drive the integration, for example by automatically generating adapters that match a service consumer with a service provider.
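As an illustration only (not how mature semantic integration works): a runtime-generated adapter that maps a consumer's required interface onto an arbitrary provider object by matching method signatures. Real semantic approaches would rely on richer metadata such as ontologies or annotations instead of plain signature matching.

    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    public final class AdapterGenerator {
        @SuppressWarnings("unchecked")
        public static <T> T adapt(Class<T> requiredInterface, Object provider) {
            return (T) Proxy.newProxyInstance(
                    requiredInterface.getClassLoader(),
                    new Class<?>[] { requiredInterface },
                    (proxy, method, args) -> {
                        // Look up a provider method with the same signature and delegate to it.
                        Method target = provider.getClass()
                                .getMethod(method.getName(), method.getParameterTypes());
                        return target.invoke(provider, args);
                    });
        }
    }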

And finally, what about all these NFRs (non-functional requirements)? Suppose you have built a totally secure system and now need to connect it with an external component?

Needless to say, integration is not as simple as it often appears in the beginning. Even worse, integration is often not addressed with the necessary intensity early in the project. Late integration might require refactoring activities and sometimes even complete reengineering if there are no built-in means for supporting "deferred" integration - plug-in architectures are a good example of such means.
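A minimal plug-in sketch, assuming java.util.ServiceLoader as the discovery mechanism and a hypothetical Exporter extension point: components built later register themselves via META-INF/services, and the host picks them up without being changed.

    import java.util.ServiceLoader;

    // The extension point defined by the host -- the built-in means for "deferred" integration.
    public interface Exporter {
        String format();
        void export(Object document);
    }

    // Host-side discovery: whatever plug-ins are on the classpath at runtime are found here.
    class ExporterRegistry {
        static Exporter find(String format) {
            for (Exporter e : ServiceLoader.load(Exporter.class)) {
                if (e.format().equals(format)) {
                    return e;
                }
            }
            throw new IllegalArgumentException("no exporter registered for " + format);
        }
    }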

Architects should explicitly address integration issues from day 1. As already mentioned, I recommend placing integration issues as high-priority requirements into the backlog. Use cases and sequence diagrams help determine functional integration (the interfaces) as well as document/data transformation necessities (the parameters within these interfaces). During the later cycles of architecture development, operational issues can also be targeted with respect to the integrated parts. Last but not least, architects and developers need to decide which integration technologies (SOA, EAI, middleware, DBMS, ...) are appropriate for their problem context. Integration is an aspect crosscutting your whole system!

Agile processes treat integration as a first-class citizen. In a paradigm that promotes piecemeal growth, continuous integration becomes the natural metaphor. From my perspective, all software developments with high integration effort will inevitably fail when they do not leverage an agile approach.

To integrate is agile!

Friday, October 03, 2008

Trends in Software Engineering

It has been a pretty long time since my last posting. Unfortunately, I was very busy this summer due to some projects I am involved in. In addition, I enjoyed some of my hobbies such as biking, running and digital photography. For my work-life balance it is essential to have some periods in the year where I am totally disconnected from IT stuff. But now it is time for some new adventures in software architecture.

Next week I am invited to give a talk on new trends in software engineering. While a few people think there is nothing more to discover and invent in software engineering, the truth is that we are still representatives of a somewhat immature discipline. Why else are so many software projects causing trouble? I won't compare our discipline with others such as building construction, because I don't like comparing apples with oranges. For example, it is very unlikely that, a few days before your new house is built, you ask the engineer to move a room from the ground floor to the first floor. Unfortunately, such things happen in software engineering on a regular basis.

A few years ago I wrote an article for a widely read German IT magazine on exactly the same issue. I built my thoughts around the story of a future IT expert traveling back to our century with a time machine. When he materializes, a large stack of magazines falls on his head - unintentionally - and he cannot remember anything but his IT knowledge. How would such a person evaluate the current state of affairs in software engineering? Do you think Mr. Spock from the starship Enterprise would use pair programming or AOP? It wouldn't look very cool in a sci-fi series, would it?

To understand where software engineering is heading in the future, it is helpful to understand that revolutions are always rooted in continuous evolution. At some point in time, existing technology cannot scale anymore with new requirements, which forces researchers to come up with new ideas. Mind also the competitive challenge: the first one to address new requirements may become the market leader. Learning from failure is an important aspect in this context. So, what is currently missing or failing in software engineering? Where do we need productivity boosts? An example of such new requirements might be new kinds of hardware or software. Take multi-core CPUs as an example: how can we efficiently leverage these CPUs with new paradigms for parallel programming?
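One small, present-day sketch of that direction, using only java.util.concurrent to fan work out over the available cores (the work itself is just a placeholder sum):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // Split the work into one chunk per core.
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int chunk = 0; chunk < cores; chunk++) {
                final long start = chunk * 1_000_000L;
                tasks.add(() -> {
                    long sum = 0;
                    for (long i = start; i < start + 1_000_000L; i++) {
                        sum += i;
                    }
                    return sum;
                });
            }

            // Run all chunks in parallel and combine the partial results.
            long total = 0;
            for (Future<Long> f : pool.invokeAll(tasks)) {
                total += f.get();
            }
            pool.shutdown();
            System.out.println("sum = " + total);
        }
    }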

As another example, consider the creation of software architecture. In contrast to many assumptions, most projects still develop their software architecture in an ad-hoc manner. We need well-educated and experienced software architects as well as systematic approaches to address this problem of ad-hoc development. Especially the agility issue is challenging. Software engineers need to embrace change in order to survive. Architecture refactoring is a promising technique in this area, among many others, of course.

Another way of boosting productivity is to prevent unnecessary work. Here, product line engineering and model-based software development are safe bets. If the complexity of problem domains grows, we cannot handle the increased complexity with our old tool set; we need better abstractions such as DSLs. For the same reason I simply cannot believe that, in a world with an increasing number of problem domains, general-purpose languages are the right solution. Wouldn't this be another instance of the hammer-nail syndrome? Believe me, dynamic and functional languages will have increasing impact.
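To give the DSL point a face: even inside a general-purpose language, a tiny internal DSL (a hypothetical fluent order builder) reads much closer to the problem domain than plain plumbing code; external DSLs and model-based generation push this idea considerably further.

    // Hypothetical internal DSL: code that reads like the domain it describes.
    public class Order {
        private final StringBuilder description = new StringBuilder();

        public static Order order() { return new Order(); }

        public Order item(String name, int quantity) {
            description.append(quantity).append(" x ").append(name).append("; ");
            return this;
        }

        public Order shipTo(String address) {
            description.append("ship to ").append(address);
            return this;
        }

        @Override
        public String toString() { return description.toString(); }

        public static void main(String[] args) {
            // Reads almost like a requirement statement:
            System.out.println(order().item("disk", 2).item("CPU", 1).shipTo("Munich"));
        }
    }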

In a world where connectivity is the rule, integration problems are inevitable. In this context, SOA immediately comes to my mind. With SOA I refer to an architecture concept, not to today's technology, which I still consider immature. New, better SOA concepts are likely to appear in the future; SOA will evolve, just as CORBA and all the other predecessors did. Forget all the current ESB crap which already promises to cure all the world's problems.

There is a lot more that will influence emerging software engineering concepts. Think of grids, clouds, virtualization, decentralized computing! Look at the CMU SEI document on Ultra-Large-Scale Systems that Linda Northrop was in charge of. You'll find there sufficient research topics we have to solve before ULS systems will be available.

For us software engineers the good news is that we are living in an exciting (although challenging) period of time.