If you are a software engineer: DON'T PANIC! This blog is my place to beam thoughts on the universe of Software Architecture right to your screen. On my infinite mission to boldly go where (almost) no one has gone before I will provide in-depth coverage of architectural topics, personal opinions, humor, philosophical discussions, interesting news and technology evaluations. (c) Prof. Dr. Michael Stal
Wednesday, October 12, 2011
Architecture Governance
This is an example of reuse, albeit one that rather serves as a war story. How come? Imagine the next version of the mobile platform is going to be developed. How can the domain engineers ever evolve a system that was modified by various product development groups in different ways? Unsystematic reuse has literally transformed the product line into a bunch of one-off applications. One approach to prevent such architectural war stories is what we call Architecture Governance.
Architecture Governance is a systematic approach for managing architectures and controlling all modifications in order to ensure quality and sustainability. This holds for all modifications: those made while developing the system and those made while evolving it.
But who is in control? On the one hand, there should be an architecture control board in charge of the technical architecture; on the other hand, another control board should be in charge of the business and business strategy. Luke Hohmann once coined the terms "tarchitecture" and "marchitecture" in this context.
It is important to think about governance in general. Architecture governance is no island: it must be balanced with IT Governance, SOA Governance, and other kinds of governance. Governance is about preferring control and monitoring over trust.
WTF does "control" mean? In the product line example, it means we need to introduce an owner of the platform. And we need someone responsible for business and strategy. Only these two "persons" are allowed to decide about modifications of the platform in terms of business and architectural sustainability. Of course, the business control board supervises the architecture control board. After all, it is the business that drives the development.
For architecture control, guiding principles should provide policies for modification and evolution. Obviously, neither the internal and external quality of the system nor its expected behavior must ever be compromised. That's why we need tools to check and enforce policies, tools to assess the architecture, and test suites to obtain the respective information. This is where monitoring comes in. By the way, "tools" in this context also include reviews. And mind the gardening activities! How can a system be systematically prepared for modification? This is where reengineering and in particular refactoring become important.
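Such policy-checking tools can start out very small. The following is a minimal sketch of a dependency rule check; the layer names and the forbidden edges are purely illustrative assumptions, not taken from any concrete product line:

```python
# Hypothetical sketch: a minimal policy check that flags forbidden
# dependencies between subsystems. The layer names and rules below
# are invented for illustration.

FORBIDDEN = {
    # lower layers must not depend on higher layers
    ("platform", "app"),
    ("drivers", "platform"),
}

def check_dependencies(dependencies):
    """Return all dependency edges that violate the policy.

    dependencies: iterable of (from_subsystem, to_subsystem) pairs,
    e.g. extracted from import statements by a build-time scanner.
    """
    return [edge for edge in dependencies if edge in FORBIDDEN]

violations = check_dependencies([
    ("app", "platform"),      # allowed: higher layer uses lower layer
    ("platform", "app"),      # violation: platform must not know the app
])
print(violations)  # [('platform', 'app')]
```

Run as part of the build, such a check turns an architectural policy into something that is monitored continuously rather than discovered in the next review.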
But there are even more details to bother about:
- We need to address legal and regulatory issues. No change must violate such standards. Think of safety features for medical products as an example.
- It is important to care about quality of service. Assume a modification leads to a breakdown of KPIs or SLAs.
- Don't ignore or neglect quality attributes in general. Does a modification influence sensitivity or tradeoff points? Will it introduce new risks?
- Patent scanning might also be an issue. Does newly introduced code, such as Open Source Software, violate intellectual property rights?
But it is not only about tools. It is also about a governance process as well as the assignment of responsibilities to roles and persons. For example: Who is allowed to change, configure, update, or add what, and when? What measures are necessary in order to guarantee quality and sustainability? How does information flow between the different actors? What happens in the case of policy violations?
Note: this does not only hold for product lines but also for one-off applications. Unsystematic modification almost always leads to problems because of unwanted side effects, accidental complexity, and lack of transparency.
What does this imply for the design of new architectures? It is the same issue: we need a business owner, an architecture owner, a design and implementation strategy, test-driven development, refactoring, and so forth. I happen to have introduced all these topics already in my blog :-) Modifying and evolving a system is only a special case of designing it.
By the way: An agile development process supports architecture governance in that its iterative incremental approach already introduces control and monitoring points.
In practice, there are many ways to deal architecturally with architecture governance. For the sake of brevity I cannot describe all of them, but let me give you some examples:
- Using the Layers pattern in a strict sense helps to protect subsystems. Think about safety segregation.
- Clean coding represents a good way to make control and monitoring easier.
- Well-documented software architectures are easier to modify and control.
These are just a few of the many architectural means that foster governance.
Keep in mind that architecture governance is not about controlling architects or developers. Instead, governance helps architects and developers to stay in control. It requires additional effort, but its RoI is very high. Ignoring governance leads to project failure, overcomplex systems, buggy implementations, and design erosion. And eventually to dissatisfied customers and users (and dissatisfied architects and developers). Don't let the software govern you. Govern the software!
Thursday, October 06, 2011
Steve Jobs, 1955 - 2011
When Steve had to leave Apple in 1985, he founded NeXT and created a machine that was extraordinarily successful, not so much from a business perspective but from a creative one.
But his popularity eventually exploded after Steve Jobs rejoined Apple as CEO. Steve Jobs stands for creativity, vision, courage, leadership, and charisma. He honestly lived for Apple and its users and literally worked day and night for them. In contrast to many competitors, he gave products not only functionality but personality and style.
What only a few people know is that Steve also had a strong influence on software: not only on UI topics but also on programming languages, operating systems, and many other aspects.
I personally had never been an Apple "fan boy" before I bought my first iPod, but now I love their products. However, I must confess that other mothers also have beautiful daughters. Nonetheless, it is evident that Apple has always been the driver of amazing innovations that were frequently copied by competitors.
That is why I want to thank you, Steve. Many things we take for granted simply wouldn't exist without you. I feel deep respect for you and your work. I am sure you'll always be remembered as one of the leading personalities in the IT Hall of Fame. And I really hope your spirit will always persist within Apple.
The last thing I learned from you, although it has been a sad and bitter experience, is: Carpe Diem! And that we should see the people behind and in front of all these nice products.
Send my greetings also to Douglas Adams. So long, and thanks for all the apples!
Tuesday, September 20, 2011
Using War Stories
Friday, September 16, 2011
Fractal Design
If you are going to design a system, you basically need to identify prioritized main use cases and quality attribute scenarios, among other forces. Together with a problem domain model that shows the inner constituents and a context model that integrates the system into its environment, the design can evolve in an evolutionary way. As a result of the design activities, architects are able to introduce subsystems as well as their relations according to functional or qualitative drivers. And, of course, interfaces start to appear, each of them defining one specific and explicit role of the subsystem.
A subsystem is itself integrated into an environment – the enclosing system under development. So the subsystem can also act as a system, and the same principles apply to the subsystem acting as a system itself – you may even define use cases and quality scenarios for the subsystem, derived as a subset of the "outer" use cases and scenarios. After this step, "subsubsystems" are created, often called "components".
What we do here is apply the same design principle in a top-down manner.
But is it useful or even possible to apply the principle infinitely? No, because at some level the solution domain shines through. Solution domains tend to introduce their own composition techniques such as assemblies, bundles, EJBs, services, classes, and objects. If the top-down design approach reaches this level, designers must map the architecture artifacts to the solution domain. Maybe we should call this level the architectural twilight zone, or the problem-solution boundary.
Side remark: To overcome the twilight zone, we could introduce DSLs and use Model-Driven Software Development.
As a rule of thumb, we architects typically end up with two sublevels (subsystems, subsubsystems). If there are fewer, the design is too abstract and vague. If there are more, we are introducing too many details.
One of the core challenges in this context is the fact that there might be different strategies to view a domain and thus different ways to cut a system into subsystems.
Partitioning a system into subsystems, independent of the hierarchy level, is influenced by functional aspects and the problem domain. Thus, subsystems should represent meaningful subdomains of the surrounding problem domain. In other words: methods like Domain-Driven Design, together with some extensions, can help nicely.
No matter what you do, there will always be crosscutting qualities and topics. The same observation can be made when structuring an organization into divisions, departments, and groups. Have you ever seen an organization without overlapping units? The introduction of crosscutting concerns may introduce new sub^n-systems, add new interfaces, or even change their implementation, depending on the invasiveness of the concrete concern. Each concern can be considered a subordinate view mixed into the subsystem or its domain.
Architecture design is basically fractal design, up to two levels of depth. The priorities of use cases and quality scenarios, as well as their properties (strategic versus tactical), define how and in which order the functional model needs to be refined hierarchically by integrating scenario-based views.
Of course, one person’s solution domain can be another person’s problem domain which is why exceptions to the aforementioned rule might apply.
Thursday, September 08, 2011
The Telephone Test for Software Architecture
Simplicity implies that the software architecture only addresses inherent complexity without introducing accidental complexity. Since there are typically several ways to solve a problem, there is no single "simplest" architecture. Instead, there are solution architectures that follow one specific solution path with a minimal number of artifacts. As some quotes propose, simplicity is achieved if you cannot take anything away from your system without failing to meet its specification.
Expressiveness implies that the artifacts of your architecture are easy to understand. That is, artifacts should have expressive names, and each responsibility should be assigned to exactly one artifact. Thus, components with a multitude of responsibilities are often as bad an idea as responsibilities spread across multiple components. However, the latter goal is particularly difficult to achieve due to cross-cutting concerns. An additional step towards expressiveness is having role-based, explicit interfaces with concrete contracts.
But how can you test simplicity and expressiveness? There is a good low-tech suggestion for doing this: let a software architect explain the architecture to an engineer not involved in the project, for instance in a phone call. Limit the time to, let's say, 10 minutes. If the other engineer gets a good idea of the architecture, that is an indication that the architecture is simple and expressive. Of course, I am assuming that the software architect is a good communicator, as I expect from architects anyway.
Some might argue that design metrics could also help in this context. Indeed, metrics provide some insights. But we shouldn't forget that metrics analyze the structure, not the semantics. Thus, they are not capable of deciding about expressiveness.
Location: Eduard-Schmid-Straße, Munich, Germany
Wednesday, September 07, 2011
Apples and Oranges
How often do we hear statements about how immature software engineering is compared to other engineering disciplines? Of course, building construction is a very old craft in which engineers could collect huge amounts of knowledge, methods, and experience over thousands of years.
On the other hand, there is a huge gap between traditional engineering disciplines and software engineering:
- While other engineering disciplines are focused on specific domains, software engineering is supposed to support countless problem domains.
This is why we came up with technologies that are more general in terms of problem domains such as:
- UML
- Architecture Definition and Specification Languages
- Generic Design Patterns (GoF)
- Components and Services
However, there is a huge trap in using such approaches. Due to their general and generic nature, they are far away from the realities of the problem domain. Thus, communication between software engineers and non-IT-knowledgeable stakeholders suffers.
It is interesting how many experts still emphasize those general-purpose tools and technologies. They are, so to speak, addicted to the One-Size-Fits-All drug.
For instance, look at architecture description languages that introduce general components and connections. The problem is that for a concrete problem domain, such generality simply does not work. It makes a difference whether you are dealing with a medical imaging modality or a VoIP platform. Yes, we can … call all building blocks of such systems components or subsystems. And, yes we can … call all interaction paths connections. But then we are unifying different concepts under one common umbrella.
What software people often forget is that there are two universes:
- the Solution Domain with all its supporting tools and technologies. In this domain, general-purpose approaches as the aforementioned make sense,
- AND the Problem Domain.
In the Problem Domain we cannot naively leverage concepts like UML, components, or general design patterns. Instead, we need to follow a Problem-Domain-First approach. Thus, it is necessary to start with the concepts of the problem domain. What does that mean in practice?
- Introduce DSLs and Domain-Driven Design to cover the problem domain instead of relying on UML. As a matter of fact, we can use the underlying meta-model of UML as a base. It is possible to evolve such a language in an iterative approach.
- Think about the basic building blocks not as components and subsystems, but use the terminology of the problem domain. You can define your own problem-specific components, subsystems, or services for this purpose. This is exactly what we did for a very large Enterprise Communications System.
- Consider the availability of Analysis Patterns for your problem domain and subdomains. Use them if available.
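The first two bullets can be sketched with a tiny internal DSL. The domain vocabulary below (call routes and gateways, loosely inspired by a communications system) is invented for illustration; the point is that the building blocks speak the problem domain's language rather than a generic "component" vocabulary:

```python
# Hypothetical sketch of a tiny internal DSL: the building blocks carry
# problem-domain terminology (an invented fragment of a communications
# domain) instead of generic "components" and "connections".

class CallRoute:
    """A route a call takes through the system, in domain terms."""

    def __init__(self, name):
        self.name = name
        self.hops = []

    def via(self, gateway):
        self.hops.append(gateway)
        return self  # fluent style keeps the domain vocabulary readable

# Domain experts can read this line without knowing any UML:
route = CallRoute("emergency").via("local-exchange").via("trunk-gateway")
print(route.hops)  # ['local-exchange', 'trunk-gateway']
```

Such a language can start as a handful of classes and evolve iteratively with the domain model, exactly as the bullet above suggests.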
Building the software architecture then consists of mapping the problem domain to the solution domain. How can we do that? I have mentioned the Onion Model several times. The steps could be:
- Understand the problem domain and build a model of it jointly with domain experts
- Leverage use cases to understand what the system under development is supposed to deliver from a black-box view.
- Use the problem domain model to map the use cases to the problem domain artifacts introducing further functional aspects.
- Stepwise extend the architecture by using strategic quality-attribute scenarios with descending priorities.
- Prepare the architecture for tactical requirements by using the tactical quality attribute scenarios with descending priorities.
- Do all this just considering three abstraction levels to limit complexity.
- Apply architecture patterns to structure the overall system.
Mind the change! Unlike in many other disciplines, software is considered so soft that it is expected to support even late strategic changes. This is like switching your project goal from creating a coffee machine to creating a power plant. Unfortunately, customers are rarely aware of this problem, because sales and marketing constantly tell them that changes are no problem. Thus, make customers, and maybe even more so sales and marketing, aware of this fact. And use an agile approach to embrace change. But also have the courage to deny changes. This is another reason why the Problem-Domain-First approach is so important: how else could you know which changes or extensions are typical for a problem domain? Think of the tax legislation in your country. It is not helpful to only think of interception and extension hooks or Strategy pattern instantiations in this context.
All general approaches for software engineering have their value and can be used as the underpinnings of your systems. But they should not be used for covering the whole software development from problem domain to solution domain. This is also what we can substantially learn from other engineering disciplines.
Thus, don’t mix apples and oranges. Otherwise, the result may disappoint you.
Sunday, September 04, 2011
Emergence
What do we need for implementing emergence?
We require
a set of active agents,
a (set of) communication mechanism(s),
a common goal the agents implicitly or explicitly share,
an environment,
cooperation strategies of the agents (optional),
a method to determine when to stop (optional).
The agents are active in that they execute some functionality in order to reach a goal or subgoal. They communicate with other agents to transfer information. And they recognize when their work is done or when they should stop.
There is a lot of possible variation here:
Agents might all share the same role (behavior) or they may have different roles.
Agents might be organized in a peer-to-peer fashion or hierarchically.
Agents might share the same goal or have individual or role-based goals.
Control might be distributed across all agents, or be assigned to only one or a few of them.
Agents might be preinstantiated at start-up, or adapt their number or roles according to environmental conditions or achievements (being born, living, dying).
Agents might be able to give life to new agents. They may also be able to terminate other agents, even themselves.
There could even be environmental conditions that result in the death or birth of agents.
Communication might be one-to-one or one-to-many, or abide by any other message exchange pattern.
Communication might be synchronous or asynchronous.
Agents might adapt their behavior including communication due to environmental changes or interaction with other agents.
Which implies that the environment might change.
Agents might "move" in their environment.
There could also be diverse neighbor environments.
In a Map-Reduce system we have a controlling agent and subordinate agents. The controlling master might instantiate slaves depending on the problem complexity. The goal of the system is to reach a common goal, such as finding all specific entities in an environment. The master communicates with the slaves and provides a subgoal to each of them. The slaves solve their subgoal and may act as masters for their own subgoal, thus recursively applying the pattern. Slaves send their findings back to their master, which then recombines the partial results into the common result.
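The recursive master/slave structure can be sketched in a few lines. Finding the maximum in a list stands in here for "finding entities in an environment"; the splitting threshold is an arbitrary choice for illustration:

```python
# Minimal sketch of the master/slave recursion described above: a master
# splits the problem, slaves solve subproblems (possibly acting as
# masters themselves), and the partial results are recombined.

def solve(data):
    if len(data) <= 2:               # small enough: act as a plain slave
        return max(data)
    mid = len(data) // 2
    # the master hands one subgoal to each "slave"; each slave may
    # recursively become a master for its own subgoal
    left = solve(data[:mid])
    right = solve(data[mid:])
    return max(left, right)          # recombine the partial results

print(solve([3, 17, 5, 9, 12, 1]))  # 17
```

A real Map-Reduce framework distributes the subgoals across processes or machines, but the control structure is exactly this recursion.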
In Leader-Followers, the leader is always the controlling agent. When it receives information from its environment, it promotes a follower to become the new leader and switches its own role to worker. A worker that is done with its job switches its role to follower and enters the followers list.
All these systems reveal emergent behavior. And this also resembles swarms like ant populations. There are different roles such as queen, warrior, and worker. The population's goal is to sustain itself by ensuring the survival of the population and its offspring, as well as by giving birth to new populations. When searching for and locating food sources, ants use pheromones for communication. They show warrior strategies for fighting enemies. And they always adapt to their environment in terms of food, war, and weather conditions. Ants might even adapt the distribution of their roles, for example creating more warriors when necessary. The whole population appears as a kind of smart creature.
A human might also be considered as a whole consisting of smaller agents (cells) and this is how evolution actually developed complex life forms.
And if you think about it, the Web itself is just an emergent system.
When we are going to build such an emergent system, we need to consider all these aspects defined above such as roles, hierarchies, communication styles, goals, controls, environment, adaptation, cooperation strategies.
Actor-based approaches are one promising way to implement such systems.
If we had a toolkit for emergent systems that allowed us to configure the different variation points, we could play around with the concept. Akka is one of the excellent frameworks that could serve as a base for such a toolkit.
So, eventually, emergent systems might be developed using emergent design approaches. Until then, there is a long journey ahead of us.
Wednesday, August 31, 2011
The Whole is more than the Sum of its Parts – Creating Complex Systems from Simple Entities
The Internet represents a ubiquitous infrastructure that enables complex systems such as clouds, SOA-based systems, e-mail, or Web 2.0 applications. Despite its overall complexity, the basic ingredients of the Internet are rather "simple" and easy to understand. Think of parts like TCP/IP, HTTP(S), HTML, URIs, DNS, IMAP, SMTP, REST, or RSS, to name just a few.
Another example though not software-related is Lego. Children (and adults :-) can build the most sophisticated systems from very simple Lego bricks. And even in biological systems like ant populations or in physical systems - like the whole universe itself - we can make similar observations.
Infrastructures such as the Internet with high inherent complexity consist of such simple constituents. Why are we still capable of providing new kinds of innovative software applications based on technologies that are 20 to 40 years old? What do all of these examples have in common?
Well, they basically combine simple building blocks with a set of interaction, adaptation, and composition mechanisms. Moreover, they make it possible to compose higher-level building blocks (from simpler parts) that support their own abstract set of interaction, adaptation, and composition. Think of Layered Architecture! And mind the fact that the building blocks in this context are not only static entities but (might) also include behavioral aspects, making them active objects.
Interestingly, the composition or interaction of parts leads to cross-cutting functionality that wouldn't be possible otherwise. Most recently, there have been scientific papers claiming that the human personality is "just" a result of cross-cutting behavior, but that is something I'd only like to mention as an interesting side remark.
In general, such approaches are summarized under the terms emergent behavior or emergence when we speak about (inter)active parts. In this context, I am considering "emergent" as a broader concept that also includes passive ingredients.
But why should we care? What can we software engineers learn from the principle of emergence, evolution or composition?
It is possible to build very complex systems from simple building blocks. That is very obvious, because eventually all physical systems are composed of simple elements. However, we need to address the challenge of how, why, and when to apply such composition techniques to our own software. And we also need to address the issue of quality.
While it is theoretically possible to build every system using very small, atomic parts, this would lead to unacceptable development effort, poor maintainability, and low expressiveness. Think about using a Turing Machine to implement a word processor. Hence, the gap between the behavior we are going to provide in the resulting applications or artifacts and the basic building blocks used for their construction shouldn't be too wide.
On the other hand, we’d like to support the creation of a whole family of applications in a given domain. (If you are knowledgeable about Product Line Engineering, this should sound familiar.)
There are three fundamental possibilities to construct such systems:
- Bottom Up, i.e., creating a base of entities that spans a whole domain. We can view the domain as a “language” with the base acting as a kind of “grammar”.
- Top-Down, i.e., thinking about what kind of artifacts we'd like to create and trying to figure out a set of entities from which these artifacts can be composed via one or more abstraction layers.
- Hybrid: maybe, the most pragmatic approach which combines Top-Down and Bottom-Up.
To make things even more complex, the “grammar” could be subject to evolution which might change the “grammar” and/or the language.
Very theoretical so far, agreed. So let me give you a practical showcase.
A prominent example is the Leader-Followers pattern. Roughly speaking, an active leader is listening to an event source. Whenever the leader detects an event, it turns into a worker and actively handles the event, but only after it has selected one of the followers to be the next leader. As soon as the "new-born" worker has completed its event handling activity, it turns into a follower. And then the whole story might repeat again. This pattern works nicely for applications such as a logging server. Its beauty stems from the fact that the active agents comprise a self-organizing system, leading to non-deterministic behavior.
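The role switching at the heart of the pattern can be sketched as follows. To keep the example deterministic, the "threads" are simulated sequentially; a real implementation would use an actual thread pool and synchronization around the followers list:

```python
# Sketch of the Leader/Followers role switching described above.
# Thread names are simulated tokens; events are handled one at a time.

from collections import deque

def run(events, n_threads=3):
    followers = deque(f"T{i}" for i in range(1, n_threads))
    leader = "T0"
    log = []
    for event in events:
        worker = leader                 # the leader becomes a worker...
        leader = followers.popleft()    # ...after promoting a follower
        log.append(f"{worker} handles {event}")
        followers.append(worker)        # done: the worker rejoins as follower
    return log

print(run(["connect", "log", "close"]))
# ['T0 handles connect', 'T1 handles log', 'T2 handles close']
```

Note how no central dispatcher assigns events to threads; the hand-over protocol itself organizes the work.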
Engineers often prefer centralized approaches with one or more central hubs or mediators. That is, we like to introduce a central control. There must be someone or something in control to achieve a common goal, right?
In fact, this assumption is wrong. In some cases it is necessary to have control, but this control does not need to be a central component; it can be distributed across several entities. Let me give you a real-life example:
One day you visit a shopping mall. After having a nice time, you leave the mall. But you forgot where you left the car. Thus, all members of the family start searching the parking lot until someone locates the car and notifies everyone else. Normally, at least in some families, this search activity would be rather unorganized. Or each family member would agree to search a specific area, but no one would prescribe what the search pattern looks like. Nonetheless, all share a common goal and will finally succeed. (Let me just assume the car hasn't been stolen :-)
An architecture concept that resembles this situation could be active agents that communicate with each other using a blackboard. Just assume we are building a naive web crawler. On the blackboard you write down the link where you'd like to start the crawl. An agent takes the token (i.e., the link) and starts to parse the Web page. On the page it finds further links, which it writes to the blackboard, each link representing a separate token. Now all active agents can access the blackboard, grab a token, analyze the corresponding page, and write the links they find on the blackboard. But wait a moment: when does the search end? The problem boils down to the question of how we handle duplicates (Web pages the system has already parsed). For this purpose, we could introduce a hash table where we store the pages the agents have already visited. If an agent reads a token which represents an already visited URL, it just throws the token away and tries to grab another one. As soon as there are no more tokens available, the crawling is completed.
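The blackboard crawler can be sketched in a few lines. The "web" below is faked as a dictionary of links, and a single loop stands in for the many concurrent agents; the duplicate handling and the termination condition are exactly the ones described above:

```python
# Sketch of the blackboard crawler: agents grab link tokens from a
# shared blackboard, "parse" pages, and post newly found links.
# The web graph is a hypothetical dict standing in for real pages.

WEB = {
    "start": ["a", "b"],
    "a": ["b", "c"],
    "b": ["start"],   # a cycle back to the start page
    "c": [],
}

def crawl(seed):
    blackboard = [seed]     # tokens waiting to be grabbed
    visited = set()         # hash table of pages already parsed
    while blackboard:       # crawling is done when no tokens remain
        token = blackboard.pop()
        if token in visited:
            continue        # duplicate: throw the token away
        visited.add(token)
        blackboard.extend(WEB.get(token, []))
    return visited

print(sorted(crawl("start")))  # ['a', 'b', 'c', 'start']
```

With real concurrent agents, the blackboard and the visited set would need synchronization, but the control flow, and in particular the fact that nobody is "in charge", stays the same.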
Sometimes we also need (single or multiple) controls. For instance, in most Peer-to-Peer networks, the search for a resource starts with an initial set of controls that aggregate information about their environment. But the search itself and the registration of resources are conducted in a self-organizing and decentralized way. Note that the introduction of controls or hubs is not necessary for functional reasons but helps to increase scalability and performance.
A sensor (or robot) network is an example where we might even need no control at all. Just consider the case that we distribute sensors across a whole area to measure environmental data. Even if some of these sensors fail, we are still able to gather sufficient data, given that a certain number of sensors is available. But, of course, one could argue that there must be someone central who monitors all the sensors. By the way, the network of weather stations is an excellent showcase for this strategy.
To summarize this part: in such scenarios with active components, we can have the full range of centralized control, decentralized control, or no control at all. In any case, the important thing is that each active entity has a predefined goal and there is communication between these entities, either using a Peer-to-Peer communication style or one (or more) information hub(s).
As already mentioned, we could add evolutionary elements by letting the agents adapt to changing environments using strategies like survival of the fittest and uncontrolled modification. Genetic algorithms are the best example of this architectural style. Like in neural networks, the main challenge for a software engineer is the fact that such systems mostly reveal their functionality through cross-cutting and non-deterministic behavior. This makes it difficult to create such systems in a top-down way. Most of the time, such systems are composed bottom-up, leveraging a "configurable" evaluation function. With the evaluation function, systems can measure how good the results are and adapt dynamically until the results reach the required level of quality.
Now, I’d like to move to more “conventional” software architectures. As we know, all software architectures are idiomatic, usually providing a stack of idiomatic layers and composition of idiomatic subsystems and components. (Whatever, “subsystem” and “component” mean in this context :-) Note, that this also holds for patterns and especially for pattern languages.
By “idiomatic” I am referring to the point that architectural artifacts offer a kind of API which defines syntax and semantics from their consumers’ perspective. By composing these artifacts, higher-level functionality can be addressed using lower-level functionality. Artifacts at the same level cooperate to achieve composite behavior. Again, we are introducing entities, composition, and communication. Cross-cutting behavior addresses composites that reveal specific functionalities or quality attributes. To this end, even quality attributes could be considered functional: they may imply additional functionality or comprise cross-cutting behavior.
In other words, in software engineering we face the challenge of hierarchies and of the composition and integration of languages.
- The more complex these languages are, the more complex it is to use them.
- The more complex the compositions of languages are, the more complex it is to integrate them into a whole.
Thus, we need to balance the complexity of these languages against the complexity of their composition. This balance of forces can be achieved by building an appropriate hierarchy of these languages.
Thus, yet another definition of “Software Architecture” could be:
Software architecture is about the hierarchical integration of artifacts that balances the idiomatic complexity of its constituents and the complexity of their composition. In order to meet the given forces, it addresses strategic design decisions using different viewpoints, prepares mandatory tactical aspects, and abides by common guiding principles.
The secret of good design is appropriate partitioning into core parts which are then composed to build a whole, such that the design can easily address inherent complexity without introducing accidental complexity. In a platform or product line, this can be really challenging, because we must enable application engineering to create each member of the product family using the common parts, while allowing it to deal with variability. As we are dealing with hierarchies of artifacts, this turns out to be even more complex. When extending the scope of a product line, there might be implications for all aspects of the reference architecture.
The goal of architecture design should be to provide complex functionality by composing simple artifacts in a simple way. The tradeoff is between the complexity of artifacts and the complexity of composition, as well as finding an appropriate hierarchical composition. All approaches such as AOP, model-driven design, Component-Based Development, or SOA try to achieve this nirvana. However, introducing basic artifacts and composition techniques is beneficial but not sufficient. Instead, it is crucial to cover the problem domain and its mapping to the solution domain.
This is what all the well-known definitions of Software Architecture typically forget to mention :-)
Thursday, August 25, 2011
Make it explicit
An appropriate approach to address at least parts of this issue is to conduct a Kano analysis, in which engineers define the features all customers would expect, but also the excitement features that may surprise users in a positive way. Take the iPhone as an example: expected features include the ability to make phone calls, to maintain a contact list, or to send SMS messages. If a product lacks one of these, customers will be disappointed, but even the coverage of all expected features won't make a customer excited. Excitement features of the iPhone include the touchscreen and the possibility to add functionality using apps. It is important to mention that today's excitement features will become expected features in the future. Obviously, the developers of a mobile phone need to know the expected features in order to prevent any disappointment. However, in many cases these expected features are only known implicitly, which works fine if you are familiar with the problem domain, but leads to trouble if this is not the case. Thus, the first conclusion is to make these requirements explicit. The introduction of a problem domain model and of a feature tree proves to be an excellent way to understand the domain and the features as well as their dependencies and interactions.
So far I have mentioned functional aspects, but implicit assumptions about quality attributes should not be neglected either. Suppose it took minutes to find a person in the contact list or to initiate a phone call. Such cross-cutting concerns may not appear in the requirements specification, but they are nonetheless essential for successful development. Thus, the quality attributes must also be made explicit, which turns out to be important not only for design but also for the establishment of a test strategy and test plans.
Besides requirements, the business strategy and business case often contain implicit assumptions, for example about the kinds of users, market segments, features, or future evolution. Whenever architects are not informed about these assumptions, they can neither check their feasibility or validity nor provide the right solution. How could someone provide a solution for a vaguely understood problem anyway? Conclusion: always make sure that you obtain sufficient information from management. And mind the ambiguity problem! For instance, I once asked several product managers about their usability requirements. It turned out that every single one of them had a completely different understanding of usability. This is the reason why a common model and effective communication are so important.
Another problem area is the mapping from the problem domain to the solution domain. Developers may use particular technologies because they find them the right ones for addressing the problem. Or managers may prescribe technologies without knowing their implications. Architects may enforce decisions without giving a rationale. And developers may make implicit but wrong assumptions about the architecture. Thus, making decisions based on explicit assumptions is inevitable for successful design. For example, architects and developers should verify assumptions and document decisions explicitly. If you don't believe this, think about gifts you gave other people based on your assumptions.
A benefit of explicit information is that it can be cross-checked and allows for traceability. In a development project, the various groups of stakeholders should not consider themselves disconnected islands. Instead, they should share knowledge by using effective communication strategies. Also, they should make their decision processes and decisions transparent to the other groups. Basically, it is like having one common information repository on which stakeholders may have different views. But these different views must map consistently to each other. This is the reason why architects are the right persons to enforce making everything explicit. In particular, the development of platforms and product lines is very sensitive to implicit information, because failure to consider this information has an impact on many applications.
- Posted using BlogPress from my iPad
Thursday, April 21, 2011
SEACON 2011 Architecture Day in Hamburg
This year the German event SEACON 2011 is taking place from 27th to 29th June. The venue is the Atlantik-Hotel in Hamburg, a very nice location famous for its permanent guest, German Pop/Rock Tycoon Udo Lindenberg.
I was responsible for organizing the architecture day on 29th June, and we managed to get excellent speakers covering a lot of interesting architecture topics. Jim Webber from ThoughtWorks is going to give the keynote address: “Lessons learned in large HTTP-centric Systems”. Attendees will also have the opportunity to visit half-day tutorials on Dynamic Software Analysis (Johannes Siedersleben) and Software Quality (Erik Dörnenburg). In addition, the agenda is filled with excellent talks by renowned speakers and practitioners.
For software architects this will be a highlight in 2011. The SEACON Architecture Day might become an established event on software architecture. It is up to you. Attend and enjoy!
Saturday, February 05, 2011
The hard way to Distribution and Concurrency
The Web is hot. So is Cloud Computing. And Multicores might even become hotter. Whenever there are additional capabilities, software developers will immediately start to exploit them. Why else do we always require new hardware for new operating system versions?
At first glance, dealing with distribution and concurrency seems pretty simple. Just start a thread in Java or .NET, and the force will instantaneously be with you. Or click on some menu items of your favorite tools to distribute functionality elegantly across the network.
Unfortunately, many projects fail in providing distributed or concurrent functionality. How come all these nice programming platforms and libraries don’t suffice?
And the answer, of course, is software architecture. Especially for cross-cutting concerns such as distribution and concurrency, the programming level offers the wrong abstraction layer. It is not sufficient to address concurrency programmatically. First, we must address the architectural level, then the design aspects, before thinking about programming idioms.
Let me investigate concurrency as an example. For this purpose I’ll consider different domains with respect to concurrency:
- In home automation systems there is a large variability of different services with various durations. Think about switching lights on/off, or controlling the climate system. Typical home automation systems are reactive systems that use a central component for event notification and event handling. As many events might occur in parallel, the Concurrent Reactor Pattern offers the right architectural approach.
- For an image processing system we need a completely different architecture. Here, image data is created, processed and transmitted between subsequent components. Hence, the Pipes & Filters architecture pattern with concurrent filters fits nicely into the domain.
- When dealing with control systems or Web Servers with asynchronous events arriving all the time that must be handled in a synchronous way, a Half Sync / Half Async architecture provides a message queue which decouples the synchronous from the asynchronous world.
- In servers that process single messages, such as log servers, the Leader/Followers pattern applies perfectly. The leader is in charge of receiving the next event. When it receives an event, it promotes one thread in the followers chain to be the next leader, while itself becoming a worker that, after completing its work, rejoins the followers. A nice example of a self-organizing infrastructure.
- In a warehouse system we’d like various actors to store or load items at the same time. Thus, the warehouse core cannot provide just one single interface. Instead, multiple interfaces operate on the warehouse core logic. They can synchronize their data by using the Half-Object plus Protocol pattern.
- In a mobile network or an AI system multiple, concurrent agents may need to coordinate their activities to achieve a common goal. For such systems, patterns like the Blackboard pattern are applicable.
- For divide-and-conquer processing, the Master/Slave pattern helps structure our Map/Reduce problem into sets of interacting components, some of them responsible for the Map and most of them for the Reduce functionality.
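To illustrate the last bullet, here is a minimal sketch of the Master/Slave idea using Java’s Fork/Join framework; the recursive sum is only an illustrative stand-in for real Map/Reduce work:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Master/Slave sketch: the master task splits the work and delegates
    the halves to slave tasks, then combines (reduces) their results. */
public class MasterSlaveSum extends RecursiveTask<Long> {
    private final long[] data;
    private final int from, to;      // half-open range [from, to)

    MasterSlaveSum(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1000) {                 // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;               // master: divide the work ...
        MasterSlaveSum left  = new MasterSlaveSum(data, from, mid);
        MasterSlaveSum right = new MasterSlaveSum(data, mid, to);
        left.fork();                             // ... delegate one half to a slave ...
        return right.compute() + left.join();    // ... and reduce the partial results
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;   // 1..10000
        long sum = new ForkJoinPool().invoke(new MasterSlaveSum(data, 0, data.length));
        System.out.println(sum);                 // prints 50005000
    }
}
```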
We have seen various example domains, all of which require a different concurrency architecture due to their diversity of forces, such as the existence of sessions, the variety of services, synchronization necessities, and many others.
Our first conclusion is: different problem contexts require different concurrency architectures. There can’t be a single concurrency architecture for all problems. In addition, we need to deal with architectural issues before introducing finer-grained design aspects.
For the next step of refinement it is essential to solve issues at the design level. Think about Half-Sync/Half-Async, where we need to determine how to organize the synchronous worker threads of the half-sync layer and how to let them access the shared queue of incoming events in a coordinated way, to name only a few aspects.
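A minimal sketch of this design decision might look as follows, with a BlockingQueue decoupling the asynchronous producer side from a small pool of synchronous workers (the worker count, event count, and names are illustrative assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

/** Half-Sync/Half-Async sketch: asynchronously arriving events are dropped
    into a queue; a pool of synchronous workers takes them out in a
    coordinated way (the BlockingQueue does the locking for us). */
public class HalfSyncHalfAsync {
    static int process(int events, int workers) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(events);

        // half-sync layer: worker threads handle events one by one
        for (int w = 0; w < workers; w++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        queue.take();              // blocks until an event arrives
                        processed.incrementAndGet();
                        done.countDown();
                    }
                } catch (InterruptedException e) { /* worker shut down */ }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // half-async layer: events arrive "from outside" and are enqueued
        for (int event = 0; event < events; event++) queue.put(event);

        done.await();                              // wait until all events are handled
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + process(100, 4) + " events"); // prints "processed 100 events"
    }
}
```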
Therefore, we need to ask questions like:
- How will we manage shared resources such as threads or connections? Thread pools and other design tactics come to mind.
- How do we efficiently deal with sharing and locking of data? We might use monitor objects or Software Transactional Memory approaches.
- How can concurrent entities communicate and coordinate with each other? Actor-based programming patterns are one option here.
- How can we efficiently access data? Think about caching, eager evaluation, and lazy evaluation.
- Is there a way to efficiently handle I/O? Asynchronous processing such as Proactors, Active Objects, or Asynchronous Completion Tokens come to our rescue.
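As an example for the sharing-and-locking question above, here is a classic Monitor Object sketch using Java’s built-in monitors; the bounded buffer is just an illustrative shared resource:

```java
/** Monitor Object sketch: the bounded buffer synchronizes all access inside
    its own methods, so clients need no external locking. */
public class BoundedBuffer {
    private final int[] items;
    private int count, head, tail;

    public BoundedBuffer(int capacity) { items = new int[capacity]; }

    public synchronized void put(int item) throws InterruptedException {
        while (count == items.length) wait();    // buffer full: wait inside the monitor
        items[tail] = item;
        tail = (tail + 1) % items.length;
        count++;
        notifyAll();                             // wake up waiting consumers
    }

    public synchronized int take() throws InterruptedException {
        while (count == 0) wait();               // buffer empty: wait inside the monitor
        int item = items[head];
        head = (head + 1) % items.length;
        count--;
        notifyAll();                             // wake up waiting producers
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer buffer = new BoundedBuffer(2);
        buffer.put(1);
        buffer.put(2);
        System.out.println(buffer.take() + buffer.take()); // prints 3
    }
}
```

Note the guarded-wait loops: a woken thread must re-check its condition, which is exactly the kind of detail that belongs to the design level, not to an ad-hoc implementation.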
Basically, all of those design aspects are on the component level. Fortunately, there are many design patterns that help us in implementing this abstraction level.
Our second conclusion: whenever we conduct fine-grained design (i.e., tactical design), design patterns and design tactics offer guidance.
Only in the final step, we are going to leverage all programming language and library features. Eventually, we end up using methods for thread creation and management, mutexes, agents, semaphores, and all those other idiomatic features modern programming platforms provide.
In the best case, a DSL and/or a library could help increase the abstraction level of the implementation by hiding details of the underlying hardware system. Think about OpenMP, PLINQ, Akka STM, and many other solutions.
What we learn from this example is that cross-cutting concerns like infrastructure or quality attributes require a sound fundamental architectural approach which is refined using design tactics and design patterns, and then implemented with the available tools offered by the programming platform. For instance, creating a concurrent solution requires an architecture-first instead of an implementation-first approach. For the same reason, SOA approaches favor interface-first over implementation-first styles.
Sure, programming might be much more enjoyable and exciting than drawing architectural boxes and lines, but in the end, ad-hoc implementation of cross-cutting concerns causes more harm than fun. And don’t forget: creating a sound software architecture is sexy, at least for software engineers.
Sunday, January 23, 2011
Where we can meet
I will participate in the OOP Conference 2011 in Munich which starts on Monday, the 24th January.
- Tuesday morning I will talk on Design Tactics
- Tuesday evening I will present Actor-based Programming
- Wednesday evening my Scala tutorial will take place
- Tuesday and Thursday I will also hold Meet-the-Editor events, each 30 minutes long, where you can talk to me in my role as editor-in-chief of JavaSPEKTRUM
- Tuesday evening I will take part in the IT Stammtisch organized by the unique and nice Nikolai Josuttis. So, if you would like to see me very uncomfortable, you should definitely attend this panel show.
I will also attend the QCon 2011 in London (7th to 11th March) where I will
- give a tutorial on functional programming with Scala and F# on Monday
- host a track on Software Architecture Improvement on Wednesday (speaking myself in the introductory session)
- present the Onion model in Floyd’s track about models
Hope to meet you!
Sunday, January 16, 2011
The Beauty and Quality of Software
In my spare time I often enjoy digital photography. Capturing people, streets, buildings, wildlife, or landscapes involves a lot of fun and is a relaxing experience. After taking photos, when processing the images with Adobe Lightroom, I separate the pictures I consider decent from those I am going to throw away. The obvious question for a photographer is whether there are guidelines or indicators that help take better photos and also help rate them. And indeed, there actually are such quality indicators.
Independent of the content of a picture, there are some common properties I consider important when rating a photo, e.g.:
- Does the photo clearly focus on one specific object or does it rather confuse by showing various things without any focus?
- Is the horizon depicted horizontally? Even if there is only one degree of deviation, the viewer won’t feel comfortable.
- Are there some interesting kinds of symmetries in the image?
- Does the photo contain a chaos of many different things or does it concentrate on one specific detail?
There are many other indicators of internal quality, such as the rule of thirds. Interestingly, these rules are really helpful. However, they are not proofs of good or bad quality of a photo, but mere indicators. For example, sometimes deliberately breaking symmetry or violating the rule of thirds may lead to much more expressive photos. No rule without exception.
How does all of that relate to software architecture? The question is whether we can apply some indicators to get a first impression of the internal quality of an architecture, something I consider architectural beauty. When I once read Dave Cutler’s book on the design of Windows NT, it really opened my eyes. All parts of the architectural design were easy to understand. The architecture was expressed in a kind of idiomatic way, where the same principles had been applied to different parts. The partitioning of the overall operating system into different layers and components with clear responsibilities helped me grasp the details easily. On the other hand, I have also experienced bad architectures with over-generic designs, where it took me weeks to get the slightest clue what they were trying to implement. So are there quality indicators for good or bad design?
Obviously, there must be such indicators. Otherwise, the widespread addiction to metrics wouldn’t make much sense. Metrics combined with CQM tools (CQM = Code Quality Management) and architecture analysis tools can reveal issues such as insufficient cohesion or excessive coupling. Metrics, however, must always be considered relative! For instance, there is no rule of thumb whether 5k LOC are good or bad. The number might be fine in one case, but bad in another. Applying McCabe’s cyclomatic complexity can lead to wrong conclusions (if you don’t believe me, calculate the cyclomatic complexity of an Observer scenario with 80 observers). Why is that? Metrics operate on the syntactic level. Thus, they can only measure syntactic properties. They are not capable of dealing with semantics. Hence, all metrics must be set into the right context by human engineers. While a coupling value of 5 by itself might not be too informative, an increase of the coupling from 5 to 10 after one iteration reveals a potential problem.
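To see why the Observer scenario fools the metric, consider this sketch: the notification method contains a single loop, so its cyclomatic complexity stays at 2 whether 2 or 80 observers are registered, while the actual runtime interactions grow with every observer. Names and the event type are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

/** Observer sketch: notifyObservers has a cyclomatic complexity of 2
    (one loop) regardless of how many observers are registered; the
    semantic complexity of 80 interacting observers is invisible to
    syntactic metrics. */
public class Subject {
    private final List<IntConsumer> observers = new ArrayList<>();

    public void register(IntConsumer observer) { observers.add(observer); }

    public void notifyObservers(int event) {
        for (IntConsumer observer : observers) observer.accept(event);
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        int[] notified = new int[1];
        for (int i = 0; i < 80; i++) subject.register(e -> notified[0]++);
        subject.notifyObservers(42);
        System.out.println(notified[0] + " observers notified"); // prints "80 observers notified"
    }
}
```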
Among the more absolute indicators are architecture smells, which also help figure out the necessity of architecture refactoring. Let me give you a few examples:
- Dependency cycles between architectural components imply that you cannot understand, test, or change one component without addressing the other components in the cycle.
- Inexpressive component names prevent engineers from understanding the architecture without digging deeper into the details.
- Component responsibility overload means that a component implements too many different responsibilities, preventing a clear separation of concerns.
- Unnecessary indirection layers not only negatively affect developmental qualities such as maintainability or extensibility, but also operational quality attributes such as performance.
- Implicit dependencies often lead to a drift between the desired architecture and the implemented architecture. One notable example is violating strict layering, thus introducing unnecessary and unknown dependencies.
- Over-generic design: When there are dozens of Strategy patterns (or proxies, visitors, etc.) in your design, this is often a clue to a potential problem lurking in your design. A Strategy pattern basically means “I have no clue but want to defer the decision to another place”. Applying many Strategy patterns means the architects had absolutely no clue at all. Instead of opening the system according to the Open/Closed principle at selected variability points, they applied Strategy unsystematically. This is the best way to achieve less flexibility and less performance.
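For contrast, a disciplined use of Strategy opens exactly one deliberate variability point instead of scattering strategies "just in case". The pricing example below is purely hypothetical (prices in cents to avoid floating-point surprises):

```java
import java.util.function.IntUnaryOperator;

/** Strategy sketch: ONE deliberate variability point (the pricing rule),
    opened on purpose according to the Open/Closed principle. */
public class Checkout {
    private final IntUnaryOperator pricingStrategy;  // prices in cents

    public Checkout(IntUnaryOperator pricingStrategy) {
        this.pricingStrategy = pricingStrategy;
    }

    public int total(int netCents) {
        return pricingStrategy.applyAsInt(netCents);
    }

    public static void main(String[] args) {
        // two concrete strategies plugged into the single variability point
        Checkout regular  = new Checkout(net -> net + net * 19 / 100);           // 19% VAT
        Checkout discount = new Checkout(net -> (net + net * 19 / 100) * 9 / 10); // plus 10% off
        System.out.println(regular.total(10_000));   // prints 11900
        System.out.println(discount.total(10_000));  // prints 10710
    }
}
```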
Architecture smells can often be detected using architecture analysis (tools). If we think about these smells, we recognize that there must be something more that gives us an indication of the internal quality or beauty of an architecture, something that resembles what I addressed in the introduction on digital photography.
I cannot speak for the whole community of software architects. However, here are the quality indicators that help me assess a software architecture:
- Simplicity: A software architecture is simple when it addresses the requirements with the least possible number of design artifacts (KISS) and does not introduce accidental complexity through additional entities (aka design pearls) that are not required. There is the old rule: “an architecture is simple when you cannot remove anything without failing to meet some requirements”. Hence, a simple architecture is not a simplistic architecture. To achieve simplicity, one guideline is to root all architecture decisions in the architecturally relevant requirements. You can measure the simplicity of an architecture using a simple test: ask the architect to introduce the architecture within at most 30 minutes, only using a flip chart to draw the architectural baseline. I know, this also depends on the presentation skills of the architect, but that’s another story. Nonetheless, a listener should be able to repeat the core architectural decisions. There is one caveat: some “smart” software developers try to get rid of complexity by just hiding it in the guts of the implementation. For example, a system could theoretically consist of one single object with a doIt method. This is like some of the ESB and EAI products that just hide the mess behind some wrappers. This, however, does not introduce simplicity, but rather moves complexity to lower layers. The same simplicity test would definitely reveal that kind of bad trick.
- Expressiveness: An architecture is expressive when you can easily grasp the purpose of all entities and of the whole architecture by looking at the architecture baseline. I don’t claim an architecture needs to be self-explanatory in the sense of understanding it by just looking at the UML diagrams. This can’t work, because diagrams fail to reveal the leading design principles and the rationale of design decisions. What I mean is: just looking at the architecture design should give you enough understanding to grasp the whole architecture vision by diving into some additional details. How can you achieve expressiveness? First of all, give expressive names to all entities, so that another engineer gets the idea. For each component make sure: it should do one thing, and it should do it right. And each component should only have one responsibility. Assigning lots of responsibilities to the same component leads to an inexpressive architecture, as do responsibilities that crosscut your whole design. Role-based design is a perfect tool to assign role-specific interfaces to components. Speaking about interfaces: also add a contract to each interface that specifies what (kind of protocol) the interface expects and what it provides. Aspect-oriented development may help deal with crosscutting concerns. Another important issue in this context is to make all dependencies explicit in the architecture. That’s a no-brainer, because implicit dependencies are simply not visible in the design, so an architecture reviewer won’t recognize them easily. Expressiveness can be verified by a simple telephone test: ask an architect to explain his/her architecture in a 10-minute telephone call. After the call you should be able to understand at least the coarse-grained architecture.
- Behavioral Symmetry: Suppose you are viewing the sequence diagrams of a transaction-based enterprise system. Although there is a method invocation called beginTransaction, you find no endTransaction, commit, or rollback. Wouldn’t you feel uncomfortable with such an observation? Broken symmetry of behavior is a good indication that something might be wrong with the design. As with all quality indicators, it is not a proof of bad quality. For example, in a distributed system the remote server might allocate memory for a new result object, while the client is supposed to free it after use.
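One way to enforce such behavioral symmetry in code is an execute-around method that pairs every begin with a commit or rollback; the Transaction type below is a hypothetical stand-in, not a real transaction API:

```java
/** Execute-around sketch that enforces behavioral symmetry: every
    beginTransaction is guaranteed a matching commit or rollback. */
public class TransactionTemplate {
    static class Transaction {                   // hypothetical stand-in type
        boolean open;
        void begin()    { open = true; }
        void commit()   { open = false; }
        void rollback() { open = false; }
    }

    /** Returns the transaction so callers (and tests) can check it was closed. */
    static Transaction inTransaction(Runnable work) {
        Transaction tx = new Transaction();
        tx.begin();
        try {
            work.run();
            tx.commit();                 // symmetric counterpart of begin() ...
        } catch (RuntimeException e) {
            tx.rollback();               // ... even on the failure path
        }
        return tx;
    }

    public static void main(String[] args) {
        Transaction ok     = inTransaction(() -> { /* succeed */ });
        Transaction failed = inTransaction(() -> { throw new RuntimeException("boom"); });
        System.out.println("ok closed: " + !ok.open + ", failed closed: " + !failed.open);
        // prints "ok closed: true, failed closed: true"
    }
}
```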
- Orthogonality or Structural Symmetry: I once had to review a software architecture in which several different solutions were used for the same problem. Hence, I needed to understand all these various solutions in order to understand the architecture. Structural symmetry means leveraging the same solution for all instances of the same problem. Even if MFC provided a handful of string classes, you should not use all of them in the same application. This is particularly essential when dealing with crosscutting concerns such as error handling, tracing, or logging. Imagine each developer introduced his/her own error management strategy. In this case you would need to cope with broken structural symmetry. To avoid structural asymmetry, you may introduce guidelines and conventions for crosscutting concerns or recurring problems. It is not sufficient to provide documents, though; you should also enforce these guidelines actively in the project. When reviewing a software architecture, ask for such guidelines and ask how they have been enforced.
- Pattern Density: We all have heard about the “not invented here” syndrome and may even have experienced it ourselves. Patterns capture the knowledge of experts on how to solve recurring problems. Thus, a wise engineer would rather apply a pattern than come up with a homegrown solution. The more patterns have been used, the better. Wait a second! That is not quite true. By using patterns I mean using them whenever they solve a problem you really have, not like in the “hammer-and-nail” syndrome with an attitude such as: even though I have no need for Observer in my system, I will change the system so that I can apply the only pattern I know. Applying patterns is not trivial. Patterns are not Lego building blocks you can simply add to your architecture. Rather, you need to adapt your design, map the pattern roles to your architecture, and integrate all the patterns properly. In some hotspot areas of an architecture design, the same components may contribute to several patterns. Thus, even a high pattern density could be harmful if the patterns have not been integrated properly.
- Emergence: In an ant colony all individual elements reveal very “simple” behavior, but the whole colony seems to act smartly. Another example of the whole being more than the sum of its parts (Aristotle). The Internet and the Web are also good examples of this kind of emergence. They consist of simple elements such as Ethernet, TCP/IP, DNS, HTTP, SMTP, and RSS, which serve as the building blocks of a complex ecosystem. Emergence also means decentralization and self-organization. Many architects are completely addicted to centralization. But often, when quality attributes such as scalability, availability, or fault tolerance have high priority, as in Cloud or P2P Computing, decentralized approaches are much more effective. Think of the Leader/Followers pattern as an example: it introduces a self-organizing pool of threads that reduces resource competition by only allowing the leader to access the joint event source. Another indirect consequence of emergence is that you shouldn’t strive for fat APIs that implement all conceivable methods, but for simple APIs providing methods with which you can easily implement more advanced functionality.
- Partitioning and Spacing: The responsibilities in your design should be mapped systematically to components and subsystems. For example, subsystems should only contain functionality that reveals some semantic coherence. That is, the closer two different kinds of functionality are related semantically, the closer they should be aggregated in the design. Layering is a good way of separating different levels of abstraction, with top layers being more abstract (close to the problem domain) and low layers being less abstract (close to the solution domain). Basically, you need some kind of systematic top-down design and problem decomposition, so that the resulting subsystems and components offer an adequate partitioning of responsibilities and sufficient spacing (different kinds of concerns should be mapped to different subsystems, layers, and components). In addition, architecture entities should offer a clean partitioning into role-based interfaces instead of relying on one bloated interface per component or subsystem. An example could be an additional interface that enhances the testability of the system, or an interface for configuring a component. If possible, commonly used interfaces, such as configuration interfaces, should be defined uniformly for the whole system. Otherwise, configuration would need to deal with many different configuration approaches (see the Component Configurator pattern for more details).
There are definitely more qualities I could add to this list of indicators. However, in my experience the aforementioned quality indicators already offer an excellent mental tool for assessing the internal quality, and thus the beauty, of a software architecture.
Mind the gap: as mentioned several times, each of these quality indicators serves only as an indicator, not as a proof. But if the design under consideration does not provide one of these internal quality properties, the responsible architects should at least offer you a good and convincing rationale for why this property is missing.
Needless to say, quality indicators should not only be leveraged for assessing existing software designs. You should consider them just as valuable when designing a new software architecture.
Friday, January 07, 2011
Qcon 2011 in London: 100 British Pounds discount
I will host a track on Architecture Improvements at QCon 2011 with a line-up of excellent speakers. In addition, I will give a talk on Software Architecture Design as well as a tutorial on Functional Programming with Scala and F#.
If you’re interested in attending the conference, just use the promo code STAL100 to receive a 100 GBP discount.
Here’s the URL for the QCon.
Monday, January 03, 2011
Small is better - really?
What makes a programming language good? In my opinion, qualities such as:
- appropriateness for solving a specific class of problems
- simplicity
- expressiveness
- orthogonality of language structures
- the principle of least surprise
- emergence of features
- avoidance of implicit dependencies and influences (such as temporaries in C++)
- availability of powerful language idioms
There are several conclusions we could draw from these qualities:
- Even if a language offers most of these qualities at the beginning, it might erode over time when it is evolved in an unsystematic or improper way.
- Different problem classes might call for different languages. So we have to choose between a best-of-breed approach for each problem class, as a polyglot programmer would prefer, or we could try to identify a multi-paradigm language. Note, however, that multi-paradigm languages are often very likely to erode, especially when more and more paradigms are added as an afterthought. However, this is a risk, not a foregone conclusion. For instance, multiple paradigms were cleanly integrated into Scala from day one, while C++ started as a structured language to which classes were added.
- The language core might be excellent, but it won't be helpful if the supporting libraries and tools do not reveal the same qualities mentioned above. A fool with a tool is still a fool. So, even for the best languages, the APIs of libraries need to integrate tightly.
- If you follow the Pragmatic Programmers' advice to learn a new language every year, you will also be able to extend your knowledge of new solution paradigms and idioms. This will drastically improve your skills as a programmer and architect. Mind the gap: it is not sufficient to just learn a language; you also need to practice it for a while.
Personally, I have learned a lot of languages in my career: x86 assembler, Pascal, Modula-2, VB, Java, C, C++, Ruby, Lisp, Clojure, Scala, C#, F#, CIP, Haskell, D, Axum, ... All of these languages have their purpose and their strengths, but also their weaknesses and problem areas they cannot address well. Some of them are large and easy to learn, such as Scala, while others are small and difficult to learn, such as Lisp, which is why they invented Scheme :-)
Size does not necessarily matter, the problem you are going to solve does.