Sunday, July 22, 2012

Reviewing systems - the sum is more than its parts

The profession of a software or system architect is not only about creating new systems; it is also about assessing and improving existing ones. If change is the rule and not the exception, architectures grow and change continuously. To keep them sustainable, review and assessment techniques are of utmost importance.

An architecture review consists of different phases:

  • In the clarification or scoping phase we need to define the goal of the architecture review as well as the one to three key questions the review is supposed to answer. For example, a goal could be to validate that our new system architecture implements the desired dependability quality attributes appropriately. One of the key questions could be: “Can the software architecture achieve five nines of availability?” (A short calculation after this list shows what such a number implies.)
  • In the analysis or information phase we need to read documents, interview stakeholders, check code and test cases, watch demos, among other things, so that we are able to answer the key questions.
  • In the evaluation phase, reviewers investigate the strengths, weaknesses, opportunities, and threats of the current system with respect to the review goal. If there are risks in the system, reviewers should define recommendations to mitigate them.
  • In the feedback phase, all information produced by the review (findings and recommendations) is fed back to the team responsible for system development.
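
To make a key question like "five nines of availability" tangible during scoping, a quick back-of-the-envelope calculation helps. Here is a minimal sketch in Python; the numbers follow directly from the definition of availability:

    # Back-of-the-envelope: how much downtime does "five nines" allow?
    AVAILABILITY = 0.99999                 # 99.999%, i.e. "five nines"
    MINUTES_PER_YEAR = 365.25 * 24 * 60    # about 525,960 minutes

    downtime = (1 - AVAILABILITY) * MINUTES_PER_YEAR
    print(f"Allowed downtime: {downtime:.2f} minutes per year")
    # prints: Allowed downtime: 5.26 minutes per year

In other words, the architecture must tolerate at most roughly five minutes of unplanned downtime per year, which immediately focuses the analysis phase on failover and recovery mechanisms.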

These phases loosely parallel the phases of the RUP (IBM Rational Unified Process): Inception, Elaboration, Construction, Transition.

Unfortunately, there are many different types of reviews and review techniques: qualitative and quantitative reviews, scenario-based and experience-based reviews, code, design, and architecture reviews. So which one should we choose for which purpose?

In my experience, it is often best to conduct an experience-based review as outlined in the description of the phases above, but to integrate other review techniques as well, for example:

  • If quality attributes are the main focus of the review, the ATAM (Architecture Tradeoff Analysis Method) can be added to concretize quality attributes as scenarios and compare them with the actual architecture, thus determining risk themes.
  • ADRs (Active Design Reviews) are a complementary means for obtaining information from stakeholders on architecture issues, instead of relying on interviews alone.
  • Quantitative assessments such as metrics, prototypes, and simulations help to obtain more detailed information about the system under review, its capabilities, and its limitations (a small metrics sketch follows this list).
  • Code and design reviews help reviewers gain more insight into the details of the system. Of course, these reviews are constrained to the parts relevant to the overall review goal.
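
As an illustration of such a quantitative assessment, the following minimal sketch computes Robert C. Martin's instability metric, I = Ce / (Ca + Ce), for each module of a dependency graph (Python for brevity; the module names and dependencies are purely hypothetical):

    # Instability I = Ce / (Ca + Ce) per module.
    # Ce = efferent (outgoing) couplings, Ca = afferent (incoming) couplings.
    from collections import defaultdict

    # module -> set of modules it depends on (illustrative data only)
    deps = {
        "ordering":    {"billing", "persistence"},
        "billing":     {"persistence", "messaging"},
        "messaging":   set(),
        "persistence": set(),
    }

    # count incoming dependencies per module
    afferent = defaultdict(int)
    for module, targets in deps.items():
        for target in targets:
            afferent[target] += 1

    for module, targets in sorted(deps.items()):
        ce, ca = len(targets), afferent[module]
        instability = ce / (ca + ce) if (ca + ce) else 0.0
        print(f"{module:12s} Ca={ca} Ce={ce} I={instability:.2f}")

Such numbers never replace reviewer judgment; they merely point to modules worth a closer look in the evaluation phase.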

The toolbox of the software or system architect should contain the relevant review techniques. Whenever architects are involved in a review, they should determine in the clarification phase which techniques they will use in the subsequent phases.

Note that reviews do not need to follow a waterfall model. You may, and should, use an agile approach: answer the most relevant key question, or its most important aspects, first, using time-boxed increments.

Such reviews may require weeks, but they can also be conducted as one-day flash reviews. The more you can narrow the review scope, the less effort the review takes. If you conduct regular reviews after each increment, the architectural deltas are typically small; this lets you narrow the scope so that each review can be done in a day. The findings are then used to refactor and improve the system architecture.

Reviewers should be experienced in software/system architecture as well as in review techniques. To build up a review culture in your organization, start by enabling the lead architect to conduct reviews. Then use the master-apprentice model to steadily increase the review skills of the rest of the team.

But as mentioned earlier: do not rely on a single review method; establish a toolbox of review and assessment techniques that can be used in combination to ensure architecture sustainability.

Saturday, January 07, 2012

Trapped

DSLs (domain-specific languages) are currently being promoted by a large number of activists. If you have ever read the seminal Pragmatic Programmers book, you will find such a recommendation there as well. So, will the software engineering universe soon turn into a paradise? Let me tell you two examples from industrial practice.

Once upon a time, in the logistics domain, a smart and lazy software engineer introduced a new language just for himself, in a very ad-hoc manner. When they saw the result, all his colleagues got very excited about the concept and started extending the language for their own purposes. The language inevitably grew and grew. Soon, an increasing number of system parts depended on the language. Unfortunately, when our smart inventor reached a high age, he had to leave, and his colleagues were on their own. Now there was no one left in the company who really understood the language concept and its realization. It is claimed that the language is still growing. Other theories assume the system had to be completely rewritten in the meantime.

In another universe and at another time, another software engineer convinced his fellows to introduce DSLs. All the IT dwarfs went crazy inventing their own languages. Yes, they even held competitions over who could come up with the most fascinating or useful language. After a while, the DWARF software system was nothing but an ocean of languages. Only wizards were able to generate a working solution. When a furious dragon (= customer) attacked their town, everything imploded. No one has a clue where the remains of the dwarf town are located.

What can we learn from these failures? DSL design is (like) architecture design. An ad-hoc language will lead to design erosion and cause more harm than good in the long run. DSLs represent strategic design tools that should only be in the hands of experienced designers. There are purposes for which DSLs are a perfect fit, but there are also circumstances where they shouldn't be applied. Overdoing it increases accidental complexity; it is like component-based reuse in that respect. Each DSL should have exactly one responsibility. Plan to grow each DSL systematically, refactor it, and verify its correctness. Communicate with all stakeholders that are affected by the DSL.

Consider the design choices for the language syntax as well (see the small sketch at the end of this post). Should it be a text-based language, a graphical language, or both? Mind the XML hell: long, long ago almost everyone was striving for XML-based languages, but in some contexts XML made writing configurations and documents ineffective, incomprehensible, and tedious. I've seen systems with thousands of XML files. Let experts build and grow languages.

It is amazing how often our discipline gets addicted to all kinds of panaceas. Once you are lost in the buzzword jungle, all systems and stakeholders will suffer from DSL paranoia. DSLs are an awesome means for boosting productivity. However, used by the wrong persons or with the wrong motivations, they make it easy to shoot yourself in the foot.
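
To illustrate the syntax design choice mentioned above, here is a minimal sketch of an internal DSL with exactly one responsibility, declaring message routing rules, embedded in Python as the host language (all names are illustrative, not a real API):

    # A tiny internal DSL for routing rules -- one responsibility only.
    class Route:
        def __init__(self, source):
            self.source, self.filters, self.target = source, [], None

        def filter(self, predicate):
            # chainable: returns self so rules read like sentences
            self.filters.append(predicate)
            return self

        def to(self, target):
            self.target = target
            return self

    # Reads almost like prose -- no schema, no XML interpreter needed:
    route = (Route("orders")
             .filter(lambda msg: msg["priority"] == "high")
             .to("express-queue"))

Expressing the same rule in XML would require a schema, nested elements, and a custom interpreter. Keeping the language small, embedded, and single-purpose is what gives it a chance of surviving its inventor.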