Software architecture reviews are one of the most important approaches for introspecting a software design. The best way to introduce such assessments is to establish regular evaluation workshops in every iteration. Architects and designers review the software architecture, focusing on internal qualities such as the absence of dependency cycles and on quality attributes such as performance or modifiability. If they find any issues, they define action items and means for getting rid of the challenges (also known as problems). My own experience shows that this is definitely the recommended way, because regular reviews detect issues as early as possible, before they cause further harm such as accidental complexity or design erosion. If introspection tools such as Odasa or SotoArc are available, detecting internal quality issues is often surprisingly easy. And if architects compare quality attributes with architecture decisions at an early stage, they reduce the probability that an important nonfunctional requirement has been neglected or designed in the wrong way.
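To make the "internal quality" part concrete: detecting dependency cycles boils down to finding cycles in the module dependency graph, which reviewers can do even without a commercial introspection tool. The following minimal Python sketch assumes a hypothetical dependency map (module name to the modules it uses) extracted beforehand, e.g., from import statements or build files; the module names are purely illustrative.

```python
from typing import Dict, List, Set

# Hypothetical dependency map: module -> modules it depends on.
DEPENDENCIES: Dict[str, List[str]] = {
    "billing":   ["customers", "reporting"],
    "customers": ["core"],
    "reporting": ["billing", "core"],  # reporting -> billing -> reporting is a cycle
    "core":      [],
}

def find_cycles(deps: Dict[str, List[str]]) -> List[List[str]]:
    """Return dependency cycles found via depth-first search."""
    cycles: List[List[str]] = []
    visited: Set[str] = set()

    def dfs(node: str, path: List[str]) -> None:
        if node in path:  # back edge: we have run into a cycle
            cycles.append(path[path.index(node):] + [node])
            return
        if node in visited:
            return
        visited.add(node)
        for neighbour in deps.get(node, []):
            dfs(neighbour, path + [node])

    for module in deps:
        dfs(module, [])
    return cycles

if __name__ == "__main__":
    for cycle in find_cycles(DEPENDENCIES):
        print(" -> ".join(cycle))  # e.g. billing -> reporting -> billing
```

In a real review the interesting part is not the algorithm but the discussion afterwards: which of the reported cycles are accidental, and how to break them, for example by introducing an interface or relocating a class.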
Unfortunately, many architecture reviews are initiated very late, which makes problem detection and architecture refactoring much more complex. Nonetheless, such architecture reviews are still a good way to get rid of architectural problems, especially when the organization could not handle these issues successfully within the development project itself. These reviews should have a clear scope; otherwise, the review will drag on for a long time, which is not acceptable in most cases. Scoping in this context means defining a main objective of the review as well as 2-3 key areas where potential problems supposedly lurk in the guts of the design. The result of such a "large" review should be a document with the key findings, e.g., the strengths and weaknesses of the architecture. However, it should not only contain the weaknesses but also appropriate solutions for addressing them. Some review methods don't consider solutions mandatory in such a report. I definitely do. Even more, I consider ways to get rid of the weaknesses the most important result of a review.
Such a report should have the following structure:
- Scope of the review and review method
- Project documents and other information used (for example: interviews, test plans, source code, demos)
- Executive Summary
- Overview of the software architecture under review
- Findings and Recommended Solutions
- Summary
While regular, iterative reviews can be conducted by project members such as architects and developers, larger review projects should be done by "external" reviewers. There are two main reasons for this recommendation: First, external reviewers often see more. Second, higher management is often more inclined to accept recommendations from external reviewers than from their own project members.
Interestingly, an architecture review is not constrained to software architecture challenges. It may also reveal problems in the organization, development processes, roles and responsibilities, communication, technologies, tools, or the business. To be honest, the results will only rarely address design problems alone. That is another reason why architecture reviews should be staffed with external reviewers.
There is a whole range of review methods documented in the literature, such as CBAM, SAAM, and ATAM. At Siemens we have developed our own method called Experience-based Reviews. While the former three methods are scenario-based (which I will explain in a later blog post), experience-based methods are driven by the experience of the reviewers and are less formalized.
Such a review normally consists of the following phases:
- A Kickoff Workshop where reviewers and project stakeholders meet to introduce the review method, present the software architecture, and define the review scope
- In the Information Collection phase, the reviewers collect all available information such as documents, test plans, source code, and demos. Further information is gathered by conducting interviews with the project's stakeholders, one hour each and limited to a single interviewee per session. The information is kept anonymous in order to establish a relationship of trust between reviewers and interviewees.
- In the Evaluation phase all information is evaluated. The reviewers identify strengths, weaknesses, opportunities, and risks, as well as possible solutions for the weaknesses and risks. At the end the reviewers create a draft of the review report, structured as described above. They then send this draft to all stakeholders, incorporate their feedback, and disseminate the final report.
- A Final Workshop helps to summarize the key findings and discuss open issues.
This approach works extremely well if the reviewers are experienced. We normally use the Master/Apprentice pattern to teach architects how to conduct an experience-based review: senior reviewers lead the reviews and train junior reviewers on the job.
All of these approaches are qualitative assessment methods. In addition, architects and developers should also leverage quantitative methods when actually creating the design and implementation. Tools for quantitative assessment include, but are not limited to, feasibility prototypes, simulations, and metrics, as illustrated by the sketch below.
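As an illustration of the metrics part, the following sketch computes Robert C. Martin's instability metric I = Ce / (Ca + Ce), i.e., the ratio of efferent (outgoing) to total coupling per module. The dependency map and module names are again hypothetical; in practice such numbers would typically come from a metrics tool rather than a hand-written script.

```python
from typing import Dict, List

# Hypothetical dependency map: module -> modules it depends on.
DEPENDENCIES: Dict[str, List[str]] = {
    "billing":   ["customers", "core"],
    "customers": ["core"],
    "reporting": ["billing", "core"],
    "core":      [],
}

def instability(deps: Dict[str, List[str]]) -> Dict[str, float]:
    """Instability I = Ce / (Ca + Ce) per module (0 = maximally stable, 1 = maximally unstable)."""
    efferent = {m: len(set(targets)) for m, targets in deps.items()}  # Ce: outgoing dependencies
    afferent = {m: 0 for m in deps}                                   # Ca: incoming dependencies
    for targets in deps.values():
        for target in set(targets):
            afferent[target] = afferent.get(target, 0) + 1
    return {
        m: efferent[m] / (afferent[m] + efferent[m]) if (afferent[m] + efferent[m]) else 0.0
        for m in deps
    }

if __name__ == "__main__":
    for module, i in sorted(instability(DEPENDENCIES).items()):
        print(f"{module:10s} I = {i:.2f}")
```

A quantitative result like this does not replace the qualitative judgement of the reviewers; it merely points them to the modules that deserve a closer look.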
Architecture assessment methods are a bit like testing or refactoring. Many engineers think they can survive a project without these disciplines, but in all but trivial projects this proves to be a wrong assumption. The more you invest in early and regular testing, refactoring, and architecture assessment, the better your RoI will be. Do less in the beginning, and you'll pay more in the end!