Thursday, December 23, 2010

Books, books, books and tools

During vacations I often dedicate some time to reading interesting new books as well as experimenting with programming languages. For a software architect I consider it essential to stay in sync with technology evolution. Honestly, I do not believe in ivory-tower architects who know only UML and PowerPoint. So, what are my plans for this Christmas season?

  • One of my favorites is Martin Fowler's and Rebecca Parsons's book "Domain-Specific Languages"! This book explains DSLs and also teaches how to practically leverage internal and external DSLs.
  • From the Pragmatic Programmers I received "Seven Languages in Seven Weeks" by Bruce A. Tate, who provides a hands-on tour of Clojure, Haskell, Io, Prolog, Scala, Erlang, and Ruby. If you are a fan of programming languages, this is definitely a good read.
  • Of course, I am also enjoying fiction, such as Michael Crichton's "The Great Train Robbery". I recently started to read Crichton's novels and really like his work. In addition, I am reading some layman's books on the universe and quantum physics. I am currently too lazy to reread Feynman's Lectures on Physics.
  • In preparation for OOP 2011, where I will give three talks (one on actor-based programming, one on Scala, and one on design tactics), I have programmed a lot with Scala 2.8.1, sbt, and Akka. But I am also currently trying to dive deeper into Clojure. I used Lisp at university, recovered my knowledge a few years ago, and am now starting to get familiar with Clojure. This is where monads come in. Interestingly, there are many definitions and explanations of monads, but none of them is really decent. Maybe I will give it a try in the future, and probably also fail :-)

So many new things to learn in so little time. Unfortunately, the days are over when I could learn the whole JDK or .NET in a weekend. Software engineers have to become more and more focused. It is even impossible to keep yourself up to date with enterprise technologies alone, or concurrency techniques, or mobile computing, you name it. Nonetheless, systems are gaining more and more complexity, becoming bigger, and using more technologies. Interestingly, we are always testing our limits. No matter how excellent our tools are, we try to build systems that are more complex than our skills allow. In other words, technology and complexity are always ahead of us. Software architectures do not only fail to master inherent complexity but become beasts filled with accidental complexity. Our capabilities do not scale with our problems. Did I mention Henry Petroski's book "Learning from Failure"? The essence of this book is that we will always push past our limits, and the best way to survive is to learn from failure. This basically resembles what Thomas Kuhn once said about scientific evolution and revolution.

Fortunately, learning new things and technologies is also a lot of fun: trying them out, discovering their strengths and weaknesses, increasing your mental toolset. It opens the horizon to new ideas and approaches. This is why learning is one of the most important skills for software engineers.

So, what are your plans for the rest of this year?

Friday, November 19, 2010

Understanding software architecture

While creating a software architecture is a very creative activity which I really enjoy, analyzing existing software systems is much harder, and it happens frequently.

There are several reasons for this observation:

  • Often, the designed and the implemented architecture differ dramatically. In these cases, the documents are nothing but fairy tales.
  • In some cases, I encounter a kind of documentation hell. The information surely can be found somewhere, but it is almost impossible to figure out which document is the right one. I am here to find, not to search.
  • Another "highlight" is insufficient abstraction levels in the documentation. No one can read, and even fewer people can understand, an architecture document that covers 1000 pages. Likewise, the other extreme isn't helpful. For example, I once received a software architecture document consisting of one page filled with UML diagrams.
  • What I really dislike are documents that only show the WHAT but not the WHY. I definitely need to know the rationale behind all those decisions. Design pearls that pop up from the void of the universe are worthless to all us mediocre reviewers who must understand them.

So how could you approach such an endeavor?

  • Firstly, get an idea about the domain, the development organization and the business, that is, the ecosystem, in which the software system lives.
  • Get an overview of the coarse grained architecture, from the user view and the developer view. I prefer strategic architecture design documents, guidelines, programming conventions, presentation slides, and even marketing material for this purpose.
  • Get to know the tools and technologies of the solution domains that have been used.
  • Interview some designers and developers in order to understand the implemented architecture (and the designed architecture) as well as its weaknesses and strengths. The more the documents additionally capture the design essence, the better. Both information sources will provide you with a good idea of the architecture.
  • Use architecture analysis tools to understand the internal quality of the systems (and also its pain points).
  • Try to change or implement something yourself in the codebase of the system to stay close to the ground, and thus to reality.
  • In the end, you should be able to explain the architecture to other people yourself.

Basically, what you need to do to understand an existing software system is similar to conducting the first phases of an architecture review.
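To make the tool-based step concrete: at its core, checking for dependency cycles is just graph search. The following Python sketch uses a hypothetical, hand-built module graph (real analysis tools extract the graph from the code) and hunts for a cycle via depth-first search:

```python
from collections import defaultdict

def find_cycle(deps):
    """Depth-first search for a dependency cycle.

    deps maps a module name to the list of modules it depends on.
    Returns one cycle as a list of module names, or None.
    """
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / in progress / done
    color = defaultdict(int)
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in deps.get(node, []):
            if color[dep] == GRAY:       # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color[dep] == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(deps):
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

# Made-up module graph: ui -> service -> persistence -> ui is a cycle.
modules = {
    "ui": ["service"],
    "service": ["persistence", "logging"],
    "persistence": ["ui"],
    "logging": [],
}
print(find_cycle(modules))   # ['ui', 'service', 'persistence', 'ui']
```

Real tools of course add visualization, metrics, and rule checking on top, but the detected cycle is the same kind of finding you would bring into a review.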

Don’t be shy and ask any questions that come to your mind. Often, these Q&A settings help architects figure out potential problems in their design, and help you understand the architecture. So, it can lead to a Win-Win situation.

Sunday, August 29, 2010

Software Architecture Reviews

Software architecture reviews represent one of the most important approaches for introspecting software design. The best way to introduce such assessments is to establish regular evaluation workshops in all iterations. Architects and designers review the software architecture, focusing on internal qualities such as the absence of dependency cycles and on quality attributes such as performance or modifiability. If they find any issues, they define action items and means for getting rid of the challenges (also known as problems). My own experience shows that this is definitely the recommended way, because regular reviews detect issues as early as possible, before they cause further harm such as accidental complexity or design erosion. If introspection tools such as Odasa or SotoArc are available, detecting internal quality issues is often surprisingly easy. And if architects compare quality attributes with architecture decisions in early stages, they reduce the probability that an important nonfunctional requirement was neglected or designed in the wrong way.

Unfortunately, many architecture reviews are initiated very late. This makes problem detection and architecture refactoring much more complex. Nonetheless, such architecture reviews are always a good solution for getting rid of architectural problems, especially when the organization could not handle such issues successfully in the development project. These reviews should have a clear scope. Otherwise, the review will drag on for a long time, which is not acceptable in most cases. Scoping in this context means defining a main objective of the review as well as 2-3 key areas where potential problems supposedly lurk in the guts of the design. The result of such a "large" review should be a document with key findings, e.g., weaknesses and strengths of the architecture. However, it should not only contain the weaknesses but also appropriate solutions for addressing them. Some review methods don't consider solutions mandatory in such a report. I definitely do. Even more, I consider ways to get rid of the weaknesses the most important result of a review.

Such a report should reveal the following structure:

  • Scope of the review and review method
  • Documents (project documents) and information used (for example: interviews, test plans, source code, demos)
  • Executive Summary
  • Overview of the software architecture under review
  • Findings and Recommended Solutions
  • Summary

While regular, iterative reviews can be conducted by project members such as architects and developers, larger review projects should be done by "external" reviewers. There are two main reasons for this recommendation: First of all, external reviewers often see more. And secondly, higher management representatives are often inclined to accept external recommendations more than the ones from their own project members.

Interestingly, an architecture review is not constrained to software architecture challenges. It might also reveal problems in the organization, development processes, roles and responsibilities, communication, technologies, tools, or the business. To be honest, only rarely will the results address design problems alone. That is another reason why architecture reviews should have external reviewers.

There is a whole selection of different review methods, such as CBAM, SAAM, and ATAM, documented in the literature. We at Siemens have developed our own method called experience-based reviews. While the former three methods are scenario-based (which I will explain in a later blog post), experience-based methods are driven by the experience of the reviewers and are less formalized.

Such a review normally consists of the following phases:

  • A Kickoff Workshop where reviewers and project stakeholders meet to introduce the review method, illustrate the software architecture, and define the review scope
  • In the Information Collection phase, all available information is collected by the reviewers, such as documents, test plans, source code, and demos. Further information is collected by conducting interviews with the project's stakeholders, one hour each and constrained to a single interviewee. Information is kept anonymous in order to establish a relationship of trust between reviewers and interviewees.
  • In the Evaluation phase all information is evaluated. Strengths, weaknesses, opportunities, and risks are identified by the reviewers, as are possible solutions for the weaknesses and risks. At the end the reviewers create a review report draft structured as mentioned above. They then send this report to all stakeholders, incorporate their feedback, and disseminate the final report.
  • A Final Workshop helps to summarize the key findings and discuss open issues.

This approach works extremely well if the reviewers are experienced. We normally use the Master/Apprentice pattern to teach architects how to conduct an experience-based review. Senior reviewers lead the reviews and teach junior reviewers by training on the job.

All of these approaches define qualitative assessment methods. In addition, architects and developers should also leverage quantitative methods when actually creating the design and implementation. Tools for quantitative assessment include, but are not limited to, feasibility prototypes, simulations, and metrics.

Architecture assessment methods are a bit like testing or refactoring. Many engineers think they can survive a project without these disciplines. But in all but trivial projects this proves to be a wrong assumption. The more you invest in early and regular testing, refactoring, and architecture assessment, the better your RoI will be. Do less in the beginning, pay more in the end!

 

Friday, June 18, 2010

I have been upgraded

Because I got so many questions on this: most recently, my name got a little bit longer. The University of Groningen in the Netherlands (RuG, Rijksuniversiteit Groningen) has appointed me Professor Extraordinarius. Thus, I am now a member of the Bernoulli Institute of Mathematics and Natural Sciences, where I will give lectures and more.

In case you now ask: this is in addition to my main profession at Siemens, which, by the way, supported me in achieving the position. Of course, I am very glad to cooperate with Paris Avgeriou, who is himself a professor at the RuG. Another personal friend and promoter, Jan Bosch, is a member of the same institute. Needless to say, Paris and Jan are internationally renowned software architecture experts with whom it is a lot of fun to work. I am quite impressed by their knowledge and expertise, which they also pass on to their students.

My appointment was greatly supported by Nicolai Petkov, another professor at the RuG, whom I really like and respect. After the deans of all computer science institutes at the other Dutch universities had agreed to my appointment, having checked my CV and references, I finally received the appointment letter.

The RuG is quite famous for its research on software engineering, in particular in product line engineering. Not to forget all the RuG offspring such as Jan Ommering and a lot of other great product line gurus. I enjoy not only that but also the Dutch mentality and great hospitality whenever I am in Groningen. Only a few weeks ago I was there to give lectures to students, as I mentioned in my last blog post.

Thus, in the future I will be able to support PhD students and master students as well as postdocs. They may learn from my experiences, but I will also learn a lot from them. You could call that a Win-Win situation.

So that’s the big news.

Friday, June 11, 2010

Teaching Students in Software Architecture

This week I have been giving lectures on software architecture for master and PhD students at the Rijksuniversiteit in Groningen (RuG). It worked excellently and was a lot of fun. I taught them systematic software architecture design and design tactics, and introduced a pattern system for distributed and concurrent systems. What worked particularly well was the split into a presentation part and a tutorial part, each lasting 2 hours per day, where participants designed example projects in groups of up to 5 people. This way, they would not only learn about architecture design, but also about group interaction. In fact, many problems in software architecture are caused by people issues such as a lack of (efficient) communication.

They will also get a group assignment and an individual assignment for the time until the end of June. The former will comprise completing the architecture design and providing an architecture description, while the individual assignment requires students to create a design essay on an architectural quality (in the context of their example project).

I received positive feedback that I did not introduce just a bunch of unrelated patterns but narrated a pattern story of how to apply patterns for the design of middleware and distributed systems. From my viewpoint, this is the best method to teach students that patterns are not just disconnected islands of code; at least they shouldn't be. Most of the power of patterns comes from using systems of patterns. Pattern languages would be even more attractive, but unfortunately there are only a handful of them, and the existing ones are really complex.

For the lecturer it is important to be constantly available for the exercise groups during the tutorial parts. I got a lot of questions about architecture design, modeling, and documentation. It is pretty close to architecture enforcement. You cannot simply throw an (exercise) specification over the fence and expect attendees to understand all issues exactly. Management by walking around is much more effective.

As material for the course I relied on the excellent book Software Architecture in Practice, 2nd edition, by Len Bass, Paul Clements, and Rick Kazman, and of course, needless to say, on our POSA book series (Pattern-Oriented Software Architecture). Using POSA patterns is beneficial in that it shows that the pattern community does not depend only on the GoF pattern book.

In summary, the mixture of conceptual parts and exercise parts works nicely to educate students about software architecture.
Looking forward to providing more lectures in the future and keeping in touch with the students.

Monday, April 12, 2010

Functional Programming

In the last months I have intensified my skills in functional programming. My favourite programming languages so far have been Scala and F#. Only a few years ago I recovered my Lisp abilities when I was ill for some weeks. It is interesting how many functional languages are currently evolving. I guess it is closely related to the availability of virtual platforms such as the JVM and the CLR. No need to develop a whole bunch of APIs or IDEs anymore: just use the Java SDK or the .NET Framework classes, and provide some plug-ins for Visual Studio, Eclipse, NetBeans, or any other IDE of your choice. A real paradise for language developers. It reminds me of the Eighties when all those OO languages appeared, such as C++, Smalltalk, Objective-C, or Eiffel. I anticipated several years ago that this would happen. Interesting, you might ask, but how is all this related to architecture? A lot, if you ask me. Remember the book on object-oriented design patterns by Erich Gamma, Ralph Johnson, Richard Helm, and the unforgettable John Vlissides. This book (as well as, hopefully, our POSA books) changed the world in that it brought software architecture into people's minds. The same will happen through functional programming.

Functional programming enforces another way of thinking. Functions are first-class entities that can be bound to values, passed as arguments to higher-order functions, or just created anonymously (as closures). In pure functional programming there are no variables anymore, just immutable values. All of this was already specified by Church and others in the Thirties. While the first functional languages such as Lisp and ML were difficult to understand for everyday programmers, new functional languages deviate from the academic approach and turn into hybrid languages that integrate object-oriented features and permit mutability by supporting imperative programming styles.

Sometimes mutability is important, for example for I/O and GUI operations or logging/tracing. But there are many situations where mutability is like shooting yourself in the foot. Think of race conditions in multithreaded applications. Most accidental complexity is caused by mutable state. No mutable state implies no race conditions. Or think of side effects that make your application hard to understand and debug. These side effects are simply absent in pure functional languages. Think of testing or validating your functionality. Guess why Joshua Bloch spends so much space in his book Effective Java illustrating how to make Java classes immutable.

In contrast to object-oriented approaches, functions are used to abstract functionality, and function composition is the basic means of composing artifacts. That's a perfect extension to object-oriented abstraction and composition. Hybrid languages such as Scala use this combination to make building internal DSLs surprisingly easy. For instance, the Actor functionality looks like an integral constituent of the language, while in fact it is just a library. If you wonder what actors are: they can be considered abstract tasks or objects running on their own threads. They do not provide any possibility for reading or changing their internal state. The only way to communicate with actors is by exchanging messages with them. If these messages are immutable values, there is no chance of side effects. What a perfect way to implement multithreaded applications.
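To make the actor idea concrete without relying on Scala or Akka specifics, here is a minimal sketch in Python: the actor owns a mailbox and a private counter, and the only way to interact with it is by sending messages. All names here are made up for illustration:

```python
import threading
import queue

class Actor:
    """Minimal actor: a mailbox plus a worker thread. The internal
    state is touched only by that one thread, so no locks are needed."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._total = 0                  # private state, never shared
        self._results = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)       # the only way to talk to the actor

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "stop":
                self._results.put(self._total)
                return
            self._total += msg           # state mutated by one thread only

    def result(self):
        return self._results.get()       # blocks until the actor stopped

adder = Actor()
for n in (1, 2, 3):
    adder.send(n)
adder.send("stop")
print(adder.result())   # 6
```

If the messages are immutable values, as above, there is no shared mutable state at all, which is exactly the point made in the paragraph before.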

Another famous example of functional features used to provide an internal DSL is LINQ (Language Integrated Query) in Microsoft .NET. Using extension methods, closures, anonymous types, and type inference helped to seamlessly integrate LINQ into languages such as C#.
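The mechanics behind such query DSLs are no magic. A toy sketch in Python (a hypothetical Query class, not the real LINQ API) shows how chaining Where/Select-style operators can be built from closures and higher-order functions:

```python
class Query:
    """Tiny LINQ-flavored internal DSL: each method returns a new
    Query, so calls chain like C#'s Where/Select."""

    def __init__(self, items):
        self._items = list(items)

    def where(self, pred):
        # keep only items for which the predicate closure holds
        return Query(x for x in self._items if pred(x))

    def select(self, f):
        # project every item through the given function
        return Query(f(x) for x in self._items)

    def to_list(self):
        return self._items

# Made-up data: (customer, order value) pairs.
orders = [("alice", 30), ("bob", 5), ("carol", 12)]
big = (Query(orders)
       .where(lambda o: o[1] >= 10)
       .select(lambda o: o[0])
       .to_list())
print(big)   # ['alice', 'carol']
```

The real LINQ adds deferred execution and query providers on top, but the compositional core is just this.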

Sometimes you need to deal with state. Languages like Clojure also support Software Transactional Memory, a good way to encapsulate all state changes within transactions.
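A full STM implementation is beyond a blog post, but the flavor of Clojure's managed state can be sketched in a few lines. The following Python example is closer to Clojure's atoms than to its transactional refs: all updates go through a single swap operation that applies a pure function atomically, so concurrent writers never lose updates:

```python
import threading

class Atom:
    """Sketch of Clojure-atom-style managed state: readers always see
    a consistent value; writers update via a pure function."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def swap(self, f, *args):
        with self._lock:                 # apply the update atomically
            self._value = f(self._value, *args)
            return self._value

counter = Atom(0)

def bump_many(times):
    for _ in range(times):
        counter.swap(lambda v: v + 1)

threads = [threading.Thread(target=bump_many, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.deref())   # 4000: no lost updates
```

With a naive `counter += 1` on a shared variable, the four threads could interleave and drop increments; funneling every change through one atomic, functional update is what the Clojure model enforces by construction.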

What makes the functional paradigm so effective is its succinctness. As an example, we'll use F#:

let rec fib x =
    if x < 2 then 1
    else fib (x-1) + fib (x-2)

let results = Array.map fib [| 1 .. 40 |]

The code above introduces a recursive function for calculating Fibonacci numbers, then builds an array containing the first forty Fibonacci numbers. Try this with Java, C#, or C++.

Many problems can be described in a much more natural way using functional features.

Thus we already know several aspects where functional languages are beneficial for the architect:

  • Introducing internal DSLs
  • Parallel and asynchronous programming (e.g., targeting multicores)
  • Implementing some parts of an architecture in a much more succinct way
  • Improving testability
  • Improving understandability of design (expressiveness, simplicity)

As mentioned previously, functional approaches do not represent a panacea. Neither does object-oriented programming. But they surely offer an excellent extension to imperative and object-oriented design and implementation.

It is time to follow the advice of the Pragmatic Programmers and learn at least one additional language per year. My recommendation: focus on functional languages for this purpose.

Monday, March 29, 2010

Implicit versus explicit

In a very large telecommunications application I was supposed to review, the software architects had introduced strict layering to shield different abstractions of functionality from each other. One of the lower layers comprised the database system, while the topmost layer consisted of the application UIs. Whenever something in the DBMS layer changed, the UIs broke. This shouldn't have happened according to the software architecture document. Thus, I asked the UI developer whether he abided by the strict layering. His answer was no, because he had been ordered to optimize performance, which required him to access the data directly. What we see is a typical example of implicit design decisions. When designers or developers break the architecture or use it in an unanticipated way without communicating the issue to the software architects, this will inevitably lead to severe problems which are typically hard or even impossible to detect.

In another project the development organization built a GUI for a monitoring framework. After the application had been released, customers started to complain about a feature they were expecting but which was missing. This illustrates what happens if requirements engineering or customers have implicit expectations. Unfortunately, these requirements change over the years. While 20 years ago almost every developer was satisfied with command-line compilers and had to specify explicitly that she would prefer a more IDE-like environment, today this feature is considered an implicit requirement. You don't need to mention it explicitly anymore. Kano analysis is a perfect tool for such investigations. For programming environments this is very obvious, but what if you're working in a domain where you don't know about all these hidden requirements? Thus, mind all invisible traps.

There are two conclusions we can draw from these examples:

  1. In a software development project, any information should be made explicit, even facts we normally believe every stakeholder should know. If communication among stakeholders is the most important asset in such projects, nothing shall remain implicit. Implicit information is unavailable information!
  2. In every project we need enforcement. Requirements engineering must enforce that the development team really knows and uses all requirements. In a specification, all implicit requirements must be turned into explicit requirements, because implicit requirements are typically swallowed by black holes. Software architects must enforce the architecture by closely cooperating with developers, and must ensure quality by closely cooperating with testers.
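Architecture enforcement can be partly automated. As a sketch of what such a check could look like, here is a hypothetical Python snippet (layer names, modules, and imports are all made up) that flags upward dependencies in a strictly layered design, like the DBMS access from the UI in the story above:

```python
# Hypothetical layering rule: a layer may use only itself and the
# layers below it.
LAYERS = ["ui", "service", "persistence"]   # top to bottom

def violations(module_layer, imports):
    """Return (importer, imported) pairs that reach *up* the stack.

    module_layer maps a module to its layer; imports maps a module
    to the modules it imports.
    """
    rank = {layer: i for i, layer in enumerate(LAYERS)}
    bad = []
    for mod, deps in imports.items():
        for dep in deps:
            # a dependency on a layer with a smaller rank points upward
            if rank[module_layer[dep]] < rank[module_layer[mod]]:
                bad.append((mod, dep))
    return bad

module_layer = {"orders_ui": "ui", "orders": "service", "db": "persistence"}
imports = {"orders_ui": ["orders"], "orders": ["db"], "db": ["orders_ui"]}
print(violations(module_layer, imports))   # [('db', 'orders_ui')]
```

Run as part of the build, such a check turns the implicit layering rule into an explicit, enforced one, which is exactly the point of this post.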

Work based on implicit information can only possibly succeed in small teams that use an agile approach and where every team member explicitly knows the implicit information. But do you dare to take the risk that all information gaps are really closed?

Always be an explicit software architect without a hidden agenda!

Friday, March 12, 2010

Over the Fence

It is surprising how many software architects still create their design, throw it over the fence, and then expect others to follow the divine strategies implicitly hidden behind the lines and diagrams.

This behavior also holds for different teams with key developers or subsystem architects who ought to coordinate design activities, while in practice they are happily inhabiting their isolated islands.

I'll never forget a telecommunications project where the architects had introduced excellent guidelines and a decent strategic design. However, I was called in for an architecture review, because the implementation had turned out to be not quite decent, to say the least. When I searched for the root of all problems, it became clear very soon that no one in the project had really cared about any of the architecture guidelines. And even worse, the strictly layered architecture design had been seriously violated. For example, whenever the database schemas evolved, the clients broke, although they were supposedly isolated from each other by a "firewall" of 3 layers. The key developer of the GUI told me in an interview that he considered performance optimization his primary directive, so that it seemed totally acceptable to access the database directly.

We can learn from these war stories that good communication is the most critical aspect in any project. Without good architecture enforcement, there is no good architecture. But how can architects pragmatically ensure high quality as well as conformance with the architecture design? Management by walking around does the trick. You should frequently visit the different teams and make sure they really understand the infinite wisdom and essence in your architecture design and architecture guidelines. This is the best way to get their buy-in and support. In addition, you'll detect any architectural drift early, in particular when developers add accidental complexity by introducing design pearls that are not motivated by requirements. I wholeheartedly believe that agility implicitly requires architects to work this way. But it is not the case that architects are infallible (although we'd really like to see the universe from this perspective). Sometimes even clever wizards fail. As an architect I am definitely better off when developers show me where I was plain wrong. Believe me, this is much better than testers, reviewers, or customers happily pointing fingers at your design flaws. Thus, architecture enforcement is a bidirectional interaction between architects and developers, with architects being the drivers.

As Lenin once said: trust is good but control is better. Think about this before throwing your design over the fence in the next project. Some things tend to backfire heavily.

Sunday, February 21, 2010

The Extensibility Syndrome

In one of the Facebook groups I am a member of, there is currently a discussion going on about how to deal with extensible design, especially how to prove to a client that a system meets its extensibility expectations.

A focus on extensibility is important but can also be a big hazard, in particular when engineers overdo it. For example, consider the application of the Strategy pattern. Basically, applying the Strategy pattern implies some kind of uncertainty about which algorithm should actually be used. If many Strategy pattern instances are used, this almost always means that the engineers have no clue in which directions the system should be extensible. Of course, the same holds for the overuse of other extensibility patterns as well. Such systems become a nightmare for developers and users.
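For illustration, here is what a single, justified Strategy instance looks like, sketched in Python with made-up pricing rules. The point is that the one exchangeable algorithm corresponds to a known, required variation; multiplying such hooks speculatively is where the trouble starts:

```python
# Strategy pattern sketch: the pricing algorithm is a planned point of
# variation. Functions serve as interchangeable strategies.
def standard_price(amount):
    return amount

def volume_discount(amount):
    # hypothetical rule: 10% off for orders of 100 or more
    return amount * 0.9 if amount >= 100 else amount

class Order:
    def __init__(self, amount, pricing=standard_price):
        self.amount = amount
        self.pricing = pricing          # the exchangeable strategy

    def total(self):
        return self.pricing(self.amount)

print(Order(200).total())                    # 200
print(Order(200, volume_discount).total())   # 180.0
```

One such hook, backed by a real requirement (different pricing per market, say), is fine; dozens of them, "just in case", are the extensibility syndrome this post is about.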

Thus, make clear with requirements engineering or customers what kind of extensibility they really need.

What needs to be extended?

  • Should an algorithm be exchanged?
  • Is it a whole subsystem or layer that is subject to change or extensibility?
  • Is it a component that needs to be changed or extended?

When will it be extended?

  • Now,
  • Mandatory in the future,
  • Optionally in the future?

Which binding time?

  • source code,
  • compile time,
  • link time,
  • runtime,
  • maintenance time?

Who will be in charge of extending?

  • operator,
  • user,
  • developer?

You should only consider those extensions that are mandatory, none of the possible or optional ones that may or may not appear in the future.

At the beginning of software design, introduce change scenarios as proposed by the book Software Architecture in Practice, 2nd edition (Bass, Clements, Kazman). There you'll learn how to describe modifiability scenarios using scenario diagrams and how to rate them using so-called utility trees.

You may also use design tactics diagrams that deal with how to implement the different kinds of extensibility mentioned above.

If you follow the principles of my Onion Model, then each architecture design step will only be driven by requirements and their priorities. This way, you can establish requirements traceability within your architectural design.

For extensibility and change scenarios, a Commonality/Variability analysis can be leveraged, especially when dealing with a wide variety of extensions such as in platform or software product line development.

In contrast to common belief, extensibility or change design is only applicable after you've designed those parts of your architecture that are subject to extension or change. You can extend either functional entities or operational qualities such as scalability mechanisms. This implies that all extensibility and change design activities follow after functional and operational design, not before.

When opening your functional and operational design for extension or change, always use the Open/Closed principle to prevent unwanted side effects.
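As a small illustration of the Open/Closed principle, consider this hypothetical Python exporter: new output formats can be added by registering new functions, while the existing export code stays untouched (open for extension, closed for modification):

```python
# Registry of output formats; names and formats are made-up examples.
FORMATS = {}

def register(name):
    def wrap(f):
        FORMATS[name] = f
        return f
    return wrap

@register("csv")
def to_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

@register("tsv")          # added later without touching export()
def to_tsv(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

def export(rows, fmt):
    # closed for modification: supporting a new format never changes this
    return FORMATS[fmt](rows)

print(export([(1, 2), (3, 4)], "csv"))   # prints the lines 1,2 and 3,4
```

The extension point exists because a concrete requirement (multiple output formats) demands it, which keeps it on the right side of the extensibility syndrome discussed above.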

To prove that a software architecture is as extensible as the customer expects, you have different means.

  • An end-user or operator might like to see the implemented change scenarios and how they actually work in the implementation.
  • Another engineer might like to see how the architecture and implementation support extensions.
  • A requirements engineer may like to see the mapping from requirements to architecture decisions.

Modifiability aspects (e.g., extensibility and changeability) are much more difficult than they seem at first glance. When the amount of extensibility is overwhelming, you'll often find wizards or configuration automatisms to hide the mess underneath (which does not imply that wizards or configurators are indicators of bad design).

As Heraclitus once said: panta rhei – everything changes. Thus, be prepared!

Friday, February 12, 2010

Does Cloud Computing have an Architectural Impact?

Let's suppose you have good reasons to leverage a Cloud infrastructure. Will this decision have any impact on your architecture? The answer is quite easy: it depends :-)

  • If you just use virtualization to exactly simulate a physical environment like in Amazon EC2, then your applications will be oblivious to the Cloud infrastructure.
  • If you leverage SaaS (Software as a Service) like Salesforce.com then it’s a SEP (Somebody Else’s Problem).

However, if you use PaaS (Platform as a Service) or IaaS (Infrastructure as a Service) for your applications, then things definitely will change.

It is not so much the domain-specific logic that is influenced; rather, infrastructural and non-functional requirements will experience an impact.

  • In a Cloud environment, different storage means are available; in general, you’ll find blobs and NoSQL databases. The design of the persistence infrastructure must take this into account, unless you simply rely on a mainstream DBMS running somewhere in the Cloud.
  • For messaging, message queues are available; thus, message-oriented communication between Cloud-aware subsystems is a reasonable approach for connecting islands with each other. Often, SOA-style communication protocols such as RESTful communication or SOAP/WS-* are supported. These protocols alone imply different architecture paradigms, so designing the communication and distribution infrastructure can be a challenge.
  • All operational qualities such as performance, availability, scalability, and fault-tolerance need to be implemented with the SLAs the Cloud provider offers in mind. If, for example, availability zones are supported in the Cloud, you had better plan how to leverage them according to your needs. In a public cloud, security is an even more critical issue. You cannot naively trust the Cloud provider to keep your mission-critical data secret. Thus, you have to provide means like encryption almost everywhere. For performance reasons, you might want to consider the Cloud as a Grid, such as with Apache Hadoop. But how can you partition your system in a suitable way? Is MapReduce what you need, or does your application require a different kind of approach? There are many different concurrency architectures, like Master/Slave, Pipes & Filters, or Reactor, that could be appropriate. In contrast to typical application development, you are much more constrained when addressing these operational qualities, because the Cloud infrastructure has already made many decisions for you. It is YOU who needs to integrate with the Cloud, not vice versa.
  • Developmental qualities such as modularity, modifiability, and maintainability can be hard to achieve in such an environment. First of all, you should introduce some kind of governance approach. Secondly, you need to modularize in such a way that governance issues live in harmony with the kind of modularization propagated by the Cloud, in particular by the PaaS APIs. For command & control purposes, you need to think about how to administer the Cloud applications. There might be 3rd-party products for this purpose. However, it might also be the case that you have to implement your own tooling.
  • If, for some reason, you need to use different cloud infrastructures, interoperability might be a big issue, given that today’s approaches almost inevitably lead to vendor lock-in.
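To make the MapReduce question above concrete, here is a minimal, single-process sketch of the map/shuffle/reduce phases (plain Python, no Hadoop involved). It only illustrates the partitioning discipline: the map and reduce steps must each be independent per chunk and per key, which is exactly what a framework like Hadoop exploits to distribute the work.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (key, value) pairs independently for each input chunk.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key (the framework does this in Hadoop).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently of all other keys.
    return {key: sum(values) for key, values in groups.items()}

def word_count(documents):
    return reduce_phase(shuffle(map_phase(documents)))
```

If your problem cannot be cut into such independent map and reduce steps, MapReduce is probably the wrong concurrency architecture for it.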

You will not always be able to start with a greenfield project; rather, you may need to deploy an existing application ecosystem, or parts of it, to the Cloud infrastructure. Migration is not straightforward, as we saw earlier. You may need to combine reengineering and refactoring activities for this purpose.

This posting describes only the tip of the iceberg. I hope people will develop some best practices (patterns) for how to leverage existing Cloud platforms. I’d be rather sceptical, though, if someone came up with such patterns soon. You surely remember: patterns must be independently used in three different applications to qualify as patterns. I cannot imagine we have already gained that much experience in practice.

So, if you are going to design for Amazon EC2, Google App Engine or Microsoft Azure, don’t forget: Mind the Cloud!

Thursday, February 11, 2010

Requirements traceability – The Holy Grail

It often occurs that I am supposed to review an architecture document – that’s one important part of my job. After reading the document, I always try to summarize what I just read. In many cases, this proves to be rather difficult. One of the reasons is that subsequent parts of the document seem only loosely connected: graphical models pop up without textual explanation, or architectural and technical decisions appear without any hint of their design rationale. A reader who depends on rumors, insider knowledge, and vague assumptions will see the hodgepodge of syntax but never fully understand the semantics behind it. If that reader is a tester, requirements engineer, fellow architect, or developer, such situations must be considered harmful. Yes, I can see the answer is 42, but what was the question, or rather the problem, that led to this result?

The big secret is requirements traceability. Each and every architectural decision must be strictly derived from forces (i.e., requirements and risks). This also helps to keep the architecture simple, expressive, and balanced, because this way we can get rid of all those design pearls smart developers and architects typically invent.

From the forces elicited at the beginning of a project, architects derive the architecturally relevant requirements. This includes requirements clarification and prioritization. Remember: garbage in, garbage out. If the initial assumptions are wrong, the resulting implementation cannot under any circumstances meet the expectations. Of course, we do not expect requirements to be fully available at project startup. We have become extraordinarily agile wizards, anyway, and can also handle requirements dropping in. Needless to say, we cannot guarantee the same perfect software & system architecture we would have come up with if all these requirements had been fully available from day 1.

Drawing on this long-term experience, we use the onion model.

  • For all use cases (part of the user requirements) we derive a black-box view of our system. From these descriptions we also obtain actors and required/provided interfaces. As we can see, this step is driven by requirements.
  • The domain model is another force. It is a means to understand the typical entities in the underlying domain as well as their relationships and interactions, and it turns the black-box view into a white-box view. Now you can start to map the use cases to sequence diagrams, which is also requirements-driven. All of a sudden, the domain-specific subsystems, components, interactions, and interfaces are disclosed, refining our external model. Eric Evans’ book is an excellent source on how to deal with such domain-driven design.
  • So far we have focused on the domain entities. It is time to add infrastructural requirements such as the communication, persistence and concurrency infrastructures, as well as operational requirements such as performance, safety, and availability. We do this by starting with the highest priorities (that’s why prioritization is so essential) and ending with the lowest. Each architecture decision is strongly rooted in requirements.
  • The same is done for the mandatory developmental requirements, like modifiability and maintainability, and then for the optional developmental requirements.

All these decisions are depicted in diagrams, but also explained in the textual descriptions.
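As a sketch of what requirements traceability can look like in its simplest form (all requirement IDs and decision names below are made up for illustration), even a plain mapping already supports both directions of the trace plus a coverage check:

```python
# Hypothetical requirements mapped to the architecture decisions they drove.
trace = {
    "REQ-01 handle 1000 req/s":   ["AD-3 load balancer", "AD-7 connection pool"],
    "REQ-02 pluggable persistence": ["AD-5 repository interface"],
    "REQ-03 audit logging":        [],  # not yet addressed by any decision
}

def decisions_for(requirement):
    # Forward trace: which decisions realize this requirement?
    return trace[requirement]

def requirements_for(decision):
    # Backward trace: which requirements does changing this decision affect?
    return [req for req, decisions in trace.items() if decision in decisions]

def uncovered():
    # Coverage check: requirements not yet covered by the design.
    return [req for req, decisions in trace.items() if not decisions]
```

The forward trace answers the reviewer’s question “why does this decision exist?”, the backward trace tells you which requirements a refactoring puts at risk, and the coverage check flags requirements the design has not yet addressed.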

What are the benefits of such an approach?

  • We get requirements traceability: Firstly, we know which requirements led to which architectural decisions, thus also obtaining a mechanism to check whether we have already covered all necessary requirements in our design. Secondly, we know which effects an architectural change such as a refactoring will have on requirements, thus preventing or at least lowering the risk of accidental complexity. Thirdly, all decisions are made explicit. Implicit parts lurk like invisible ghosts deep in the guts of the architecture, hurting us when we are least prepared.
  • We are driven by priorities. If we miss our milestones, we skip the less important requirements instead of the really important things.
  • Readers of our documents will not only understand the “what” but also the “why” making it much easier to get their buy-in. This is what we call design rationale.
  • If, in addition, we come up with leading guidelines and principles for cross-cutting concerns at the beginning, we will increase internal qualities such as symmetry, orthogonality, and simplicity.

The journey is the reward in software architecture. Don’t just tell readers about the target to aim for; also tell them how you get there. At each junction, explain why you prefer one road over the other.

Writing such documents can be great fun. And reading them even more so. For example, read the book about Dave Cutler and the creation of Windows NT.

An architecture document should contain guidelines and design rationale, but no labyrinths.

Saturday, January 30, 2010

Back from the OOP 2010

This year’s OOP conference in Munich has been a lot of fun. I met Markus Völter, Stefan Tilkov, Floyd Marinescu, Jan Bosch, Eric Evans, Philippe Kruchten, Roman Pichler, Peter Hruschka, Kersten Auel, Rene Schönfeldt, Klaus Rohe, Matthias Bohlen, Gernot Starke, Jutta Eckstein, Nico Josuttis, Gregor Hohpe, Robert C. Martin, Kevlin Henney and my good ole friend Frank Buschmann. Especially the networking part was overwhelming. So many speakers and participants you can learn from. The talks and keynotes are definitely helpful for hearing about new trends, perspectives and experiences, but the personal communication is the essence of such conferences. That’s the reason why I don’t believe in online events. Sure, they are a nice add-on, but they cannot substitute for the “human” factor. In the middle of the week we had a meeting of the Siemens Senior Software Architects, where I enjoyed meeting some of the aforementioned celebrities but also the smart senior architects within Siemens who do excellent work day by day. They might not be that much in the limelight, but those people are the real heroes of software engineering, among all those “nameless” other heroes such as development heads, team leads, testers and developers. As Jan mentioned: there are two types of architects, those talking about software architecture and those practicing it. Fortunately, I am a hybrid belonging to both types.

This year’s keynotes offered excellent quality. I’ll never forget Uncle Bob’s talk on the polyglot programmer, in which he dived into the history of programming languages in a very entertaining way. Or the two speakers from Zühlke Engineering who presented a great keynote on functional programming. Not to forget Gernot Starke with his talk on software architects and stealing from your neighbor’s garden. Unfortunately, I could not make it to his keynote, but from what I heard it was one of the highlights.

Most of the talks and tutorials covered the topics you’d expect: Cloud Computing and Functional Programming as well as Agility represented the hot topics. What I liked most were the more practice-oriented talks offering insights for practitioners and revealing some real-life show cases. For instance, I enjoyed Kurt Höpli’s talk on building and controlling environmental sensors in a project in Switzerland. I also had a lot of fun in Markus Völter’s talk on how to apply Model-Driven Software Development within an embedded systems context. I will never forget how Eric Evans and Hans Dockter presented Domain-Driven Design, convincing some attendees to participate as actors. And I’ll also remember Lothar Mieske, who presented the real look-and-feel of all those Cloud Computing platforms. I could add much more, but this should give you an impression and suffice as a pars pro toto.

Personally, I have been very busy this week. First, I gave a full-day tutorial on Software Architecture – From Requirements to Architecture. It was the tutorial with the largest number of attendees, which makes interaction somewhat difficult, but it nonetheless worked much better than expected. My talk on the Hitchhiker’s Guide to Cloud Computing had about 80-100 attendees. To my pleasure, the Global Technology Field Cluster Lead of my organization also attended the talk. The biggest surprise for me was that the talk “Introduction to Scala” attracted so many people. I was totally electrified, and I mean this literally, because static charge hit me whenever I pressed the page-down key. If people are interested, I will record this talk with Camtasia and publish the video clip on YouTube and other media. All in all, I got really overwhelming feedback. Thank you to all attendees for their nice evaluation sheets :-)

Next year the OOP will celebrate its 20th anniversary. Hope that many of you will join the event. It is really an extraordinary experience.

Thursday, January 21, 2010

Stakeholder-specific Communication

Communication is one of the most important aspects of software architecture design. The architecture itself is a means of communicating design decisions to customers, product managers, project managers, testers, integrators, and developers. However, these stakeholders have different expectations regarding the software architecture. While developers need a clear guideline on how to implement the architecture design, customers may be more interested in how (well) the software meets their requirements. The software architecture documentation and all other means of communication, such as presentations, must cover these various perspectives. To address such a broad spectrum of interests, architects could provide a specific document for each particular kind of stakeholder, but this leads to an unacceptable amount of effort and time. This is why documents should rather explain the architecture decisions - the "what", the "how" and the "why" - in a top-down approach and add a guide-to-the-reader motivating who (i.e. which stakeholder) should read what in which order. For presentations or other forms of communication, it might be worthwhile to introduce a specific variant for each stakeholder. A project manager or customer is typically not that interested in all those strategic and tactical design details, while developers need all the information they can get. Thus, communication between architects and other stakeholders should take the stakeholder-specific interests into account. This implies that architects need to have a clear understanding of these interests before they start documenting or communicating. Otherwise, they won't get buy-in for their software architecture from these different stakeholders. Personally, I have experienced so many projects where managers couldn't follow and understand what the architects told them, overwhelmed by details they did not (have to) care about.
And I have also been in so many meetings where software architects introduced high-level abstractions that could not provide the details developers and testers expected.
As a prerequisite, it is necessary to ensure a common understanding of terminology. Did you ever attend a meeting where different participants had a different understanding of terms? Communication will not work and succeed when one person is speaking English while the communication's target can only understand Spanish. This might be obvious for natural languages, but it is often underestimated for domain-specific languages. Have you ever tried to read and understand (!) a treatment record from your physician? He speaks English, you speak English, but this won't help you much. As a consequence, introducing a domain-specific language is valuable and essential. It does not have to be a full-blown language in the strict formal sense, but it often makes sense to offer more than just a glossary. The core functional architecture should represent the problem domain.
As a consequence, all communication between architects and stakeholders must happen in a stakeholder-specific way. When giving a presentation on the software architecture, make sure you know the target audience and their interests. Don't assume other types of stakeholders understand software architecture the way you do. You are the doctor and they are the patients, so to speak. Communication may succeed or fail. The risk of failure can only be minimized if you can take and understand the perspective of your communication partners.
Obviously, these statements also hold for other kinds of communication. Have you ever tried to explain to your spouse why you need that cool new gadget for several hundred bucks? Try doing that by citing the technical specifications. Won't work very well, right? What we take for granted in such situations (i.e., adapting the communication to your communication partner) also represents a conditio sine qua non in software engineering. So, better mind stakeholder-specific communication in the future :-)

Saturday, January 09, 2010

Real Software Engineers

  • Real software engineers don’t need safety nets. Instead, they are ready to face all dangers and risks without any fear. For this reason, they dislike (unit) testing, because (unit) testing is for pussies (i.e., those who do not trust their own code.) Since real software engineers never make mistakes, testing would be a mere waste of time.
  • Real software engineers won’t document their design, because these designs contain an incredible amount of design pearls other mortals would not be able to comprehend, anyway.
  • Nor do real software engineers read other engineers’ design documents or manuals, either because their fellow engineers did not provide documentation or because they do not trust these documents.
  • Instead, their main guidelines are:
    • All you need is code
    • Bubbles don’t crash
  • Real software engineers do not check or prioritize requirements. Instead, they have an intuition for customers’ needs, which drives all their activities.
  • Real Software Engineers do not communicate (much) with other stakeholders, because other stakeholders are just obstacles on the way to project completion. That’s also the reason why real software engineers hate meetings.
  • Real software engineers rarely speak English, German, …. They rather express themselves using C#, Java, C++, …
  • Real Software Engineers feel more comfortable in the solution domain. Their goal is to provide solutions instead of getting lost in problem domains. This is why they prefer playing around with solution technologies.
  • Real software engineers can turn any problem into a nail for which they can use their favourite hammer.
  • Real software engineers do not need software architecture because a predefined architecture would just limit their creativity. A good architecture is always created ad-hoc for solving real problems in a pragmatic way.
  • Real software engineers do not foster re-use. First of all, re-use has never worked in practice. And secondly, re-using design or code typically requires more resources than reinventing the wheel. This is often due to the fact that the re-usable artifacts were created by other real software engineers who failed to document their work.
  • Real software engineers won’t read this blog because in the time needed for reading, they could add another feature to their software.

Thursday, January 07, 2010

Architecture Documentation Revisited

Did you ever read a novel which was later used as the foundation for a movie? I bet you were rather disappointed by how the movie differed from your reading experience and thus from your expectations.

Did you ever read an architecture document – yes, there is vague evidence and hope that something such as architecture documents exists – and then compared the documented design with the implemented system?

In software architecture design, documents are not just by-products created as write-only brain dumps. Instead, they are supposed to be read by someone – a fact that might surprise some engineers.

Architecture documents serve as the basic means for communicating architectural decisions, i.e. they explain the “what” and the “why” – aka the design rationale. The target audience of architecture documents is foremost all stakeholders, not just the authors themselves.

In addition, those documents should be up-to-date. Did you ever struggle through hundreds of pages of a design document, only to be told right after you finished that the document was outdated?

Does this mean we should adapt the documentation after each change? No, it only implies that architecture documents should be updated regularly. Consider versioning architecture documents in this context!

An architecture specification should also be subject to testing. Don’t take testing too literally! What I suggest is that documents be reviewed for their usability/readability, internal consistency and completeness, as well as their consistency with the actual implementation. Needless to mention that the reviewers shouldn’t be the authors but a selection of stakeholders.

Always use the right style of documentation depending on purpose and target audience. A user document that describes a system should contain tutorials as well as a reference manual. If no one understands it, even the best design ever won’t help you much.

My strategy for documenting is to walk to the other side of the fence and think about how a specific stakeholder would like to get the material presented. I am not writing the document for myself but for others, anyway. And I always take care to update the documentation whenever the system has been updated to a new (minor or major) version. Readers should not be punished by reading my documents but should consider it an entertaining and profitable activity. Reading design documents is fun, or at least it should be! Who claims IT documents must always be written in a technical and boring style?

Remember: the most important capability of a software architect is communication. Documenting is one particular way of communicating.