Hitchhiker's Guide to Software Architecture and Everything Else - by Michael Stal

Thursday, August 23, 2007

Gravestone Inscriptions

Imagine a kind of e-cemetery for software projects that failed. What would the gravestone inscriptions look like?


"In affectionate remembrance of projext X which departed this life January 21st 2007 aged 2 years."


Or maybe:


"In loving memory of Windows 95, descendant  of Windows 3.11. Died 1st January 2003 aged 8 years."


We would probably congregate to bid farewell to our failed project, with the quality manager in charge of delivering the funeral eulogy, reminding us of all the nice aspects of the project but concealing the more negative issues - learning from failure doesn't seem appropriate in the context of a funeral.


And then, in quiet remembrance of the software system, we would think of the last words of the project manager: something like "Why the hell didn't CMMI save our project?" or, maybe, something close to Ethan Allen's last words (1789): "Waiting are they?  Waiting are they?  Well--let 'em wait."


Hopefully, no one will demand hanging the architects and engineers.


Tuesday, August 21, 2007

From Dusk Till Dawn

Suppose that in the future faults disappear from software systems. In other words, every software system will be completely fault-free. Hmm, you might say, this sounds like a beautiful dream. But how could we ever make such a dream come true?

If we specify the architecture of a software system in a high-level language such as UML, a domain-specific language (DSL), or a mixture of these, then the first assumption could be:

Instead of manually implementing code, apply a verified code generator that automatically creates an error-free implementation. Of course, there are several hidden preconditions in this context. The generator itself must be verified, which certainly is a very tough job, but it needs to be done only once. And after all, our implementation depends on the correctness of its runtime environment, i.e., all libraries and third-party components being used, the operating system, the application server, even the underlying hardware, and so forth. So far, we haven't discussed the implementations that model-driven generators create from models. How could we improve their quality? Patterns and pattern languages are the answer, as they represent reusable and well-proven architectural solutions for recurring problems.
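
To make the idea a bit more concrete, here is a minimal Java sketch of the core of such a template-based generator. All names (the `Entity` model type, `JavaGenerator`) are invented for illustration; a real verified generator would be vastly more elaborate. The point is that the generation template is written and verified once, and every generated class then follows the same proven structure:

```java
import java.util.Map;

/** Toy model element: a domain entity with named, typed attributes. */
record Entity(String name, Map<String, String> attributes) {}

/** Minimal template-based generator: emits one Java class per model entity. */
class JavaGenerator {
    String generate(Entity e) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(e.name()).append(" {\n");
        // A private field plus a getter per attribute, always following
        // the same fixed, reviewed template.
        e.attributes().forEach((field, type) -> {
            src.append("    private ").append(type).append(' ').append(field).append(";\n");
            src.append("    public ").append(type).append(" get")
               .append(Character.toUpperCase(field.charAt(0))).append(field.substring(1))
               .append("() { return ").append(field).append("; }\n");
        });
        return src.append("}\n").toString();
    }
}
```

Verify the template (and the generator) once, and faults can no longer creep in at this step - they can only enter through the model or the environment.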

The next problem consists of guaranteeing the correctness of the models the developers specify and pass to the generator. Here, correctness means two things: correctness of syntax and of semantics. While the former can easily be ensured using tools (e.g., editors), the latter is much harder, because it implies the correct consideration of ALL requirements - functional, developmental, and operational ones - as well as their composition. Even worse, for meeting operational requirements we need to take into account the whole infrastructure and environment under which our software system is supposed to run. That implies that all third-party vendors need to provide this kind of information in some structured and predefined way.
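
The syntactic half really is tool work. As a hedged illustration, reusing the hypothetical `Entity` record from the sketch above, a validator might simply check every model element against the metamodel's vocabulary - which says nothing yet about whether the model means what the stakeholders intended:

```java
import java.util.Set;

/** Hypothetical syntactic check: every attribute type must be known to the metamodel. */
class ModelValidator {
    private static final Set<String> KNOWN_TYPES = Set.of("int", "long", "boolean", "String");

    void validate(Entity e) {
        e.attributes().forEach((field, type) -> {
            if (!KNOWN_TYPES.contains(type)) {
                throw new IllegalArgumentException("Unknown type '" + type
                    + "' for attribute '" + field + "' in entity " + e.name());
            }
        });
        // This establishes well-formedness only; the semantic question of
        // whether the model captures all requirements remains open.
    }
}
```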

One solution could be to model our software system from different perspectives, for example, from a functional (domain-specific) viewpoint as well as from various operational viewpoints (e.g., specifying a performance model). Such an approach requires the availability of different cross-cutting DSLs - one for each perspective - as well as tools that merge all those DSLs into one implementation. Today, we would leverage Aspect-Oriented Software Development for exactly that purpose. Needless to say, we now have to prove the correctness of all DSLs and tools and infrastructures and third-party components and composition techniques, and so on.
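
To illustrate the weaving idea in plain Java (real AOSD tool chains such as AspectJ work at the language level; `OrderService` and `PerformanceAspect` are invented for this sketch), a dynamic proxy can keep the functional perspective and an operational perspective in separate artifacts and merge them only at composition time:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

/** Functional perspective: what the system does. */
interface OrderService {
    void placeOrder(String item);
}

/** Operational perspective, kept separate and woven in later: a crude performance monitor. */
class PerformanceAspect implements InvocationHandler {
    private final Object target;
    PerformanceAspect(Object target) { this.target = target; }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return m.invoke(target, args);
        } finally {
            System.out.printf("%s took %d us%n", m.getName(), (System.nanoTime() - start) / 1_000);
        }
    }

    /** Merges the two perspectives into one executable object. */
    static OrderService weave(OrderService functional) {
        return (OrderService) Proxy.newProxyInstance(
            OrderService.class.getClassLoader(),
            new Class<?>[] { OrderService.class },
            new PerformanceAspect(functional));
    }
}
```

Each perspective can now evolve (and, ideally, be verified) on its own; only the weaving step itself has to be proven correct once.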

How could we ever verify that a DSL consistently and completely describes a particular domain?

For widespread domains such as finance, automation, or healthcare we could introduce standardized languages that are as "reliable" as programming languages. However, it is rather unlikely that we're capable of covering all relevant domains this way. In addition, such DSLs constitute important intellectual property of a company, which it seldom likes to share with competitors. Hence, we would need to specify our own domain languages or language extensions. Proving their correctness, however, is far from simple, and language design requires some expertise. In any case, domain-driven design will be an important discipline. Of course, Software Product Line Engineering is another breakthrough technology in this context, as it enables us to support a family of similar applications.

OK, let us take for granted that we are able to master the aforementioned preconditions. What about requirements engineering? Well, many stakeholders might not be very familiar with software engineering. How can they manage to specify their requirements in a correct way? Contradictions or missing precedence among requirements might be detected by tools such as generators, given that all requirements can be expressed in some formal way.
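
As a toy illustration of what such tool support could look like once requirements are formalized (the `Bound` record and the interval model are simplifying assumptions of mine; real requirements rarely reduce to numeric ranges), a checker might flag pairs of requirements whose allowed ranges cannot both hold:

```java
import java.util.List;

/** Hypothetical formalized requirement: a bound on some measurable quantity. */
record Bound(String quantity, double min, double max) {}

class RequirementChecker {
    /** Reports pairs of requirements on the same quantity with disjoint allowed ranges. */
    static void checkConsistency(List<Bound> requirements) {
        for (int i = 0; i < requirements.size(); i++) {
            for (int j = i + 1; j < requirements.size(); j++) {
                Bound a = requirements.get(i), b = requirements.get(j);
                if (a.quantity().equals(b.quantity())
                        && (a.max() < b.min() || b.max() < a.min())) {
                    System.out.printf("Contradiction on %s: [%.0f..%.0f] vs [%.0f..%.0f]%n",
                        a.quantity(), a.min(), a.max(), b.min(), b.max());
                }
            }
        }
    }

    public static void main(String[] args) {
        checkConsistency(List.of(
            new Bound("responseTimeMs", 0, 100),     // "must respond within 100 ms"
            new Bound("responseTimeMs", 250, 500))); // "must take between 250 and 500 ms"
    }
}
```

The mechanics are trivial; the hard part, as argued below, is getting stakeholder intent into any such formal shape in the first place.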

But what about expressing stakeholders' intent? This could be formalized if stakeholders are able to understand some kind of mathematical model. If that's not the case, the only solution left is specifying requirements on behalf of stakeholders. This would need to happen in close cooperation between architects and stakeholders. Unfortunately, such an approach only works if stakeholders know what they want and if it is possible to build what they want. Let me give you an example: how would you formally describe a word processor and all the features you expect? How about usability? How about cool UIs in this context? The problem (and the fun) with human communication is its lack of formalism. Thus, you always have to know a person and that person's context to understand his or her communication. Now, take issues such as hidden agendas into account. Eventually, another problem arises: requirements are subject to constant evolution. We don't live on an island where changes seldom happen; rather, we live in metropolises where infrastructures and environments permanently evolve. Let us be realistic: only for very small and stable domains that significantly constrain the possible applications can we introduce formal methods that will ensure correctness. For all other domains, some kind of guessing will often be necessary, i.e., assumptions that hopefully represent stakeholder intent.

There is another issue left: of course, the process we apply to achieve results must support our goals.

There must be a prescribed approach that covers the whole application lifecycle. This approach must take into account how to enforce fault-free software. For instance, no refactoring may compromise correctness. Quality assurance measures are required at all levels, from domain language design and architecture up to third-party components. Testing for correctness is essential. For this purpose, tests must cover all aspects, and the tests must be correct themselves. Of course, they need to be automated. Note that tests remain important even when we can verify the correctness of our tool chain, such as the generator: as soon as the environment changes, we need to test!
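
Here is a minimal sketch of the refactoring guard mentioned above (plain Java instead of a test framework; `assertBehaviorPreserved` is an invented helper): run the same inputs through the component before and after the change, and insist on identical observable behavior:

```java
import java.util.function.UnaryOperator;

/** Characterization check: a refactored component must reproduce the old behavior. */
class RefactoringRegressionCheck {
    static void assertBehaviorPreserved(UnaryOperator<String> before,
                                        UnaryOperator<String> after,
                                        Iterable<String> inputs) {
        for (String input : inputs) {
            String expected = before.apply(input);
            String actual = after.apply(input);
            if (!expected.equals(actual)) {
                throw new AssertionError("Refactoring changed behavior for input '" + input
                    + "': expected '" + expected + "', got '" + actual + "'");
            }
        }
    }
}
```

In a real project this would be a JUnit suite run automatically on every change - including changes to the environment, not just to our own code.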

What conclusions can we draw from these discussions?

  • For safety-critical environments, a formalization of software development, where each formal step is verified, can be (and already is) an appropriate way.
  • Even if correctness were achievable, it would require large efforts in terms of time and money. For some application development projects these efforts might not be feasible.
  • We might never achieve fault-free implementations in software engineering. I am sure of this, as the same observation also holds for other engineering disciplines such as building construction or car manufacturing. But we will be able to introduce more structure into software development, e.g., by adding some formalism and verification mechanisms. I mentioned examples such as DSLs.
  • We will never be able to express stakeholder intent consistently and completely. In addition, stakeholder intent will change over time. Thus, we will always be addressing some (slowly or quickly) moving target.

The world is not perfect and never will be. Let us keep this in mind when designing software systems.


[Meta] I am now using QUMANA

This is another meta posting, which means I will discuss blogging rather than software architecture :-) I have switched to a tool called QUMANA for editing my postings. Previously I wrote my postings directly on my Blogger site, which turned out to be not very comfortable, to say the least. I found QUMANA just by doing a Google search. If anyone has additional recommendations for such tools, just go ahead and add a comment here.


Friday, August 17, 2007

Star Trek 2.0

I really love Star Trek. In my childhood I enjoyed the original series with Spock, Captain Kirk and Scotty. And of course, I kept watching all episodes of later series such as Star Trek - The Next Generation. To be honest, it was SciFi such as Star Trek that motivated me to become a computer expert - although I also considered physics as a possible alternative. I can even remember that I once developed my own Star Trek game on my Tandy Radio Shack TRS-80 computer. The computer offered a Z80 CPU and 16K of RAM, with a built-in ROM-based BASIC interpreter. When I bought it in the only computer shop in Munich (back in the early eighties!), a staff member seriously asked me why I needed such an incredibly large amount of memory. After several days I had designed my own Star Trek game and typed in all 16K of code, reaching the limits of my TRS-80. This had been a pleasant but also very exhausting activity. For some recreation I decided to go play soccer with some of my friends. When I returned home, I went into my room and had a shocking experience - the computer had lost power. It turned out that my mother had assumed I'd forgotten to turn the computer off, and had pulled the power plug. All of the code had vanished into the vast darkness of cyberspace. The only thing left was the design documentation. Yet another hint as to why you should value your architecture and, of course, why backups are so important.

Star Trek has always been so inspiring in terms of science and engineering that entire books have been written dealing with the Star Trek universe. Think of beaming, warp speed, or holodecks! Famous scientists are still discussing what could be possible and what won't be possible in the future. Suppose you're a software engineer in the 22nd century, responsible for designing software for spaceships and star bases. What will the engineering process look like? Which technologies and tools will be available? Unfortunately, SciFi movies do not offer any clues. They never show the neon babies (also known as software engineers), but only their products. For instance, a holodeck requires some sophisticated software systems. Obviously, most SciFi authors consider the life of a software engineer rather boring. Do you have any idea why this is the case?

Nothing is more difficult to forecast than the future. Unfortunately, my crystal ball is currently not usable due to a graphics card failure. Maybe we should therefore try the opposite approach instead: what if 22nd-century software engineers had to use all of our current technologies? Suppose Captain Kirk needs to communicate with the earth-based command center. Will he use Instant Messaging, or maybe a Web X.0 user interface for this purpose, encountering some 404s from time to time? Will everything abide by service-oriented principles? Maybe the reason we call it Enterprise Service Bus is that within the starship Enterprise all communication will take place via an ESB. Another question arising here is whether companies such as Microsoft or IBM will still be around in the twenty-second century, perhaps providing specialized versions of Linux or Windows to starship construction plants. If Mr. Spock needs to look up some scientific information in the databases, will he leverage Google Search and Oracle for that purpose? Will Scotty have to use a future version of Word or Excel for writing his logs? Will the weapon control system be provided by Electronic Arts? Will Mr. Sulu use Google Maps and Google Galaxy to navigate through space and time? What happens if an alien lifeform intends to take over control of a starship? Will they send a virus or Trojan, or maybe a rootkit that can adapt itself to whatever system architecture is available? Maybe they'll try phishing attacks, intercepting HTTP packets that travel through the galaxy. Of course, they will leverage future versions of Ruby, Java, or C# for that purpose.

I guess we all agree that these scenarios do not appear very likely. Of course, the future will provide much more advanced technologies, technologies that we can only dream of today [if mankind survives]. Just remember the state of science and technology at the beginning of the twentieth century and compare it with our current accomplishments. Nonetheless, I wholeheartedly believe that at least some software architecture principles will remain the same in the future. For example, as we all know, patterns represent proven solutions for recurring problems. Thus, they capture experience and knowledge which can then be reused by different software engineers. I bet that future engineers such as Scotty or Mr. Spock will still leverage patterns instead of constantly reinventing the wheel. Domain-Driven Design and Model-Driven Software Development are going to become substantial parts of software development projects. Instead of handcrafting each bit in a time-consuming way, engineers will specify their software in a domain-specific language from which advanced generators will produce sophisticated programs. And I am also pretty sure that the Enterprise crew won't apply a waterfall approach when they need to come up with a new solution. The future is agile! In one of my previous postings I was already able to prove that the success of the Borg is primarily related to their agile processes. Future software engineers will still have to face the challenges of operational and developmental requirements. Maybe a smart software engineer will come up with some sophisticated concepts for injecting such properties into a concrete software architecture.

In summary, software architecture will still be one of the essential topics in the 22nd century. Therefore, we can finally draw the conclusion that being a software architect will remain an important profession. It means to boldly go where no one has gone before. It also means, according to Douglas Adams' "Hitchhiker's Guide to the Galaxy", that software architects, unlike other species such as hairdressers and telephone sanitizers, will not be on board the "rescue spaceship". But that is a completely different story.

Enjoy your life as a software engineer, because you are and always will be a Very Important Person. All other colleagues should keep in mind: don't forget to praise and worship your software architects. You depend on these strange but smart lifeforms!

Thursday, August 16, 2007

Now for Something Completely Different - SIEMENS

Many of you may know that I have been working for Siemens for almost 16 years now. I was hired back in 1991 by the Siemens R&D unit in Munich, where I started my career as a software engineer and researcher. Some of you may also be aware of the fact that there has been a high frequency of news about my company in the last months. Unfortunately, most of that news was rather on the negative side. And even worse, as Douglas Adams once said, there is only one thing that can travel faster than light: bad news! My friends and relatives keep asking what is going on at Siemens.

To be honest, I am really unhappy with this kind of question. Thus, it is time to shed some light on the many achievements and historical milestones Siemens has produced in the 160 years since it was founded in 1847 by the inventors Werner von Siemens and Johann Georg Halske. People seem to ignore all these great achievements, which by far outnumber all the negative reports you may have heard recently. Some of the early innovations made by Werner von Siemens himself include a special telegraph, the trolleybus, and a dynamic transducer. But this is only the tip of the iceberg in terms of Siemens innovations. Siemens has always been a great place for research and engineering, from day one.

Werner von Siemens happened to be not only a great inventor but also a very kind person. He was the first entrepreneur to introduce a system of different social insurances for his employees.

Siemens has grown constantly; today it has about 480,000 employees working in almost all parts of the globe, creating a large range of high-quality products as diverse as trains, plants, automation systems, medical systems, communication systems, automotive and lighting solutions, as well as IT solutions. Did you know that scientists like Gustav Hertz or Walter Schottky were employed by Siemens?

What makes Siemens an excellent place for software engineers is the mixture of advanced and innovative products that require more and more IT-centric constituents, and a very creative setting biased towards innovation, which is the reason why the company considers itself a global network of innovation. As the Siemens groups require a whole bunch of different software technologies and tools, I consider it a great opportunity to be part of the Corporate Research and Technology division of Siemens. Here, I am able to cooperate with colleagues from all Siemens-relevant application domains. And as you might guess, software architecture is one of the fundamental disciplines in such a context. All the books I co-authored and the research results I achieved on architectural topics wouldn't have been possible without such a creative and supportive environment, where I am able to closely connect with people such as Frank Buschmann or cooperate with smart experts such as Doug Schmidt, Markus Völter, and the like.

Why did I write this rather emotional posting? Because I consider it very unfair that in the last months public attention has continuously focused on only a few ill-behaving persons within the company, ignoring all the aforementioned achievements. Siemens has already started to rigorously investigate this behavior and to establish a system of very rigid control mechanisms. On the one hand, it is important to reveal any wrongdoing within a company. But on the other hand, it is wrong to judge a book by its cover (as Frank-N-Furter puts it in The Rocky Horror Picture Show). Of course, I am biased, as I am a Siemens employee myself, but that doesn't change my personal attitude and opinion.

I am very proud to be part of the Siemens family where I keep working on great products and software architectures.

For more information visit the following Siemens site.

Saturday, August 11, 2007

Upcoming Events

I have been on vacation for several weeks now. Nonetheless, I haven't been lazy :-)
One week ago I participated in a panel discussion at this year's ECOOP conference in Berlin. The panelists, among them celebrities such as Martin Odersky, Judith Bishop (moderator), Tiziana Margaria, and Gilad Bracha, discussed "OOPS in the next five years - the hot topics". All of the panelists tried to argue from different perspectives, introducing topics such as dynamic languages or service orientation. My key points were more driven by my industry background. I don't believe in silver bullets. Thus, I guess OOP will experience some significant improvements, integrating other paradigms such as functional programming and providing advanced features such as traits. However, I anticipate specialized OOP solutions for specific domains. I also assume we will have to live with other paradigms, due to the different kinds of impedance mismatches we experience in everyday projects. And I am sure that someday a new paradigm will evolve that is going to integrate and extend OOP.
During my vacation I also had to prepare my two OOPSLA 2007 tutorials. The one on high-quality software architecture covers process principles, architectural principles, and quality properties relevant for achieving high quality in software architecture design. The preparation of my second tutorial, on Software Architecture Refactoring, was far more challenging. I thought there would be a lot of material available on this issue. To my big surprise, there were only a few sources on it (which I already covered in a previous posting). Most sources refer to code refactoring, such as introduced in Martin Fowler's excellent book. Architecture refactoring deals with semantics-preserving transformations of the software architecture itself. In my tutorial I will come up with some general concepts, a catalog of architecture refactorings, as well as some words about how architecture refactoring relates to other disciplines. If you'd like further details, you might want to listen to the OOPSLA podcast (see episode 5), where Bernd Kolb from Software Engineering Radio interviewed me a few weeks ago.
In my OOP 2008 presentations I will also talk about Software Architecture Refactoring. Moreover, I will prepare a one-day tutorial on .NET technologies for building SOA applications.
Note that I won't be able to participate in this year's JAOO conference. I wasn't invited as a track chair this time, but Ted Neward asked me whether I'd be able to give a talk on Enterprise .NET. However, there are so many private affairs and so much Siemens-internal work that I cannot make it this year.
Thus, I am looking forward to meeting some of you at OOPSLA.