Saturday, March 28, 2009

Get off of my Cloud

It has entered the stage as one of the great buzzwords of recent months, and if we didn't have to deal with an economic crisis these days, it would gain even more attention. Cloud computing and its relatives such as SaaS are dominating the IT world. Ever heard of Windows Azure, Google Apps, Amazon EC2 and S3? The idea behind this paradigm is rather old: why not provide resources transparently as services in a hosting infrastructure? This would free IT departments and CIOs from being infrastructure providers. Instead they could focus on their actual job: providing IT functionality to drive their company's business.

By services I don't just mean applications, but all kinds of services. An application such as a CRM app could be a service. Likewise, a service could be a piece of middleware such as a persistence solution or a workflow management system. And in the most extreme case, even the infrastructure itself - operating systems, storage, or network solutions - might be offered as a service. Services wherever you look.

The idea behind cloud computing is a very promising one. Just take trusted infrastructure providers, let them host the applications (services) we need within an Internet-based cloud, and access these services from an inexpensive device. No need to buy or maintain your own data center solutions anymore. A data center on everyone's desktop, if you like. Whenever you need more resources, just take them from the cloud and pay only for what you use. "Elastic computing" made real. Sounds like a good idea in an economic crisis, doesn't it?

The strength and the weakness of cloud computing today is its dependence on external providers. Would you run your mission-critical applications with sensitive data on a third-party infrastructure that is not under your control and that supports multi-tenancy? What about services running in a remote country that you do not trust? How can a provider guarantee quality of service in an Internet-based infrastructure? Remember the recent downtimes of Google Mail. What if all your company mail runs on such a network?

These problems suggest a strategy where mission-critical applications run only in a private cloud, while less critical applications may leverage public clouds. As cloud computing technology evolves and matures, more and more services might be moved to public clouds. Then, private clouds could also be extended using public clouds. By the way, this extension approach is only feasible if standards foster interoperability between cloud solutions.

From 10,000 feet, cloud computing denotes a kind of business solution that introduces pay-per-use models for all types of services. Technologically, this solution builds upon a bunch of existing paradigms such as virtualization, SOA, multicore CPUs, NAS, and Web 2.0.

What all those people enthusiastic about cloud computing tend to forget is that there is no free lunch. Cloud computing does represent a promising technology with a high coolness factor. However, from a software architecture perspective, it is rarely sufficient to just deploy application silos in virtual machines that are hosted somewhere else in the network. To really leverage clouds, your applications need to be aware of the cloud infrastructure. You might split islands of functionality into interconnected services, and integrate application services with other platform or infrastructure services. What about issues such as co-location of services and data? In addition, engineers need to create desktop-like GUIs that communicate with backend services, as promoted by Web 2.0. Last but not least, quality of service such as availability or responsiveness does not only depend on infrastructure and platform but is heavily influenced by the solution architecture. The best infrastructure or platform does not help if the software architecture is a bunch of crap.

Cloud computing offers fascinating new possibilities. But we shouldn't suffer from the hammer-and-nail syndrome. When applied inappropriately, cloud computing will inevitably lead to problems and disappointments. If your management or CIO department is all of a sudden promoting cloud computing, this might be a good thing, but it also requires you as a software engineer to deal with expectation management. And finally, mind the architecture!

Friday, March 27, 2009

My Domain is my Castle

One of the most important (first) steps in architecture modeling encompasses the description of the domain model. This model introduces all entities relevant within the current domain as well as their relationships and interactions. It represents the main language all stakeholders should understand. All further activities in architecture modeling basically take the domain model and enrich it with additional infrastructure entities. 

Sounds very abstract, right? Let me give you a concrete example. Suppose we are going to develop a Web store. What are the typical objects that appear in the problem space and are well known to all stakeholders?

For example, I'd expect entities such as:

  • web store user: someone accessing the web store
  • customer: someone interested in buying items
  • shopping cart: used to add, remove, and pay for goods
  • product catalog: presents all available products classified by categories and allows searching for these products using different types of queries
  • customer database: the place where all customer information is stored
  • order processing: responsible for processing orders

If you think about this example further, you'll recognize there are some typical relationships between the domain objects. For example, a web store user could be a customer or an administrator. A customer typically owns at most one shopping cart at a time. A small code sketch of these entities and relationships follows below.
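To make this more concrete, here is a minimal sketch of these entities and relationships in Java. All class names and attributes are illustrative assumptions, not a fixed design:

    // A minimal sketch of the web store entities and their relationships.
    // Names and attributes are illustrative, not a fixed design.
    import java.util.ArrayList;
    import java.util.List;

    abstract class WebStoreUser {              // a user is either a customer or an administrator
        protected String name;
    }

    class Administrator extends WebStoreUser { }

    class Product {
        String id;
        String category;                       // used by the product catalog for classification
        double price;
    }

    class ShoppingCart {
        private final List<Product> items = new ArrayList<>();

        void add(Product p)    { items.add(p); }
        void remove(Product p) { items.remove(p); }
        List<Product> items()  { return items; }
    }

    class Customer extends WebStoreUser {
        private ShoppingCart cart;             // at most one cart at a time

        ShoppingCart currentCart() {
            if (cart == null) {
                cart = new ShoppingCart();     // created lazily on first use
            }
            return cart;
        }
    }

Note how the inheritance relationship (a customer is a web store user) and the one-to-at-most-one association (customer to shopping cart) fall directly out of the domain model.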

You can also easily infer some interactions between the entities. Obviously, there must be a relationship between the shopping cart and the product catalog, because the information about purchased items is read from and updated in the product catalog. Of course, the shopping cart needs to interact with the order processing system after the customer has pressed the order button.
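Building on the classes sketched above, these interactions could be captured as a checkout service that consults the product catalog before handing the cart over to order processing. The interfaces and method names are assumptions for illustration only:

    // Hypothetical interaction sketch: checkout reads stock information from
    // the product catalog and hands the cart over to order processing.
    interface ProductCatalog {
        boolean inStock(String productId);
        void reduceStock(String productId, int quantity);
    }

    interface OrderProcessing {
        void process(ShoppingCart cart, Customer customer);
    }

    class CheckoutService {
        private final ProductCatalog catalog;
        private final OrderProcessing orders;

        CheckoutService(ProductCatalog catalog, OrderProcessing orders) {
            this.catalog = catalog;
            this.orders = orders;
        }

        // Triggered when the customer presses the order button.
        void checkout(Customer customer) {
            ShoppingCart cart = customer.currentCart();
            for (Product p : cart.items()) {           // read from the product catalog
                if (!catalog.inStock(p.id)) {
                    throw new IllegalStateException("out of stock: " + p.id);
                }
            }
            for (Product p : cart.items()) {           // update the product catalog
                catalog.reduceStock(p.id, 1);
            }
            orders.process(cart, customer);            // forward to order processing
        }
    }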

In other words: when thinking about how use cases would be mapped to sequence or interaction diagrams, knowledge of the domain model is essential. It is the step taking you from a black-box perspective to a gray-box perspective.

There are different ways to express such a domain model. You could invent a graphical representation or use a textual language instead. If a domain object model gets formalized, we call the result a DSL (Domain-Specific Language). In this case, engineers could even provide generators that map from the problem domain to the solution domain, i.e., that map problem domain objects and relations to solution domain objects and relations.
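As a toy illustration of the generator idea, the following sketch maps a formalized entity description (problem domain) to a Java class skeleton (solution domain). The input format and all names are made up for this example; real DSL toolchains are of course far more sophisticated:

    // Toy generator sketch: maps problem-domain entity definitions to
    // solution-domain Java class skeletons. Purely illustrative.
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class DomainGenerator {

        static String generateClass(String entity, Map<String, String> attributes) {
            StringBuilder src = new StringBuilder();
            src.append("public class ").append(entity).append(" {\n");
            for (Map.Entry<String, String> attr : attributes.entrySet()) {
                src.append("    private ").append(attr.getValue())
                   .append(" ").append(attr.getKey()).append(";\n");
            }
            src.append("}\n");
            return src.toString();
        }

        public static void main(String[] args) {
            // "a customer has a name and owns a shopping cart", formalized as attributes
            Map<String, String> attrs = new LinkedHashMap<>();
            attrs.put("name", "String");
            attrs.put("cart", "ShoppingCart");
            System.out.println(generateClass("Customer", attrs));
        }
    }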

Using domain models offers huge advantages:

  • it introduces a common vocabulary among all stakeholders, which supports effective discussions and often spares domain dummies a steep learning curve
  • it reduces the chance of failing to address important parts of the domain
  • it eases architecture modeling significantly
  • it helps you focus on the problem domain instead of always diving into the solution domain
  • when enriched with domain (analysis) patterns, it boosts productivity

It is neither necessary nor useful to try to come up with a complete domain model in the first place. In most cases, there will be an initial, maybe incomplete, domain model which engineers, together with the other roles, will evolve over time.

There are several domains where a domain model is already available - think of GUIs, compilers, and some kinds of healthcare domains. In these domains, it is useless and a complete waste of time to come up with your own domain model.

So, whenever you are involved in a new software architecture, always mind the domain model! It will be your best friend.

Saturday, March 21, 2009

Software and System Architecture

Currently, I am involved in a large-scale project in which Siemens Healthcare is building a couple of very innovative and exciting particle therapy systems. These systems are rather challenging, as they consist of complex software, hardware - for example, a circular particle accelerator - and all the other constituents such as the building and the CT devices. It is a typical project where software, while being a major element, contributes only a small part of the system.

However, within our company, which generates revenues of over 70 billion euros a year, software engineering has become a major business factor. According to one estimate, 60-70% of these revenues already depend on software. Now imagine 10% or even more of our projects failing - what a nightmare! There are two implications: first, lead software architects must be well educated in software architecture and technology and show the right leadership skills. Second, system and software architecture must be treated as two sides of the same coin.

So, what are the responsibilities of software architects and system architects, and how should they cooperate? I tried to find an appropriate definition in Wikipedia and failed. System architecture, in particular, seems to be ambiguous or not well understood.

From my viewpoint, an effective partitioning between software and system architecture depends on the characteristics of the project itself. If software constitutes the main part of the product or solution, then the lead software architect could and should also be the system architect. Obviously, an additional IT architect makes a lot of sense if the underlying infrastructure is sufficiently complex, but that's a different story.

In projects where hardware and software are of the same relevance, or where hardware dominates - for example, in most embedded systems - a system architect should be in the lead and cooperate with a hardware architect and a software architect. The system architect then should care a lot about testing. I have been involved in projects where I was in charge of the software system but only rarely got access to the target system. Thus, we had to use the development system, where everything worked stunningly fine, but got deadlocks when finally moving to the target system - the OS of the target system had a faulty implementation of the sockets library. This implies that the system architect should allow software and hardware development to happen almost independently, but introduce a lot of sync points where software/hardware integration is checked thoroughly.

By the way, a similar problem occurs when partitioning a software system into modules which are then designed and implemented by different teams - think of outsourcing in the extreme case. If integration and system testing are neglected, you will encounter a lot of bad experiences. It is much more complex and challenging to check hardware/software integration than the integration of software modules - at least in theory ;-)

One of the bad experiences you might need to cope with is that hardware requirements are often considered almost fixed, while software requirements are considered "soft". Would you ever change the main specifications of an airplane shortly before it is delivered? Definitely not! But this does not hold for software systems. One consequence might be that the hardware team has already finished its part and is reluctant to change anything, while software development, whose requirements have kept changing, has fallen behind schedule. In these situations, software architects sometimes become the scapegoats of system development. A good system architect should be capable of handling these situations effectively and efficiently by better synchronizing the different teams.

A system architect should also make sure that the overall system requirements are mapped consistently and completely to software and hardware related requirements. He or she should care about requirements traceability, especially for those requirements that have to be implemented in both software and hardware. Even if hardware comprises the core part of the system, software architects should have a sound understanding of the overall system, not just of the "soft" parts.

Basically, you could consider the partitioning of software and system architecture responsibilities in almost the same way as splitting responsibilities between software architects and the lead architect: the system architect is in charge of the whole system, while the software architect is in charge of a part of it.

There is a lot more to say about this topic. But I am curious what you think, and I am wondering about your experiences with software and system architecture as well as with the respective roles. Any comments are highly appreciated.

Wednesday, March 18, 2009

News from the Engineering Frontier

I had been to the Canary Islands for a month. Since then I have been totally busy with two projects. One of them deals with medical particle therapy solutions, while the other one consists of setting up a certification program for software architects within Siemens; I am in charge of this program. Last week I gave a seminar on software architecture to 12 guys from Siemens. I had a really great time and hope the same holds for the participants.

It is time to post more on this blog. Yes, I feel guilty for being so lazy. But you know, I am in love with work-life balance. To be honest, I needed a timeout from all the software engineering business for a while, focusing instead on sports activities and digital photography. Now I am really motivated to jump on the software architecture bandwagon again. Last Monday, we (= Markus Voelter, Stefan Tilkov, Christian Weyer, and myself) started a new podcast series on software architecture. It will be in German with some company sponsors. I will keep you informed about the details when things mature. I am wondering whether a software architecture podcast in English would make sense.

In the medical particle therapy project we have just defined a possible partitioning between system architecture and software architecture. This seems to be a major source of misunderstandings in many projects. The same holds for differentiating the problem and the solution domain. Engineers seem to be addicted to the solution domain drug. Instead of thinking hard about the problem domain, we always try to get onto safe ground, in particular by addressing solutions very early. The problem is: how can we define a solution when we don't understand the problem? On the other hand, bubbles don't crash, as we also know. We could endlessly define UML diagrams without ever dealing with implementation. As Bertrand Meyer once put it: all you need is code. Thus, we need to ensure the feasibility of our solution very early. How can we satisfy both goals? Needless to say, agile development is the right answer.

Of course, there are a lot of additional issues when addressing the boundary between system and software architecture. These will be the subject of upcoming posts. All feedback and comments are, as always, highly appreciated.