Wednesday, December 19, 2007

Requirements Part II - The Art of Multi-Facet Design

I'd like to continue my previous posting with an architectural design consideration. Basically, an architect maps the concrete problem (which is an entity of the problem domain) to a concrete solution (which is an entity of the solution domain).
Requirements help decide which paths to take at specific points. This implies that requirements constrain the solution space. If multiple requirements are "applicable" at a specific point, their priorities define the precedence, which is why we must assign unique priorities to them. Not very complex so far, right?

But how do we really implement a solution architecture?
The first thing we obtain is a functional object model consisting of all the entities in the domain as well as their relationships. For instance, if we are going to build a Web shop, this model would probably consist of concepts such as customer, shopping cart, order system, product catalog, and payment system. It is highly advisable to define a domain model first (domain-driven design) to achieve a sound understanding and a common vocabulary among all stakeholders. Use cases are the means to drive this activity. They help design the external boundaries of the system to be built, but also the internal components required to implement the use cases. Obviously, functional requirements also have priorities, which is why the use case priorities should drive the process.
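
To make this more concrete, here is a minimal sketch of what such a functional object model could look like in code (Java; all names and details are illustrative assumptions, not a prescribed design):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative domain entities for the Web shop example.
    class Product {
        final String name;
        final long priceInCents;
        Product(String name, long priceInCents) {
            this.name = name;
            this.priceInCents = priceInCents;
        }
    }

    class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    class ShoppingCart {
        private final List<Product> items = new ArrayList<>();
        void add(Product product) { items.add(product); }
        List<Product> items() { return new ArrayList<>(items); }
    }

    // Relationship: an order is created for a customer from a cart.
    class Order {
        final Customer customer;
        final List<Product> items;
        Order(Customer customer, ShoppingCart cart) {
            this.customer = customer;
            this.items = cart.items();
        }
    }

The product catalog and the payment system would be modelled the same way; at this stage we deliberately ignore distribution, security, and all other infrastructure.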

Then we need to take into account infrastructural issues such as distribution, security, and so forth. According to their priorities, we start with the most important infrastructure, which needs to integrate into and with the functional model. For example, if we need to add distribution-specific infrastructure to our Web shop, we would introduce a broker-based architecture with customer browsers being the clients and the Web shop being the server. Now let us assume security is the next most important requirement. We then integrate an appropriate security infrastructure into the result we achieved after embedding the functional entities into the distribution infrastructure. We will add security components such as firewalls, identity management functionality, and the like. We continue with this approach until we have considered all operational requirements. What we did was extend the functional model (problem domain driven) with non-functional infrastructure parts (solution domain driven).

In the third step we add the infrastructure required by developmental qualities such as configurability, extensibility, and the like. This means that all those strategies, interceptors, and configuration interfaces will be added at the end.

What we arrive at is an onion model with the functional architecture as the core, the layers derived from the operational/infrastructural requirements as the middle part, and the developmental infrastructures as the outer part.
If you think about it, this is exactly what building architects and other engineering disciplines do. Our resulting software architecture represents an integration of different functional and non-functional perspectives, which is why I call it Multi-Facet Design.

But what about Software Product Line Engineering? In SPLE we have to introduce a Commonality/Variability analysis in order to determine all commonalities and variants. Variants are considered variations of a commonality and hence can be defined in terms of one. Sounds abstract, so let me introduce an example. If we take our Web shop, we could define that we need to support different payment options (credit card, bank transfer, PayPal) depending on the customer. The variability is then defined as the set of possible payment options, but the commonality in this example is the payment system itself. As a consequence, we need to consider the payment component in the core design, but specify the exact variations in the developmental design. For example, we could introduce the Strategy pattern for the payment system to allow different options.
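
A minimal sketch of this idea in Java (the interface and class names are assumptions made up for the example):

    // The commonality: every variant of the shop contains a payment
    // system that fulfils this contract.
    interface PaymentStrategy {
        void pay(long amountInCents);
    }

    // The variants: concrete payment options that can differ per customer.
    class CreditCardPayment implements PaymentStrategy {
        public void pay(long amountInCents) { /* authorize and charge the card */ }
    }

    class BankTransferPayment implements PaymentStrategy {
        public void pay(long amountInCents) { /* issue a transfer order */ }
    }

    class PaypalPayment implements PaymentStrategy {
        public void pay(long amountInCents) { /* redirect to the PayPal checkout */ }
    }

    // The core design depends only on the commonality; the concrete
    // variant is injected, e.g., according to the customer's choice.
    class PaymentSystem {
        private final PaymentStrategy strategy;
        PaymentSystem(PaymentStrategy strategy) { this.strategy = strategy; }
        void checkout(long amountInCents) { strategy.pay(amountInCents); }
    }

The core architecture only knows the PaymentStrategy contract; which variants exist, and how one is selected, is deferred to the developmental design.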

What about embedded systems?
All the stringent constraints emerging from embedded systems, such as scarce memory resources, CPU limitations, or limited battery life, will be handled as high-priority requirements. One of the consequences is that static configuration is always preferred over dynamic configuration in such systems, to allow determinism and control of QoS properties. Needless to say, real-time capabilities place an additional burden on the engineer.
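
To illustrate the difference, here is a sketch reusing the hypothetical payment types from above (in Java for brevity; an embedded system would typically express the same idea in C or C++):

    // Static configuration: the variant is bound at build time, so memory
    // footprint and timing can be analyzed before deployment.
    class StaticConfig {
        static final PaymentStrategy PAYMENT = new CreditCardPayment();
    }

    // Dynamic configuration: the variant is resolved at run time, which is
    // flexible but makes resource usage and QoS much harder to predict.
    class DynamicConfig {
        static PaymentStrategy load(String className) throws Exception {
            return (PaymentStrategy) Class.forName(className)
                    .getDeclaredConstructor()
                    .newInstance();
        }
    }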

Monday, December 17, 2007

Requirements

Recently, I was involved in advising some friends on a software development project. At some point we started intensively discussing requirements. Soon I recognized there were some funny misunderstandings about what requirements are. Sentences such as "Oh, all these use cases are our requirements" emerged from the void of the universe and gave me a hint that there could be a problem. When I asked them for their use cases, the only thing they could show me was a UML use case diagram. Not really what you expect as an architect. So, my first activity was explaining the theory of requirements, or at least my small, probably incomplete perspective on the world of requirements.

Functional requirements define the what of the system in terms of the problem space. Thus, they should be expressed in the language of the target domain. Taken together, the functional requirements span the full range of possible solutions.

Non-functional requirements come in two flavors: operational requirements and developmental requirements. They impose constraints on this range of solutions and thus represent beasts of the solution domain.

Operational requirements are directly related to the operation of the software system. They define qualities such as security, performance, or fault tolerance, and they can be measured or tested.

Developmental requirements are directly related to the architecture of the software system. They define qualities such as maintainability, usability, or flexibility. While some metrics exist for developmental requirements, they are normally not directly measurable.

It is interesting that most projects don't fail because of functional requirements but because of failing to meet the non-functional ones. Thus, it is often advisable to assign sub-teams, or at least experts, to the most important non-functional qualities in the project. They should then prepare checklists, strategies, and tactics for ensuring the quality they are responsible for.

I often use invasiveness as an additional property. A concrete instantiation of a requirement is invasive if it has an impact on the functional subsystems or their relationships. Cross-cutting concerns such as security are mostly invasive.

Use cases are an excellent means of expressing functional requirements. Note: use cases are textual descriptions with sections such as actors, goals, exceptions, and preconditions. A use case diagram is merely a graphical summary of use cases. For all other, i.e., non-functional, requirements I recommend utility trees as introduced by Bass, Clements, and Kazman. For example, a requirement such as flexibility could imply a lot of different things: maybe changeability, extensibility, or removability.
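
In such a tree, the root quality is refined into sub-qualities, and each leaf is a concrete scenario rated by importance and difficulty. A fragment for the flexibility example might look like this (the scenarios and the (importance, difficulty) ratings are invented for illustration):

    Flexibility
    +-- Changeability
    |     "Exchange the payment provider within two staff days" (H, M)
    +-- Extensibility
    |     "Add a new payment option without touching the core" (H, H)
    +-- Removability
          "Strip the PayPal option from the small product variant" (M, L)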

In the activity of architecture design, all decisions are based on requirements. Unfortunately, requirements can contradict each other; for instance, mind the availability/consistency paradox or flexibility versus performance. This means that every requirement must get a unique priority to let architects decide which path to choose at each decision point. NEVER accept requirements having the same priority; if they do, assign priorities yourself. This also means that all architectural decisions must be based solely on requirements. Don't create design pearls; create the simplest possible architectures that work (i.e., that meet their requirements)!

Features are also requirements. So are commonalities and variants. However, they are at a very high level of abstraction and need to be mapped to finer levels first.

Requirements are the most important entities in a software development project. They help us ask the right questions and make the right decisions. Requirements are a tool to constrain the problem and solution domains in such a way that an implementation can be developed effectively and efficiently. If you don't value requirements, you are doomed to fail, because you don't know what you are expected to build. Requirements are your friends, so treat them as such!

Wednesday, December 12, 2007

About Learning & Documenting

Currently, I am pretty busy. That's the reason why I couldn't add more postings for a while. This posting will also be a short one.

It might seem somewhat unrelated to architecture, but nonetheless I find it valuable. I am currently very busy preparing some presentations as well as an article on Software Architecture Refactoring for the German OBJEKTspektrum magazine.

I made some observations about my work style in the last months that I'd like to share with you.

When learning new IT-related things, I often document what I've learned in PowerPoint presentations. I then constantly rearrange the slides so that they reflect how I think the material should be organized to educate others more effectively. PowerPoint also serves as a tool for organizing my thoughts and, for this purpose, works better for me than mind-mapping tools.

When I am writing an article, I often prepare the material as a presentation first and then use the presentation as a storyline for the article. This has helped me write scientific papers as well as other professional articles. The presentation contains much less content by nature, but it forces me to design the rough storyline, whereas I sometimes get lost in details when I start with the article itself.

I also often use pattern templates to organize my efforts when I need to solve architectural problems. Pattern templates are very powerful because they require me to think in terms of context, problem, forces, solution, consequences, and so forth.

In summary, I guess such tools really help me stay organized. Has anyone else had similar experiences?

I am really wondering what tools and methods others recommend.

Saturday, November 17, 2007

SOA Pitfalls - Integrating your Legacy Apps

The Past: I can still remember the time when television was mostly black and white and color TVs were not widely available. One company kept claiming in newspaper advertisements that they had developed a color filter you just needed to place on your B+W TV set to transform it into a full color TV set. What an easy solution! And the product did a perfect job, provided you enjoyed random and constantly changing colors such as magenta faces or green skies. Sounds funny, but, of course, none of us techno geeks would ever be so naive, right?

40 years later - The Present: These days I am often involved in SOA projects. And in many cases even developers believe it is sufficient to simply wrap existing application APIs with WS-*-based interfaces to open up and integrate all applications into SOA wonderland. However, you might ask: if it is that simple, why do so many SOA integration endeavors fail?

The Future: SOA integration is not as simple as providing plain adapters. If business processes are the main reason for SOA-enabling an enterprise, it is much more effective to identify an appropriate partitioning of functionality into separate services that are then subject to orchestration. As a first step you need to obtain a consistent and prioritized list of business requirements, from which you then define your enterprise architecture in a top-down manner. This might typically begin with simple company-internal workflows and end in complex B2B scenarios. You'll need to refactor or even reengineer existing applications if securing investments is important in your company. This implies opening up applications by providing service interfaces (in a bottom-up approach). Of course, operational and developmental qualities introduce additional levels of complexity. For example, opening up an existing intranet application for a B2B partnership scenario requires a whole lot of security considerations.
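
A sketch of the difference in Java (all names are hypothetical): the naive approach mirrors the legacy API one-to-one, while the top-down approach designs the service contract around the business process and only then maps it onto the legacy code.

    import java.util.List;

    // A legacy application API as it might look today (hypothetical).
    class LegacyOrderApp {
        int insRec(String custNo, String[] artNos, int flags) {
            return 4711; // pretend this inserts a record and returns its key
        }
    }

    // The "color filter": a WS-* wrapper that just mirrors the legacy
    // signature, exposing its data structures and quirks to all consumers.
    interface NaiveOrderService {
        int insRec(String custNo, String[] artNos, int flags);
    }

    // A process-oriented contract, derived top-down from the business
    // requirements and implemented bottom-up by adapting the legacy app.
    interface OrderService {
        String placeOrder(String customerId, List<String> articleIds);
    }

    class OrderServiceAdapter implements OrderService {
        private final LegacyOrderApp legacy = new LegacyOrderApp();
        public String placeOrder(String customerId, List<String> articleIds) {
            int key = legacy.insRec(customerId,
                    articleIds.toArray(new String[0]), 0);
            return "ORDER-" + key; // translate into the service vocabulary
        }
    }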

In summary, the Adapter pattern may be simple, but its concrete application can incur a whole lot of complexity. Building SOA applications naively will save a lot of time and resources in the short term, but it implies a large amount of additional cost in the mid and long term.

Friday, November 16, 2007

What is Software Architecture

Like Andrew Tanenbaum, I could argue that there are so many definitions of software architecture that it is difficult to choose from them. You may also define it as follows: software architecture is taken into consideration only when it hurts.
One possible definition could be this: Software architecture denotes the set of artifacts and practices required to build a software system so that it achieves its implicit and explicit qualities. It includes the systematic partitioning of the software system into appropriate subsystems and relationships, as well as the guiding principles used for that purpose. The partitioning of responsibilities and interactions needs to be modelled using an interrelated set of viewpoints to address the functional and non-functional qualities.
Note that I am using "systematic" as an important attribute here, as I don't consider ad-hoc systems (for example, just hacking a Java program) to involve software architecture. From this personal definition it follows that software architecture is both an entity and an act.
I am wondering what your favourite definition of software architecture is. Maybe one of the dozens available on the CMU SEI web pages?

Sunday, November 11, 2007

Beware of DSLs

At OOPSLA 2007 in Montreal there was a very entertaining (and educating) panel on object-oriented programming languages and Simula 67 as their common ancestor. And the panelists were pretty excellent: Anders Hejlsberg (C#), James Gosling (Java), Guy Steele (Lisp/Scheme/Fortress), Bertrand Meyer (Eiffel), and Ole Lehrmann Madsen (Beta). Now you might ask how this is related to software architecture.

First of all, programming languages have some influence on the way we think about architecture. Don't believe those experts who want to make you believe architecture design is completely unrelated to paradigms and languages. For example, one of the goals of Simula 67 was to provide a means for modeling systems, a capability that often got lost in later state-of-the-art languages.

And secondly, we are currently facing a lot of discussion about DSLs (domain-specific languages). The panelists expressed their concern that people with no experience in language design are now starting to develop DSLs. It is not trivial to design a language that is complete and consistent as well as usable. Believe me, I worked on such topics during my time at university.

The conclusion the panelists drew was that they prefer adding more modeling capabilities to programming languages over DSLs. Ruby is one of the examples in this area, but, of course, it is only the beginning. Upcoming languages such as Scala offer a lot of cool features for this purpose.

I discussed the issue with Markus (Völter), one of the gurus in Model-Driven Software Development, and he shares this conclusion.

My conclusion and observation: bad DSL design can cause more harm than value. Only language experts should become DSL designers. Designing DSLs on top of programming languages might be an appropriate approach. I already discussed integrated DSLs in a previous posting.
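
To give an impression of that last approach, here is a minimal sketch of a DSL embedded in a host language as a fluent API (Java; the little order-processing "language" is invented for the example). The host language contributes parsing, type checking, and tooling for free:

    // A tiny embedded DSL for describing an order process.
    class OrderFlow {
        private final StringBuilder steps = new StringBuilder();

        static OrderFlow receive(String orderId) {
            OrderFlow flow = new OrderFlow();
            flow.steps.append("receive ").append(orderId);
            return flow;
        }
        OrderFlow validate()            { steps.append(" -> validate"); return this; }
        OrderFlow charge(String option) { steps.append(" -> charge via ").append(option); return this; }
        OrderFlow ship()                { steps.append(" -> ship"); return this; }
        String describe()               { return steps.toString(); }
    }

    // Usage reads almost like a domain sentence:
    //   OrderFlow.receive("4711").validate().charge("credit card").ship();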

Project Diary

Did you ever experience those never-ending and continuous discussions about project topics and decisions which you thought had already been addressed?

Did you ever read an architecture document feeling a little bit confused or lost, because you couldn't remember the rationale of all those decisions?

I am sure, you know what I mean. The problem with software development projects is that there are basically two sources of information besides personal communication:

  • Meeting Minutes
  • Architecture Documents

Unfortunately, most architecture documents only describe the what, but not the how or why. Meeting minutes strive for brevity and often don't include all those discussions. In addition, spontaneous meetings of project (sub-)teams are often not documented at all.

Your gut feeling may tell you that something is missing here.

What I propose for development projects is an additional document, which I call the "Project Diary". This document does not need to be very formal. It should not describe what is already available in other documents (meeting minutes, architecture documents) but instead refer to these sources. And, of course, architecture documents and meeting minutes should also refer to the project diary. Its sole purpose is to complement the aforementioned documents by adding information such as the alternatives discussed for solving a particular problem and the reason why a specific decision was preferred.

Such project diaries may be organized by date, by topic, or both.
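
As an illustration, here is a sketch of what a single entry might look like (all content invented for the example):

    2007-12-10 / Persistence of the product catalog
    Problem:      Object-relational mapping versus hand-written SQL.
    Alternatives: (a) Hibernate, (b) plain JDBC behind a DAO layer.
    Decision:     (b), because the team lacks O/R mapping experience and
                  the schema is small. Revisit if the schema grows.
    References:   Meeting minutes of 2007-12-10; architecture document,
                  section "Persistence".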

I don't recommend any specific template. My experience with all kinds of project documents tells me that it is more important to come up with a uniform, complete, and consistent style than to strictly follow a specific template. Just take your style of choice.

Sometimes project members don't like the additional effort of a project diary, especially in very small teams. In these cases I have often written a project diary without telling anyone. As soon as critical situations or fruitless discussions appeared, I would simply read from my project diary. This way, I could convince many people that a project diary offers more value than cost.

Wednesday, November 07, 2007

Corny Joke - somehow adapted

After their death, three IT persons arrived in hell: a senior manager, a consultant, and a software architect. One of the devils was in charge of taking care of these unfortunates. However, the hell population has the same kind of feelings towards IT experts as the rest of mankind. Thus, the devil offered the newcomers a deal. "There is a chimpanzee around this corner. Each of you will need to make the chimpanzee first laugh, then cry, and finally make him return to his cage. If you succeed, we'll send you back to earth." First the senior manager approached the chimpanzee. No matter what he said or did, the monkey showed absolutely no reaction. Then the consultant tried his luck. After an hour he also gave up. Finally, it was the turn of the software architect. After a few seconds the chimpanzee started screaming with laughter. After a few more seconds he was moved to tears. And as soon as the architect had spoken some additional words, the monkey started panicking, returned immediately to his cage, locked the door, and threw away the key. "OK," the devil said, "I will keep my word, but could you please tell me what exactly you said to the chimpanzee?" "Of course!" the architect responded. "First, I told him what job I have, which made him laugh. Then I told him what income I get, which made him cry. Finally, I told him that we are still searching for new architects!"

Sunday, October 28, 2007

OOPSLA 2007 (Additional Info)

All keynotes are now available as Podcasts. Interested? Click here!

Thursday, October 25, 2007

OOPSLA 2007 [unplugged]


I will just provide you with my unedited notes on OOPSLA 2007, which took place October 21-25, 2007, in Montreal, Canada. I decided not to edit or beautify them; I hope you are not offended by this fact. Yes, I know, this is a huge posting. Of course, the most important part is not included here: personal communication. This has always been one of the most important experiences at a conference like this.

Nonetheless, I hope you find it helpful to get an impression.

Some General Statistics:

Attendees: 1225
90% male
60% newcomers
Most are architects, researchers, and developers, plus some testers
78% academia, 22% industry
Most aged 25-34, followed by 35-44
31% from the rest of the world, the others from Canada and the US

Conference Chair: Richard P. Gabriel (IBM Distinguished Engineer)

Tutorials I gave:
One on High Quality Software Architecture, which was very well received. In this tutorial I explained different perspectives relevant to achieving high quality, such as process principles, quality attributes, and architectural principles.

In my second tutorial I addressed Software Architecture Refactoring. This is a new topic I have already explained in other postings, which is why I won't go into the details here. The tutorial was number one in terms of attendance. I got interviewed by InfoQ (video) and will post where and when it becomes available.

The third tutorial was actually given by my team member Jörg Bartholdt on WS-* standards.

Tutorials I attended:
The one by Martin Odersky on Scala (Scalable Language), which runs on the Java VM, was an eye-opener. This language combines OO and functional programming. A free compiler is available for download. Highly recommended.

I also went to a tutorial on product line engineering by Charles W. Krueger (BigLever). A very good introduction. Nonetheless, I found it a little bit disappointing as it focused on a concrete product, namely GEARS, provided by the speaker's company.

First Keynote, Peter Turchi (Professor Literature/Poetry), Warren Wilson College
Some extracts: Are we done? Can we write anything new? This is the typical pattern of repetition we experience in daily life.

Getting lost is important for imagination but is hard and requires some preparation. This especially holds for reading.

Exploration is part of the writing activity. What is important is to discover things others haven't (examples: Galileo, Herschel, ...). "Seeing is an art which must be learnt". Example: an upside-down map where Australia is at the top - a different way of seeing the world. How we see depends (in part) on what we want to see. Example: maps always show the bias of their maker.

"Being disoriented can be fun (and instructive)". "Perhaps being lost, one should get loster". Models/perspectives on reality.

Panel: Celebrating 40 years of language evolution: Simula67 to the Present and beyond
Steven Fraser (Cisco), moderator
J. Gosling: Played around with Simula, was an eye-opener, had a nice threading facility (in fact coroutines), was driven by simulation. The big question in the next 10 years will be multi-cores. Programming will be different. Other kinds of problems, 3D rendering for video games.
A. Hejlsberg: influences from functional programming, dynamic, declarative, metaprogramming; a lot of interesting things happening, e.g., LINQ; we need to think about newer models. For example, how can we deal with concurrent programming? No good solutions currently available.
Ole Lehrmann Madsen, Aarhus: was a Simula 67 programmer, started BETA; Simula 1 dates from the early 1960s. Originally for operations research, then more general purpose. An extension of Algol, also a tool for modeling, something we are missing today.
B. Meyer, ETH, Eiffel Software: One single environment for everything.
Guy L. Steele: cites some quotations on languages
Q to Anders: Concurrent and functional programming - programming as a problem statement? Anders: you mostly say the what and the how. Too much "how". We need to move up the abstraction level. LINQ is also applicable to parallel querying, as in PLINQ. A lot of things to do, e.g., in metaprogramming. Bertrand: Functional languages may make procedural languages more flexible. I don't think functional programming can become mainstream. We would like to embrace the angel (mathematics) but we can't ignore the beast (hardware). Concurrency is the biggest topic, changing little by little. Look at SCOOP.

James: Functional will be important. Only a small percentage of the community can handle functional programming. Humans want to be procedural. Guy: Likely programming will become very functional but not completely. Functional programming simply ignores state. Important for distribution.
Q: Languages change the programmer. What is the best way to make people more creative? In which direction do you want to change them?
Bertrand: Kristen Nygaard. Programming as modeling. Programming is understanding. Think about your system throughout the lifecycle. A wealth of inheritance, e.g. multiple inheritance. Generics. The idea of semantics: describe abstract logical properties. Ole: modeling is important. Teach students a conceptual framework; don't let them depend just on the language. It is not about just adding fancy features to languages. Simula did not originate from programming.
Q: Kristen started to talk about the future and mentioned OO was good but not sufficient. He started COOL (a project on teaching programming) and STAGE (a web-like multi-paradigm language: make actors and give them a script). How about this idea?
A (Ole): New challenges are beyond Simula concepts. Global pervasive computing cannot be covered this way because you don't know what's around. Time and location did not matter when Simula emerged. STAGE was intended to extend Simula. Bertrand: Kristen was very proactive.
Q: Programming is not very satisfying. It is more about finding the right API? Will this happen to concurrency?

James: Smartness is not sufficient (look at Silicon Valley). Sophisticated software for embedded systems, such as avionics, has to be known to be correct. Anders: 40 years of evolution have not made that much difference. We still have to write standard statements such as i=i+1. This could and should change. There is no shortage of problems.
Q: Why don't OO language designers eat their own dog food? Still the same old ASCII-style programming?

Bertrand: no one cares about characters. Anders: Metaprogramming is important. Ruby is successful not due to typelessness but due to its metaprogramming facilities. If the metalanguage and the language are the same, then productivity will increase. But "types are better". Nonetheless, dynamic languages are often precursors.
Q: One way of interpreting STAGE is enabling hierarchies of virtual machines. Linguistic machinery.
A (Ole): Why did others not rediscover Simula's coroutines, which Ole considers the same idea as hierarchical machines? Bertrand: The coroutine mechanism is like reducing 3D to 2D. Extremely elegant but not a solution for concurrency. Preconditions do not mean the same thing. Correctness cannot be as strict anymore. Coroutines do not help you there.

James: coroutines were only a "hack". But they encouraged a programming style with independent code entities.

Anders: part of the problem with concurrency is the huge mass of different approaches to it. Language designers should look at what is lacking in languages, such as support for lambda expressions. There is not one single model.

Guy: too strong a commitment to stacks. Useful, but the enemy of concurrency.
Q: Historically, languages came from computation. Now the physics must be recognized, such as in concurrency?

Bertrand: the task of languages is to abstract. Otherwise, we would have to read hardware manuals to understand FORTRAN.
Q: More domain-specific kinds of languages?
A (Ole): Simula started as a simulation language. Then a class-based simulation framework was added, then syntax elements for this framework. They had to develop a domain-specific language. Anders: DSLs very quickly need elements from normal programming languages (type systems/expressions). They don't raise the abstraction level this way. Instead of DSLs, raise the abstraction level of general-purpose languages.

James: involved in real-time programming. Lots of languages for expressing QoS; the rest is just crap. Then people don't use the DSLs anymore, because the rest of the infrastructure never becomes sufficient. Bertrand: modeling is not modeling the world; the relationship between the two is very indirect. The whole idea of OO is to implement DSLs (specialized tools and libraries; special syntax not required).
Q: Real world is concurrent. What are your favorite metaphors for concurrency and collaboration?
Anders: cannot give a good answer. Typically not much concurrency is visible to the developer, even though a lot of stuff is involved underneath. Same for agent systems. The more hiding, the more successful.

Bertrand: correctness is not the same on multiple CPUs. Testing is feasible but not useful anymore.
Q: Multiprocessors are the hype today. But what about solid-state memory?

Anders: Solid-state memory may have more impact in the near future. It is hard to reason about how long CPU cycles take. No exact answer. James: databases? Just use RAM. Just use a transaction lock.

Keynote on Second Life, Jim Purbrick, Mark Lentczner, Linden Lab
People can do anything, so they do anything to experiment. The demographics match the demographics of the real world. A lot of code running; they are teaching ordinary people to build software. 15% of the Second Life population codes. LSL script. A VM with a very low number of features; it must fit in 16k. Everything is message passing. 14,500 regions, each of them running on a separate CPU, each running 1k-2k scripts. Massively concurrent! Scripts can move, thus they must be able to be persisted.

Keynote by Frederick Brooks (The Mythical Man Month)
In history many artists could produce their designs alone (art, engineering, craftsmanship). This is not possible any more because of the increased sophistication of every aspect of engineering. We are also in a hurry to get to market (early birds win). We need specialized expertise. More work: task partitioning, interface definition, shared element standardization, common style definition, integration and testing, ongoing interface interpretation & reconciliation. The challenge is conceptual integrity. Many great engineering designs are still today principally the work of one mind, or two (Cray, Menn's bridges, a set of beloved software systems). Fan clubs: FORTRAN, VM/360, UNIX, Linux, Pascal, C, Macintosh, APL.

No fan club: COBOL, OS/360, Windows, Algol, PL/I, PC, Ada. The difference between the two: fan club means developed by one mind; no fan club means designed by committees.
How do we get conceptual integrity? Design as interdisciplinary negotiation? NO! Mills's chief programmer concept - a supported designer. A system architect for design beyond one chief designer. The architect: agent, approver, advocate for the user.
A system architect who really cares. One user-interface designer. Documented assumptions: user population characteristics, application, and its future use and development, BETTER TO BE WRONG THAN SILENT OR VAGUE!, forces discussion, enables sensitivity analysis, directs verification efforts.
A Style Sheet: consistent styling of details is the hallmark of conceptual integrity.
The Cathedral and the Bazaar (Raymond's brilliant essay on Linux). The bazaar as an evolutionary model. No committee design - each piece has conceptual integrity. Emphasizes early and large-scale testing. Marshals many minds for fixing, not just testing. The market votes among alternatives by adaptation. Based on a gift <-> prestige culture, among people who are fed anyway. Works when the builders are the clients (they know the requirements from personal experience). Also applicable for an air traffic control system? Hopefully not!
When does collaboration help?
- determining needs from users (more users => more diverse questions)
- conceptual exploration - radical alternatives
- not conceptual design nor detailed design (distinguish sharing design from delegating design). But pair programming wins.
Design Reviews
- especially with different expertise
- need and exploit richer graphical representations
Some Collaboration Caveats
- Real design is more complex (than in textbook examples)
- demands change control
- Collaboration is no substitute for
  the dreariness of labor and the loneliness of thought
Telecollaboration
Why?
- specialized skills
- geographic dwelling presence (national+cultural, city vs. suburban vs. rural)
- work around the clock
- cheap labor
- political factors
Example: the Airbus A380. Telecollaboration plus resident ambassadors, a plane between Bristol and Toulouse every day.
Making telecollaboration work
Face time is crucial; low tech often suffices. A shared document is important, then voice, then, way behind, videoconferencing (vital issues, people insecure, interviewing strangers, organizational or national cultures differ).
Clean interfaces: make big differences in error rates and in joy of work. Telecollaboration study: mostly technology-driven, not design-driven. A library shelf approach: 19 of 20 books on tools.

Panel "No Silver Bullet" Reloaded - A retrospective on "essence and accidents of software engineering"
Steven Fraser (panel), Frederick P. Brooks (University of North Carolina), Martin Fowler (ThoughtWorks), Ricardo Lopez (Qualcomm), Aki Namioka (Cisco), Linda Northrop (SEI), David Lorge Parnas (University of Limerick), Dave Thomas (Bedarra Labs)
Introductory Remarks:
Brooks: Difficulties in engineering can be separated into essence (structure) and accidents. He forecast that productivity gains could only be achieved through conceptual work, not by eliminating the accidents. The bold statement: no technology will appear within ten years that gives a tenfold improvement (measured from 1986). It did not happen.
Parnas: There isn't a silver bullet. Silver bullet: no skill needed to use it. Interesting question: why did Fred feel the need to write the paper, and why do we still ask the question? Two points: designing software is hard and cannot be coped with by a simple approach. Point two: a poor workman blames his tools. Developers are poor men, looking for some magic. There is a myth that we have had a lot of progress. But that isn't the case. Why don't we use the lead bullets?
Linda: read the paper when everyone talked about the software crisis. Used the points in the paper over and over again. Figuring out what to say, not how to say it, is the problem. Some progress has been made. Product line engineering: tenfold increases, but those are only exceptions. Modeling is the essential point. People tend to focus on languages instead. We need great designers and must cultivate an atmosphere of hard work, also in interdisciplinary areas.
Aki: Is responsible for services at Cisco. The discipline has extended tremendously. Tools help even non-software developers build functionality. It takes a village to develop a software system. Many issues to cope with. It is a team effort.
Dave Thomas: Considers Fred's article a challenge. The current state of OO technology is a disaster. No hope. Very hard to build stable products on top. Things may be created by smart people, but the reality is that normal people are challenged. The stuff is too complicated. It is not sufficient to give certificates. We need more competence. Objects are great but they have run amok. Same for agility. Successes in niches.
Ricardo Lopez: We got a lot of silver bullets but they are killing us. Silver bullets come from fear: trying to avoid what we fear. Fearing complexity is like fearing life. But this is elegant. The search for productivity is another driver (we are always trying to optimize). Silver bullet: we need to address and embrace complexity without fear. You are the silver bullet. Collaboration helps share experience, which gives an order of magnitude. The silver bullet is inside of us.
Martin Fowler: the distinction between essential and accidental has had a great influence. [Changes into a werewolf with cries of pain; he will be the bad "person" in the panel, arguing against the rest of the panelists.] OO is a dangerous and evil idea, but I could overcome it. Great ideas but no one actually does it. I love multicore concurrency systems. Use of prebuilt structures is a great theory; it only helps if the libraries are good. It also requires good designers. Fortunately, no one outside OOPSLA understands this. The invisibility of software development helps even more, at least with the ordinary population. Rapid prototyping. One of the surprises is how many waterfalls I saw. Even lead bullets help, because people are crazy about silver bullets.

Questions by the audience
Q: Paper was too successful. Often used by managers to reject new techniques.
A: Fred: If technologies are not addressing the essence, they are not addressing the right thing.
   Linda: Address the customers, the business case, and the product needs, but don't sell the technology. Another reason for silver bullets is greed.
Q: How does this apply to individual productivity?
A: Agility is the way to let people grow. Make the best people role models for the not-so-good developers. Establish opportunities for others to learn.
Dave Parnas: The amount of training time matters. If there is a small learning curve, then it is a silver bullet; otherwise it is a lead bullet. If it is a silver bullet, sell it to IBM. Otherwise, educate.
Dave Thomas: If it takes that little time, it is a scam. Find the better people and get them together.
Martin [aka the werewolf]: There are not too many good people. And even if there were, people don't try hard enough to collaborate. People want everything easy and comfortable.
Q: What about smaller lightweight teams with single individuals?
A: Dave Thomas: it comes back to requiring good developers, which is rarely possible. Leadership helps.
Fred: Growing teams. The book Peopleware is extremely good. The Carolina Way: a book on how to make a good team out of Madonnas.
Linda: it takes a leader. Management is not enough. We need to be disciplined.
Martin: Peopleware is a dangerous book. Fortunately, it is hard work to manage a team. It is more than Microsoft Project. Very easy to derail.
Q: How do we measure productivity?
A: Martin: you can't measure productivity. You can't argue excessively about objects versus functions.
Ricardo: Metrics are crap. You can't measure aggregate productivity. You cannot derive microeconomics from macroeconomics.
David Parnas: Preferred non-quantitative measures. E.g., give it to a stranger and measure how long it takes him. That's what we would quantify.
Aki: Often people only want to increase productivity to get more. Then she asks what they mean by productivity.
Q: What about non-linear increase of productivity?
A: Aki: often complexity is increased by qualities such as security, transactions, ...
   Dave Parnas: criticizes web sites. Web sites are often built by smart people, and then no one else is able to extend them.
   Linda: We need to have more design upfront.
   Dave Thomas: Communication is often the issue. Get rid of all the middleware and business objects crap.
Q: Mostly, people and environments address accidents and incidents. When will we have an environment that handles essence?
   Aki: Tools have improved.
   Dave Parnas: As with constructing a building: first build the foundation, then address the rest.
   Martin: Communication between users and developers. Communication here is hard. Business people don't want to talk to developers. Developers have no social skills. Software people get frustrated, so they lose interest in the business case. Then they start building large infrastructures to overcome the boredom.
Q: By Brian Foote. The world runs on bad code. Software is a success story. Shouldn't we teach people how to write more bad code?
A: Ricardo: This wonderful success (civilization is embracing software) makes us more vulnerable. Code should contain some self-correction. No technology can be perfect. Focus on guaranteeing that code won't cause big problems.
David Parnas: Excellent developers writing code no one else can understand cause a problem.
Dave Thomas: A mismatch between smart minds might also be a problem.
Q: We have unrealistic objectives. Could the silver bullet be training that brings IT people closer to business people?
A: David Parnas: might be good. But that would be the same as having every driver be a mechanic. We need people who make things for other people.
Q: The articles by Brooks and Parnas were silver bullets themselves. If we need no silver bullets now, does this mean no one can come up with one?
A: Dave Parnas: I did not provide a silver bullet. Nobody is against good ideas.
Q: It took a thousand years to go from geometry to calculus. How do we finish objects? How do we provide good libraries?
A: David Parnas: Calculus is a wonderful thing. It is a lead bullet, not a silver bullet.
   Fred: better when a system satisfies somebody than anybody.
   Ricardo: does not consider the metaphor valid (the Romans just ignored Archimedes).
   Dave Thomas: the best way to get components is to stop people from developing frameworks. Most frameworks can only be used by experts. Better to provide the functionality as components. He is not optimistic that this is coming.
Final statements
Martin: software development has made progress. But the werewolf is not harmed. People underestimate this. Human beings have this optimism.
Ricardo: We're in trouble. The werewolf is annihilated, but then new werewolves arrive. Synergies with your peers are important.
Dave Thomas: The new generation can face the challenge.
Aki: In the search for silver bullets, we have made it easier to develop. But highly sophisticated developers are required. Complex systems are still there, but we will have to appreciate that there are also systems that do not need those skills.
Linda: We continue trying to get better. Focus on essence, not on accidents.
David Parnas: It is unfair to criticize the waterfall model, because some of its notions have just been misinterpreted. We are making progress by building simpler systems, not more complex ones. My dog does not fear werewolves because he does not try to get simple answers.
Fred: There is no field of engineering where people look more thoroughly at others' work. A very dangerous thing.

Keynote: ELEPHANT 2000 for the year 2005: A Programming Language Based on Speech Acts, John McCarthy (AI, Lisp, ...)
Natural language offers semantic features that are absent in present programming languages. Example: a passenger has made a reservation. Nothing is said about databases here. The name comes from the slogan: an elephant never forgets. A reservation is made to a virtual object. Referring to the future ("baggage handlers ready one hour before the flight arrives"). References to the past are handled by suitable data structures; the compiler has to invent them. References to the future are more difficult (not always possible). E.g., predicting the actual arrival is difficult. AI may be required.
Speech acts: initiated by "ordinary language philosophers" who didn't like logic. They study sentences without truth values. "I now pronounce you man and wife" does not imply an assertion. "I sentence you to be hanged by the neck until dead". Speech acts include offers, acceptances, statements, questions, commands, ... One can also state, describe, assert, warn, ... Personal intent is to be considered (why does someone do/say something).
Programs that buy and sell goods and services make commitments and receive them.
They undertake financial obligations on behalf of their owners.
Correct performance includes fulfilling obligations and insisting that they be fulfilled. The programs are operating in society.
At some execution point a program may execute a statement asserting the intention that variable x will remain less than y.
Is the execution correct so far? Will this process terminate? Internal speech acts give a form of intrinsic correctness. For example, correct programs carry out their intentions.
Two kinds of external specification: Input-output specifications depend on program and language. Verification can be done using these.
Accomplishment specifications depend on facts about the world. "Customers will be satisfied with a service".
Elephant programs have both.
Arguments against verification arose because people have confused these two kinds.
Speech act philosophers distinguish illocutionary and perlocutionary speech acts. "He told her" vs. "he convinced her". To some extent, this corresponds to input-output and accomplishment specs.
Look at paper!
http://www-formal.stanford.edu/jmc/elephant.html

Then he explained the history of Lisp: how they came up with AI (Minsky and him), how the lambda calculus was partially used, and how garbage collection was invented.

Keynote: David Parnas - Precise Software Documentation - Making OO work
What is OO - a buzzword for our times. My meaning: "design software by data-holding objects, making the job easier"; no special language necessary; languages are supposed to make it easier, ...
separation of concerns
information hiding modules (invented by Parnas)
- stresses work assignments, flexibility, substitutability
abstract data types
...
No conflict!
It is a recipe for disaster

Same examples used over and over again
Reacted to the "hiding" alone
you have to tell them something - accurately, precisely
read the 2nd edition of The Mythical Man Month
There was an earlier article on the interface documentation
For 30 years, I have known that documentation was the other side of the coin.
Levels of abstraction (relation never clearly defined, stress on subsetability - many different ideas)

35 years later
I never doubted the truth of the information hiding theorem. Information hiding is not an empirical result; it is a theoretical result. Think of subsystems A and B with a fixed interface.

Requires empirical verification.
- that people can know what to hide
- that the interface can be efficient
- that we can describe interface without giving away secrets

Old Problem: Documentation
Apeldoorn, 1969. Philips. They were unable to write specs for software components that were complete and precise. Parnas found it very difficult. Components had to know a lot of information about each other, such as data structures.
Still a problem. Think of Microsoft. Inability to provide good professional reference documentation is costing us millions maintaining complex products.
Role of documents in engineering
record key design decisions.
binding on everyone and fully controlled.
precise documents that use mathematics.
are not introductions or tutorials.
are not extracted documents (javadoc)
show "separation of concerns"

40 years software crisis
obviously not a crisis
underlying causes

- lack of discipline, careful choice of interfaces and structural decisions
- lack of review

Decisions that are not fully documented are not taken.

What is documentation?
Practical tool (not just theoretical achievement)
repository of info
convenient reference (organized like a dictionary not like an introduction manual)
easier to use for information than code
structured to avoid inconsistency
quicker and more authoritative than trial executions
useful before, during and after the coding

Computer Science  and Documentation
Managers and engineers bemoaned inability to document
not just fragments of software
programming in another language
regarding documentation just as an essay or commentary
if you take the term "engineer" seriously you must ignore those views.

Verification and Validation
documents state what we want to verify
what we get to work with
used to support testing
Supports "divide and conquer" inspection and verification.

Properties of documents
-accuracy
-precision
-consistency
-completeness
-ease of reference
First three can be reached through mathematics, the last ones by better notation and organization. All require content definitions for each document.

Rules
Don't mix intro and reference parts
never rely on words
mathematics only way to be precise but
- expressions must be simple and easily parsed
- interpretation should be direct
Only relevant information
Everything on one place

Clearly Defined Roles
-description: facts about products.
-specification: only required properties
-full specification: states all requirements
Same notation may be used for all three. It is a matter of intent, not notation. No such thing as a specification language.

Documents just formal methods?
Goal is to organize information for easy retrieval.
no mathematical model needed. Proof is not the main goal.

Content Definitions - when and how?
Often organizations specify formats but not the content
we need content definition to
- know what to put where
- check for completeness
Each document describes some mathematical relations. Each document has a different range and domain.

Main Documents
- system and software requirements, system and software design, ...

Modules and Components
distinct but related concepts
module: tasks
same documentation method for modules, components, objects
- event descriptors  (description of pre/post values of visible variables)
- time as a variable if needed
- trace: sequence of event descriptors
- history: trace describing what actually happened
- document relations

Module=private data structure + set of externally invokable programs
design, usually in designer's head, should be written down

How to describe relations in a readable way?
Two ideas:
- Tabular expressions
- Relational Model
We need both ideas!
Triggered by practical experience.
[Mentions lot of different example projects]

Pre- and postconditions do not scale up.
Developed process for inspection.
- preparing specification what code should do
- decompose program into small parts  appropriate for the display approach
- produce the specifications and descriptions required for the "display approach"
- compare both specifications

Key is divide and conquer and precise documentation.

[PARTY TIME ON WEDNESDAY EVENING]

Conference Event: Cabaret theatre. Different food offered. A rock band, painters, an event where the strangest web page was selected.

[THE LAST DAY]

Context, Perspective and Programs [copied from conference program]
Gregor Kiczales (University of British Columbia)
Context plays a large role in our perspective on the world around us -- people see things differently depending on background, role, task at hand and many other variables. How do different contexts affect developer perspectives on software?

What different ways do developers want to see a program? What different ways do they want to work with a program? How does a program mean different things to different people? How does context influence perspective? How do different contexts and perspectives interact? Can these interactions be reified, controlled and parametrized? A broad range of work has explored these questions, but many issues remain open. We lack a general understanding of the concepts and mechanisms that can support the changes in perspective we need. We lack the ability to handle context and perspective systematically, easily and reliably throughout software development. Work is needed in a number of areas, from conceptual foundations to theory, languages, tools and methods. A truly satisfying handle on these issues may even require a material expansion of the foundations of computation -- or at the very least the foundations of programming languages.

Invited Talk: Intimate Information for Everyone's Everyday Tasks (Mark Bernstein, Eastgate Systems, guru on hypertext).

Remark: this was a very esoteric talk. To understand it, look at Tinderbox (find the hyperlink below).

He speaks about conceptual integrity and different views, about different computers and software. The better specified a project, the closer it is to slave's work. There is more to life than regular structures. Lots of tools exist for dealing with regular structures, but what to do when structure is only almost regular? Introduces the cathedral/bazaar metaphor. This is where NeoVictorian computing fits in. The ultimate aim of all creative activity is a program! (After Gropius: "Das Endziel aller bildnerischen Tätigkeit ist der Bau!" - the ultimate aim of all creative activity is the building!) Computing is one of the seminal artistic pursuits of our time. He compares the development of buildings with that of programming, which went from unique (monolithic) structures to components and re-use. Philosophically, it is all about romanticism and realism. Writing hypertext is like writing, and writing is hard. Refers to teens' diaries: things we want to remember or think about. The other universe: filling out forms. A lot of trouble figuring out how to fill them out. We too often think writing is about forms. Knowledge representation for everyone's intimate information; people who are writing for themselves. Constructive hypertext (interlinked information which we do not (fully) understand). At the beginning of your PhD you don't know the structure. Inheritance for people who don't program: look at programs such as Tinderbox. It just saves typing. Prototype inheritance, no class objects. It is fast enough. Composition and inheritance. Write now, we can formalize later. Premature commitment is an important issue here: don't commit too early, otherwise you get lost. Example: organizing files and folders on one's computer. A good idea, but people don't like it and don't do it. Hypertext can help here. Lively data: persistent search. Different notions of inheritance: from class, container, mixins. We might need all of these. Hypertext is a new birth of freedom. It has become ubiquitous. It is invisible and seems obvious. We need to think about referential integrity. Need to address context, an implicit narrative approach. Hyperlinks help readers tell where to go.

Next OOPSLA conference: October 19-23, 2008, in Nashville

Tuesday, October 16, 2007

S-E-P

You may have heard about Douglas Adams' SEP principle from the famous Hitchhiker's Guide to the Galaxy. SEP stands for Somebody Else's Problem.

The SEP principle is ubiquitous in software engineering. You have definitely heard about it or even experienced it personally in some projects.

One typical example I often hear relates to software architects complaining that their management agreed on the requirements of a concrete software project without asking them for feedback or help. The management then throws the project specification over the fence to the software engineers, which is when the whole mess begins. The engineers involved are doomed to fail due to incomplete, inconsistent, or even missing requirements. Yes, indeed, all these guys from management are morons like those illustrated in the Dilbert cartoon series.

But wait a minute! Is it the job of a software architect to take a specification of requirements without any review or feedback? No, that is not the job description of an architect, at least not from my point of view. As soon as you are asked to design a new system, it is a must to consider the business case, the requirements, and the risks, and then to decide whether this could possibly end up as a success story or rather turn into a dead man walking. If you find any problems in a project context, it is your duty to inform the other stakeholders about your findings and ask them to solve those issues before starting the project. If they don't accept any changes and your gut feeling tells you the project is doomed to fail, the last resort is refusing to participate in the project. Sounds rather harsh, but consider the other option: if you take responsibility as an architect in any project, then you are signing a kind of virtual contract in which you fully agree with the whole project specification, including the project organization, the process, the requirements, etc. This means you cannot simply point fingers at your management afterwards when the specification turns out to be questionable. You have become part of the system, and thus it is your fault too. And it is definitely not a SEP.

What this means is that software architects are not just puppets on a string controlled by some evil guys in HR or senior management. They have responsibility and should feel empowered by management, because if they don't, the architect's role in their organization is either undefined or not well defined. Draw your own conclusions in this case.

So from now on: whenever you are inclined to believe that a problem you face in a concrete project setting could be a SEP, think about it again.

Sunday, October 14, 2007

OOPSLA 2007

I am looking forward to attending this year's OOPSLA in Montreal. It is not only the excellent conference but also the great people. And of course I am able to meet a lot of good friends, some of whom I only meet at conferences even though they don't live far from me. Maybe this is a kind of conference law.

My two tutorials on Architecture Quality and Architecture Refactoring got a remarkable number of attendees. I am used to speaking in front of large audiences, but I never stop being a little bit nervous. Let's see what feedback I get.

But of course, I am also very happy to see the great work of a lot of other people. I am sure we will see some fascinating stuff on software architecture and some other cool topics :-) Friends like Doug Schmidt will also give tutorials, which you should consider a must if you are interested in this architecture stuff. A member of my team, Jörg Bartholdt, will present on WS-* standards. As he is a superb expert in this field, I am sure he will do a very good job.

If you are attending OOPSLA and reading this blog, please don't hesitate to get in touch with me. I am really happy to meet other people (especially you) interested in the never-ending story of architecture and engineering. I am including a small picture so that you may recognize me (or so that you are able to escape if you don't like what I am writing here; but I accept any feedback :-))

[Photo: micha_foto]

Sunday, October 07, 2007

The Great SOA Swindle (Beware of hidden humor!)

You might have noticed it, or at least you got a little bit suspicious. Something strange is currently going on in the IT universe. People whom you highly respected a few years ago have been caught by a serious disease. Now they behave and think in unprecedented ways. You will hear them constantly producing sentences that sound like "yada yada yada SOA yada Governance yada BPM yada yada EAI yada messaging". My code name for the disease is WSA (Web Service Addiction), sometimes also coined SOA (Serious Opinion Anomaly). Their personal background does not matter either. I know C++ nerds as well as enterprise Java developers who were both caught by the SOA trap. Even mice (MIcrosoft Centric Engineers) got brainwashed. How do you recognize a person that carries the SOA virus? It all starts with loose coupling. Obviously, these poor morons lose the coupling to their own brains. You will also detect a serious speech disorder: if you listen carefully, they will constantly repeat words that sound like "WSDL", "WS" or "XML". Any relationship with the dark lord in the Harry Potter series might be completely coincidental. There is a kind of Turing test to prove whether another person is suffering from the SOA disease. Just tell this person about a particular software engineering problem you are currently facing in your project. If your peer immediately (without further consideration or detailed questions) tries to convince you that SOA is the appropriate solution, you're done.

Don't consider all this just a SOAP-opera. And don't wait until you REST in peace. Act now! This dangerous virus is spread and distributed on purpose. A secret organization called CIA (Computer Integration Architects) is responsible for all the mess. Companies like IBM, Microsoft, SAP, Google, Oracle, <name your software vendor of choice> want to make us believe in a service paradise, while their actual goal is to increase shareholder value. I bet they once met secretly to discuss what new problem they could come up with in order to offer new products (sometimes also erroneously known as solutions). And that's how Web Services and SOA were born.

Interestingly, only a few Web service success stories actually exist. And among all the SOA success stories, most do not use Web services at all but some kind of messaging infrastructure. Now guess how old messaging middleware is. Yes, we have already known and used the SOA paradigm for decades.

In other words, we are currently facing a giant SOA swindle created by a secret network of various contributors such as companies, media, and fellow engineers. Their actual goal is to achieve world domination. But that is another story, and I will address this issue when talking about the truth of Web 2.0.

Saturday, October 06, 2007

Dealing with Human Affairs and Political Correctness

Software architects often constitute a sort of communication hub, because they sit right in the center of software development projects. I don't want to claim they represent the most important role. What I mean is that software architects have to deal with all the other stakeholders, be it customers, requirements engineers, testers, developers, or other persons involved. As a lot of communication passes through software architects, they need to take responsibility for the strategic and tactical design. This empowerment also bears some risks. For example, architects may be "leveraged" as proxies in some (political) battles. Stakeholders may try to gain influence over architects in order to achieve their own goals, no matter whether these goals are expressed in an open manner or are part of a hidden agenda.

Let me give you some concrete examples: what if a product manager asks you to use a particular technology from vendor X, even though the requirements of your project demand a completely different solution? Or suppose engineers from different locations prefer to follow their own local interests instead of contributing to the overall project objectives.

You may be surprised how often software architects have to deal with such issues. And, of course, this also holds for the other roles - architects are by no means better beings than developers, testers, etc. I claim that political issues pop up in every development project involving more than one person. But what can we do about it?

In my experience, one possibility is to quit your job in case the political quarrels eat up a lot of time and resources and there is no chance, no matter how hard you try, to stop them. I bet that many of the 70% of software projects that allegedly fail do so to a large extent because of such issues.

The other possibility is to establish an open team culture with everyone involved abiding by the rules of openness. Some of these rules could be: (1) no hidden agendas, (2) project goals over all other goals, (3) the project manager is empowered to decide all project-relevant aspects, (4) all issues are addressed in an open manner with all involved persons, (5) every project member must be treated with respect.

Obviously, we could fill a whole book with such recommendations. Interestingly, it does not matter what kind of engineering process a project follows. Even in a purely agile setting, such issues appear.

Maybe we too often ignore that engineers and other stakeholders are human beings with various experiences, competencies, social skills, personal intents, and opinions. Thus, there is a large chance of conflicts in a project team. For example, in a recent project some people were appointed architects and others key developers. A couple of key developers complained about not being appointed architects because they felt like second-class citizens. It did not help to explain that architect and key developer are just different roles, neither of them more valuable than the other.

We hear a lot about technologies, concepts, and methods. In a project, and in every professional career, these aspects are often less significant than social and other non-technical skills. Architects should always keep this in mind.

Wednesday, September 05, 2007

The Big Crunch of Software Engineering

In the eighties I bought a small programming system called Borland Turbo Pascal for around 100 bucks. The whole language system documentation comprised a tutorial as well as a reference manual with more than 100 pages. I managed to read through the whole documentation in a few days, and from that day on I could call myself a Turbo Pascal expert.


At the beginning of the nineties I got familiar with a new middleware standard coined CORBA. The first implementation, from a small Irish company called IONA Technologies, was extremely easy to use.


In the mid-nineties a programming language called Java conquered the world. As it looked amazing, I learned Java in a few days by reading through the seminal book "Java in a Nutshell" written by the well-known David Flanagan.


What do all of these stories have in common? Well, you will find their commonality if you consider what happened after these technologies were introduced. Borland extended Turbo Pascal until it became what we now know as Delphi. CORBA version 3.0 turned into a giant middleware platform no single expert is able to understand. And I can't believe that anyone will ever be capable of knowing Java the language and all the different platforms such as Java EE in full detail. Obviously, this observation is not only applicable to software products. When did you last buy a gadget (mobile phone, MP3 player, TV set, ...) and know all of its features? When did you last read the whole instruction manual or user guide? And how many of these features do you actually use?


The main challenge for software engineering is death by complexity. Even if you were an expert in several technologies such as Java and CORBA and Eclipse and AspectJ, you wouldn't be able to foresee all the implications of arbitrary combinations of these technologies. "Yes, you definitely need to use the new version of Hibernate as well as EJB 3.0, WSDL 2.0, Java 1.6, and Spring 2.0, which, of course, must support the new Windows Vista SP1. Not to mention all the new tools such as Eclipse 3.3, Maven 2, and Ruby 1.9. I forgot to mention that our documentation will use the new SharePoint Server 2007 as well as Office 2007 applications. And did I already tell you about the new Microsoft Silverlight 1.1 we need to support?" Today, we are expected to be masters of many technologies, each of them extremely powerful. As such mastery is impossible except for a handful of geniuses among us, we often become jacks of all trades but masters of none. Obviously, in a perfect world we would be experts in all technologies we are using, but in the real world we simply can't reach this level of wisdom. And often technologies change, or new technologies appear, within the product's lifetime. In concrete projects we don't have sufficient time to check and understand all these new technologies before applying them.


The problem is that almost everything in the world of software engineering has become a fast-moving target. We are surrounded by these moving targets in our daily work. Some might argue that technologies have reached new levels of power in recent years. We have definitely achieved a high level of abstraction using OOP concepts, SOA, Model-Driven Software Development, patterns, new runtime platforms such as Java and .NET, you name it. Today, we can implement software systems that we were not able to conceive of a few years ago. Unfortunately, the complexity of the domains has increased even more. In other words, all these cool and smart technologies cannot cope with the rising complexity introduced by the increasing requirements of real projects.


I think that most technologies are simply over-engineered, offering functionality no one really needs. Consider office software systems as an example. I also believe that product development cycles are much too fast to achieve the required level of quality. Quality requires gaining in-depth knowledge of all the technologies involved, which is not very realistic in current projects. Another experience is that many projects neglect software architecture efforts. Why else must I constantly explain to engineers and managers that architectural efforts are not a waste of time and money? The software architecture base must be stable. It needs to be developed as well as evolved in a systematic manner. Otherwise, the whole system will collapse or implode. Many systems start with a nice architecture, which then begins to suffer from design erosion after engineers have added more and more workarounds under increasing time pressure. This problem reaches new dimensions when facing product line engineering and platforms. Now platforms are developed that do not serve a single one-off application but a whole sequence of products and solutions. All problems lurking in the guts of such platforms will inevitably have an impact on all applications relying on them.


Maybe automating most engineering tasks can be one solution to cope with this challenge. Since software engineering is mostly about introducing layers of indirection, adding new abstraction levels that hide more and more details from the programmer might be another approach. A third idea might be to stop for a while, reflect on what we currently consider the appropriate approaches in software engineering, and then maybe come up with a different way of thinking. Otherwise, our whole discipline will suffer in the future.


Thursday, August 23, 2007

Gravestone Inscriptions

Imagine a kind of e-cemetery for software projects that failed. What would the gravestone inscriptions look like?


"In affectionate remembrance of projext X which departed this life January 21st 2007 aged 2 years."


Or maybe:


"In loving memory of Windows 95, descendant  of Windows 3.11. Died 1st January 2003 aged 8 years."


We would probably congregate to bid farewell to our failed project, with the quality manager in charge of delivering the funeral eulogy, reminding us of all the nice aspects of the project but concealing the more negative issues - learning from failure doesn't seem appropriate in the context of a funeral.


And then, in quiet remembrance of the software system, we would think of the last words of the project manager: something like "Why the hell didn't CMMI save our project?" or, maybe, something close to Ethan Allen's last words (1789): "Waiting are they?  Waiting are they?  Well--let 'em wait."


Hopefully, no one will demand hanging the architects and engineers.


Tuesday, August 21, 2007

From Dusk Till Dawn

Suppose that in the future faults disappear from software systems. In other words, every software system will be completely fault-free. Hmm, you might say, this sounds like a beautiful dream. But how could we ever make such a dream come true?

If we specify the architecture of a software system in a high-level language such as UML, a domain-specific language (DSL), or a mixture of these, then the first assumption could be:

Instead of manually implementing code, apply a verified code generator that automatically creates an error-free implementation. Of course, there are several hidden preconditions in this context. The generator itself must be verified, which is certainly a very tough job but needs to be done only once. And after all, our implementation depends on the correctness of its runtime environment, i.e., all libraries and 3rd-party components being used, the operating system, the application server, even the underlying hardware, and so forth. So far, we haven't discussed the implementations that model-driven generators create from models. How could we improve their quality? Patterns and pattern languages are the answer, as they represent reusable and well-proven architectural solutions for recurring problems.
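
To make this a bit more tangible, here is a minimal sketch of what such a model-to-code transformation could look like. Everything in it (the TinyGenerator name, the trivial model format) is made up for illustration - this is not a verified generator, it merely shows the principle of deriving an implementation mechanically from a model:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a model-to-code generator. The "model" is just a class
// name plus typed fields; the generator emits Java source for a value class.
// A real verified generator would additionally come with a proof that this
// transformation preserves the model's semantics.
public class TinyGenerator {

    public static String generate(String className, Map<String, String> fields) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(className).append(" {\n");
        // one private field per model entry, in model order
        for (Map.Entry<String, String> f : fields.entrySet()) {
            src.append("    private ").append(f.getValue())
               .append(" ").append(f.getKey()).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        Map<String, String> model = new LinkedHashMap<>();
        model.put("amount", "double");
        model.put("currency", "String");
        System.out.println(generate("Payment", model));
    }
}
```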

The next problem consists of guaranteeing the correctness of the models the developers specify and pass to the generator. Here, correctness means two things: correctness of syntax and of semantics. While the former can easily be ensured using tools (e.g., editors), the latter is much harder, because it implies the correct consideration of ALL requirements - functional, developmental, and operational ones - as well as their composition. Even worse, for meeting operational requirements we need to take into account the whole infrastructure and environment in which our software system is supposed to run. That implies that all 3rd-party vendors need to provide this kind of information in some structured and predefined way.

One solution could be to model our software system from different perspectives, for example from a functional (domain-specific) viewpoint as well as from various operational viewpoints (e.g., specifying a performance model). Such an approach requires the availability of different cross-cutting DSLs - one for each perspective - as well as tools that merge all those DSLs into one implementation. Today, we would leverage Aspect-Oriented Software Development for exactly that purpose. Needless to say, we now have to prove the correctness of all DSLs and tools and infrastructures and 3rd-party components and composition techniques, and so on.
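
To give a flavor of such weaving, here is a small, hypothetical Java sketch (my own illustration, not an AOSD tool): a dynamic proxy acts as a poor man's aspect weaver, injecting an operational concern - response-time measurement - around a functional interface without touching the functional code itself. The PaymentService name is made up.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical functional interface from the domain model.
interface PaymentService {
    void pay(double amount);
}

public class WeavingDemo {
    // Wraps any implementation of the given interface so that each call
    // is timed - the operational concern is woven in at runtime.
    @SuppressWarnings("unchecked")
    static <T> T withTiming(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args);
            } finally {
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println(method.getName() + " took " + micros + " us");
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        PaymentService plain = amount -> System.out.println("paying " + amount);
        PaymentService timed = withTiming(plain, PaymentService.class);
        timed.pay(42.0);  // functional behavior plus woven-in timing
    }
}
```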

How could we ever verify that a DSL consistently and completely describes a particular domain?

For widespread domains such as finance, automation, or healthcare we could introduce standardized languages that are as "reliable" as programming languages. However, it is rather unlikely that we're capable of covering all relevant domains this way. In addition, such DSLs often constitute important intellectual property that a company seldom likes to share with competitors. Hence, we would need to specify our own domain languages or language extensions. Proving their correctness, however, is far from simple, and language design requires considerable expertise. In any case, domain-driven design will be an important discipline. Of course, Software Product Line Engineering is another breakthrough technology in this context, as it enables us to support a family of similar applications.

OK, let us take for granted that we have mastered the aforementioned preconditions. What about requirements engineering? Well, many stakeholders might not be very familiar with software engineering. How can they manage to specify their requirements in a correct way? Contradictions or missing precedences among requirements might be detected by tools such as generators, provided that all requirements can be expressed in some formal way.

But what about expressing stakeholders' intent? This could be formalized if stakeholders are able to understand some kind of mathematical model. If that's not the case, the only solution left is specifying requirements on behalf of stakeholders. This would need to happen in close cooperation between architects and stakeholders. Unfortunately, such an approach only works if stakeholders know what they want and if it is possible to build what they want. Let me give you an example: how would you formally describe a word processor and all the features you expect? What about usability? What about cool UIs in this context? The problem (and the fun) with human communication is its lack of formalism. Thus, you always have to know a person and that person's context to understand his or her communication. Now take issues such as hidden agendas into account. Eventually, yet another problem arises: requirements are subject to constant evolution. We don't live on an island where changes seldom happen. Rather, we live in metropolises where infrastructures and environments permanently evolve. Let us be realistic: only for very small and stable domains that significantly constrain the possible applications can we introduce formal methods that will ensure correctness. For all other domains, some kind of guessing will often be necessary, i.e., assumptions that hopefully represent stakeholder intent.

There is another issue left: of course, the process we apply to achieve results must support our goals.

There must be a prescribed approach covering the whole application lifecycle. This approach must take into account how to enforce fault-free software. For instance, each refactoring must preserve correctness. Quality assurance measures are required on all levels, from domain language design and architecture up to 3rd-party components. Testing for correctness is essential. For this purpose, tests must cover all aspects, the tests themselves must be correct, and, of course, they need to be automated. Note that tests remain important even if we could verify the correctness of our tool chain, such as the generator: as soon as the environment changes, we need to test!
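
Just to illustrate the testing point: even for a hypothetical generator like the TinyGenerator sketched above, an automated golden-master test can pin the generated output, so that any change in the tool chain or its environment shows up immediately. A minimal JUnit 4 sketch, reusing the made-up TinyGenerator:

```java
import static org.junit.Assert.assertEquals;

import java.util.LinkedHashMap;
import java.util.Map;

import org.junit.Test;

// Golden-master regression test: the expected output for a known model is
// pinned verbatim, so any change in the generator or its environment that
// alters the generated code breaks the build.
public class TinyGeneratorTest {

    @Test
    public void paymentClassIsGeneratedAsExpected() {
        Map<String, String> model = new LinkedHashMap<>();
        model.put("amount", "double");

        String expected =
                "public class Payment {\n"
              + "    private double amount;\n"
              + "}\n";

        assertEquals(expected, TinyGenerator.generate("Payment", model));
    }
}
```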

What conclusions can we draw from these discussions?

  • For safety-critical environments, a formalization of software development where each formal step is verified can be (and already is) an appropriate way.
  • Even if correctness were achievable, it would require large efforts in terms of time and money. For some application development projects these efforts might not be feasible.
  • We might never achieve fault-free implementations in software engineering. I am sure this will be the case, as the same observation also holds for other engineering disciplines such as building construction or car manufacturing. But we will be able to introduce more structure into software development, e.g., by adding formalisms and verification mechanisms. I mentioned examples such as DSLs.
  • We will never be able to express stakeholder intent consistently and completely. In addition, stakeholder intent will change over time. Thus, we will always be addressing a (slowly or quickly) moving target.

The world is not perfect and never will be. Let us keep this in mind when designing software systems.


[Meta] I am now using QUMANA

This is another meta posting, which means I will discuss blogging rather than software architecture :-) I have switched to a tool called QUMANA for editing my postings. Previously, I wrote my postings directly on my Blogger site, which turned out to be not very comfortable, to say the least. I found QUMANA just by doing a Google search. If anyone has additional recommendations for such tools, just go ahead and add a comment here.


Friday, August 17, 2007

Star Trek 2.0

I really love Star Trek. In my childhood I enjoyed the original series with Spock, Captain Kirk, and Scotty. And of course, I kept watching all episodes of later series such as Star Trek - The Next Generation. To be honest, it was SciFi such as Star Trek that motivated me to become a computer expert - although I also considered physics as a possible alternative. I can even remember that I once developed my own Star Trek game on my Tandy Radio Shack TRS-80 computer. The computer offered a Z80 CPU and 16 KB of RAM, with a built-in ROM-based BASIC interpreter. When I bought it in the only computer shop in Munich (back in the early eighties!), a staff member seriously asked me why I needed such an incredibly large amount of memory. After several days I had designed my own Star Trek game and typed in all 16 KB of code, reaching the limits of my TRS-80. This had been a pleasant but also very exhausting activity. For some recreation, I decided to go play soccer with some of my friends. When I returned home, I went into my room and had a shocking experience - the computer had lost power. It turned out that my mother had assumed I had forgotten to turn the computer off and had pulled the power plug. All of the code had vanished into the vast darkness of cyberspace. The only thing left was the design documentation. Yet another hint as to why you should value your architecture and, of course, why backups are so important.

Star Trek has always been so inspiring in terms of science and engineering that whole books have been written about the Star Trek universe. Think of beaming, warp speed, or holodecks! Famous scientists are still discussing what could be possible and what won't be possible in the future. Suppose you're a software engineer in the 22nd century, responsible for designing software for spaceships and star bases. What will the engineering process look like? Which technologies and tools will be available? Unfortunately, SciFi movies do not offer any clues. They never show the neon babies (also known as software engineers), only their products. For instance, a holodeck requires some sophisticated software systems. Obviously, most SciFi authors consider the life of a software engineer rather boring. Do you have any idea why this is the case?

Nothing is more difficult to forecast than the future. Unfortunately, my crystal ball is currently not usable due to a graphics card failure. Maybe we should therefore try the opposite approach instead. What if 22nd-century software engineers had to use all our current technologies? Suppose Captain Kirk needs to communicate with the earth-based command center. Will he use instant messaging or maybe a Web X.0 user interface for this purpose, encountering some 404s from time to time? Will everything abide by service-oriented principles? Maybe the reason we call it Enterprise Service Bus is related to the fact that within the starship Enterprise all communication will take place via an ESB. Another question arising here is whether companies such as Microsoft or IBM will still be around in the twenty-second century, eventually providing specialized versions of Linux or Windows to starship construction plants. If Mr. Spock needs to look up some scientific information in the databases, will he leverage Google Search and Oracle for that purpose? Will Scotty have to use a future version of Word or Excel for writing his logs? Will the weapon control system be provided by Electronic Arts? Will Mr. Sulu use Google Maps and Google Galaxy to navigate through space and time? What happens if an alien lifeform intends to take over control of a starship? Will they send a virus or a trojan or maybe a rootkit that can adapt itself to whatever system architecture is available? Maybe they'll try phishing attacks, intercepting HTTP packets that travel through the galaxy. Of course, they will leverage future versions of Ruby, Java, or C# for that purpose.

I guess we all agree that these scenarios do not appear very likely. Of course, the future will provide much more advanced technologies, and technologies that we can only dream of today [if mankind survives]. Just remember the state of science and technology at the beginning of the twentieth century and compare it with our current accomplishments. Nonetheless, I wholeheartedly believe that at least some software architecture principles will remain the same in the future. For example, as we all know, patterns represent proven solutions to recurring problems. Thus, they capture experience and knowledge which can then be reused by different software engineers. I bet that future engineers such as Scotty or Mr. Spock will still leverage patterns instead of constantly reinventing the wheel. Domain-Driven Design and Model-Driven Software Development are going to become substantial parts of software development projects. Instead of handcrafting each bit in a time-consuming way, engineers will specify their software in a domain-specific language from which advanced generators will produce sophisticated programs. And I am also pretty sure that the Enterprise crew won't apply a waterfall approach when they need to come up with a new solution. The future is agile! In one of my previous postings I could already prove that the success of the Borg is primarily related to their agile processes. Future software engineers will still have to face the challenges of operational and developmental requirements. Maybe a smart software engineer will come up with some sophisticated concepts for injecting such properties into a concrete software architecture.

In summary, software architecture will still be one of the essential topics in the 22nd century. Therefore, we can finally draw the conclusion that being a software architect will remain an important profession. It means to boldly go where no one has gone before. It also means, according to Douglas Adams' "Hitchhiker's Guide to the Galaxy", that software architects, unlike other species such as hairdressers and telephone sanitizers, will not be on board the "rescue spaceship". But that is a completely different story.

Enjoy your life as a software engineer, because you are and always will be a Very Important Person. All other colleagues should keep in mind: don't forget to praise and worship your software architects. You depend on these strange but smart lifeforms!

Thursday, August 16, 2007

Now for Something Completely Different - SIEMENS

Many of you may know that I have been working for Siemens for almost 16 years now. I was hired back in 1991 by the Siemens R&D unit in Munich, where I started my career as a software engineer and researcher. Some of you may also be aware that there has been a high frequency of news about my company in recent months. Unfortunately, most of the news was rather on the negative side. And even worse, as Douglas Adams once said, there is only one thing that can travel faster than light: bad news! My friends and relatives keep asking what is going on at Siemens.

To be honest, I am really unhappy with this kind of question. Thus, it is time to shed some light on the many achievements Siemens has produced over the 160 years since it was founded in 1847 by the inventors Werner von Siemens and Johann Georg Halske. People seem to ignore all these great achievements, which by far outnumber all the negative reports you may have heard recently. Some of the early innovations made by Werner von Siemens himself include a pointer telegraph, the trolley bus, and a dynamic transducer. But this is only the tip of the iceberg in terms of Siemens innovations. Siemens has always been a great place for research and engineering from day one.

Werner von Siemens happened to be not only a great inventor but also a very kind person. He was among the first entrepreneurs to introduce a system of social insurances for his employees.

Siemens has grown constantly; today it has about 480,000 employees working in almost all parts of the globe, creating a wide range of high-quality products as diverse as trains, plants, automation systems, medical systems, communication systems, automotive components, and lighting, as well as IT solutions. Did you know that scientists like Gustav Hertz or Walter Schottky were employed by Siemens?

What makes Siemens an excellent place for software engineers is the mixture of advanced and innovative products that require more and more IT-centric constituents, and a very creative setting biased towards innovation, which is why the company considers itself a global network of innovation. As the Siemens groups require a whole bunch of different software technologies and tools, I consider it a great opportunity to be part of the Corporate Research and Technology division of Siemens. Here, I am able to cooperate with colleagues from all Siemens-relevant application domains. And, as you might guess, software architecture is one of the fundamental disciplines in such a context. All the books I co-authored and the research results I achieved on architectural topics wouldn't have been possible without such a creative and supportive environment, where I am able to closely connect with people such as Frank Buschmann and to cooperate with smart experts such as Doug Schmidt, Markus Völter, and the like.

Why did I write this rather emotional posting? Because I consider it very unfair that in recent months public attention has continuously focused on only a few ill-behaving persons within the company, ignoring all the aforementioned achievements. Siemens has already started to rigorously investigate this behavior and to establish a system of very strict control mechanisms. On the one hand, it is important to reveal any wrongdoing within a company. On the other hand, it is wrong to judge a book by its cover (as Frank-N-Furter remarks in the Rocky Horror Picture Show). Of course, I am biased, being a Siemens employee myself, but that doesn't change my personal attitude and opinion.

I am very proud to be part of the Siemens family where I keep working on great products and software architectures.

For more information, visit the Siemens website.

Saturday, August 11, 2007

Upcoming Events

I have been on vacation for several weeks now. Nonetheless, I haven't been lazy :-)
One week ago I participated in a panel discussion at this year's ECOOP conference in Berlin. The panelists, among them celebrities such as Martin Odersky, Judith Bishop (moderator), Tiziana Margaria, and Gilad Bracha, discussed "OOPS in the next five years - the hot topics". All of the panelists argued from different perspectives, introducing topics such as dynamic languages or service orientation. My key points were driven more by my industry background. I don't believe in silver bullets. Thus, I guess OOP will experience some significant improvements, integrating other paradigms such as functional programming and providing advanced features such as traits. However, I anticipate specialized OOP solutions for specific domains. I also assume we will have to live with other paradigms, due to the different kinds of impedance mismatches we are experiencing in everyday projects. And I am sure that someday a new paradigm will evolve that integrates and extends OOP.
During my vacation I also had to prepare my two OOPSLA 2007 tutorials. The one on high-quality software architecture covers process principles, architectural principles, and quality properties relevant for achieving high quality in software architecture design. The preparation of my second tutorial, on Software Architecture Refactoring, was far more challenging. I thought there would be a lot of material available on this topic. To my big surprise, there were only a few sources (an observation I already covered in a previous posting). Most sources refer to code refactoring as introduced in Martin Fowler's excellent book. Architecture refactoring deals with semantics-preserving transformations of the software architecture itself. In my tutorial I will come up with some general concepts, a catalog of architecture refactorings, as well as some words about how architecture refactoring relates to other disciplines. If you'd like further details, you might want to listen to the OOPSLA podcast (see episode 5), where Bernd Kolb from Software Engineering Radio interviewed me a few weeks ago.
In my OOP 2008 presentations I will also talk about Software Architecture Refactoring. Moreover, I will prepare a one-day tutorial on .NET technologies for building SOA applications.
Note that I won't be able to participate in this year's JAOO conference. I hadn't been invited as a track chair, but Ted Neward asked me whether I'd be able to give a talk on Enterprise .NET. However, there are so many private affairs and so much Siemens-internal work that I cannot make it this year.
Thus, I am looking forward to meeting some of you at OOPSLA.

Wednesday, July 18, 2007

Strategies and Tactics

Software architecture is about the components of a system and their relationships. It also comprises the guiding principles that lead to these components and relationships. Finally, a software architecture deals with multiple viewpoints on the same system, for instance structural and behavioral viewpoints. Hence, every software system reveals a software architecture. However, if you just enter the implementation phase, skipping analysis and design, you follow an unsystematic approach which inevitably leads to an ad-hoc software architecture. For all but trivial systems, a systematic approach is important to achieve software architecture quality. Quality in this context means meeting all requirements, the implicit and the explicit ones - of course, considering the viewpoints of all stakeholders. One of the most important disciplines in a project is mapping requirements to the software architecture using a systematic approach. This implies more than drawing bubbles in graphical tools. In the end, software architecture is only a means for implementation, no less and no more. This reminds me of the famous Bertrand Meyer, who once said things like "Bubbles don't crash" and "All you need is code".

All these considerations lead directly to what we call strategies and tactics. A strategy represents a guiding principle that helps achieve a particular goal in the software system as a whole - for example, how your application should address issues such as availability or fault handling. To guarantee conceptual integrity and orthogonality, strategies shouldn't overlap. Let me introduce an example of such an overlap. Assume the architects in a project come up with the following strategies: (A) whenever a security exception appears, a 32-bit error value must be returned; (B) for reporting and handling all kinds of errors and exceptions, structured exception handling shall be used. Both are strategies, as they deal with guiding principles for the whole application. But now suppose you are in the role of a developer: how should you deal with a security exception? Because the two strategies overlap, it is very likely that some developers will return error values while others will apply exception handling. Why is this bad? Overlaps always have a negative impact on the understandability and readability of the software architecture.

What are tactics? In contrast to strategies, tactics are more fine-grained and closer to implementation. A tactic helps solve local problems instead of addressing the software system as a whole. Design patterns are very good examples of such tactics. Assume you have implemented a temperature sensor component which needs to notify other components when specific temperature thresholds are reached. This is exactly a problem context where the GoF Observer pattern comes into play (see the sketch at the end of this post). Needless to say, tactics shouldn't overlap either. In the ideal case, tactics also help implement strategies. By the way, architecture patterns such as Model-View-Controller are often applicable for defining strategies, while tactical patterns such as Observer help implement the architecture pattern.

I guess this gives you a better understanding of what strategies and tactics are and how they relate to each other. For the job of a software architect, this classification implies a two-step, top-down approach. First, define the baseline architecture using the strategies that are derived from requirements and are supposed to drive the whole software architecture.
In the next step, refine the architecture baseline using tactics. Yes, but in which order should I integrate strategies and tactics? As strategies (and tactics) must be derived from requirements, let risks and priorities drive the order of architecture design. A system where performance is much more important than platform independence will reveal a completely different architecture than a system where platform independence is king. Thus, in the first case you'll first apply strategies that deal with performance (caching, pooling, load balancing, concurrency strategies), while in the second case you will preferably apply strategies for platform independence (strict layering, wrapper facades, reactors). A good exercise could be to look at your current or last project and figure out all the strategies and tactics applied there. Did you systematically consider and integrate strategies and tactics? In every project, you should regularly take some time to reflect on the project to keep it on the right track!
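
As promised, here is a minimal Java sketch of the temperature sensor example. All names are made up for illustration; the point is merely how the Observer pattern works as a tactic - the sensor knows nothing about its observers beyond the notification interface:

```java
import java.util.ArrayList;
import java.util.List;

// Observer interface: any component interested in threshold violations.
interface TemperatureObserver {
    void thresholdReached(double temperature);
}

// Subject: the sensor notifies all registered observers whenever a
// reading meets or exceeds the configured threshold.
class TemperatureSensor {
    private final double threshold;
    private final List<TemperatureObserver> observers = new ArrayList<>();

    TemperatureSensor(double threshold) {
        this.threshold = threshold;
    }

    void register(TemperatureObserver observer) {
        observers.add(observer);
    }

    void newReading(double temperature) {
        if (temperature >= threshold) {
            for (TemperatureObserver observer : observers) {
                observer.thresholdReached(temperature);
            }
        }
    }
}

public class SensorDemo {
    public static void main(String[] args) {
        TemperatureSensor sensor = new TemperatureSensor(100.0);
        sensor.register(t -> System.out.println("Cooling unit switched on at " + t));
        sensor.newReading(98.5);   // below threshold: nothing happens
        sensor.newReading(101.2);  // threshold reached: observers are notified
    }
}
```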

Sunday, July 15, 2007

What the hell is an ESB?

Whenever I get involved in discussions about SOA these days, ESB is one of the terms I hear most often. ESB is another citizen of the SOA buzzword universe, which comprises other TLAs such as BAM, BPM, EAI, or B2B. One of the most entertaining and fun exercises is asking senior management and other PowerPoint engineers what exactly they mean by ESB. Guess how many definitions of ESB you'll discover when asking this question. Once business magazines such as the Wall Street Journal start talking about SOA, you should be aware of the following facts:
  • senior management may soon ask your IT department for a SOA strategy
  • it may be the right time to sell shares of SOA vendors
  • you'll find all those market analysts trying like crazy to convince you of the business impact of the aforementioned TLAs

Another fun part is talking to techno nerds about SOA, especially those who believe CORBA and the like will cure all the world's problems. Did you ever encounter this "I may not know your problem, but I know that CORBA is the right solution" habit? On the one hand, these techno nerds will point you to all the performance penalties caused by SOA, because they think SOA is only about WS-* standards. On the other hand, they will prove that CORBA is SOA, because it already provides the same solution concepts, such as separating interfaces from implementations and leveraging standardized protocols.

Nonetheless, SOA makes a lot of sense from a technology and from a business perspective. When integration of heterogeneous environments is a key issue in your product and solution development, SOA may be the appropriate choice. SOA is neither a ubiquitous solution for all networked systems, as business managers often think, nor is it as inefficient as techno nerds believe. The reality lies somewhere between these extremes. In fact, SOA is an architectural paradigm that may even be applicable to CORBA-based solutions (but it fits much better with message-oriented middleware, to be honest). Of course, SOA in its WS-* instantiation is a perfect alternative to EAI and BPM approaches.

When SOA started to evolve, it was considered a set of architectural concepts that must be implemented using a stack of low-level protocols (such as SOAP, WSDL, UDDI). This resembles how CORBA and DCOM started more than 15 years ago. For instance, the first CORBA products didn't even provide interoperability; they just offered the same standardized, uniform programming interface. To guarantee interoperability, the OMG introduced IIOP (the Internet Inter-ORB Protocol). Shortly after real-world projects started integrating CORBA ORBs, they required advanced services to deal with events, security, discovery, or transactions. Hence, the OMG initiated efforts to come up with the so-called COSS (Common Object Services Specification). The availability of COSS, however, did not solve all issues either, because engineers had to manually deal with CORBA ORBs as well as with a whole bunch of rather complex COSS services. In addition, not all vendors provided all the required services. Hence, the CCM (CORBA Component Model) was introduced, which decouples developers from lifecycle management and service issues.

In the SOA/WS-* universe this storyline reappears. Think of SOAP and WS-I (IIOP)! And think of WS-Coordination, WS-Attachments, WS-Security (COSS)! Finally, ask yourself which part of SOA plays the role of advanced technologies such as CCM. This is exactly where the ESB enters the stage. The ESB is one of the core assets of any SOA strategy. It provides a container that decouples implementation details such as communication protocols from developers. It also adds (or should add) lifecycle management, routing, transformation, security, management, and discovery services. Without an ESB, engineers face the challenge of dealing with a bunch of different (mostly low-level) technologies they need to integrate. Unfortunately, big players such as SAP or Microsoft do not currently provide integrated (!) ESB products. And those companies selling ESB solutions often provide "heavy-weight" containers with a high degree of vendor lock-in.
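
To illustrate the decoupling idea, here is a deliberately naive Java sketch. The ServiceBus interface and the in-memory implementation are made up for this posting; a real ESB would additionally provide routing, transformation, security, and so on behind the same kind of facade:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Callback for message consumers.
interface MessageHandler {
    void onMessage(String message);
}

// Hypothetical, minimal bus abstraction: services only see this interface;
// routing, transport protocols, and transformations stay inside the bus.
interface ServiceBus {
    void publish(String channel, String message);
    void subscribe(String channel, MessageHandler handler);
}

// Naive in-memory implementation standing in for a real ESB backend
// (a JMS broker, a WS-* stack, ...). Swapping the backend does not
// affect services coded against the ServiceBus interface.
class InMemoryBus implements ServiceBus {
    private final Map<String, List<MessageHandler>> subscribers = new HashMap<>();

    public void subscribe(String channel, MessageHandler handler) {
        subscribers.computeIfAbsent(channel, c -> new ArrayList<>()).add(handler);
    }

    public void publish(String channel, String message) {
        for (MessageHandler handler
                : subscribers.getOrDefault(channel, Collections.emptyList())) {
            handler.onMessage(message);
        }
    }
}

public class BusDemo {
    public static void main(String[] args) {
        ServiceBus bus = new InMemoryBus();
        bus.subscribe("orders", msg -> System.out.println("Billing saw: " + msg));
        bus.subscribe("orders", msg -> System.out.println("Shipping saw: " + msg));
        bus.publish("orders", "order #4711 placed");
    }
}
```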

What we actually need are lightweight ESB solutions that are based upon standards such as SCA or OSGi and interoperate with other platforms (such as WCF/WF/BizTalk). Such ESB solutions should cover the whole range of application domains, supporting enterprise applications as well as embedded applications.

We have started our journey, but it will take a long time to reach the destination.