Monday, March 08, 2004

Yes, I know that everyone knows what patterns are these days. This question, however, is not supposed to address the fundamental definition of patterns. It is more an issue dealing with the validity and lifetime of patterns. As we mention in our POSA series, patterns are not carved in stone. They are subject to change and extension. Just assume for a moment that you have documented a pattern like Singleton. After several years of practical application, developers have found new implementation contexts that were not anticipated in the original pattern description. In addition, assume errors were detected in the pattern documentation. Now consider the case where someone else re-documents the pattern according to these findings. Which kinds of changes will modify a pattern in such a way that you should consider the re-designed pattern to be a new pattern of its own? When is it allowed to assign an already existing pattern name to a re-designed pattern description?

My first guess at the moment is: when errors in a pattern description are removed, this won't qualify the result as a new pattern. The same holds for providing new forces, consequences, or known uses. Obviously, just adding a new example won't turn a pattern into a new pattern. If someone comes up with a narrowing of the pattern's application scope - take Remote Proxy as an example - this will lead to a new pattern variant, but not to a new pattern. If, however, the internal structure or dynamic behavior of a pattern is changed in a significant way, such as removing or adding participants without narrowing the pattern's context, this qualifies the new pattern document as a new pattern of its own (a small code sketch at the end of this post makes this concrete). This, of course, as always in the pattern universe, only holds if there are at least three known uses. Using the same pattern name in this case would inevitably lead to confusion, because people applying the pattern would use the same pattern name without referring to the same pattern content. This way, we would lose one of the benefits of patterns as an architectural concept.

Now, for something completely different. Suppose someone has documented an architectural pattern. After a while, other architects document a pattern language to help implement the architectural pattern. Does this qualify as re-documenting the original pattern? From my viewpoint it does, if the pattern language is just applied to add meat to the pattern's implementation section. In this case, architects should be aware of the fact that architectural patterns by nature denote generic patterns applied to complete software architectures. Since they cover such a high abstraction layer, it is obvious that each architectural pattern represents a root from which complete pattern languages can be derived. There are two points to consider. If the pattern language narrows the focus of the architectural pattern, or changes its static structure or dynamic behavior, the result is a completely new pattern or variant - it is not just an implementation of the existing pattern anymore. The other issue to take into account is the fact that the same architectural pattern might even be implemented using different pattern languages.
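To make the structural criterion above concrete, here is a minimal Java sketch. The class names are hypothetical, not taken from any published pattern description: a thread-safe accessor is merely a new implementation context for Singleton, while adding a key participant changes the static structure and arguably yields a pattern of its own (often called Multiton).

```java
import java.util.HashMap;
import java.util.Map;

// 1) A classic Singleton: one participant, one instance.
final class Configuration {
    private static Configuration instance;

    private Configuration() { }

    // A thread-safe accessor is a new implementation context,
    // not a new pattern: structure and behavior stay the same.
    public static synchronized Configuration getInstance() {
        if (instance == null) {
            instance = new Configuration();
        }
        return instance;
    }
}

// 2) Adding a participant - a key that selects among several managed
// instances - changes the static structure. The result is arguably a
// pattern of its own, not merely a Singleton variant.
final class ConfigurationRegistry {
    private static final Map<String, ConfigurationRegistry> instances =
            new HashMap<>();

    private ConfigurationRegistry() { }

    public static synchronized ConfigurationRegistry getInstance(String key) {
        return instances.computeIfAbsent(key, k -> new ConfigurationRegistry());
    }
}
```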
So what are my conclusions? Patterns live. They are not carved in stone. Error corrections, widening of scope, additional known uses, additional or modified implementation or example sections, and additional forces and consequences should be possible without having to instantiate a new pattern. All other changes that affect static structure or dynamic behavior in a significant way lead to a new pattern of its own, which must be associated with a new pattern name. There are still some open issues here. Hopefully, future discussions will shed more light on them.
Sunday, February 08, 2004
Model-driven - Panacea or Myth?
When I first saw the MDA (Model-Driven Architecture) approach of the OMG a few years ago, I was more than skeptical. To me it sounded like: just take a platform-independent architecture model, introduce a platform-specific model, and generate the code. Life can be so easy :-) The reason why most experts did not believe (and some of them still do not believe) in MDA was the fact that marketing and tool vendors made everyone believe this would work for every domain and every platform. It simply appeared to be the arrival of the GPS (General Purpose Problem Solver). If you take away the marketing stuff and don't confuse MDA with model-driven approaches in general, then people tend to be more optimistic.

Let me introduce an example. A (very clever) colleague of mine, Andrey Nechypurenko, recently did something that was very "easy" and at the same time very smart. He took Visio, introduced some domain-specific shapes for a customer, and wrote VBScript code that traverses the model and generates code. Instead of handcrafting their code and configurations (the code can run to several thousand lines), developers can now just draw their domain model and let the generator produce all that tedious, error-prone boilerplate code. They don't use this approach for most of their system, but only for parts that can be automated. Basically, these are the parts that are best understood, require a lot of work to handcraft, and lend themselves to automatic generation. Note that not only code can be generated, but also any other artifacts such as XML files and documents - just add your own. Expect some further research in this area from Vanderbilt University (Doug Schmidt) in cooperation with Siemens.

This area is so fascinating because we can (re-)use all our knowledge in areas such as UML, patterns, frameworks, and middleware. And it is not that new. If you've read the excellent book "The Pragmatic Programmer", you'll notice the section about code generators, which deals with model-driven generation of code fragments. Basically, all of us, at least if we are programmers, are already applying the principles of model-driven engineering: every programming language helps you to define a model from which code is then generated to run on a (virtual) machine. Today's model-driven approaches differ from this trivial kind of MDA in that they define models on much higher abstraction levels.

So what does this mean to us? It simply means: don't confuse MDA with model-driven approaches in general. Model-driven concepts may not fully automate software development, but they can offer a significant amount of help. You don't even need a dedicated MDA tool chain - just rely on tools such as Visio, Rose, Together, JBuilder, Eclipse, or Visual Studio, add some plug-ins, and start model-driven engineering.
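To illustrate the generator principle, here is a deliberately tiny Java sketch - not Andrey's actual Visio/VBScript solution. The hard-coded entity list stands in for the domain model that would really be read from the drawing; all names are illustrative.

```java
import java.util.List;

// Traverse a tiny domain model and emit boilerplate code from it.
public class ModelDrivenGenerator {

    // A minimal "domain model": an entity with named fields.
    record Entity(String name, List<String> fields) { }

    public static void main(String[] args) {
        // In a real setting this model would come from the drawing tool.
        List<Entity> model = List.of(
                new Entity("Customer", List.of("name", "address")),
                new Entity("Order", List.of("id", "amount")));

        // Walk the model and generate a class per entity.
        for (Entity entity : model) {
            System.out.println(generateClass(entity));
        }
    }

    static String generateClass(Entity entity) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(entity.name()).append(" {\n");
        for (String field : entity.fields()) {
            src.append("    private String ").append(field).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }
}
```

The same traversal could just as well emit XML files, configuration, or documentation instead of Java source - the model, not the output format, is the fixed point.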
Sunday, February 01, 2004
The Secrets of Building Efficient Distributed Systems
The recipe is so easy. Just take a book on RMI, CORBA, or .NET Remoting. Add a little bit of EJB, OSGi, COM+, or CCM. That's all. Sorry, one ingredient is still missing: you shouldn't forget to add XML Web services. It's cool, and your boss is simply expecting it from you. Then mix everything together. This is all it takes to build a distributed system. At least, this is what many books and articles make you believe. Unfortunately, I've seen a lot of systems that were built exactly this way. And in most cases it is not the developer's fault. Vendors and standards organizations primarily focus on transparency issues. For example, consider the fact that CORBA hides all system details from you. Hence, there should be no difference between building a conventional system and building a distributed architecture. This is not a deliberate deception, but it is a serious pitfall and trap.

In distributed systems there is no central control as there is in conventional systems. Operational requirements such as scalability or response times are much more difficult to meet. The same holds for non-functional requirements such as flexibility (adaptability, extensibility, removability, ...) or security. These cross-cutting concerns cannot be centralized, since they impact multiple tiers and maybe multiple layers within these tiers. Take security as an example. If you need to connect an external component or service B with your own infrastructure A, you must ensure that the foreign infrastructure B allows you to use a secure protocol and implements authentication functionality you are willing to trust. Secure communication and authentication rely on the fact that all parties participate in providing them.

Take scalability as another example. Sure, scale-up activities are constrained to a single node. However, to scale out, multiple nodes must cooperate to provide the same service. This might be easy for stateless services but turns out to be much more challenging for stateful services. In addition, the implementation of scale-out functionality such as load-balancing clusters might influence other parts of your architecture.

If you make a system flexible, this could decrease performance, and vice versa. In some cases, though, it could even help you to increase performance and flexibility at the same time. Take the Strategy pattern as an example and consider the capability of a system to use the most efficient algorithm in every runtime context (see the sketch below). Here flexibility and performance are two sides of the same coin. In general, there are a lot of such requirements. Some of them can be considered as a whole unit, while others might be contradictory. The same two requirements can contradict each other in one context and reinforce each other in another. Thus, priorities and contexts are an important issue when deciding between different tradeoffs in your architectures.

Things can get quite complex here, as you can see. That is the main reason why we still need a methodology for meeting operational and non-functional requirements. The question is whether there can be one central approach for all requirements, or whether we need to partition requirements into groups, each of which gets its own methodology. Efforts such as ATAM are very interesting here, but they can only be considered a nice start. In the meantime, you as architects and developers should give these issues extra consideration and effort.
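Here is a minimal Java sketch of the Strategy point made above; the class names and the size threshold are illustrative only. The flexibility to swap algorithms is exactly what lets the system pick the most efficient one for the current runtime context.

```java
import java.util.Arrays;

interface SortStrategy {
    void sort(int[] data);
}

// Insertion sort: cheap on very small inputs.
class InsertionSort implements SortStrategy {
    public void sort(int[] data) {
        for (int i = 1; i < data.length; i++) {
            int key = data[i], j = i - 1;
            while (j >= 0 && data[j] > key) {
                data[j + 1] = data[j];
                j--;
            }
            data[j + 1] = key;
        }
    }
}

// For larger inputs, delegate to the library's tuned sort.
class QuickSort implements SortStrategy {
    public void sort(int[] data) {
        Arrays.sort(data);
    }
}

class Sorter {
    // The context selects a strategy based on runtime characteristics:
    // here, simply the input size. Flexibility buys performance.
    static SortStrategy choose(int[] data) {
        return data.length < 32 ? new InsertionSort() : new QuickSort();
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 1, 9, 2};
        choose(data).sort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8, 9]
    }
}
```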
Don't focus only on functional aspects, but keep in mind that non-functional and operational requirements are the major cause of trouble when implementing distributed systems today.
Saturday, January 24, 2004
OOP 2004 was fine
I am back again after a long time. To be honest, I really had no time left for maintaining this site. Now, it is time to continue my blog.
Last week I attended the OOP 2004 conference here in Munich. I heard different opinions about the event: some told me they found it excellent, while others were a little bit "surprised" by this year's hot topics. There were a lot of talks on issues such as agility, UML 2.0, design, architecture, and MDA. The balance between technology-oriented talks and those dealing with modelling and processes tilted toward the former this year. Yes, I was also involved, giving talks and moderating a panel. I gave a full-day tutorial explaining .NET to Java programmers, and further talks on patterns in .NET as well as EJB 2.1. The panel dealt with XML and Web services standards. It addressed the issue of benefiting from the standards while avoiding the pitfalls. David Booth (HP, W3C), Jon Bosak (Sun, OASIS), and Richard Soley (CEO/Chairman of the OMG) were the panelists. The panelists agreed that it is still difficult for programmers and managers to rely on existing standards. Among the many points addressed were IPRs, the lack of standards for semantic information, and the problem of conflicting standards organizations. Anyone interested in more details may just send me an e-mail. I personally could only attend a few other talks. My highlights were definitely Kevlin (Henney) talking about interface design and Martin (Fowler)'s keynote, where he addressed the design issue of implicit and explicit dependencies.
BTW: if you are interested in meeting me this year, I'll be a PC member (tutorial chair) of WICSA, which takes place in parallel with this year's ECOOP in Oslo. And I am a PC member of Middleware 2004 (Toronto), which means that I will also attend OOPSLA 2004 in Vancouver one week later.