Hitchhiker's Guide to Software Architecture and Everything Else - by Michael Stal

Saturday, April 29, 2023

 Systematic Re-use

Re-use is based upon one of the fundamental principles not only for lazy software engineers: DRY (Don't Repeat Yourself). Instead of reinventing the wheel again and again, developers and architects may re-use existing artifacts.

Re-usable assets come in different flavors:

  • Code snippets are small building units developers may integrate in their code base. 
  • Patterns are smart and proven design blueprints that solve recurring problems in specific contexts.
  • Libraries comprise encapsulated functionality developers may bind to their own functionality.
  • Frameworks also comprise encapsulated functionality. In contrast to libraries, developers integrate their own code into the framework according to the Hollywood principle ("don't call us, we'll call you"); see the sketch below.
  • Components/Services include binary functionality (i.e., they are executables) that developers may call from their own application.
  • Containers represent runtime environments that provide functionality and environments to applications in an isolated way.
Apparently, these are different levels of re-usable assets with varying granularities, complexities, and prerequisites.
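To make the library/framework distinction concrete, here is a minimal C++ sketch of the Hollywood principle. It is only an illustration; the framework, class, and function names are invented, not taken from any real product.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical framework code: it owns the control flow ("we'll call you").
class RequestHandler {
public:
    virtual ~RequestHandler() = default;
    virtual void onRequest(const std::string& payload) = 0;  // hot spot the developer fills in
};

class MiniFramework {
public:
    void registerHandler(std::unique_ptr<RequestHandler> handler) {
        handlers_.push_back(std::move(handler));
    }
    void run() {
        // The framework decides when application code runs, not the developer.
        for (const std::string& request : {"ping", "order#42"}) {
            for (const auto& handler : handlers_) {
                handler->onRequest(request);
            }
        }
    }
private:
    std::vector<std::unique_ptr<RequestHandler>> handlers_;
};

// Application code that the developer integrates into the framework.
class LoggingHandler : public RequestHandler {
public:
    void onRequest(const std::string& payload) override {
        std::cout << "handling " << payload << '\n';
    }
};

int main() {
    MiniFramework framework;
    framework.registerHandler(std::make_unique<LoggingHandler>());
    framework.run();  // inversion of control: the framework drives the application
}
```

With a library, by contrast, main() and the call order would remain entirely in the developer's hands.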

Software engineers may not only use re-usable software assets, but other types as well. For instance:
  • Tests, Test units, Test plans
  • Documents
  • Production plans
  • Configurations
  • Business plans
  • Software architectures
  • Tools
While some assets such as code snippets may be used daily in the code-base, patterns or software architecture templates need to be instantiated in an easy way. 
The more impact re-usable assets have on applications and the more abstract they are, the more systematic the re-use approach must be. The most challenging projects are product lines and ecosystems that require different assets at different re-use levels. For example, they introduce the need for a configurable core asset base that is re-usable across different applications. Furthermore, they support a whole class of applications that share the same architecture framework and other assets. A core asset in a product line or ecosystem affects not one application but a whole system family.  Thus, its business impact is very high. 
In such scenarios, core assets often are inter-dependent and must be configured for the specific application under development.  As a prerequisite for the development of a core asset base, a Commonality/Variability analysis is necessary that determines what applications sharing the same core assets have in common and how they differ. A core asset needs a common base relevant for all applications that use it as well as configurable variation points to adapt it to the needs of an application. 
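As an illustration of commonality and variability, here is a hypothetical C++ sketch of a core asset: a common base shared by all applications of the family plus one configurable variation point. The names and the pricing example are invented; real product lines use whatever variability mechanism fits (templates, policies, plug-ins, build-time configuration).

```cpp
#include <iostream>
#include <string>

// Variation point: each application of the family supplies its own pricing policy.
class PricingPolicy {
public:
    virtual ~PricingPolicy() = default;
    virtual double price(double base) const = 0;
};

// Commonality: the core asset shared by all applications that re-use it.
class OrderProcessor {
public:
    explicit OrderProcessor(const PricingPolicy& policy) : policy_(policy) {}
    void process(const std::string& item, double basePrice) const {
        std::cout << item << " costs " << policy_.price(basePrice) << '\n';
    }
private:
    const PricingPolicy& policy_;
};

// Application-specific configuration of the variation point.
class PremiumPricing : public PricingPolicy {
public:
    double price(double base) const override { return base * 1.2; }
};

int main() {
    PremiumPricing premium;
    OrderProcessor processor(premium);   // the core asset configured for one application
    processor.process("sensor kit", 100.0);
}
```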
A bad or insufficient Commonality/Variability analysis incurs higher costs and may even lead to project failure. 
Core asset development and application development might happen separately by different teams or  by the same teams. Each approach has its benefits and liabilities.
Due to the high business and technical risks of these advanced approaches, all stakeholders need to be involved in the whole development process. Building a product line or ecosystem without management support is not feasible. Managers need to restructure their organisation and spend budget on core asset development and evolution. 
Most product lines and ecosystems fail because of:
  • lack of management support,
  • insufficient consideration of customer needs,
  • inappropriate organisation,
  • inadequate Commonality/Variability analysis,
  • insufficient or low-quality core assets,
  • underestimation of testing or inadequate quality assurance,
  • bad software architecture,
  • neglect of competence ramp-up activities,
  • no re-use incentives,
  • missing acceptance by stakeholders.
Consequently, product lines and ecosystems need a systematic approach for re-use and must involve different types of stakeholders. They need a manager who is able to guide the approach and has the capability to decide, for example, on budget, organisation restructuring, competence ramp-up activities, or business strategy. 

Re-use comes in different flavors, and the higher its impact, the more systematic the re-use process needs to be.

[to be continued]




Sunday, March 26, 2023

 

Models and Modelling - A Philosophical Deep Dive

Motivation

Not only in software architecture do we use models for designing and documenting systems. Models are also indispensable in other engineering disciplines and in the natural sciences. We have all experienced good and bad models in our daily lives. What is a model really about? And what does a good model look like? Let us enter a (philosophical) discussion about this topic.


What is a model?

A model captures the essence of a domain. It focuses on the core entities and the relationships within a domain from a specific viewpoint, i.e., serving a specific purpose. A model contains rules that must hold for its constituents. Models are used by humans or machines to communicate about the respective domain for a particular purpose.


Examples of models include:
  • a UML diagram
  • a street map
  • a floor plan
  • an electronic circuit diagram
  • a problem domain model (DDD)
  • quantum theory
  • mathematical formulas


Consequences: 


(i) The same domain can be represented using different models, each capturing another viewpoint of that domain. These viewpoints are often briefly called views.


(ii) Models can be informal or formal depending on their usage as a means for communication. Thus, they must be easily understandable and comprehensible by stakeholders.


(iii) Models introduce abstraction layers by using generalization and specialization, leaving out "unnecessary", i.e., irrelevant, details. 


(iv)  A model does not describe reality but a subset of reality viewed from a specific angle.


(v) Languages are based upon models. A model can be viewed as a language, and vice versa. 


(vi) A model may support a graphical presentation or a textual presentation; it may even include both.


The complexity of a model is directly proportional 

  • to the number and types of its entities and their relationships,
  • to the kinds and numbers of abstractions being used,
  • to the complexity of its underlying rules.


A good model:

  • provides a proper separation of concerns (SoC)
  • consistently applies principles such as the Single Responsibility Principle (SRP), Don't Repeat Yourself (DRY), KISS, or the Liskov Substitution Principle (LSP) in order to achieve the highest understandability and comprehensibility
  • uses expressive names for all its abstractions, entities, dependencies
  • provides an effective and efficient means of communicating among stakeholders
  • focuses on essence and leaves out everything that does not serve the required purpose of the addressed viewpoint
  • strictly and consistently avoids accidental complexity
  • allows simple things to be modeled in a simple way, while being capable of expressing complex things in a manageable way


Stakeholders

The creation of a model should be guided by its (types of) stakeholders, in particular by the way they intend to use the model. In this context, a meta model helps define what the set of creatable models should look like. Thus, meta models constitute modeling languages. They help create different models or views.


To define an adequate model that serves an intended purpose, all (human) stakeholders should be involved. UML is an example of a modelling language that serves the needs of software engineers but (often) not those of many domain experts. In fact, domain experts might have their own models readily available. While a model might be perfect for machine-to-machine communication, it isn't necessarily adequate whenever humans are involved. The more formal a model is, the easier it can be processed by computers. Humans often need more informal and expressive models instead. If both kinds of stakeholders are involved, we need to balance formal and informal approaches. 

Emojis are an example of an informal model. They can be immediately understood by a human, but may be more difficult to process by a machine.

Artificial neural networks, albeit "simple", can be processed by machines very well, but are hard for a human to understand - i.e., with respect to what they actually do and how they work. 

UML is somewhere in the middle of these extremes. 


Fortunately, in many mature domains models already exist. An electrical circuit diagram is a proven model concept. Mathematics is often considered a ubiquitous language with predefined notations. In the context of software engineering, domain models are often implicitly defined and have been established as common sense in an organization. If software engineers with no or little domain expertise start to develop software applications for the respective domain, they need to make the implicit model explicit. Otherwise they cannot design a software architecture that meets the customer requirements. This is what DDD (Domain-Driven Design) is all about. It tries to come up with a domain-specific model using generic building blocks such as DDD patterns and techniques.


The representation of a model should fit the needs of its stakeholders. For humans, graphical notations often work very well, because they explicitly reveal their structure and are easy to grasp and handle. For productivity reasons, textual models may be more beneficial and flexible in some cases. As an example, consider software code. For a beginner, graphical code blocks might work very well, while advanced programmers prefer coding textually, because they can mentally map seamlessly between the "graphical" design and the textual code representation. Handling code graphically might just reduce their productivity, effectiveness and flexibility due to all the clutter and constraints.


Model Transformations

To keep many stakeholders satisfied, a possible approach is to introduce different models for different types of stakeholders and to create mappings between these models, for example an easy-to-understand UML model that is transformed into a machine-readable XML schema. 

Actually, software engineers are used to handling different models that are mapped onto each other. In software engineering, a compiler represents a model transformation from a high-level language to a system language or interpreter. A UML diagram might be transformed into high-level language code. A low-code/no-code environment creates domain-specific applications from high-level user specifications. However, model-to-model transformations can be quite complex, in particular when the gap between models is very large and no off-the-shelf solutions for the transformations are available. Moreover, the more models there are, the more transformations are necessary. Note: a model transformation might also be done manually if the model is not too complex and the mapping rules are straightforward.
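As a toy illustration of such a model-to-model transformation, here is a hypothetical C++ sketch that maps a tiny in-memory class model onto an XML-like textual representation. The model types and the output format are invented for illustration; real transformations (UML to code, compilers, low-code generators) are of course far richer.

```cpp
#include <iostream>
#include <string>
#include <vector>

// A deliberately tiny source model: classes with named, typed attributes.
struct Attribute { std::string name; std::string type; };
struct ClassModel { std::string name; std::vector<Attribute> attributes; };

// The transformation: map the class model onto a machine-readable target model.
std::string toXml(const ClassModel& model) {
    std::string out = "<class name=\"" + model.name + "\">\n";
    for (const auto& attribute : model.attributes) {
        out += "  <attribute name=\"" + attribute.name +
               "\" type=\"" + attribute.type + "\"/>\n";
    }
    out += "</class>\n";
    return out;
}

int main() {
    ClassModel order{"Order", {{"id", "int"}, {"customer", "string"}}};
    std::cout << toXml(order);   // the target model is easier for machines to consume
}
```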


Model sets

In domains such as building construction or software engineering, multiple views are necessary to represent information from different angles. Take the design view, deployment view or runtime view as examples in the software engineering domain. In addition, there might be different model abstraction layers, for example an in-depth design view versus a high-level software architecture view. In other words, to solve a task we need a model set instead of a single model that captures every detail from every perspective.

No matter how the views differ from each other, there needs to be meta information to tie the different views together. Prominent examples are the mapping from a view to code, and the implicit or explicit relation of views to each other. Note: there might be different solutions, i.e., model kits, for the same problem context; e.g., RUP's 4+1 views, in contrast to TOGAF, might not be the (only) solution of choice for designing an enterprise system. 

No matter what model set you choose, make sure that it is used consistently. In most cases, tool support is strongly recommended. Models can become very complex. Therefore you need a tool to draw, check and communicate the concrete models. This is the main reason why most software engineering activities rely on some sort of UML environment such as Enterprise Architect or MagicDraw. 

A floor plan is different from an electrical plan. All models together are necessary for building construction. In this example, there might also be rules and constraints across all models or views. For example, an electrical cable should keep a minimum distance from a water pipe. Consequently, we need some kind of verification algorithm to check whether rules/constraints are violated. 
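Here is a minimal sketch of such a cross-view verification, assuming two made-up view types and an arbitrary distance rule; it only illustrates that constraints spanning several models need an explicit check of their own.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Two simplified views of the same building: the electrical and the plumbing plan.
struct Point { double x; double y; };
struct Cable { Point position; };
struct Pipe  { Point position; };

double distance(const Point& a, const Point& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Cross-view rule: every cable must keep a minimum distance from every water pipe.
bool checkMinimumDistance(const std::vector<Cable>& cables,
                          const std::vector<Pipe>& pipes,
                          double minimumDistance) {
    for (const auto& cable : cables) {
        for (const auto& pipe : pipes) {
            if (distance(cable.position, pipe.position) < minimumDistance) {
                std::cout << "violation near (" << cable.position.x << ", "
                          << cable.position.y << ")\n";
                return false;
            }
        }
    }
    return true;
}

int main() {
    std::vector<Cable> cables{{{1.0, 1.0}}};
    std::vector<Pipe> pipes{{{1.2, 1.1}}};
    std::cout << (checkMinimumDistance(cables, pipes, 0.5) ? "ok" : "check failed") << '\n';
}
```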


Model Creation

Models shall never be created in a big-bang approach. They are living entities that change over time as you gain more experience. They may start very simple but become more complex over time. Whenever they are overengineered, they need to be simplified/refactored again. Model creators need to ensure that models can be used and handled by stakeholders easily. If stakeholders have different viewpoints on the same problem, create a model set where each model view serves a particular set of stakeholders.


To start creating a model for a domain context, we should figure out whether such models already exist, and if this is the case, whether these models can serve the desired purpose(s). It is always beneficial to use existing models, in particular due to the experience and knowledge they carry. So, don't reinvent the wheel if not absolutely necessary, especially if you are not an expert in the domain.


If no model exists, stakeholders should jointly create a model (set). It is helpful if at least one of the stakeholders is experienced in creating models while at least some other person is a domain expert.


If models exist that do not serve the intended purpose, we might change and adapt these models to fit our needs.


Note: a common mistake is to first focus on the syntax of a model. Instead, initially think about its semantics and find a good syntactical representation afterwards.  


No matter how a new model is created, learning its representation should be a quick and straightforward process, even for inexperienced stakeholders.


Interestingly, most graphical models consist of rectangular or other symmetrical shapes, arrows, lines and text boxes, while textual models often use regular or context-free grammars. The reason is that such models remain comprehensible and easy to handle. It should also be possible to draw a model manually in order to discuss it with other stakeholders before documenting it. Sitting around a modelling tool significantly decreases productivity, at least in my experience. A whiteboard or a flip chart is by far the best tool for modelling. This can be complemented by AI software that recognizes manually drawn models and transforms them into clean, processable data representations. 


Summary

In this blog posting I did not reveal anything new or innovative you didn't already know. Neither was it my intent to provide anything revolutionary. It is just a summary of modelling and how to approach it. And if it got you thinking about modelling from this more philosophical view, I'd be happy. 




 


Wednesday, October 21, 2020

 Cohesion and Coupling

Sometimes what appears to be very simple and clear is not that clear at all. A prominent example is the difference between cohesion and coupling.
    • Cohesion indicates the relationships between components within a module. These relationships are not just syntactic sugar but have a semantic implication. For example, if a module provides different, separate functionalities, it will reveal low cohesion. In contrast, modules with high cohesion implement functionalities that are tightly interwoven with each other. In other words, modules with low cohesion provide different things that are not related to each other. Thus, they violate the SRP (Single Responsibility Principle). The best refactoring in such cases is to split up the module into different modules that each obey the SRP.
    • Coupling defines the number of relationships between different modules. A module A that introduces a lot of relations to module B strongly depends on module B, and vice versa. Strong, i.e., tight, coupling of modules indicates that functionalities in B could be integrated into A. The obvious refactoring would integrate all functionalities of B into A, thus decreasing the coupling of the system. Of course, merging modules might reduce cohesion in the resulting module A. But what if another module A* also depends heavily on B? Should we then merge A and B as well as A* and B? Or should we merge A and B and reroute the (A*, B) relations to A, thus possibly increasing coupling between A and A*? The best option is to strive for high cohesion and low coupling throughout our system and its constituent modules. Merging B into both A and A* at the same time might violate the DRY principle (Don't Repeat Yourself). But as we know, the DRY principle is not always appropriate. In microservices architectures the different microservices should hide their implementation details and not share the same information, but remain independent of each other. Nonetheless, the principles of high cohesion and low coupling still apply (see the sketch below).
Fortunately, architecture introspection tools can visualize cohesion and coupling and even provide hints and mechanisms for restructuring a given system. 
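To make the refactoring direction concrete, here is a hypothetical C++ sketch: a low-cohesion module that mixes unrelated responsibilities is split into two focused modules, so that each caller couples only to what it actually needs. All names are invented for illustration.

```cpp
#include <iostream>
#include <string>

// Before: low cohesion - one module mixed persistence and formatting concerns, e.g.
//   class OrderUtilities { void save(...); std::string renderInvoiceHtml(...); };

// After: each module has a single responsibility (SRP), so a caller that only
// formats invoices no longer drags in a dependency on the persistence code.
class OrderRepository {
public:
    void save(const std::string& orderId) {
        std::cout << "persisting order " << orderId << '\n';
    }
};

class InvoiceFormatter {
public:
    std::string renderHtml(const std::string& orderId) const {
        return "<invoice id=\"" + orderId + "\"/>";
    }
};

int main() {
    OrderRepository repository;
    InvoiceFormatter formatter;
    repository.save("42");                            // used by the ordering workflow
    std::cout << formatter.renderHtml("42") << '\n';  // used by the billing workflow
}
```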

Friday, January 18, 2019

IoTDeepDive

IoTDeepDive - Information on the OOP 2019 full-day tutorial Fr4

This web page is also reachable via the following URL: https://tinyurl.com/oop2019-iotdeepdive

Hardware requirements

  1. Windows, Mac, Linux
  2. The computer needs Wi-Fi capability with WPA/WPA2. The Wi-Fi connection is provided by the organizer.
  3. An external USB port should be available.
  4. To access the serial port via the Silicon Labs USB-to-UART bridge used on the board, a driver is required. Please download and install the version for your operating system!

Software prerequisites

  1. The tutorial uses the Arduino IDE. Download it from https://www.arduino.cc/en/Main/Software and install it according to the description there (depending on your operating system).
  2. In the Arduino IDE, an additional boards manager for ESP32 boards must be installed. For a description of the procedure see: http://esp32-server.de/
  3. Install the Adafruit libraries in the Arduino IDE if they are not available via the Library Manager: 
  4. Get the general sensor library here. It is a prerequisite for the libraries listed below. Download each library as a .ZIP (GitHub Clone or Download) and add it to the IDE via Sketch > Include Library > Add .ZIP Library.
  5. DHT 11: here
  6. BME680: The Adafruit libraries also work for the Watterott breakout. Download the corresponding BME680 library here. 

Source code for the examples

You can find the source files for the exercises here
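To give a rough idea of what the exercise code looks like, here is a minimal Arduino sketch for reading the DHT11, assuming the Adafruit DHT library from step 5 and an arbitrarily chosen GPIO pin; the actual exercises use the source files linked above.

```cpp
#include "DHT.h"           // Adafruit DHT sensor library

#define DHTPIN 4           // GPIO pin the DHT11 data line is wired to (assumption)
#define DHTTYPE DHT11      // sensor type used in the tutorial

DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(115200);    // serial monitor via the Silicon Labs USB-to-UART bridge
  dht.begin();
}

void loop() {
  float humidity = dht.readHumidity();
  float temperature = dht.readTemperature();   // degrees Celsius
  if (isnan(humidity) || isnan(temperature)) {
    Serial.println("Failed to read from DHT sensor");
  } else {
    Serial.print("Temperature: ");
    Serial.print(temperature);
    Serial.print(" C, humidity: ");
    Serial.print(humidity);
    Serial.println(" %");
  }
  delay(2000);             // the DHT11 should not be polled more often than every ~2 s
}
```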

Handouts


IoT hardware suppliers

  • ESP32 development board at the Makershop.
  • BME680 sensor (weather/environment) breakout board at Watterott.
  • DHT11 (temperature/humidity) at exp-tech.
  • DHT22 (the more accurate sibling of the DHT11) at exp-tech.
  • Breadboard at exp-tech. Better, but also more expensive, are lab breadboards such as the one from Pollin.
  • Jumper wires (Dupont cables), for example on Amazon.
  • LEDs, for example as an assortment on Amazon.
  • A resistor assortment, likewise via Amazon.
  • A micro-USB connection cable, also on Amazon.
If you want to save money, you can try to source the components from China (eBay, Banggood, Alibaba). This often takes a long time and may involve a trip to customs.

Sunday, August 16, 2015

What the hell is Software Architecture

Since the nineties, many definitions of software architecture have been proposed, most of them very vague. In essence, the common denominator of these definitions suggests that software architecture denotes a set of cooperating components. In my opinion, this view is rather simplistic. Most entities in the universe fit the same definition, which makes it useless from an engineering perspective.

Instead of providing yet another definition of software architecture we should think about its properties.

(i) Software Architecture is both a process and a thing: the process of architecting comprises a sequence of strategic decisions, while the thing is the result of this process. The goal of software architecture is to create the backbone for implementing the specification. Note that we do not assume a specific software development paradigm here, such as Lean or Agile development.

Remark 1: Strategic decisions refer to requirements and additional forces that affect the whole architecture and result in design artifacts that are tightly coupled with the rest of the architecture which makes them difficult and expensive to change. Examples include mandatory functional properties of the system, operational qualities such as performance or security, infrastructures for modifiability such as a Plug-in Architecture, constraints caused by the system context such as hardware prerequisites.

Remark 2: All architecture design decisions must be driven by the specification and the business goals. Put briefly: No decision without a (good) reason.

(ii) Architecture design spans the whole lifecycle of a system, not just its creation. It starts with initial planning and requirements engineering and ends when the system(s) built upon this architecture reach their end. Between these points in time it covers creation, maintenance and evolution.

(iii) In a naive sense, every software-intensive system reveals a software architecture, even if it has been created in a completely unsystematic or unintentional way, i.e., using ad-hoc decisions. What we need is systematic architecture design driven by well-defined and prioritized architecturally relevant requirements as well as risks. Sometimes, some or even all parts of an existing system with an unknown or partially known software architecture need to be (re-)used. In this case, software archeology methods are required to help extract the hidden software architecture and make it explicit.

(iv) There are two kinds of architecture quality, external quality and internal quality. While the former one defines the externally visible behavior as demanded by quality attributes, the latter defines the habitability of architectural artifacts (such as simplicity or expressiveness) by developers, testers, etc.

Remark: a consequence of habitability is the limitation of software architecture design to a small number of hierarchical entities such as system, subsystems, and components. These entities should follow the Single Responsibility Principle. Accordingly, the responsibility of fine design and implementation is to refine and extend these abstractions to provide executable artifacts.

(v) For architecture as a process a consistent set of guidelines and tools shall guide the architecture design in order to ensure high quality and business alignment. Without such guidelines the architecture will be overloaded with multiple styles, idioms, patterns, concepts, technologies, paradigms, conventions, all of which reduces internal quality.

Remark: One major challenge is taming inherent complexity while avoiding accidental complexity. The latter can be caused by using the wrong solutions or by applying the right solutions incorrectly.

(vi) Software architecture as a thing is a means for communicating design decisions to other stakeholders. Thus, all decisions must be made explicit in a comprehensive way. The various constituents of the architecture shall be provided to readers in adequate ways depending on their roles and goals and their responsibilities and expectations. For this purpose, software architecture documentation must offer a consistent and complete set of architectural views.

Remark 1: Since software architecture is a thing and a process, its documentation includes the sequence of design decisions and their rationale which relates architecture views and process and which enables requirements and decision traceability.

Remark 2: Software architecture creates a base for design & implementation. It defines an initial (walking) skeleton for deriving the code base. This is why habitability is of foremost importance.

(vii) A software architecture is not an island but embedded into a context. Thus, it is important to strictly separate the architecture from its environment, while at the same time considering and defining the interfaces and interactions between both. Otherwise, it won't be possible to come up with an appropriate software architecture. A context view and use case views are examples that help address these aspects.

(viii) Software architecture must cover both the problem domain and the solution domain. For this reason, a Multi-Tier design is not an architecture. To avoid monolithic designs, both domains should be hierarchically structured into subdomains and their relationships. The organization of software architecture activities shall be driven by the aforementioned subdomains, not by the line organization. In other words, mind Conway's law!

(ix) Software architecture is not necessarily constrained to a single system. It might also define the base for a set of systems within a specific problem domain. In such reuse contexts a Commonality/Variability analysis is required to define a common base architecture which will be modified for a particular implementation context before fine design and implementation start.
Examples: product lines, ecosystems, platforms/infrastructures, libraries.

Remark: This is what reference architecture and product-line architectures are about.

(x) Architecture design must follow a test-driven approach and be communicated in such a way that the architecture can be tested. Testing provides relevant information to the architects, such as revealing quality issues or design flaws. It includes quantitative and qualitative architecture reviews.

I can't resist. Let me conclude this posting with yet another definition of software architecture:

Software Architecture
is a process, i.e., a sequence of intentional strategic design decisions which map specification and business goals to architecture design.
is a thing, i.e., a set of views that address different stakeholders and are the results of the architecture process.

The main challenge for software architects is: there are different ways to design a good software architecture, but there are infinite ways to create a bad one.

Addendum July, 17th

Some may wonder why software architecture should be considered a process as well.

It is insufficient to obtain some architecture views, i.e. the What. In addition we might need information how the architecture has been created and why it is as it is, i.e. the How and the Why. This is not essential for some stakeholders such as users, but it provides relevant information to developers, customer service, and testers.

One notable example is the detection of design flaws. When we encounter a flaw in our system, we'd like to know when the design flaw has entered the "crime scene" and what other decisions depend on this flawed decision. This gives us traceability and the possibility to rollback in a systematic way. Likewise, process knowledge is important for architecture reviews.


Wednesday, May 13, 2015

A Matter of Waste

If a queue has a capacity of 100 units, we may enqueue 10 entities with a volume of 10 units per entity. 10 times 10 equals 100, right?

If we find a parking space with exactly the same length as our car, our search will come to an end - assuming the car must be parked parallel to the sidewalk. Right?

Hmmmh, the last one does not feel right. But what is the problem? It is that we need some additional space to maneuver. This extra space could be considered as waste, but it is in fact a precondition for parking operations. We may call this quality attribute "parkability".

A similar problem is that of a rectangular container with a volume of 100 that ought to be filled with 100 spheres of volume 1 each. Obviously, we can't expect 100 spheres to fit into a container whose size equals the summed volume of all spheres. Again, the challenge is caused by the inherent extra space needed by the spheres.

Do we encounter such challenges in software design as well? Of course!
To be precise, we experience "waste" in two different areas: in our own activities as engineers and in our systems.

Let me start with the latter one that is primarily caused by technology and design constraints as well as by inherent properties of the problem domain. If you got this cool new 64-core machine with Terabytes of RAM, you may want to increase the efficiency of your apps by 64 times. Unfortunately, waste comes into our way again. Competition for resources and inherent dependencies in the problem context will cause the performance increase to be significantly lower than the desired factor of 64. And we did not even consider accidental complexity in this scenario.
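The posting does not name it, but a classic way to put a number on this effect is Amdahl's law: if a fraction $p$ of the work can be parallelized and the rest is inherently sequential, the speedup on $n$ cores is bounded by

$S(n) = \dfrac{1}{(1 - p) + p/n}$

For example, with $p = 0.95$ and $n = 64$ this gives $S(64) = 1 / (0.05 + 0.95/64) \approx 15.4$, far below the hoped-for factor of 64 - and that is before any accidental complexity enters the picture.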

Interestingly, it is such dynamic overhead that makes it difficult to deal with operational qualities or to build realistic simulators.

But how does this waste issue relate to software engineering activities? Basically, it is the same calculus again.

If we are committed to two different activities with overlapping time spans that need our attention, there is a chance of conflicts. Two resources are competing for your time. As you cannot focus on two activities at the same time, you'll have to focus on one, which leads to a kind of time debt in the other one.

Another approach is switching back and forth between two activities. However, this requires the engineer to stay synchronized with the current state of each activity. Experts in multithreading call this "context switch". Time needed for such housekeeping duties inevitably causes waste.

Another typical root of waste arises when a consultant or engineer of a company is supporting a customer, because it is then necessary to deal with two organizations, which adds organizational overhead for vacation, payments, ordering items, time accounting, and meetings.

Often engineers do not consider such overhead, because they underestimate the problem instead of making waste explicit and finding ways to reduce it. But keep in mind: it is impossible to remove all waste, in particular the inherent kind.

Experience tells us that at least (!) one fifth of time allocated to an activity will be consumed for "non productive" issues. Thus, we always need to explicitly add a waste factor representing idle time.

Another reason for waste is communication. Think about reading mail, joining meetings, handling phone calls, chatting with colleagues, drinking coffee. In a magazine I recently read that such interrupted work is a major issue in companies, causing incredible amounts of overhead. This is due to the fact that when humans experience a non-maskable interrupt, they'll need to reboot their brain before refocusing and continuing with the interrupted work.

So, if you need to reduce the productivity of colleagues significantly, play the interrupt game. Tom DeMarco once told me about two printer companies in the US where he could relate meeting time and productivity. The more (mandatory) meetings, the less productive a company will be. Maybe this pattern could be called "death by meetings". If you appreciate further patterns, start reading Dilbert by Scott Adams.






Thursday, March 05, 2015

Technical Debt - The Downside of Metaphors

The term "debt" is a metaphor from economy. Its use for software development seems to be very reasonable. Whenever developers fail to address quality issues in their software, they will have to pay this debt back. That is quite simple, isn't it?

However, we can easily find weaknesses of the term "technical debt":

When a system reaches the end of its lifecycle, all debt will be gone. Try this with financial debt.

Financial debt is a separate entity, i.e. two debts do not intertwine, while software debt sometimes can't be easily located, isolated or separated, because it is woven into the system.

Technical debt may stay unpaid if the software engineers come to the conclusion that the cost of refactoring would be higher than that of keeping the debt untouched.

Financial debt is created intentionally. This is certainly true for some technical debt issues as well such as temporary workarounds. However, many kinds of technical debt are accidental without architects even recognizing it.

Technical debt is destructive. If possible, you would like all of it to be eliminated. In contrast, financial debt often is more constructive - such as increased cash flow.

Financial debt is independent of other system parts, while technical debt in one place can be affected by other parts, and vice versa. An example would be a modification of the system that makes the code parts containing the technical debt obsolete.

Interest rates are mostly determined by banks and usually stay fixed over the specified time. In the context of design debt, the interest rates increase the longer the debt is left untreated. The payback time is usually not predetermined. And the same kind of debt may be more expensive to pay back in one system part than in others.

While almost all forms of financial debt are paid back continuously until the debt plus the interest rates are fully covered, each single technical debt in most cases must be paid back at once.

There is one kind of debt in the financial domain, while technical debt has many different shapes.

If you think about it, you'll find other metaphors that are more convincing. For example, a tumor or infection has many similarities with what we call technical debt: it can occur accidentally, or intentionally if you don't take care of your health. It must be paid back at once - you are either ill or not, which isn't quite true in practice, but very close. The longer it is not treated, the worse the health condition will become - think of viruses that spread and damage other organs. Infections and technical debt are mostly destructive. Both can be monitored with imaging devices or laboratory diagnostics. There are tumors that are difficult to get rid of and that cause severe damage, while others are harmless if treated early. The worst ones can even become lethal. So my first guess would be to call it software infection instead of software debt.

Another possibility is to use environmental pollution as a metaphor. Like technical debt it may occur incidentally or (most of the time) accidentally. You must pay it back, but you can do this part by part. If left untreated, the situation worsens. It is purely destructive. Often, it can't be easily separated from the environment. Thus, another possibility would be to call it software pollution. Software Architecture Sustainability is a metaphor that is related to environmental issues. And so is the term "Software Ecosystem".

Other metaphors could be considered as well such as terms for similar issues in hardware or building architecture (consider design erosion).

My personal favourite is Software Infection (respectively technical infection, code infection, design infection). Of course, this metaphor has its own liabilities, but it is much closer to its cousin in software development than the debt metaphor is.

And it sounds appropriate to speak about a sick component or architecture. Or to visualize an infection using a modality like in medicine.

I know that it is too late to get rid of the expression "technical debt", but at least we should handle it with care. Some metaphors that sound good in the beginning may turn out to be not well aligned with what we like to visualize.






