Suppose that, in the future, faults disappear from software systems. In other words, every software system would be completely fault-free. Hmm, you might say, this sounds like a beautiful dream. But how could we ever make such a dream come true?
If we specify the architecture of a software system in a high-level language such as UML, a domain-specific language (DSL), or a mixture of these, then the first step could be:
Instead of manually implementing code, apply a verified code generator that automatically creates an error-free implementation. Of course, there are several hidden preconditions in this context. The generator itself must be verified, which certainly is a very tough job, but it needs to be done only once. And after all, our implementation depends on the correctness of its runtime environment, i.e., all libraries and third-party components being used, the operating system, the application server, even the underlying hardware, and so forth. So far, we haven't discussed the implementations model-driven generators create from models. How could we improve their quality? Patterns and pattern languages are the answer, as they represent reusable and well-proven architectural solutions for recurring problems.
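To make the idea of model-to-code generation concrete, here is a minimal sketch in Python. The model format and the `generate_class` helper are hypothetical illustrations, not any real generator's API; in a verified tool chain, this translation step would itself carry a correctness proof.

```python
# Hypothetical sketch: a tiny model-driven code generator.
# The model declaratively describes an entity; the generator emits
# Python source for a corresponding class.

def generate_class(model: dict) -> str:
    """Emit Python source for a simple data class described by `model`."""
    name = model["name"]
    fields = model["fields"]  # list of (field_name, type_name) pairs
    params = ", ".join(f"{f}: {t}" for f, t in fields)
    lines = [f"class {name}:",
             f"    def __init__(self, {params}):"]
    for f, _ in fields:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

# Example model (assumed format) and generated implementation:
model = {"name": "Account", "fields": [("owner", "str"), ("balance", "float")]}
namespace = {}
exec(generate_class(model), namespace)       # compile the generated class
acct = namespace["Account"]("Alice", 100.0)
print(acct.owner, acct.balance)              # -> Alice 100.0
```

The point is that the developer only touches the model; the implementation is derived mechanically, so its quality depends entirely on the generator's correctness.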
The next problem consists of guaranteeing the correctness of the models the developers specify and pass to the generator. Here, correctness means two things: correctness of syntax and of semantics. While the former can easily be ensured using tools (e.g., editors), the latter is much harder, because it implies the correct consideration of ALL requirements (functional, developmental, and operational ones) as well as their composition. Even worse, to meet operational requirements we need to take into account the whole infrastructure and environment under which our software system is supposed to run. That implies that all third-party vendors need to provide this kind of information in some structured and predefined way.
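A small sketch of why the syntactic half is the easy half: checking a model against a schema is purely mechanical. The schema format below is invented for illustration; real modeling tools use metamodels or grammars, but the principle is the same.

```python
# Hypothetical sketch: tool-based syntax checking of a model.

def validate_syntax(model: dict, schema: dict) -> list:
    """Return a list of syntax errors; an empty list means the model is well-formed."""
    errors = []
    for field, expected_type in schema.items():
        if field not in model:
            errors.append(f"missing field: {field}")
        elif not isinstance(model[field], expected_type):
            errors.append(f"field {field!r} must be {expected_type.__name__}")
    return errors

# Assumed schema for an architectural component model:
component_schema = {"name": str, "interfaces": list, "replicas": int}

ok = {"name": "OrderService", "interfaces": ["IOrder"], "replicas": 3}
bad = {"name": "OrderService", "replicas": "three"}

print(validate_syntax(ok, component_schema))   # -> []
print(validate_syntax(bad, component_schema))  # -> two errors
```

No tool of this kind, however, can tell us whether `replicas: 3` is semantically right for the system's actual load: that is the hard half.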
One solution could be to model our software system from different perspectives, for example from a functional (domain-specific) viewpoint as well as from various operational viewpoints (e.g., specifying a performance model). Such an approach requires the availability of different cross-cutting DSLs, one for each perspective, as well as tools that merge all those DSLs into one implementation. Today, we would leverage Aspect-Oriented Software Development for exactly that purpose. Needless to say, we now have to prove the correctness of all DSLs and tools and infrastructures and third-party components and composition techniques, and so on.
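The weaving idea can be illustrated with a Python decorator acting as a poor man's aspect weaver: the functional core stays untouched while an operational concern (here, an assumed latency budget) is woven in from a separate "perspective". The names and the budget are illustrative, not part of any real AOSD framework.

```python
import functools
import time

def performance_budget(max_seconds: float):
    """Aspect sketch: weave a latency constraint onto a function."""
    def aspect(func):
        @functools.wraps(func)
        def woven(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)   # functional core runs unchanged
            elapsed = time.perf_counter() - start
            if elapsed > max_seconds:        # operational viewpoint enforced
                raise RuntimeError(f"{func.__name__} exceeded {max_seconds}s budget")
            return result
        return woven
    return aspect

@performance_budget(1.0)                     # hypothetical operational model
def transfer(amount: float) -> float:        # functional (domain) model
    return amount

print(transfer(100.0))                       # -> 100.0
```

Real aspect weavers compose far richer join points, but the separation is the same: each viewpoint is specified on its own, and a tool merges them.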
How could we ever verify that a DSL consistently and completely describes a particular domain?
For widespread domains such as finance, automation, or healthcare we could introduce standardized languages that are as "reliable" as programming languages. However, it is rather unlikely that we are capable of covering all relevant domains this way. In addition, such DSLs constitute important intellectual property that a company seldom likes to share with competitors. Hence, we would need to specify our own domain languages or language extensions. Proving their correctness, however, is far from simple: language design requires considerable expertise. In any case, domain-driven design will be an important discipline. Of course, Software Product Line Engineering is another breakthrough technology in this context, as it enables us to support a family of similar applications.
Ok, let us take for granted that we were able to master the aforementioned preconditions. What about requirements engineering? Well, many stakeholders might not be very familiar with software engineering. How can they manage to specify their requirements correctly? Contradictions or missing precedence among requirements might be detected by tools such as generators, given that all requirements can be expressed in some formal way.
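A toy sketch of what such tool-based contradiction detection could look like: if requirements are formalized as predicates over boolean feature variables (the requirements and variables below are invented for illustration), a tool can exhaustively check whether any assignment satisfies all of them at once. Real tools would use SAT/SMT solvers rather than brute force.

```python
from itertools import product

# Hypothetical requirements, formalized as predicates over feature flags:
requirements = {
    "R1: offline mode implies local storage":
        lambda f: (not f["offline"]) or f["local_storage"],
    "R2: local storage is forbidden (data-protection policy)":
        lambda f: not f["local_storage"],
    "R3: offline mode is required":
        lambda f: f["offline"],
}

variables = ["offline", "local_storage"]

def consistent(reqs: dict) -> bool:
    """Return True if some assignment satisfies every requirement."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in reqs.values()):
            return True
    return False

print(consistent(requirements))  # -> False: R1, R2, and R3 contradict each other
```

R3 forces offline mode, R1 then demands local storage, and R2 forbids it, so the tool reports the conflict before any code is generated. The hard part, of course, is getting the requirements into such a formal shape in the first place.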
But what about expressing stakeholders' intent? This could be formalized if stakeholders are able to understand some kind of mathematical model. If that is not the case, the only solution left is specifying requirements on behalf of stakeholders, in close cooperation between architects and stakeholders. Unfortunately, such an approach only works if stakeholders know what they want and if it is possible to build what they want. Let me give you an example: how would you formally describe a word processor and all the features you expect? How about usability? How about cool UIs in this context? The problem (and the fun) with human communication is its lack of formalism. Thus, you always have to know a person and that person's context to understand their communication. Now, take issues such as hidden agendas into account.

Eventually, another problem arises: requirements are subject to constant evolution. We don't live on an island where changes seldom happen; rather, we live in metropolises where infrastructures and environments permanently evolve. Let us be realistic: only for very small and stable domains that significantly constrain possible applications can we introduce formal methods that ensure correctness. For all other domains, some kind of guessing will often be necessary, i.e., assumptions that hopefully represent stakeholder intent.
There is another issue left. Of course, the process which we apply to achieve results must support our goals.
There must be a prescribed approach covering the whole application lifecycle, and it must take into account how to enforce fault-free software. For instance, each refactoring must preserve correctness. Quality-assurance measures are required on all levels, from domain-language design and architecture up to third-party components. Testing for correctness is essential; for this purpose, tests must cover all aspects, must be correct themselves, and, of course, need to be automated. Note that tests remain important even if we could verify the correctness of our tool chain, such as the generator. As soon as the environment changes, we need to test!
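The refactoring point can be pinned down with an automated test that treats the old implementation as the oracle for the new one. Both functions below are made up for illustration; the pattern (compare refactored code against the previous behavior over a set of cases) is the general one.

```python
# Hypothetical sketch: an automated check that a refactoring preserved behavior.

def interest_v1(balance: float, rate: float) -> float:
    """Original implementation."""
    return balance + balance * rate

def interest_v2(balance: float, rate: float) -> float:
    """Refactored implementation; must behave identically."""
    return balance * (1 + rate)

def test_refactoring_preserves_behavior():
    cases = [(100.0, 0.05), (0.0, 0.1), (250.0, 0.0)]
    for balance, rate in cases:
        # Allow for floating-point rounding, not for behavioral change:
        assert abs(interest_v1(balance, rate) - interest_v2(balance, rate)) < 1e-9

test_refactoring_preserves_behavior()
```

Such tests must be rerun whenever the environment changes, exactly because a verified generator says nothing about a library upgrade or a new operating-system version underneath it.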
What conclusions can we draw from these discussions?
For safety-critical environments, a formalization of software development in which each formal step is verified can be (and already is) an appropriate way.
Even if correctness were possible, it would require large efforts in terms of time and money. For some application development projects these efforts might not be feasible.
We might never achieve fault-free implementations in software engineering. I am sure this will be the case, as the same observation also holds for other engineering disciplines such as building construction or car manufacturing. But we will be able to introduce more structure into software development, e.g., by adding formalism and verification mechanisms. I mentioned examples such as DSLs.
We will never be able to express stakeholder intent consistently and completely. In addition, stakeholder intent will change over time. Thus, we will always be addressing a (slowly or quickly) moving target.
The world is not perfect and never will be. Let us keep this in mind when designing software systems.