Tuesday, September 05, 2023

The Dark Side of Crowdfunding

This post is not going to cover any software architecture topic. Instead I want to share some impressions and experiences with crowdfunding platforms such as Indiegogo or Kickstarter.


Let me start with a success story: Bambu Lab was completely unknown when the young 3D printer company started its X1/X1C campaign on Kickstarter. It eventually gathered almost 55 million HK$ from 5,575 backers. In the following months Bambu Lab completed the X1/X1C product line and shipped all perks to the backers. This new CoreXY 3D printer turned out to be a revolutionary, award-winning and extremely successful product, soon followed by other products like the P1P and the P1S. Needless to say, Bambu Lab has been a huge success story with a happy ending for the crowdfunding platform, the campaign owner, and the backers.


The core benefit of crowdfunding can be summarized like this: crowdfunding platforms connect innovative campaigners with enthusiastic backers. They enable both start-ups and well-established companies to get funding for innovative products.


Unfortunately, not all campaigns work out that well. In some cases campaigners fail to deliver a product, deliver only a below-average product, run out of money, or turn out to be scams. Millions of US$ are lost this way every year. As with any joint venture, it is never foreseeable whether a project will succeed. Reasons for failure include infeasibility of the innovation, budget overruns, long project delays caused by unfortunate circumstances such as Covid-19, underestimated costs, or sharp price increases for necessary components.


While project failure can never be ruled out, scams can be. A Chinese campaign owner collected over one million US$ in his Indiegogo campaign featuring the world's smallest mini PC, but never created any of the promised perks. After a while, all communication between backers and the campaign owner stopped. It seemed as if the owner had simply vanished. When backers asked Indiegogo for help, the crowdfunding company did not feel responsible. It merely disabled further contributions and put a "this campaign is currently under investigation" label on the project web site, but never provided any results of this so-called investigation, nor a refund to the betrayed backers.


Lesson 1: crowdfunding companies do not care (too much) about backers. They earn money by providing a platform for different parties and treat backers as venture capitalists who are supposed to bear all the risks themselves.


Indiegogo and Kickstarter basically act like betting offices for horse races, with almost no transparency about the horse owners (aka campaign owners). Every participant in such a scenario bears high risks, the betting office being the only exception. Obviously, the rules between customers and the crowdfunding platform are defined in such a way that the bank (aka the betting office) always wins.


Lesson 2: if you contribute to a crowdfunding campaign, make sure you can live with project failure and with the complete loss of your contribution.


Every backer should be aware of this reality. They may lose their whole contribution or end up with an overpriced or even useless perk. Sure, the majority of campaigns do eventually succeed, but a significant number of campaigns fail. I do not mind project failure that happens despite the huge efforts of campaign owners; this is a known and acceptable risk backers should keep in mind when contributing. What I do mind are scam campaigns whose owners simply take the collected contributions and vanish.


Lesson 3: If you urgently need a specific type of product, don't contribute to a crowdfunding campaign; buy it from well-established sources instead.


Lesson 4: Currently, no safety nets for backers exist, nor is there any transparency or accountability with respect to campaign owners. Backing a campaign resembles a game, or a bet on the future, placed with insufficient information about the people behind it.


Lesson 5: Do not believe the videos and documents provided by campaigners. Consider this information pure marketing and advertising material. Never trust any promises, in particular not those that seem unrealistic or very, very challenging to fulfil. Phrases like "the world's first", "the world's fastest" or "the world's smallest" should make backers sceptical.


What could be done to avoid such situations? Or are crowdfunding platforms inherently unable to protect backers?


In fact, there should be a trust relationship between all players in the game - yes, it is a game! To achieve the right level of trust, a crowdfunding company should offer the following services:

  • Personal identification of all campaign owners via official and legal documents such as passports, driver's licenses, or proof of residence. This enables companies like Indiegogo or Kickstarter to stay in touch with campaign owners and to track them down if necessary. Sure, passports and the like can be faked as well, but that requires a substantial amount of criminal effort.
  • Transparency: if we analyze existing campaigns, lack of transparency is one of the biggest issues. By "lack of transparency" I mean that backers often know almost nothing about campaign owners. This is related to the previous point. While backers have to prove with their credit card payments that they are trustworthy (which is checked by the credit card companies), they only get a tiny amount of information about campaign owners in return. Wait a minute - I am paying my contribution to people who are mostly anonymous (i.e., hiding behind a campaign web site)? Unfortunately, the answer is yes. It does not suffice that only the crowdfunding company holds detailed information about the campaign owners.
  • Due diligence: a crowdfunding company should technically check whether a campaign or project is feasible. For this purpose, it may hire experts in the respective domain to validate the claims campaign owners make. In addition, it should check the background of campaign owners, be they companies or individuals. If a successful company such as Anker acts as the campaign owner, there is a much higher chance that contributors will receive the offered perks and rewards. If, on the other hand, the campaign originator is unknown, the risk is significantly higher. Accountability should come to mind when thinking about campaigns and their originators.
  • Checks and balances: step-wise transfer of contributions instead of full payment at once. This may be a bit difficult to achieve, because campaign owners certainly need some upfront investment. Nonetheless, I'd expect more of a bank (crowdfunding platform) / borrower (campaign owner) relationship in this context. At each step (such as prototyping, testing, final product design, manufacturing, delivery) the crowdfunding company should demand proof from the campaign owners of what they have achieved so far with the crowdfunding money. For example, prototyping only requires a smaller amount of money. After coming up with a successful prototype, they may move forward to completing the product. Once the product is ready, they move on to manufacturing. In each step they obtain a predefined percentage of the funding. In addition, campaign owners should provide a concrete timeline for all of their activities. If a step is delayed, no further money can be obtained until the step is completed. A kind of traffic light on the project web site could represent the current risk level of a campaign.
  • Shipment: for each project, campaign owners need to prove that they actually shipped the perks and rewards to their backers by presenting the respective documents from the delivery service. In my experience, some campaign owners marked perks as shipped without ever actually sending any items.
  • Insurance: crowdfunding companies should pay a part of each contribution to an insurance company that covers the risks and pays back a high percentage of the contribution to backers. This is similar to how PayPal works. It would require campaign originators to disclose personal information, which can then be rated in terms of credibility, credit history, financial background, and trustworthiness. This puts more burden on campaign owners and the crowdfunding company and makes contributions more expensive, but it provides a safety net for backers, who are the ones paying campaign owners and crowdfunding platforms anyway. I assume many backers would be willing to pay a slightly higher contribution if they gain more security in return. Of course, crowdfunding platforms could act as insurers themselves if they were willing to do so.
  • No selling through other channels: in some campaigns the perk developers started selling their products via their own web sites before some backers had even received their perks. The contract between campaigners and crowdfunding platforms should definitely rule out this possibility. Whenever backers fund product development via a crowdfunding campaign, they must be the first to receive their perks and rewards. In addition, some of the products sold were significantly cheaper than the claimed MSRP. This looks like betrayal, smells like betrayal, and is betrayal. In such cases I'd expect campaign owners to have to pay penalties to backers.

Some may argue that all of these measures restrict the freedom of campaign owners. They are right in this respect. However, there currently is an imbalance between contributors, campaign originators, and crowdfunding platforms that puts most of the risk on the backers. Thus, it seems more than fair to share these risks among all stakeholders. I honestly believe that crowdfunding will evolve into a dead end if companies like Indiegogo continue to put all burdens on backers, do not care much about scams, refuse to create safety nets, and keep up the current lack of transparency. If they implement all, or at least some, of the aforementioned measures, this will clearly turn out to be much more of a win/win/win scenario.

Sunday, August 20, 2023

AI and Software Architecture - Two Sides of the Same Coin

Introduction

Media worldwide are currently jumping on the AI bandwagon. In particular, Large Language Models (LLMs) such as ChatGPT sound appealing and intimidating at the same time. When we dive deeper into the technology behind AI, it doesn't feel that strange at all. Contrary to some assumptions in the tabloid press, we are far away from a strong AI that resembles human intelligence. This means blockbusters such as Terminator or Blade Runner are not becoming reality in the near future.

Current AI applications, while very impressive, are instances of weak AI. Take object detection as an example, where a neural network learns to figure out what is depicted in an image. Is it a cat, a dog, a rabbit, a human, or something else? Essentially, neural networks process training data to learn a nonlinear mathematical function that works incredibly well for making good guesses (aka hypotheses) with high precision about new data.
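To make that "nonlinear function" idea tangible, here is a minimal sketch (not tied to any concrete product) that trains a tiny neural network on synthetic, non-linearly separable data using scikit-learn; all parameter values are illustrative:

```python
# A small neural network learns a nonlinear decision function from training data
# and is then evaluated on data it has never seen before.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)  # synthetic "training data"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=42)
clf.fit(X_train, y_train)                      # learn the nonlinear mapping

print("accuracy on unseen data:", clf.score(X_test, y_test))  # hypotheses about new data
```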

At the same time, this capability proves to be very handy when dealing with big or unstructured data such as images, videos, audio streams, time series data, or Kafka streams. For example, autonomous driving systems strongly depend on this kind of functionality, because they continuously need to analyze, understand, and handle highly dynamic traffic contexts, e.g., potential obstacles.

In this article I am not going to explain the different kinds of AI algorithms, such as the various types of artificial neural networks and ML (Machine Learning) techniques; that may be the topic of a subsequent article. My goal is to draw the landscape of AI with respect to software architecture and design.


There are obviously two ways of applying AI technologies to software architecture:

  • One way is to let AI algorithms support software architects and designers in their tasks such as requirements engineering, architecture design, implementation, or testing - I'll call this the AI solution domain perspective.
  • The other way is to use AI to solve specific problems in the problem domain itself, which is why I'll call it the AI application domain perspective.


AI for the Solution Domain

LLMs are probably the most promising approach when we consider the solution domain. Tools such as GitHub Copilot, Meta Llama 2, and Amazon CodeWhisperer help developers generate functionality in their preferred programming language. It seems like magic but comes with a few downsides. For example, you can never be sure whether an LLM learned its code suggestions from copyrighted sources. Nor do you have any guarantee that the code does the right thing in the right way. Any software engineer who leverages an application like Copilot needs to review the generated code again and again to ensure it is exactly what she or he expects. It takes software engineering experts to continuously analyze and check LLM answers. At least currently, it appears rather unlikely that laypersons could take over the jobs of professional engineers with the help of LLMs.


Companies have already begun to create their own LLMs to cover problem domains such as industrial automation. Imagine you need to develop programs for a PLC (Programmable Logic Controller). In such environments the main languages are not C++, Python, or Java. Instead, you'll have to deal with domain-specific languages such as ST (Structured Text, Siemens SCL) or LD (Ladder Diagram). Since there is much less source code freely available for PLCs, feeding an LLM with appropriate code examples turns out to be challenging. Nonetheless, it is a feasible objective.


AI for the Application Domain

In many cases Artificial Neural Networks (ANNs) are the basic ingredient for solving problem-domain challenges. Take logistics as an example, where cameras and ANNs help identify which product is in front of a camera. Other AI algorithms such as SVMs (Support Vector Machines) enable test equipment to figure out whether a turbine is behaving according to its specification, which is commonly called anomaly detection. At Siemens we have used Bayes trees to forecast the possible outcome of system testing. Reinforcement learning is useful for moving and acting successfully in an environment, for example robots learning how to complete a task. Another approach is unsupervised learning, such as k-means clustering, which groups objects and maps them to different categories.
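To illustrate the anomaly detection example, here is a minimal sketch using a one-class SVM from scikit-learn; the "turbine" data is simulated and all parameters are illustrative, not taken from a real testing setup:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_vibration = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # behavior within specification

detector = OneClassSVM(kernel="rbf", gamma="auto", nu=0.05)
detector.fit(normal_vibration)                 # train only on "healthy" samples

new_samples = np.array([[0.1, -0.2],           # looks normal
                        [6.0,  7.5]])          # far outside the learned region
print(detector.predict(new_samples))           # +1 = within spec, -1 = anomaly
```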


Even more examples exist:

Think about security measures in a system that comprise keyword and face recognition. Autonomous driving uses object detection and segmentation in addition to other means. Smart sensors include ANNs for smell and gas detection. AI for preventive maintenance helps analyze whether a machine might fail in the near future based on historical data. With the help of recommender systems, online shops can provide recommendations to customers based on their order history and product catalog searches. As always, this is only the tip of the iceberg.


Software Architecture and AI

An important topic seldom addressed in the AI literature is how to integrate AI into a software-intensive system.


MLOps tools support different roles such as developers, architects, operators, and data analysts. Data analysts start with a data collection activity. They may augment the data, apply feature extraction as well as regularization and normalization measures, and select the right AI model which is supposed to learn how to achieve a specific goal from the collected data. In the subsequent step they test the AI/ML model with sufficient test data, i.e., data the model has not seen before. Eventually, they version the model and data and generate an implementation. Needless to say, data analysts typically iterate through these steps several times. When MLOps tools such as Edge Impulse follow a no-code/low-code approach, separation of concerns between the different roles can be achieved easily. While data analysts are responsible for the design of the AI model, software engineers can focus on integrating the AI model into the system design, as the MLOps environment generates the implementation of the model.
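Condensed into code, the workflow described above might look like the following sketch (standard scikit-learn and joblib calls; the data set, model choice, and file name are purely illustrative):

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)                              # data collection
X_train, X_test, y_train, y_test = train_test_split(           # hold back unseen test data
    X, y, test_size=0.2, random_state=7)

model = Pipeline([
    ("normalize", StandardScaler()),                           # normalization step
    ("clf", LogisticRegression(max_iter=1000)),                # selected model
])
model.fit(X_train, y_train)                                    # training
print("test accuracy:", model.score(X_test, y_test))           # test with unseen data

joblib.dump(model, "model-v1.joblib")                          # version the trained artifact
```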


Software engineers take this implementation and integrate it into the surrounding application context. For example, the model must be fed with new data by the application, which in turn reads and processes the results once inference is completed. For this purpose an event-driven design often turns out to be appropriate, especially when the inference runs on a remote embedded system. If the inference results are critical, resilience might be increased by replicating the same inference engine multiple times in the system. Docker containers and Kubernetes are well-suited here, in particular when customers desire a scalable and platform-independent architecture with a high separation of concerns, as in a microservice architecture. Security measures protect the privacy, confidentiality, and integrity of input data, inference results, and the model itself. In most cases, inference can be treated from a software engineering viewpoint as a black box that expects some input and produces some output.
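A minimal sketch of this black-box, event-driven integration could look like the following; the "inference engine" is just a placeholder callable, and all names are invented for illustration:

```python
import queue
import threading

requests = queue.Queue()   # input events for the inference engine
results = queue.Queue()    # output events for the surrounding application

def fake_inference(features):
    # stand-in for the real model (e.g., a deployed inference engine or remote service)
    return "anomaly" if max(features) > 5.0 else "ok"

def inference_worker():
    while True:
        features = requests.get()                 # event: new input data arrives
        results.put(fake_inference(features))     # event: result becomes available
        requests.task_done()

threading.Thread(target=inference_worker, daemon=True).start()

# The application only sees queues (input in, result out), not the model internals.
requests.put([0.3, 1.2, 0.8])
requests.put([7.1, 0.2, 0.5])
requests.join()
while not results.empty():
    print(results.get())
```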

When dealing with distributed or IoT systems, it may be beneficial to execute inference close to the sources of input data, thus eliminating the need to send around big chunks of data, e.g., sensor data. Even embedded systems like edge or IoT nodes are capable of running inference engines efficiently. In this context, often only the inference results are sent to backend servers.


Operators finally deploy the application components onto the physical hardware. Note: a DevOps culture turns out to be even more valuable in an AI context, because more roles are involved.


Input sources may be distributed across the network, but may also comprise local sensor data of an embedded system. In the former case, either Kafka streams or MQTT messages can be appropriate choices for aggregating the necessary input data on behalf of an inference engine. Take the processing of weather data as an example, where a central system collects data from various weather stations to forecast the weather for a whole region. In this context we might encounter pipelines of AI inference engines, where the results of different inference engines are fed to a central inference engine. Hence, such scenarios comprise hierarchies of possibly distributed inference engines.
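As a rough sketch of such a setup, an edge node might run inference locally and publish only the compact result via MQTT (assuming the paho-mqtt package and a reachable broker; the topic hierarchy and host name are invented):

```python
import json
from paho.mqtt import publish

def run_local_inference(sensor_window):
    # placeholder for an on-device model; only its small result leaves the node
    return {"station": "weather-07", "rain_probability": 0.83}

result = run_local_inference(sensor_window=[21.5, 20.9, 20.4])
publish.single(
    topic="region/north/inference/weather",      # illustrative topic hierarchy
    payload=json.dumps(result),
    hostname="broker.example.local",             # illustrative broker address
    qos=1,
)
```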


Architecting AI models

Neural networks and other types of AI algorithms expose an architecture themselves, be it a MobileNet model leveraged for transfer learning, an SVM (Support Vector Machine) with a Gaussian kernel, or a Bayes decision tree. The choice of an adequate model has a significant impact on the results of AI processing. It requires the selection of an appropriate model type and of hyperparameters such as the learning rate or the configuration of layers in an ANN (Artificial Neural Network). For data analysts, or for software engineers who wear a data analytics hat, a mere black-box view is not sufficient. Instead, they need a white-box view to design and configure the appropriate AI model. This task depends on the experience of data analysts, but may also involve a trial-and-error approach for configuring and fine-tuning the model. The whole design process for AI models closely resembles software architecture design: it consists of engineering the requirements (goals) of the AI constituents, selecting the right model and training data, testing the implemented AI algorithm, and deploying it. Consequently, we may consider these tasks as the design of a software subsystem or component. If one of the aforementioned MLOps tools is available and used, it can significantly boost design efficiency.
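As an illustration of this white-box design step, the following sketch configures a transfer-learning model on top of a MobileNetV2 base with Keras; the head layers, class count, and learning rate are arbitrary example choices:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                   # reuse learned features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),      # e.g., cat / dog / rabbit
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # a tunable hyperparameter
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=5)      # training data omitted here
```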


Conclusions

While the math behind AI models may appear challenging, the concepts and their usage are pretty straightforward. The design and configuration of models is an important responsibility that experts in data analytics and AI should take care of. MLOps helps separate the different roles and responsibilities, which is why I consider its use an important booster of development efficiency.

Architecting an appropriate model is far from simple and resembles the process of software design. Training an AI model for ML (Machine Learning) may take weeks or months. As it is a time-consuming process, the availability of powerful servers is indispensable. Specialized hardware such as Nvidia GPUs or dedicated NPUs/TPUs helps reduce the training time significantly. In contrast to the training effort required, optimized inference engines (e.g., TensorFlow Lite or TensorFlow Lite Micro) often run well and efficiently on resource-constrained embedded systems, which is the concept behind AIoT (AI plus IoT).
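For completeness, here is a minimal sketch of running such an optimized model with the TensorFlow Lite interpreter in Python; the model file name is invented, and on microcontrollers the equivalent TensorFlow Lite Micro C++ API would be used instead:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")   # illustrative file name
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input tensor of the expected shape and type, then run inference.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```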






Saturday, April 29, 2023

 Systematic Re-use

Re-use is based on one of the fundamental principles not only for lazy software engineers: DRY (Don't Repeat Yourself). Instead of reinventing the wheel again and again, developers and architects may re-use existing artifacts.

Re-usable assets come in different flavors:

  • Code snippets are small building blocks developers may integrate into their code base.
  • Patterns are smart and proven design blueprints that solve recurring problems in specific contexts.
  • Libraries comprise encapsulated functionality that developers may bind to their own code.
  • Frameworks also comprise encapsulated functionality. In contrast to libraries, developers integrate their own code into the framework according to the Hollywood principle (don't call us, we'll call you) - see the sketch after this list.
  • Components/Services provide binary functionality (i.e., they are executables) that developers may call from their own application.
  • Containers represent runtime environments that provide functionality and environments to applications in an isolated way.
Apparently, these are different levels of re-usable assets with varying granularities, complexities, and prerequisites.
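The difference between libraries and frameworks, i.e., the Hollywood principle, can be sketched in a few lines of Python; all class and function names are invented for illustration:

```python
# Library: my code calls the reusable code.
import statistics
average = statistics.mean([3, 5, 7])

# Framework: the reusable code calls my code at predefined extension points.
class ReportFramework:
    """Owns the control flow; applications only fill in the hooks."""
    def run(self):
        data = self.load_data()           # the framework calls *my* hook ...
        print(self.format_line(data))     # ... and this one

    def load_data(self):
        raise NotImplementedError

    def format_line(self, data):
        raise NotImplementedError

class SalesReport(ReportFramework):
    def load_data(self):
        return [120, 80, 95]

    def format_line(self, data):
        return f"total sales: {sum(data)}"

SalesReport().run()                       # "don't call us, we'll call you"
```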

Software engineers may not only use re-usable software assets, but other types as well. For instance:
  • Tests, Test units, Test plans
  • Documents
  • Production plans
  • Configurations
  • Business plans
  • Software architectures
  • Tools
While some assets such as code snippets may be used daily in the code-base, patterns or software architecture templates need to be instantiated in an easy way. 
The more impact re-usable assets have on applications and the more abstract they are, the more systematic the re-use approach must be. The most challenging projects are product lines and ecosystems, which require different assets at different re-use levels. For example, they introduce the need for a configurable core asset base that is re-usable across different applications. Furthermore, they support a whole class of applications that share the same architecture framework and other assets. A core asset in a product line or ecosystem affects not just one application but a whole system family. Thus, its business impact is very high.
In such scenarios core assets are often interdependent and must be configured for the specific application under development. As a prerequisite for developing a core asset base, a commonality/variability analysis is necessary; it determines what the applications sharing the same core assets have in common and how they differ. A core asset needs a common base relevant for all applications that use it, as well as configurable variation points to adapt it to the needs of a particular application (see the sketch below).
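A minimal sketch of a core asset with one explicit variation point might look as follows; the domain (a reporting core) and all names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

# Variation point: how an application exports a report (differs per product).
Exporter = Callable[[str], str]

def export_as_plain_text(content: str) -> str:
    return content

def export_as_html(content: str) -> str:
    return f"<html><body>{content}</body></html>"

@dataclass
class ReportingCore:
    """Common base shared by all applications of the product line."""
    exporter: Exporter                               # configured variation point

    def create_report(self, data: list[int]) -> str:
        body = f"sum={sum(data)}, max={max(data)}"   # commonality
        return self.exporter(body)                   # variability

# Two applications re-use the same core asset with different configurations.
print(ReportingCore(export_as_plain_text).create_report([1, 2, 3]))
print(ReportingCore(export_as_html).create_report([1, 2, 3]))
```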
A bad or insufficient commonality/variability analysis incurs higher costs and may even lead to project failure.
Core asset development and application development may be carried out by separate teams or by the same teams. Each approach has its benefits and liabilities.
Due to the high business and technical risks of these advanced approaches, all stakeholders need to be involved in the whole development process. Building a product line or ecosystem without management support is not feasible. Managers need to restructure their organisation and spend budget on core asset development and evolution.
Most product lines and ecosystems fail because of:
  • lack of management support,
  • insufficient consideration of customer needs,
  • inappropriate organisation,
  • inadequate Commonality/Variability analysis,
  • insufficient or low-quality core assets,
  • underestimation of testing or inadequate quality assurance,
  • bad software architecture,
  • neglect of competence ramp-up activities,
  • no re-use incentives,
  • missing acceptance by stakeholders.
Consequently, product lines and ecosystems need a systematic approach to re-use and must involve different types of stakeholders. They need a manager who is able to guide the approach and has the authority to decide, for example, on budget, organisational restructuring, competence ramp-up activities, or business strategy.

Re-use comes in different flavors, and the higher its impact, the more systematic the re-use process needs to be.

[to be continued]




Sunday, March 26, 2023

 

Models and Modelling - A Philosophical Deep Dive

Motivation

We use models for designing and documenting systems not only in software architecture. Models are also indispensable in other engineering disciplines and in the natural sciences. We have all experienced good and bad models in our daily lives. What is a model really about? And what does a good model look like? Let us enter a (philosophical) discussion of this topic.


What is a model?

A model captures the essence of a domain. It focuses on the core entities and the relationships within a domain from a specific viewpoint, i.e., serving a specific purpose. A model contains rules that must hold for its constituents. Models are used by humans or machines to communicate about the respective domain for a particular purpose.


Examples of models include:

  • a UML diagram
  • a street map
  • a floor plan
  • an electronic circuit diagram
  • a problem domain model (DDD)
  • quantum theory
  • mathematical formulas

Consequences: 


(i) The same domain can be represented using different models, each capturing a different viewpoint of that domain. These viewpoints are often briefly called views.


(ii) Models can be informal or formal, depending on how they are used as a means of communication. In either case they must be easily understandable and comprehensible by stakeholders.


(iii) Models introduce abstraction layers by using generalization and specialization, leaving out "unnecessary", i.e., irrelevant, details.


(iv)  A model does not describe reality but a subset of reality viewed from a specific angle.


(v) Languages are based upon models. A model can be viewed as a language, and vice versa. 


(vi) A model may support a graphical presentation, a textual presentation, or even both.


The complexity of a model is directly proportional 

  • to the number and types of its entities and their relationships,
  • to the kinds and numbers of abstractions being used,
  • to the complexity of its underlying rules.


A good model:

  • provides a proper separation of concerns (SoC)
  • consistently applies principles such as the Single Responsibility Principle (SRP), Don't Repeat Yourself (DRY), KISS, or the Liskov Substitution Principle (LSP) in order to achieve the highest understandability and comprehensibility
  • uses expressive names for all its abstractions, entities, dependencies
  • provides an effective and efficient means of communicating among stakeholders
  • focuses on essence and leaves out everything that does not serve the required purpose of the addressed viewpoint
  • strictly and consistently avoids accidental complexity
  • allows modeling simple things in a simple way, while being capable of expressing complex things in a manageable way


Stakeholders

The creation of a model should be guided by its (types of) stakeholders, in particular by the way they intend to use the model. In this context, a meta model helps define what the set of creatable models should look like. Thus, meta models constitute modeling languages. They help create different models or views.


To define an adequate model that serves an intended purpose, all (human) stakeholders should be involved. UML is an example of a modelling language that serves the needs of software engineers but (often) not those of many domain experts. In fact, domain experts might already have their own models readily available. While a model might be perfect for machine-to-machine communication, it isn't necessarily adequate whenever humans are involved. The more formal a model is, the easier it can be processed by computers. Humans often need more informal and expressive models instead. If both kinds of stakeholders are involved, we need to balance formal and informal approaches.

Emojis are an example of an informal model. They can be immediately understood by a human, but may be more difficult to process by a machine.

Artificial neural networks, albeit "simple", can be processed by machines very well, but are hard for a human to understand - i.e., with respect to what they actually do and how they work.

UML is somewhere in the middle of these extremes. 


Fortunately, in many mature domains models already exist. An electrical circuit diagram is a proven kind of model. Mathematics is often considered a ubiquitous language with predefined notations. In the context of software engineering, domain models are often implicitly defined and have become common knowledge within an organization. If software engineers with little or no domain expertise start to develop software applications for the respective domain, they need to make the implicit model explicit. Otherwise they cannot design a software architecture that meets the customer's requirements. This is what DDD (Domain-Driven Design) is all about: it tries to come up with a domain-specific model using generic building blocks such as DDD patterns and techniques.
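As a tiny illustration of such DDD building blocks, consider an entity (with identity) and a value object (without); the domain and all names are invented:

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4

@dataclass(frozen=True)
class Money:                       # value object: defined solely by its values
    amount: int
    currency: str

@dataclass
class Order:                       # entity: carries an identity that stays stable
    customer: str
    total: Money
    order_id: UUID = field(default_factory=uuid4)

order = Order(customer="ACME", total=Money(4200, "EUR"))
print(order.order_id, order.total)
```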


The representation of a model should fit the needs of its stakeholders. For humans, graphical notations often work very well, because they explicitly reveal their structure and are easy to grasp and handle. For productivity reasons, textual models may be more beneficial and flexible in some cases. As an example, consider software code. For a beginner, graphical code blocks might work very well, while advanced programmers prefer coding textually, because they can mentally map seamlessly between the "graphical" design and the textual code representation. Handling code graphically might just reduce their productivity, effectiveness, and flexibility due to all the clutter and constraints.


Model Transformations

To keep many stakeholders satisfied, a possible approach is to introduce different models for different types of stakeholders and to create mappings between these models, for example an easy-to-understand UML model that is transformed into a machine-readable XML schema.

Actually, software engineers are used to handling different models that are mapped onto each other. In software engineering, a compiler represents a model transformation from a high-level language to a system language or an interpreter. A UML diagram might be transformed into high-level language code. A low-code/no-code environment creates domain-specific applications from high-level user specifications. However, model-to-model transformations can be quite complex, in particular when the gap between the models is very large and no off-the-shelf solutions for the transformations are available. Moreover, the more models there are, the more transformations are necessary. Note: a model transformation might also be done manually if the model is not too complex and the mapping rules are straightforward, as the sketch below illustrates.
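The following sketch shows such a simple model-to-model transformation: a tiny in-memory class model is mapped to an XML representation. The source model and the mapping rules are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Source model: classes and their attributes (think of a tiny UML class view).
class_model = {
    "Customer": ["name", "address"],
    "Order": ["number", "total"],
}

# Mapping rule: every class becomes an element, every attribute a child element.
root = ET.Element("model")
for class_name, attributes in class_model.items():
    cls = ET.SubElement(root, "class", name=class_name)
    for attribute in attributes:
        ET.SubElement(cls, "attribute", name=attribute)

print(ET.tostring(root, encoding="unicode"))
```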


Model sets

In domains such as building construction or software engineering, multiple views are necessary to represent information from different angles. Take the design view, deployment view, or runtime view as examples from the software engineering domain. In addition, there might be different model abstraction layers, for example an in-depth design view versus a high-level software architecture view. In other words, to solve a task we need a model set instead of a single model that captures every detail from every perspective.

No matter how the views differ from each other, there needs to be meta information that ties the different views together. Prominent examples are the mapping from a view to code, and the implicit or explicit relations of views to each other. Note: there might be different solutions, i.e., model kits, for the same problem context; e.g., RUP's 4+1 view model, in contrast to TOGAF, might not be the (only) solution of choice for designing an enterprise system.

No matter what model set you choose, make sure that it is used consistently. In most cases tool support is strongly recommended. Models can become very complex; therefore you need a tool to draw, check, and communicate the concrete models. This is the main reason why most software engineering activities rely on some sort of UML environment such as Enterprise Architect or MagicDraw.

A floor plan is different from an electricity plan, but all models together are necessary for constructing the building. In this example there might also be rules and constraints across all models or views. For example, an electrical cable should keep a minimum distance from a water pipe. Consequently, we need some kind of verification algorithm to check whether such rules or constraints are violated.
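Such a cross-view check can be sketched in a few lines; the coordinates, names, and the distance threshold are invented for illustration:

```python
from math import dist

MIN_DISTANCE_M = 0.3

cables = {"kitchen_cable": (2.0, 1.0)}        # simplified positions from the electricity plan
pipes = {"kitchen_water_pipe": (2.1, 1.1)}    # simplified positions from the plumbing plan

violations = [
    (cable, pipe, dist(c_pos, p_pos))
    for cable, c_pos in cables.items()
    for pipe, p_pos in pipes.items()
    if dist(c_pos, p_pos) < MIN_DISTANCE_M
]

for cable, pipe, d in violations:
    print(f"constraint violated: {cable} is only {d:.2f} m away from {pipe}")
```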


Model Creation

Models should never be created in a big-bang approach. They are living entities that change over time as you gain more experience. They may start very simple and become more complex over time. Whenever they are overengineered, they need to be simplified and refactored again. Model creators need to ensure that models can be understood and handled easily by stakeholders. If stakeholders have different viewpoints on the same problem, create a model set in which each model view serves a particular set of stakeholders.


When starting to create a model for a domain context, we should first figure out whether such models already exist and, if so, whether they can serve the desired purpose(s). It is always beneficial to use existing models, in particular because of the experience and knowledge they carry. So don't reinvent the wheel unless absolutely necessary, especially if you are not an expert in the domain.


If no model exists, the stakeholders should jointly create a model (set). It is helpful if at least one of the stakeholders is experienced in creating models and at least one other person is a domain expert.


If models exist that do not serve the intended purpose, we might change and adapt these models to fit our needs.


Note: a common mistake is to focus first on the syntax of a model. Instead, think about its semantics first and find a good syntactical representation afterwards.


No matter how a new model is created, learning its representation should be a quick and straightforward process, even for inexperienced stakeholders.


Interestingly, most graphical models consist of rectangles or other symmetrical shapes, arrows, lines, and text boxes, while textual models often use regular or context-free grammars. The reason for this observation is that this way the models remain comprehensible and easy to handle. It should also be possible to draw a model manually in order to discuss it with other stakeholders before documenting it. Sitting around a modelling tool significantly decreases productivity, at least in my experience. A whiteboard or a flip chart is by far the best tool for modelling. This can be complemented by AI software that recognizes manually drawn models and transforms them into clean and processable data representations.


Summary

In this blog post I did not reveal anything new or innovative that you didn't already know, nor was it my intent to provide anything revolutionary. It is just a summary of modelling and how to approach it. If it got you thinking about modelling from this more philosophical angle, I'd be happy.