Modular Monolith: Architectural Drivers

This post is part of a series of articles about the Modular Monolith architecture:

1. Modular Monolith: A Primer
2. Modular Monolith: Architectural Drivers (this)

Introduction

In the first post about the Modular Monolith architecture I focused on the definition of this architecture and a description of modularity. As a reminder, the Modular Monolith:

  • is a system that has exactly one deployment unit
  • is an explicit name for a Monolith system designed in a modular way
  • modularization means that each module:
    – must be independent and autonomous
    – has everything necessary to provide the desired functionality (separation by business area)
    – is encapsulated and has a well-defined interface/contract

In this post I would like to discuss some of the most popular (in my opinion) Architectural Drivers, which can lead either to a Modular Monolith or to a Microservices architecture.

But what exactly are Architectural Drivers?

Architectural Drivers

In general, you can't say that architecture X is better than another. You can't say that a Monolith is better than Microservices, that Clean Architecture is better than Layered Architecture, that 3 layers are better or worse than 4 layers, and so on.

The same rule applies to other considerations such as ORM vs raw SQL, “Current State” Persistence vs Event Sourcing, Anemic Domain Model vs Rich Domain Model, Object-Oriented Design vs Functional Programming… and many more.

So how can we choose an architecture/approach/paradigm/tool/library if, unfortunately, there is no best one?

Context is king

Each of our decisions is made in a given context. Each project is different (this follows from the very definition of a project), so each context is different. This implies that the same decision made in one context can bring great results, while in another it can cause devastating failure. For this reason, adopting other people's or companies' approaches without critical thinking can cause a lot of pain, wasted money and, finally, the end of the project.

Every project is different and has a different context

However, context is too general a concept, and we need something more specific to put into practice. That's why the concept of Architectural Drivers was defined. Michael Keeling writes about them in his blog article in the following way:

Architectural drivers are formally defined as the set of requirements that have significant influence over your architecture.

Simon Brown, in his book Software Architecture for Developers, describes Architectural Drivers similarly:

Regardless of the process that you follow (traditional and plan-driven vs lightweight and adaptive), there’s a set of common things that really drive, influence and shape the resulting software architecture.

Architectural Drivers fall into several categories. The main ones are:

  • Functional Requirements – what problems the system solves and how
  • Quality Attributes – a set of attributes that determine the quality of the architecture, like maintainability or scalability
  • Technical Constraints – technology standards, tool limitations, team experience
  • Business Constraints – budget, hard deadlines
Architectural Drivers

Most importantly, all Architectural Drivers are connected to each other, and often focusing on one causes the loss of another (trade-offs everywhere, unfortunately). Let's consider an example.

You have a service that calculates some important value (Functional Requirement) in 3 seconds (Quality Attribute – performance). A new requirement appears; the calculation becomes more complex and now takes 5 seconds (performance decreased). To get back to 3 seconds, another technology could be used, but there is no time for it (Business Constraint – hard deadline) and nobody in the company has used it yet (Technical Constraint – team experience). The only option to increase performance is to move the calculation to a stored procedure, which decreases maintainability and readability (Quality Attributes).

Architectural Drivers example

As you can see, the software architecture is a continuous choice between one driver and another. There is no one “right” solution. There is No Silver Bullet.

With this in mind, let's look at some of the popular Architectural Drivers and attributes which come up when Modular Monolith and Microservices architectures are being considered.

Level of Complexity

To begin with, let's consider one of the greatest advantages of a Modular Monolith compared to distributed architectures – Complexity. The definition of complexity on Wikipedia is as follows:

Complexity characterizes the behavior of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions. The term is generally used to characterize something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts.

As you can see above, Complexity is about components and their interactions. In the Modular Monolith architecture, interactions between modules are simple, because every module is located in the same process. This means that a module that wants to interact with another module:

  • knows the exact address to which it will direct the request and can be sure that this address will not change
  • makes the request as a simple method call – no network is needed
  • can rely on the target module always being available
  • does not have to worry about securing the communication
Complexity – Modular Monolith

On the other hand, consider a distributed system architecture. In this architecture the modules/services are located on different servers and communicate over the network. This means that when a service wants to communicate with another, it must deal with the following concerns:

  • It somehow needs to get the address of the target module, because that address may change
  • Communication takes place over the network, which necessitates the use of special protocols like HTTP and serialization
  • The network may be unavailable (CAP theorem)
  • Secure communication between modules must be ensured

Of course, you can find solutions for these issues. For example, to solve the addressing issue you can add a Service Registry and implement the Service Discovery pattern. However, this means adding more components and algorithms to the system, so complexity increases rapidly.
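To make the difference concrete, here is a minimal, illustrative sketch (all interface, class and member names are hypothetical, not taken from any concrete system). In a Modular Monolith a cross-module interaction is a plain method call; in a distributed system the caller must resolve an address, serialize the request and handle network failures:

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public enum PaymentStatus { Pending, Paid, Failed }

// In-process interaction in a Modular Monolith: no network, no serialization,
// the target module is always available.
public interface IPaymentsModule
{
    PaymentStatus GetPaymentStatus(Guid orderId);
}

public class OrdersService
{
    private readonly IPaymentsModule _payments;

    public OrdersService(IPaymentsModule payments) => _payments = payments;

    public PaymentStatus CheckPayment(Guid orderId)
        => _payments.GetPaymentStatus(orderId); // just a method call
}

// The same interaction in a distributed system: address resolution, HTTP,
// (de)serialization and failure handling are now our problem.
public interface IServiceRegistry
{
    Task<Uri> ResolveAsync(string serviceName); // Service Discovery
}

public class DistributedOrdersService
{
    private readonly IServiceRegistry _registry;
    private readonly HttpClient _httpClient;

    public DistributedOrdersService(IServiceRegistry registry, HttpClient httpClient)
    {
        _registry = registry;
        _httpClient = httpClient;
    }

    public async Task<PaymentStatus> CheckPaymentAsync(Guid orderId)
    {
        var address = await _registry.ResolveAsync("payments"); // address may change
        try
        {
            var response = await _httpClient.GetAsync(new Uri(address, $"payments/{orderId}/status"));
            response.EnsureSuccessStatusCode();
            var json = await response.Content.ReadAsStringAsync();
            return JsonSerializer.Deserialize<PaymentStatus>(json); // deserialization
        }
        catch (HttpRequestException)
        {
            // the network may be unavailable - retries, timeouts and
            // circuit breakers would have to be handled here
            throw;
        }
    }
}
```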

To be aware of the scale of the problems generated by the Microservices architecture, I recommend familiarizing yourself with the patterns that are used to solve them. The list is long, and most of them are not needed at all in the Monolith architecture.

Complexity – Distributed System

In summary, the architecture of the Modular Monolith is definitely less complex than that of distributed systems. High complexity reduces maintainability, readability and observability. It requires an experienced team, advanced infrastructure, a specific organizational culture and so on. If simplicity is your key architectural driver, then consider Monolith First.

Productivity

The team's productivity in delivering changes can be measured in two dimensions: in the context of the entire system and in the context of a single module.

In the context of the whole system, the matter is clear: the Modular Monolith architecture is less complex => the less complex the system, the easier it is to understand => productivity is higher.

From the point of view of the ease of running the entire system, the Modular Monolith ensures maximum productivity – just download the code and run it on the local machine. In a distributed architecture the matter is not so simple, despite the technologies and tools (like Docker and Kubernetes) that facilitate this process.

Running entire system – Monolith vs Distributed

On the other hand, we have productivity related to the development of a single module. In this case the Microservices architecture will be better, because we do not have to run the entire system to test one specific module.

Which architecture, then, supports the team's productivity? In my opinion, for most systems it is the Modular Monolith, but for really large projects (tens or hundreds of modules) it is Microservices. If your architectural driver is development speed and the system is not huge, a Modular Monolith will be the better choice, and in case of system expansion a transition to Microservices could be the right move.

Deployability

The deployability of a software system is the ease with which it can be taken from development to production. However, we must consider 2 situations here: the deployment of the entire system and the deployment of a single module.

In the context of the entire system, is it easier to deploy one application or several applications? Of course, one application is easier to deploy, so it seems that the Modular Monolith is the better option.

Deployment – Modular Monolith

On the other hand, in a Modular Monolith we always have to deploy the whole system. We cannot deploy one particular module, and this is one of the most important disadvantages. In this architecture we do not have deployment autonomy, so the deployment process must be coordinated and may be more difficult.

Deployability – Distributed System

In summary, if you do not mind deploying the whole system and you do not care about deployment autonomy – this is a point for the Modular Monolith. Otherwise, consider a distributed architecture.

Performance

Performance is about how fast something is, usually in terms of response time, processing duration or latency.

Assuming a scenario where all requests are processed sequentially, the Monolith architecture will always be more efficient than a distributed system. All modules operate in the same process, so there is no communication overhead between them.

A distributed system has overhead caused by communication over the network – serialization and deserialization, cryptography and the speed of sending packets.

Even in real scenarios the Monolith will be more efficient, but only for some time. With an increase in users, requests, data and the complexity of calculations, it may turn out that performance decreases. Then we come to one of the main drivers of the Microservices architecture: scalability.

Scalability

What is scalability? Wikipedia says:

Scalability is the property of a system to handle a growing amount of work by adding resources to the system

In other words, scalability is the ability of software to deal with more requests or data.

It's best to show this with an example. Let's assume that one of our modules must now handle more requests than we initially assumed. To do this, we must increase the resources responsible for the operation of this module.

We can do this in two ways: increase the computing power of a node (Vertical Scaling) or add new nodes (Horizontal Scaling). Let's see how this looks from the point of view of the Monolith and Microservices architectures:

Scaling

As can be seen above, both architectures can be scaled. Vertical Scaling looks the same in both; the difference is in Horizontal Scaling. Using this approach, we can scale the Modular Monolith only as a whole, which leads to inefficient resource utilization. In the Microservices architecture we scale only those modules that need scaling, which leads to better utilization of resources. This is the main difference.
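A quick back-of-the-envelope illustration (the numbers are made up): suppose a system consists of 10 modules with similar resource needs, and only one of them requires 3 instances to handle its load. Scaling the Monolith horizontally means running 3 full copies of all 10 modules (30 module instances in total), while in Microservices we run 3 instances of the hot module and 1 instance of each of the others (12 module instances) – the same throughput for a fraction of the resources.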

The more module instances that must run, the more significant the difference. On the other hand, if you do not have to scale much, maybe you are better off accepting less efficient resource utilization and staying with the Monolith, taking advantage of its other benefits? This is a good question to ask ourselves in such a situation.

Failure impact

Sometimes our architectural driver may be limiting the impact of failure. Let’s say we have a very unstable module that crashes the entire process once in a while.

In the case of a Modular Monolith, as the whole system works in one process, the whole system suddenly stops working and our availability decreases.

In the case of the Microservices architecture, the “risky” module can be moved into a separate process, and if it stops, the rest of the system will keep working properly.

Failure impact

To increase the availability of the Modular Monolith, you can increase the number of nodes, but as with scalability, resource utilization will not be as high as in the Microservices architecture.

Heterogeneous Technology

One attribute of a Modular Monolith that cannot be bypassed in any way is the inability to use heterogeneous technology. The whole system runs in the same process, which means that it must run in the same runtime environment. This does not mean that it must be written in a single language, because some platforms support multiple languages (for example the .NET CLR or the Java JVM). However, the use of completely separate technologies is not possible.

Heterogeneous Technology

The need for heterogeneous technology can be decisive in switching to the Microservices architecture, but it doesn't have to be. Often, companies use one technology stack and no one even thinks about implementing components in different technologies, because team competence or software licenses do not allow it.

On the other hand, larger companies and projects more often use different technologies to maximize productivity, using tailor-made tools to solve specific problems.

A common case associated with heterogeneous technology is the maintenance and development of a legacy system. A legacy system is often written in old technology (and often in a very bad way). To use new technology, a new service/system is often created that implements new functionality, and the old system only delegates requests to the new one. Thanks to this, development can be faster and it is easier to find people willing to work on the system. The disadvantage is that, with two systems instead of one, the whole system becomes distributed – with all the cons of this architecture.

Summary

This post was not intended to describe all the architectural drivers in favor of a Modular Monolith or Microservices. Separate books are being written on this topic.

In this post, I wanted to describe the most commonly discussed architectural drivers (in my opinion) and make it very clear that the shape of our system's architecture is influenced by many factors, and that everything depends on our context.

Summarizing:

  • There is no better or worse architecture – it all depends on the context and the Architectural Drivers
  • Architectural Drivers fall into categories – Functional Requirements, Quality Attributes, Technical Constraints, Business Constraints
  • Monolith architecture is less complex than a distributed system; the Microservices architecture requires many more tools, libraries and components, more team experience, infrastructure management and so on
  • At the beginning, a Monolith implementation will be more productive (Monolith First approach); later, migration to the Microservices architecture can be considered, but only if an architectural driver for that migration exists
  • Deployment of a Monolith is easier, but it does not support autonomous deployment of modules
  • Both architectures support scalability, but Microservices are far more efficient at it (resource utilization)
  • The Monolith has better performance than Microservices until the need for scaling appears – then it depends on the scaling possibilities
  • Failure impact is greater in a Monolith, because everything works in the same process; the risk can be mitigated by duplication, but it will cost more than in the Microservices architecture
  • The Monolith, by definition, does not support heterogeneous technology

Additional Resources

1. Architectural Drivers – chapter from Designing Software Architectures: A Practical Approach book – Humberto Cervantes, Rick Kazman
2. Software Architecture for Developers book – Simon Brown
3. Design It! book – Michael Keeling
4. Collection of articles about Monolith and Microservices architectures named “When microservices fail…”
5. Modular Monolith with DDD – GitHub repository

Related Posts

1. Modular Monolith: A Primer

Modular Monolith: A Primer

This post is part of a series of articles about the Modular Monolith architecture:

1. Modular Monolith: A Primer (this)
2. Modular Monolith: Architectural Drivers

Introduction

Many years have passed since the rise in popularity of microservice architecture, and it is still one of the main topics discussed in the context of system architecture. The popularity of cloud solutions, containerization and advanced tools supporting the development and maintenance of distributed systems (such as Kubernetes) is even more conducive to this phenomenon.

Observing what is happening in the community and in companies, and talking with programmers, one can conclude that most new projects are implemented using the microservice architecture. Moreover, some legacy systems are also moving towards this approach.

OK, the subject of this post is the Modular Monolith, yet I dwell on microservices – why? Because I think that, as an IT industry, we made a false start in adopting microservice architecture to such an extent. Instead of focusing on architectural drivers, we believed that microservices are the medicine for all the evil that sits in monolithic applications. If you have participated in the development of a system that consists of more than one deployment unit, you already know that this is not the case. Each architecture has its pros and cons – microservices are no exception. They solve some problems while generating others in return.

With this entry I would like to start a series of articles about the Modular Monolith architecture. I am doing this for several reasons.

First of all, I would like to refute the myth that you cannot make a high-class system with a monolithic architecture. Secondly, I would like to dispel doubts about the definition of this architecture and what it looks like – many people interpret it differently. Thirdly, I treat this series of posts as an extension of, and an addition to, my implementation of Modular Monolith with DDD architecture, which I shared a few months ago on GitHub and which was very well received (1k stars within a month of publication).

In this introductory post, I will focus on the definition of a Modular Monolith architecture.

What is Modular Monolith?

I always try to be precise when I talk or write about technical and business issues, especially when it comes to architecture. I believe that a clear and coherent message is very important. That is why I would like to clearly define what the architecture of the Modular Monolith means to me and how I perceive it.

Let's start with the simpler concept: what is a Monolith?

Monolith

Wikipedia describes “monolithic architecture” in terms of building construction, not computer science, as follows:

Monolithic architecture describes buildings which are carved, cast or excavated from a single piece of material, historically from rock.

In terms of computer science, the building is the system and the material is our executable code. So in a Monolith architecture, our system consists of exactly one piece of executable code and nothing more.

Let's look at 2 technical definitions. The first one is about a Monolith System:

A software system is called “monolithic” if it has a monolithic architecture, in which functionally distinguishable aspects (for example data input and output, data processing, error handling, and the user interface) are all interwoven, rather than containing architecturally separate components.

The second one is about Monolithic Architecture:

A monolithic architecture is the traditional unified model for the design of a software program. Monolithic, in this context, means composed all in one piece. Monolithic software is designed to be self-contained; components of the program are interconnected and interdependent rather than loosely coupled as is the case with modular software programs

These 2 definitions above (among the first results in Google) share 2 assumptions.

First, they state that this architecture assumes that all parts of the system form one deployment unit – I agree with that.

The second shared assumption is a lack of modularity in such an architecture, and I definitely disagree with that. The phrases “interwoven, rather than containing architecturally separate components” and “components of the program are interconnected and interdependent rather than loosely coupled” characterize this architecture very negatively, assuming that everything in it is mixed together. It may be so, but it doesn't have to be. It is not a defining attribute of the Monolith.

To sum up, a Monolith is nothing more than a system that has exactly one deployment unit. No less, no more.

Modularization

I've defined what Monolith means; let's get to the second aspect: Modularity.

What does it mean that something is modular according to the English Dictionary?

Consisting of separate parts that, when combined, form a complete whole/made from a set of separate parts that can be joined together to form a larger object

and Modularization itself:

The design or production of something in separate sections

Because this is a general definition, it is not enough for the programming world. Let's use a more specific, technical one about Modular Programming:

Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect of the desired functionality. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. The implementation contains the working code that corresponds to the elements declared in the interface.

Several important issues have been raised here. In order to have a modular architecture, you must have modules, and these modules:

  • a) must be independent and interchangeable,
  • b) must have everything necessary to provide the desired functionality, and
  • c) must have a defined interface

Let’s see what these assumptions mean.

Module must be independent and interchangeable

For a module to meet these assumptions, as the name implies, it should be independent. Of course, it cannot be completely independent, because that would mean it does not integrate with other modules at all. The module will always depend on something, but dependencies should be kept to a minimum, according to the principle: Loose Coupling, Strong Cohesion.

In the diagram below, on the left we have a module that has a lot of dependencies, and you definitely cannot say that it is independent. On the right, the situation is the opposite – the module has a minimum of dependencies, and they are looser, so it is more independent:

Module independence

However, the number of dependencies is just one measure of how independent our module is. The second measure is how strong those dependencies are. In other words, do we call a dependency very often, using multiple methods, or occasionally, using one or a few methods?

Strong/Weak dependency

In the first case, it is possible that we have defined the boundaries of our modules incorrectly, and we should merge both modules if they are closely related:

Modules merged

The last attribute affecting the independence of a module is the frequency of changes of the modules on which it depends. As you can guess – the less often they change, the more independent our module is. On the other hand, if changes are frequent, we must change our module often and it loses its independence:

Module stability

To sum up, a module's independence is determined by three main factors:

  • the number of dependencies
  • the strength of the dependencies
  • the stability of the modules on which the module depends

Module must have everything necessary to provide desired functionality

"Module" is a very overloaded word and can be used in many contexts with different meanings. A common case is to call logical layers modules, e.g. the GUI module, the application logic module, the database access module. Yes, in this context these are also modules, but they provide technical, not business, functionality.

Thinking about a module in a technical context, only technical changes cause exactly one module to change:

Technical modules and technical change

Adding or changing business functionality usually cuts through all the layers, causing changes in every technical module:

Technical modules – new/changed business feature

The question we have to ask ourselves is: do we more often make changes related to the technical part of our system, or changes to business functionality? In my opinion, definitely the latter. We rarely exchange the database access layer, the logging library or the GUI framework. For this reason, a module in the Modular Monolith is a business module that can fully provide a set of desired features. This kind of design is called “Vertical Slices”, and we group these slices into modules:

Business modules and vertical slices

In this way, frequent changes affect only one module – it becomes more independent and autonomous, and it is able to provide functionality by itself.
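As an illustration only (all names below are hypothetical), a business module organized around vertical slices keeps a feature's request, handler, domain logic and data access together, instead of spreading them across technical layers:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical "Meetings" business module built from vertical slices:
// each slice owns everything it needs to deliver one feature.
namespace MyApp.Modules.Meetings
{
    public class Meeting
    {
        public Guid Id { get; } = Guid.NewGuid();
        public string Title { get; }
        public DateTime Date { get; }

        private Meeting(string title, DateTime date) { Title = title; Date = date; }

        public static Meeting Create(string title, DateTime date) => new Meeting(title, date);
    }

    public interface IMeetingRepository
    {
        Task AddAsync(Meeting meeting); // module-internal persistence abstraction
    }

    // The "CreateMeeting" vertical slice: request + handler in one place.
    public record CreateMeetingCommand(string Title, DateTime Date);

    public class CreateMeetingHandler
    {
        private readonly IMeetingRepository _repository;

        public CreateMeetingHandler(IMeetingRepository repository) => _repository = repository;

        public async Task<Guid> Handle(CreateMeetingCommand command)
        {
            var meeting = Meeting.Create(command.Title, command.Date); // domain logic
            await _repository.AddAsync(meeting);                       // persistence
            return meeting.Id;
        }
    }
}
```

A business change to "creating meetings" now touches this one slice, not a chain of technical layers.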

Module must have defined interface

The last attribute of modularity is a well-defined interface. We cannot talk about modular architecture if our modules do not have a Contract:

Modules without contract (interface)

A Contract is what we make available to the outside world, so it is very important. It is the “entry point” to our module. A good Contract should be unambiguous and contain only what the clients of that contract need. We should keep it stable (so as not to break our clients) and hide everything else behind it (Encapsulation):

Modules with contract

As you can see in the diagram above, the contract of our module can take different forms. Sometimes it is a kind of facade for synchronous calls (e.g. a public method or a REST service); sometimes it is a published event for asynchronous communication. In any case, everything that we share with the outside world becomes the public API of the module. Therefore, encapsulation is an inseparable element of modularity.
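A minimal sketch of such a contract (all names are hypothetical): only the interface and its messages are public, while the implementation stays internal to the module:

```csharp
using System;
using System.Threading.Tasks;

// Public contract of a hypothetical "Payments" module - the only entry point.
public interface IPaymentsModuleContract
{
    Task<Guid> RegisterPaymentAsync(RegisterPaymentRequest request); // synchronous facade

    event Action<PaymentRegisteredEvent> PaymentRegistered;          // asynchronous notification
}

public record RegisterPaymentRequest(Guid OrderId, decimal Amount);
public record PaymentRegisteredEvent(Guid PaymentId, Guid OrderId);

// Everything below is hidden behind the contract (encapsulation):
// 'internal' keeps implementation details invisible to other modules.
internal class PaymentsModule : IPaymentsModuleContract
{
    public event Action<PaymentRegisteredEvent>? PaymentRegistered;

    public Task<Guid> RegisterPaymentAsync(RegisterPaymentRequest request)
    {
        var paymentId = Guid.NewGuid();
        // ...domain logic and persistence would live here...
        PaymentRegistered?.Invoke(new PaymentRegisteredEvent(paymentId, request.OrderId));
        return Task.FromResult(paymentId);
    }
}
```

Clients depend only on IPaymentsModuleContract, so the module's internals can change freely as long as the contract stays stable.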

Summary

1. Monolith is a system that has exactly one deployment unit.
2. Monolith architecture does not imply that the system is poorly designed, not modular or bad. It says nothing about quality.
3. Modular Monolith architecture is an explicit name for a Monolith system designed in a modular way.
4. To achieve a high level of modularization, each module must be independent, have everything necessary to provide the desired functionality (separation by business area), be encapsulated and have a well-defined interface/contract.

In the next post I will discuss the pros and cons of the Modular Monolith architecture, comparing it to microservices.

Additional resources

1. Modular Monoliths Video – Simon Brown
2. Majestic Modular Monoliths – Axel Fontaine
3. Modular programming – Wikipedia
4. Monolithic application – Wikipedia
5. Modular Monolith with DDD – GitHub repository
6. Vertical Slice Architecture – Jimmy Bogard

Related posts

1. GRASP – General Responsibility Assignment Software Patterns Explained
2. Attributes of Clean Domain Model
3. Domain Model Encapsulation and PI with Entity Framework 2.2
4. Simple CQRS implementation with raw SQL and DDD

Image credits: Magnasoma

Attributes of Clean Domain Model

Introduction

There is a lot of talk about clean code and architecture nowadays, and more and more about how to achieve it. The rules described by Robert C. Martin are universal and, in my opinion, we can use them in various other contexts.

In this post I would like to apply them to the context of the Domain Model implementation, which is often the heart of our system. We want to have a clean heart, don't we?

As we know from the Domain-Driven Design approach, we have two spaces – the problem space and the solution space. Vaughn Vernon defines these spaces in the DDD Distilled book as follows:

Problem space is where you perform high-level strategic analysis and design steps within the constraints of a given project.

Solution space is where you actually implement the solution that your problem space discussions identify as your Core Domain.

So the implementation of the Domain Model is a solution to a problem. To talk about whether a Domain Model is clean, we must first define what a clean solution means.

What is Clean Solution

I think that everyone has their own definition of what a clean solution means, or at least feels whether what they are doing is clean or not. In any case, let me describe what it means to me.

First of all, a given solution has to solve a defined problem. This is mandatory. Even the most elegant solution which doesn't solve our problem is worth nothing (in the context of that problem).

Secondly, a clean solution should be optimal from the point of view of time. It should take no more, and no less, time than necessary.

Thirdly, it should solve exactly one defined problem, with simplicity in mind but with adaptability to change. We should always keep the balance – no less (no design or under-design), no more (over-design; see YAGNI). This can be summarized by a quote from Antoine de Saint-Exupéry's Airman's Odyssey:

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

Note: Of course we should strive for perfection while being aware that we will never achieve it.

Fourthly, we should solve problems based on the experience of others and learn from their mistakes. This is why architectural and design patterns and principles are so useful. We can take someone else's solution and adapt it to our needs. Moreover, we can take tools created by others and use them. This is a game-changer in the IT world.

The last, but not least, attribute is the level of understanding of the solution. We should be able to understand our solution even after a lot of time has passed. More importantly, others should be able to understand it. Code is read much more often than it is written. Martin Fowler said:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

This is one of my favorite IT quotes. I try to keep it in mind constantly as I write code.

To summarize, a clean solution should solve exactly our problem, in optimal time, with a good balance of simplicity vs complexity. It should be developed based on the experience of others and common knowledge, using appropriate tools, and it should be understandable. Considering all these factors, how can we determine that our Domain Model is a clean solution?

Clean Domain Model

1. Uses Ubiquitous Language

Ubiquitous Language, one of the fundamental strategic patterns of DDD, plays a key role in the Domain Model. I like to think of writing code in a Domain Model as writing a book for the business. Business people should be able to understand all the logic, terms and concepts used – even though it is a computer program and they don't know the programming language! If we use the same language, concepts and meanings everywhere within a defined context, the shared understanding of our problems and solutions can only increase the probability of our project's success.

2. Is Rich in Behavior

The Domain Model, as the name suggests, models the domain. We have behavior in the domain, so our model should have it too. If this behavior is absent, we are dealing with the so-called Anemic Domain Model anti-pattern. For me, this is really a Data Model, not a model of the domain. This kind of model is sometimes useful, but not when we have complex behavior and logic. It makes our code procedural, and all the benefits of object-oriented programming disappear. A real Domain Model is rich in behavior.
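A tiny, illustrative contrast (a hypothetical Order class, not code from any concrete project) between an anemic data bag and a model that owns its behavior:

```csharp
using System;

// Anemic model: just data, all logic lives elsewhere in procedural services.
public class AnemicOrder
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
    public string Status { get; set; } = "New";
}

// Rich model: state is private, behavior and invariants live inside the model.
public class Order
{
    public Guid Id { get; } = Guid.NewGuid();
    public decimal Total { get; private set; }
    public OrderStatus Status { get; private set; } = OrderStatus.New;

    public void AddItem(decimal price, int quantity)
    {
        if (Status != OrderStatus.New)
            throw new InvalidOperationException("Cannot modify a placed order.");
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));

        Total += price * quantity; // invariant enforced where the data lives
    }

    public void Place()
    {
        if (Total <= 0)
            throw new InvalidOperationException("An empty order cannot be placed.");
        Status = OrderStatus.Placed;
    }
}

public enum OrderStatus { New, Placed }
```

Note that the rich version is also encapsulated: its state can change only through its public methods, which is exactly the property described in the next point.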

3. Is encapsulated

Encapsulation of the Domain Model is very important. We want to expose only the minimum information about our model to the outside world. We should prevent business logic from leaking into our application logic or, even worse, the GUI. Ideally, we should expose only public methods on our aggregates (the only entry points to the Domain Model). You can read more about achieving a good level of Domain Model encapsulation in my earlier post Domain Model Encapsulation and PI with Entity Framework 2.2.

4. Is Persistence Ignorant

Persistence Ignorance (PI) is another well-known concept and a desirable attribute of the Domain Model. We don't want high coupling between our model and the persistence store. These are different concepts, tasks and responsibilities. A change in one should have minimal impact on the other, and vice versa. Low coupling in this area means greater adaptability of our system to change. Again, you can read how to minimize this coupling in the .NET world using EF Core in the post linked above.

5. Sticks to SOLID Principles

As the Domain Model is object-oriented, all SOLID principles should be followed. Examples:

  • SRP – one Entity represents one business concept. One method does one thing. One event represents one fact.
  • O/C – the implementation of business logic should be easy to extend without changing other places. For this, the Strategy/Policy Pattern is most often used (see the sketch after this list).
  • LSP – thanks to sticking to the composition-over-inheritance rule, sticking to the LSP should be easy. Inheritance is the strongest level of coupling between classes, and this is what we want to avoid.
  • ISP – interfaces of our Domain Services or policies should be small – ideally they should have one method.
  • DI – to decouple our Domain Model from the rest of the application and infrastructure, we use Dependency Injection and the Dependency Inversion principle. This combination gives us a powerful weapon – we can keep the same level of abstraction in our model, use business language throughout the whole model and hide implementation details from it.
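As a quick illustration of the O/C point above (hypothetical names; a sketch, not a prescription), a pricing rule expressed as a policy lets us add new behavior without modifying existing code:

```csharp
// Policy (strategy) interface: one small method - also friendly to the ISP.
public interface IDiscountPolicy
{
    decimal ApplyDiscount(decimal price);
}

public class NoDiscountPolicy : IDiscountPolicy
{
    public decimal ApplyDiscount(decimal price) => price;
}

public class HolidayDiscountPolicy : IDiscountPolicy
{
    public decimal ApplyDiscount(decimal price) => price * 0.90m; // 10% off
}

public class OrderPricing
{
    // New pricing rules arrive as new policies - this class stays closed
    // for modification but the behavior remains open for extension.
    public decimal CalculateFinalPrice(decimal basePrice, IDiscountPolicy policy)
        => policy.ApplyDiscount(basePrice);
}
```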

6. Uses Domain Primitives

Most of the codebases I see operate on primitive types – strings, ints, decimals, guids. In the Domain Model, this level of abstraction is definitely too low.

We should always try to express significant business concepts using classes (Entities or Value Objects). This way our model is more readable, easier to maintain and rich in behavior. In addition, our code is more secure, because we can add validation at the class level. You can read more about Domain Primitives here.
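A minimal sketch of a Domain Primitive – a hypothetical Email value object – which makes an invalid value unrepresentable instead of validating a raw string in many places:

```csharp
using System;

// Value object wrapping a primitive: validation happens once, at construction.
public sealed class Email : IEquatable<Email>
{
    public string Value { get; }

    public Email(string value)
    {
        if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
            throw new ArgumentException("Invalid email address.", nameof(value));

        Value = value;
    }

    public bool Equals(Email? other) => other is not null && Value == other.Value;
    public override bool Equals(object? obj) => Equals(obj as Email);
    public override int GetHashCode() => Value.GetHashCode();
    public override string ToString() => Value;
}

// A signature like RegisterCustomer(Email email) says more - and guarantees
// more - than RegisterCustomer(string email).
```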

7. Is efficient

Even the best-written Domain Model that cannot handle a request in a satisfactory time is useless. That is why entity size and aggregate boundaries are so important.

When designing aggregates, you need to know how they will be processed – how often, how many users will try to use them at the same time, whether there will be periods of increased activity and so on. The smaller the aggregate, the less data needs to be loaded and saved, and the shorter the transaction. On the other hand, the smaller the aggregate, the smaller the consistency boundary, so the right balance is needed.

8. Is expressive

The class structure in object-oriented languages is a powerful tool for modeling business concepts. Unfortunately, it is often used incorrectly. The most common mistake is modeling many concepts with one class. For this purpose, statuses and bit flags are (often unconsciously) used.

Most often, such a class can be divided into several classes, which supports the SRP. Good heuristics here are looking at the business language and at our entity's behavior.

9. Is testable and tested

Domain Models are used to solve complicated problems. A complicated problem means complicated business logic to solve it. If you have a complicated solution, you have to be sure that it works correctly, and you can't be afraid of changing this logic.

This is where unit tests come in; they are an integral part of a clean Domain Model. A clean Domain Model is testable and should have maximum test coverage. This is the heart of our application, and we should always be one hundred percent sure that it works as expected.
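For example, the behavior of the hypothetical Order model sketched earlier can be verified directly, with no infrastructure involved (xUnit-style syntax assumed):

```csharp
using System;
using Xunit;

public class OrderTests
{
    [Fact]
    public void Placing_an_empty_order_is_not_allowed()
    {
        var order = new Order();

        Assert.Throws<InvalidOperationException>(() => order.Place());
    }

    [Fact]
    public void Adding_items_increases_the_total()
    {
        var order = new Order();

        order.AddItem(price: 10m, quantity: 3);

        Assert.Equal(30m, order.Total);
    }
}
```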

10. It should be written with ease

This point is a summary of all the above. If writing your Domain Model code is a struggle every time – it's hard to change, understand, test or use – something is wrong. What should you do?

Review all the points described above and see which of them your model does not meet. Maybe it depends on the infrastructure? Maybe it speaks a non-business language? Maybe it's anemic, poorly encapsulated or untestable? If your model meets all the attributes described in this post, then solving business problems should be easy and pleasant. I assure you.

So I have a question – today, are you relaxed after writing your business logic, or did you hit your head against the wall again? Is your Domain Model clean?

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
Domain Model Validation
Handling Domain Events: Missing Part

Handling Domain Events: Missing Part

Introduction

Some time ago I wrote a post about publishing and handling domain events. In addition, in another post I described the Outbox Pattern, which gives us At-Least-Once delivery when integrating with external components/services without using the 2PC protocol.

This time I want to present a combination of both approaches, to complete the previous posts. I will present a complete solution that enables reliable data processing in the system in a structured manner, taking the transaction boundary into account.

Depth of the system

At the beginning, I would like to describe what a Shallow System and a Deep System are.

Shallow System

A system is considered shallow when, most often, not much happens after performing an action on it.

A typical and most popular example of this type of system is one with many CRUD operations. Most of the operations involve managing the data shown on the screen, and there is little business logic underneath. Sometimes such systems are also called database browsers. 😉

Another heuristic that can point to a Shallow System is the ability to specify its requirements practically entirely through a GUI prototype. The prototype of the system (possibly with the addition of comments) shows us how the system should work, and that is enough – if nothing is happening underneath, there is nothing more to define and describe.

From the Domain-Driven Design point of view, it will most often look like this: the execution of a Command on an Aggregate publishes exactly one Domain Event and… nothing happens. No subscribers to this event, no processors/handlers, no workflows defined. There is no communication with other contexts or 3rd-party systems either. It looks like this:

Shallow system in context of DDD.

Execute an action, process the request, save data – end of story. Often, in such cases, we do not even need DDD. A Transaction Script or Active Record will be enough.

Deep system

The Deep System is (as one could easily guess) the complete opposite of the Shallow System.

A Deep System is one that is designed to solve problems in a non-trivial and complicated domain. If the domain is complicated, then the Domain Model will be complicated as well. Of course, the Domain Model should be simplified as much as possible, but at the same time it should not lose the aspects that are most important in a given context (in DDD terms – a Bounded Context). Nevertheless, it contains a lot of business logic that needs to be handled.

We cannot specify a Deep System through a GUI prototype, because too much is happening underneath. Saving or reading data is just one of the actions our system performs. Other activities include communication with other systems, complicated data processing or calling other parts of our system.

This time, much more is happening in the context of a Domain-Driven Design implementation. Aggregates can publish multiple Domain Events, and for each Domain Event there can be many handlers responsible for different behaviors. Such behavior can be communication with an external system or the execution of a Command on another Aggregate, which will again publish its events, to which yet another part of our system will subscribe. This scheme repeats itself and our Domain Model works in a reactive manner:

Deep system in context of DDD.

Problem

The post about publishing and handling domain events presented a very simple case, and the whole solution did not support the re-publishing (and handling) of events by another Aggregate whose processing resulted from a previous Domain Event. In other words, there was no support for complex flows and for processing data in a reactive way. Only the Command -> Aggregate -> Domain Event -> handlers scenario was possible.

It will be best to consider this with a specific example. Let's assume the following requirements – after an Order is placed by a Customer:
a) a confirmation email about the placed Order should be sent to the Customer
b) a new Payment should be created
c) an email about the new Payment should be sent to the Customer

Let's assume that, in this particular case, both the Order placement and the Payment creation should take place in the same transaction. If the transaction is successful, we need to send 2 emails – one about the Order and one about the Payment. Let's see how we can implement this type of scenario.

Solution

The most important thing we have to keep in mind is the transaction boundary. To make our lives easier, let's make the following assumptions:

1. The Command Handler defines the transaction boundary. The transaction is started when the Command Handler is invoked and committed at the end.
2. Each Domain Event handler is invoked in the context of the same transaction boundary.
3. If we want to process something outside the transaction, we need to create a public event based on the Domain Event. I call it a Domain Event Notification; some people call it a public event, but the concept is the same.

The second most important question is when to publish and process Domain Events. Events may be created after each action on an Aggregate, so we must publish them:
– after each Command handling (but BEFORE committing the transaction)
– after each Domain Event handling (but WITHOUT committing the transaction)

The last thing to consider is the processing of Domain Event Notifications (public events). We need a way to process them outside the transaction, and this is where the Outbox Pattern comes into play.

The first thing that comes to mind is to publish events at the end of each Command handler and commit the transaction, and at the end of each Domain Event handler to only publish events. We can, however, try a much more elegant solution here and use the Decorator Pattern. The Decorator Pattern allows us to wrap our handling logic in infrastructural code, similar to how Aspect-Oriented Programming and .NET Core middlewares work.

We need two decorators. The first one will be for command handlers:
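The original snippet is embedded from the accompanying repository; a minimal sketch of such a decorator, assuming hypothetical ICommandHandler<TCommand> and IUnitOfWork abstractions, could look like this:

```csharp
using System.Threading.Tasks;

public interface ICommandHandler<in TCommand>
{
    Task Handle(TCommand command);
}

public interface IUnitOfWork
{
    // Publishes pending Domain Events and commits the current transaction.
    Task CommitAsync();
}

// Decorator: wraps every command handler with transaction management.
public class UnitOfWorkCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _decorated;
    private readonly IUnitOfWork _unitOfWork;

    public UnitOfWorkCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated,
        IUnitOfWork unitOfWork)
    {
        _decorated = decorated;
        _unitOfWork = unitOfWork;
    }

    public async Task Handle(TCommand command)
    {
        await _decorated.Handle(command); // the real command handler
        await _unitOfWork.CommitAsync();  // publish Domain Events + commit
    }
}
```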

As you can see, first the processing of the given Command takes place (the real Command handler is invoked), and then the Unit of Work is committed. The UoW commit publishes the Domain Events and commits the existing transaction:
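Again as a sketch only, implementing the IUnitOfWork from the previous snippet (an EF Core DbContext and a hypothetical IDomainEventsDispatcher are assumed):

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public interface IDomainEventsDispatcher
{
    Task DispatchEventsAsync(); // publishes events collected on the aggregates
}

public class UnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;
    private readonly IDomainEventsDispatcher _domainEventsDispatcher;

    public UnitOfWork(DbContext context, IDomainEventsDispatcher domainEventsDispatcher)
    {
        _context = context;
        _domainEventsDispatcher = domainEventsDispatcher;
    }

    public async Task CommitAsync()
    {
        // Dispatch Domain Events first - their handlers run within the same,
        // still-open transaction...
        await _domainEventsDispatcher.DispatchEventsAsync();

        // ...and only then persist all changes atomically.
        await _context.SaveChangesAsync();
    }
}
```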

In accordance with the previously described assumptions, we also need a second decorator, for Domain Event handlers, which will only publish Domain Events at the very end, without committing the database transaction:
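A corresponding sketch, reusing the IDomainEventsDispatcher abstraction from above (the IDomainEventHandler interface is likewise hypothetical):

```csharp
using System.Threading.Tasks;

public interface IDomainEvent { }

public interface IDomainEventHandler<in TEvent> where TEvent : IDomainEvent
{
    Task Handle(TEvent domainEvent);
}

// Decorator: after a Domain Event handler finishes, any newly raised events
// are published immediately - but the transaction is NOT committed here.
public class DomainEventsDispatcherEventHandlerDecorator<TEvent> : IDomainEventHandler<TEvent>
    where TEvent : IDomainEvent
{
    private readonly IDomainEventHandler<TEvent> _decorated;
    private readonly IDomainEventsDispatcher _dispatcher;

    public DomainEventsDispatcherEventHandlerDecorator(
        IDomainEventHandler<TEvent> decorated,
        IDomainEventsDispatcher dispatcher)
    {
        _decorated = decorated;
        _dispatcher = dispatcher;
    }

    public async Task Handle(TEvent domainEvent)
    {
        await _decorated.Handle(domainEvent);    // the real event handler
        await _dispatcher.DispatchEventsAsync(); // publish new events, no commit
    }
}
```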

The last thing to do is to configure our decorators in the IoC container (Autofac example):
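A sketch of the registration, assuming Autofac 4.9+ generic decorator support and the type names from the sketches above:

```csharp
using Autofac;

var builder = new ContainerBuilder();

// Concrete handler registrations (e.g. assembly scanning) omitted for brevity.

// Wrap command handlers with the transaction-managing decorator...
builder.RegisterGenericDecorator(
    typeof(UnitOfWorkCommandHandlerDecorator<>),
    typeof(ICommandHandler<>));

// ...and domain event handlers with the publish-only decorator.
builder.RegisterGenericDecorator(
    typeof(DomainEventsDispatcherEventHandlerDecorator<>),
    typeof(IDomainEventHandler<>));

var container = builder.Build();
```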

Add Domain Event Notifications to Outbox

The second thing we have to do is save notifications about the Domain Events that we want to process outside of the transaction. To do this, we use an implementation of the Outbox Pattern:
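Illustratively (the OutboxMessage shape and the mapper below are assumptions, not the exact repository code), a notification is serialized into an Outbox row that is saved by the same DbContext, so it shares the transaction:

```csharp
using System;
using System.Text.Json;

public class OutboxMessage
{
    public Guid Id { get; set; }
    public DateTime OccurredOn { get; set; }
    public string Type { get; set; } = default!;
    public string Data { get; set; } = default!;
}

public class DomainNotificationsMapper
{
    // Serializes a Domain Event Notification into an Outbox row. Because the
    // row is persisted within the business transaction, a background process
    // can later pick it up and publish it - At-Least-Once delivery.
    public OutboxMessage Map(object notification) => new OutboxMessage
    {
        Id = Guid.NewGuid(),
        OccurredOn = DateTime.UtcNow,
        Type = notification.GetType().FullName!,
        Data = JsonSerializer.Serialize(notification)
    };
}
```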

As a reminder – the data for our Outbox is saved in the same transaction, which is why At-Least-Once delivery is guaranteed.

Implementing flow steps

At this point, we can focus only on the application logic and do not need to worry about infrastructural concerns. Now we only implement the particular flow steps (sketched together after the list):

a) when the Order is placed, create a Payment

b) when the Order is placed, send an email

c) when the Payment is created, send an email
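A combined, illustrative sketch of these three steps (all names and the payment/email abstractions are hypothetical; step a) runs inside the transaction as a Domain Event handler, while b) and c) run outside it as notification handlers fed from the Outbox):

```csharp
using System;
using System.Threading.Tasks;

public record OrderPlacedEvent(Guid OrderId) : IDomainEvent;
public record OrderPlacedNotification(Guid OrderId);
public record PaymentCreatedNotification(Guid PaymentId);

public class Payment
{
    public Guid Id { get; } = Guid.NewGuid();
    public Guid OrderId { get; }
    private Payment(Guid orderId) => OrderId = orderId;

    // Creating a Payment would raise a PaymentCreatedEvent inside the
    // aggregate (omitted here for brevity).
    public static Payment CreateForOrder(Guid orderId) => new Payment(orderId);
}

public interface IPaymentRepository { Task AddAsync(Payment payment); }

public interface IEmailSender
{
    Task SendOrderConfirmationAsync(Guid orderId);
    Task SendPaymentEmailAsync(Guid paymentId);
}

// a) Domain Event handler - runs INSIDE the transaction.
public class OrderPlacedCreatePaymentHandler : IDomainEventHandler<OrderPlacedEvent>
{
    private readonly IPaymentRepository _payments;
    public OrderPlacedCreatePaymentHandler(IPaymentRepository payments) => _payments = payments;

    public Task Handle(OrderPlacedEvent domainEvent)
        => _payments.AddAsync(Payment.CreateForOrder(domainEvent.OrderId));
}

// b) and c) Notification handlers - run OUTSIDE the transaction,
// triggered by Outbox messages after the commit.
public class SendOrderEmailHandler
{
    private readonly IEmailSender _emails;
    public SendOrderEmailHandler(IEmailSender emails) => _emails = emails;

    public Task Handle(OrderPlacedNotification notification)
        => _emails.SendOrderConfirmationAsync(notification.OrderId);
}

public class SendPaymentEmailHandler
{
    private readonly IEmailSender _emails;
    public SendPaymentEmailHandler(IEmailSender emails) => _emails = emails;

    public Task Handle(PaymentCreatedNotification notification)
        => _emails.SendPaymentEmailAsync(notification.PaymentId);
}
```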

The following picture presents the whole flow:

Flow of processing

Summary

In this post I described how Commands and Domain Events can be processed in a Deep System in a reactive way.

Summarizing, the following concepts have been used for this purpose:

– Decorator Pattern for event dispatching and transaction boundary management
– Outbox Pattern for processing events in separate transaction
– Unit of Work Pattern
– Domain Events Notifications (public events) saved to the Outbox
– Basic DDD Building Blocks – Aggregates and Domain Events
– Eventual Consistency

Source code

If you would like to see a full, working example – check my GitHub repository.

Additional Resources

The Outbox: An EIP Pattern – John Heintz
Domain events: design and implementation – Microsoft

Related posts

How to publish and handle Domain Events
Simple CQRS implementation with raw SQL and DDD
The Outbox Pattern

How to publish and handle Domain Events

2019-06-19 UPDATE: Please check the Handling Domain Events: Missing Part post, which is a continuation of this article.

Introduction

A Domain Event is one of the building blocks of Domain-Driven Design. It is something that happened in a particular domain, and it captures the memory of it. We create Domain Events to notify other parts of the same domain that something interesting happened, and these other parts can potentially react to it.

A Domain Event is usually an immutable data-container class named in the past tense. For example:
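For instance (an illustrative shape, not the exact class from the original gist):

```csharp
using System;

public interface IDomainEvent { }

// Immutable data container, named in the past tense.
public class OrderPlacedEvent : IDomainEvent
{
    public Guid OrderId { get; }
    public DateTime OccurredOn { get; }

    public OrderPlacedEvent(Guid orderId)
    {
        OrderId = orderId;
        OccurredOn = DateTime.UtcNow;
    }
}
```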

Three ways of publishing domain events

I have seen mainly three ways of publishing domain events.

1. Using static DomainEvents class

This approach was presented by Udi Dahan in his Domain Events – Salvation post. In short, there is a static class named DomainEvents with a Raise method, and it is invoked immediately when something interesting happens during aggregate method processing. The word immediately is worth emphasizing, because all domain event handlers also start processing immediately (even though the aggregate method has not finished processing).

2. Raise event returned from aggregate method

In this approach, the aggregate method returns a Domain Event directly to the ApplicationService. The ApplicationService decides when and how to raise the event. You can become familiar with this way of raising events by reading Jan Kronquist's Don't publish Domain Events, return them! post.

3. Add events to the Entity's Events collection

In this approach, every entity which creates domain events has an Events collection. Every Domain Event instance is added to this collection during aggregate method execution. After execution, the ApplicationService (or another component) reads the Events collections from all entities and publishes them. This approach is well described in Jimmy Bogard's post A better domain events pattern.

Handling domain events

The way of handling domain events depends indirectly on the publishing method. If you use the static DomainEvents class, you have to handle events immediately. In the other two cases, you control when events are published, as well as handler execution – inside or outside the existing transaction.

In my opinion, it is a good approach to always handle domain events within the existing transaction and to treat aggregate method execution together with handler processing as an atomic operation. This is good because, if you have a lot of events and handlers, you do not have to think about initializing connections and transactions, or about what should be treated in an “all-or-nothing” way and what should not.

Sometimes, however, it is necessary to communicate with a 3rd-party service (for example an e-mail or web service) based on a Domain Event. As we know, communication with 3rd-party services is usually not transactional, so we need an additional, generic mechanism to handle these types of scenarios. That is why I created Domain Events Notifications.

Domain Events Notifications

There is no such thing as a domain event notification in DDD terms. I gave it that name because I think it fits best – it is a notification that a domain event was published.

The mechanism is pretty simple. If I want to inform my application that a domain event was published, I create a notification class for it, and as many handlers for this notification as I want. I always publish my notifications after the transaction is committed. The complete process looks like this:

1. Create database transaction.
2. Get aggregate(s).
3. Invoke aggregate method.
4. Add domain events to Events collections.
5. Publish domain events and handle them.
6. Save changes to DB and commit transaction.
7. Publish domain events notifications and handle them.

How do I know that a particular domain event was published?

First of all, I have to define a notification for the domain event, using generics:
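A minimal sketch of what such a generic notification could look like, reusing the hypothetical IDomainEvent marker and OrderPlacedEvent from the sketch above:

```csharp
public interface IDomainEventNotification<out TEvent> where TEvent : IDomainEvent
{
    TEvent DomainEvent { get; }
}

// Notification for a concrete domain event - it will be published
// only after the transaction is committed.
public class OrderPlacedNotification : IDomainEventNotification<OrderPlacedEvent>
{
    public OrderPlacedEvent DomainEvent { get; }

    public OrderPlacedNotification(OrderPlacedEvent domainEvent)
    {
        DomainEvent = domainEvent;
    }
}
```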

All notifications are registered in the IoC container:
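For example, with Autofac-style assembly scanning (illustrative only):

```csharp
using System.Reflection;
using Autofac;

var builder = new ContainerBuilder();

// Register every class closing IDomainEventNotification<> so a notification
// type can be resolved for each published domain event.
builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .AsClosedTypesOf(typeof(IDomainEventNotification<>))
    .InstancePerDependency();
```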

In the EventsPublisher we resolve the defined notifications using the IoC container and, after our unit of work is completed, all notifications are published:
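A sketch of such a publisher (the mediator abstraction used for publishing is an assumption here):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IMediator
{
    Task Publish(object notification); // MediatR-like publishing abstraction
}

public class EventsPublisher
{
    private readonly IMediator _mediator;

    public EventsPublisher(IMediator mediator) => _mediator = mediator;

    // Invoked after the unit of work is completed (transaction committed):
    // each collected notification is published, and its handlers run
    // outside the already-committed transaction.
    public async Task PublishNotificationsAsync(IEnumerable<object> notifications)
    {
        foreach (var notification in notifications)
        {
            await _mediator.Publish(notification);
        }
    }
}
```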

You might think that there is a lot to remember, and you are right! :) But as you can see, the whole process is pretty straightforward, and we can simplify this solution using IoC interceptors, which I will try to describe in another post.

Summary

1. A Domain Event is information about something which happened in the past in the modeled domain, and it is an important part of the DDD approach.
2. There are many ways of publishing and handling domain events – using a static class, returning them, exposing them in collections.
3. Domain events should be handled within the existing transaction (my recommendation).
4. For non-transactional operations, Domain Events Notifications were introduced.

Related posts

Handling Domain Events: Missing Part
The Outbox Pattern