In previous posts we discussed what the Modular Monolith architecture is and which architectural drivers can affect the decision to choose it. In this post, I would like to focus on ways to enforce the chosen architecture.
The methods described below are not specific to the Modular Monolith architecture; it can be said that they are universal. Nevertheless, due to the monolithic nature of the system, the size of its codebase, and the ease with which it can be changed, they are particularly important for enforcing this architecture.
Let’s assume that, based on the current architectural drivers, you have decided on the Modular Monolith architecture. Let’s also assume that you have predefined your module boundaries and solution architecture. You have chosen the technology, the overall approach, the way modules communicate, and the way data is persisted.
Everything has been documented in the form of a solution architecture document/description (SAD), or you made just a few diagrams (using UML, the C4 model, or simply boxes and arrows). You have done enough up-front design and can start the first iterations of implementation.
At the beginning, everything is very simple. The system does not have much functionality, there is little code, and it is easy to maintain and keep consistent with the modeled architecture. There is plenty of time, and even if something goes wrong, it is easy to refactor. So far, so good.
However, at some point it is not easy anymore. Functionality and code keep growing, requirement changes start to appear, and deadlines are looming. We start taking shortcuts, and our implementation begins to diverge significantly from the design. In the case of the Modular Monolith architecture, what we most often lose this way is modularity and independence, and everything begins to communicate with everything. Another Big Ball of Mud is made:
It was supposed to be like never before, it ended as always
George Fairbanks, in his book Just Enough Software Architecture: A Risk-Driven Approach, defines the phenomenon described above as follows:
Your architecture models and your source code will not show the same things. The difference between them is the model-code gap.
Whether you start with source code and build a model, or do the reverse, you must manage two representations of your solution. Initially, the code and models might correspond perfectly, but over time they tend to diverge. Code evolves as features are added and bugs are fixed. Models evolve in response to challenges or planning needs. Divergence happens when the evolution of one or the other yields inconsistencies.
Are we always doomed to such an end in the long run? Well, no. It certainly requires a lot of discipline on our part, but discipline is not everything. We also need to apply appropriate practices and approaches that keep our architecture in check. What are these approaches, then?
When describing tools that check whether our implementation is consistent with the assumed design, we must take two aspects into account.
The first aspect is the possibilities that a given tool gives us. As we know, architecture is a set of rules at different levels of abstraction, and some of them are hard even to define, let alone verify.
The second aspect is how quickly we get feedback. The sooner, the better, because we can fix a problem faster, and the faster we fix it, the smaller its impact on our architecture later.
Considering these two aspects, when it comes to architecture enforcement we can do it on three different levels: through the compiler, through automated tests, and through code review.
Compiler

The compiler is your best friend. It can quickly check many things for you that would otherwise take a long time. In addition, the compiler cannot be wrong; people can. Why, then, do we so rarely make the compiler responsible for compliance with our chosen architecture? Why do we not use its capabilities to the maximum?
The first and main sin is the “everything is public” approach. According to the definition of modularity, modules should communicate through well-defined interfaces, which means they should be encapsulated. If everything is public, there is no encapsulation.
Unfortunately, the programming community reinforces this phenomenon through:
– sample projects
– IDEs (which create public classes by default)
We should definitely switch to a private-by-default approach. If something cannot be private, let it be visible within the module, but still inaccessible to others.
How to do it? Unfortunately, we have limited options in .NET. The only thing we can do is extract each module into a separate assembly and use the internal access modifier. There is almost a war between supporters of keeping all code in one project (assembly) and supporters of splitting it into many projects.
The former say that an assembly is an implementation unit. That is true, but since we have no other way to encapsulate our modules, the division into projects seems to be a sensible solution. Additionally, thanks to reference checking, adding an incorrect dependency (e.g. from the Domain to the Infrastructure) will be difficult or even impossible.
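As an illustration, a module extracted into its own assembly can expose a single public interface while keeping its implementation internal. A minimal sketch (the module and type names below are hypothetical):

```csharp
using System;

// Inside the (hypothetical) Inventory project/assembly.
namespace Inventory
{
    // The module's well-defined public contract — the only type
    // other modules are allowed to see.
    public interface IInventoryModule
    {
        void ReserveStock(Guid productId, int quantity);
    }

    // internal: the compiler rejects any reference to this class
    // from outside the Inventory assembly.
    internal sealed class InventoryModule : IInventoryModule
    {
        public void ReserveStock(Guid productId, int quantity)
        {
            // implementation details stay hidden from other modules
        }
    }
}
```

With this layout, a developer in another module who tries to `new InventoryModule()` gets a compile-time error rather than a code-review comment.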
A lack of encapsulation is one of the most common sins I see, but not the only one. Others include not using immutability (unnecessary setters) or strong typing (primitive obsession), for example.
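Both of these can also be delegated to the compiler. A sketch (the type names are illustrative): get-only properties make a value object immutable, and a dedicated identifier type prevents accidentally passing one kind of ID where another is expected:

```csharp
using System;

// Immutable value object: no setters, state fixed at construction.
public sealed class Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }
}

// Strongly typed identifier instead of a raw Guid ("primitive obsession").
public readonly struct CustomerId
{
    public Guid Value { get; }
    public CustomerId(Guid value) => Value = value;
}

// A method like ShipOrder(CustomerId customerId, OrderId orderId)
// now makes swapping the arguments a compile-time error, not a runtime bug.
```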
Generally speaking, we should use our language in such a way that the compiler catches as many mistakes for us as possible. It is the most efficient way to enforce the architecture of a system.
Automated tests

Not everything can be checked by the compiler. That does not mean, however, that we must check it manually; on the contrary, the computer can still do it for us. Here we can use two mechanisms: static code analysis and architecture tests.
Static code analysis
I will start with the more familiar and common method: static code analyzers. Most developers have certainly heard of tools such as SonarQube or NDepend. These tools automatically perform static analysis of our code and, based on it, provide metrics and diagnostics that can be very useful to us. Of course, we can hook static code analyzers into the CI process and get feedback on a regular basis.
Architecture tests

Architecture tests are another method, less known but gaining popularity. These are unit tests, but instead of testing business functionality, they test our codebase in the context of its architecture. Most often, such tests are written with the help of a library dedicated to this purpose. Such tests may look like this:
```csharp
[Test]
public void ValueObject_Should_Be_Immutable()
{
    var result = Types.InAssembly(DomainAssembly)
        .That()
        .Inherit(typeof(ValueObject))
        .Should()
        .BeImmutable()
        .GetResult();

    Assert.True(result.IsSuccessful);
}

[Test]
public void DomainLayer_DoesNotHaveDependency_ToInfrastructureLayer()
{
    var result = Types.InAssembly(DomainAssembly)
        .That()
        .ResideInNamespace("Domain")
        .ShouldNot()
        .HaveDependencyOn("Infrastructure")
        .GetResult();

    Assert.True(result.IsSuccessful);
}
```
We can check many things with these tests. Libraries such as NetArchTest or ArchUnit allow a lot, and writing a custom rule is not a difficult task. A complete example of using such tests can be found here.
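For example, module isolation itself can be guarded this way. A sketch using the same NetArchTest style as above (the assembly and namespace names are hypothetical):

```csharp
[Test]
public void OrdersModule_DoesNotHaveDependency_On_PaymentsModule()
{
    // Fails the build if any type in the Orders module reaches
    // into the Payments module's namespace.
    var result = Types.InAssembly(OrdersAssembly)
        .ShouldNot()
        .HaveDependencyOn("MyCompany.Payments")
        .GetResult();

    Assert.True(result.IsSuccessful);
}
```

Run as part of the regular test suite in CI, such a rule turns an architectural decision into a failing build rather than a comment someone may or may not make during review.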
Code review

If we are not able to check the compliance of our solution with the chosen architecture using a computer (the compiler or automated tests), we have one last tool: code review. It lets us check everything that a computer cannot check for us, but it has some disadvantages.
The first disadvantage is that people can be wrong, so the probability of missing a violation of an architectural decision is relatively high.
The second drawback, of course, is the large amount of time we need to spend on code review. This is not a waste of time, and we cannot give it up, but it must always be included in the project estimates.
The conclusion is obvious: to enforce architecture, we should use the computer as much as possible and treat code review as the last line of defense. The question is how to strengthen this line of defense, i.e. how to reduce the time spent and the probability of missing something during code review. We can use Architecture Decision Records (ADRs).
Architecture Decision Records (ADR)
What is an Architecture Decision Record? Let me quote a definition from the most popular GitHub repository related to this topic:
An architectural decision record (ADR) is a document that captures an important architectural decision made along with its context and consequences.
Such a document is usually stored in the version control system, which (as an approach) is also recommended by the popular ThoughtWorks Technology Radar.
My advice is to start by describing your decisions as simply and quickly as possible. Without unnecessary ceremony, choose a simple template (e.g. the one proposed by Michael Nygard) that contains the most important elements: context, decision, and consequences. But how does this relate to code review?
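For illustration, a minimal ADR following Nygard's context/decision/consequences structure might look like this (the decision itself is a made-up example):

```
# 5. Use the internal access modifier for module implementations

## Status
Accepted

## Context
Modules must communicate only through well-defined public interfaces.
By default, IDE-generated classes are public, which erodes module
encapsulation over time.

## Decision
Each module lives in its own assembly. Only the module's interface and
its contract types are public; everything else is internal.

## Consequences
The compiler rejects direct references to another module's internals.
Code reviewers can link to this record instead of re-explaining the rule.
```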
First, all decisions become public: everyone has access to them and they are written down. No one can say “I did not know.” Since such decisions are by definition important, everyone must know and follow them.
Second, it speeds up the code review process: instead of explaining why something is wrong, what was decided, when, and in what context, you can simply paste a link to the appropriate ADR.
Summary

Every system has some architecture. The question is: will you shape the architecture of your system, or will it shape itself? The first option is certainly better, because the second may condemn us to a big failure.
Architecture enforcement is the responsibility of every team member, not just the architect, which is why the way we do it is so important. It is a process that requires commitment. The techniques I have described can significantly facilitate and improve this process while keeping the quality of our system at the appropriate level.
References

1. Unit Test Your Architecture with ArchUnit – Jonas Havers, article
2. Architecture Decision Records in Action – Michael Keeling, Joe Runde, presentation
3. Design It! – Michael Keeling, book
4. Modular Monolith with DDD – Kamil Grzybek, GitHub repository
5. Modular Monoliths – Simon Brown, video
Image credits: nanibystudio