Handling Domain Events: Missing Part

Introduction

Some time ago I wrote a post about publishing and handling domain events. In addition, in one of the posts I described the Outbox Pattern, which provides us with At-Least-Once delivery when integrating with external components / services without using the 2PC protocol.

This time I wanted to present a combination of both approaches to complete previous posts. I will present a complete solution that enables reliable data processing in the system in a structured manner taking into account the transaction boundary.

Depth of the system

To begin with, I would like to describe what a Shallow System and a Deep System are.

Shallow System

A system is considered to be shallow when, most often, not much happens after performing an action on it.

A typical and most popular example of this type of system is one composed mostly of CRUD operations. Most of the operations involve managing the data shown on the screen and there is little business logic underneath. Sometimes such systems are also called database browsers. πŸ˜‰

Another heuristic that can point to a Shallow System is the ability to specify the requirements for such a system practically through a GUI prototype. The prototype of the system (possibly with the addition of comments) shows us how this system should work, and that is enough – if nothing is happening underneath, then there is nothing more to define and describe.

From the Domain-Driven Design point of view, it will most often look like this: the execution of the Command on the Aggregate publishes exactly one Domain Event and… nothing happens. No subscribers to this event, no processors / handlers, no workflows defined. There is no communication with other contexts or 3rd systems either. It looks like this:

Shallow system in the context of DDD.

Execute action, process request, save data – end of story. Often, in such cases, we do not even need DDD. Transaction Script or Active Record will be enough.

Deep system

The Deep System is (as one could easily guess) the complete opposite of the Shallow System.

A Deep System is one that is designed to solve problems in a non-trivial, complicated domain. If the domain is complicated, then the Domain Model will be complicated as well. Of course, the Domain Model should be simplified as much as possible, but at the same time it should not lose the aspects that are most important in a given context (in terms of DDD – the Bounded Context). Nevertheless, it contains a lot of business logic that needs to be handled.

We cannot specify a Deep System with a GUI prototype because too much is happening underneath. Saving or reading data is just one of the actions that our system performs. Other activities include communication with other systems, complicated data processing or calling other parts of our system.

This time, much more is happening in the context of a Domain-Driven Design implementation. Aggregates can publish multiple Domain Events, and for each Domain Event there can be many handlers responsible for different behaviors. This behavior can be communication with an external system or executing a Command on another Aggregate, which will again publish its events to which another part of our system will subscribe. This scheme repeats itself and our Domain Model processes the flow in a reactive manner:

Deep system in the context of DDD.

Problem

The post about publishing and handling domain events presented a very simple case, and that solution did not support re-publishing (and handling) of events by another Aggregate whose processing resulted from a previous Domain Event. In other words, there was no support for complex flows and reactive data processing. Only the single Command -> Aggregate -> Domain Event -> handlers scenario was possible.

It will be best to consider this with a specific example. Let's assume the following requirements – after a Customer places an Order:
a) A confirmation email about the placed Order should be sent to the Customer
b) A new Payment should be created
c) An email about the new Payment should be sent to the Customer

These requirements are illustrated in the following picture:

Let's assume that in this particular case both the Order placement and the Payment creation should take place in the same transaction. If the transaction is successful, we need to send two emails – one about the Order and one about the Payment. Let's see how we can implement this type of scenario.

Solution

The most important thing we have to keep in mind is the transaction boundary. To make our life easier, we make the following assumptions:

1. Command Handler defines transaction boundary. Transaction is started when Command Handler is invoked and committed at the end.
2. Each Domain Event handler is invoked in context of the same transaction boundary.
3. If we want to process something outside the transaction, we need to create a public event based on the Domain Event. I call it Domain Event Notification, some people call it a public event, but the concept is the same.

The second most important thing is when to publish and process Domain Events? Events may be created after each action on the Aggregate, so we must publish them:
– after each Command handling (but BEFORE committing transaction)
– after each Domain Event handling (but WITHOUT committing transaction)

The last thing to consider is the processing of Domain Event Notifications (public events). We need to find a way to process them outside the transaction, and here the Outbox Pattern comes into play.

The first thing that comes to mind is to publish events at the end of each Command handler and commit the transaction, and at the end of each Domain Event handler only publish events. We can, however, use a much more elegant solution here – the Decorator Pattern. The Decorator Pattern allows us to wrap our handling logic in infrastructural code, similarly to how Aspect-Oriented Programming and ASP.NET Core middleware work.

We need two decorators. The first one will be for command handlers:
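The original code listing is not reproduced here, so below is a minimal sketch of such a decorator. The ICommandHandler, ICommand and IUnitOfWork abstractions are assumptions made for illustration:

public class UnitOfWorkCommandHandlerDecorator<T> : ICommandHandler<T> where T : ICommand
{
    private readonly ICommandHandler<T> _decorated;
    private readonly IUnitOfWork _unitOfWork;

    public UnitOfWorkCommandHandlerDecorator(ICommandHandler<T> decorated, IUnitOfWork unitOfWork)
    {
        _decorated = decorated;
        _unitOfWork = unitOfWork;
    }

    public async Task Handle(T command)
    {
        await _decorated.Handle(command);  // the real Command handler is invoked
        await _unitOfWork.CommitAsync();   // Domain Events are published and the transaction is committed
    }
}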

As you can see, the real Command handler is invoked first, and then the Unit of Work is committed. The UoW commit publishes the Domain Events and commits the existing transaction:
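A sketch of such a Unit of Work (assumed types – the dispatcher publishes all Domain Events collected by the Aggregates, and SaveChangesAsync commits the transaction):

public class UnitOfWork : IUnitOfWork
{
    private readonly OrdersContext _ordersContext;
    private readonly IDomainEventsDispatcher _domainEventsDispatcher;

    public UnitOfWork(OrdersContext ordersContext, IDomainEventsDispatcher domainEventsDispatcher)
    {
        _ordersContext = ordersContext;
        _domainEventsDispatcher = domainEventsDispatcher;
    }

    public async Task<int> CommitAsync()
    {
        await _domainEventsDispatcher.DispatchEventsAsync();  // Domain Event handlers run within the transaction
        return await _ordersContext.SaveChangesAsync();       // the transaction is committed
    }
}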

In accordance with the previously described assumptions, we also need a second decorator for Domain Event handlers, which will only publish Domain Events at the very end, without committing the database transaction:
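A sketch of the second decorator, assuming Domain Events are dispatched in-process as MediatR notifications:

public class DomainEventsDispatcherNotificationHandlerDecorator<T> : INotificationHandler<T> where T : INotification
{
    private readonly INotificationHandler<T> _decorated;
    private readonly IDomainEventsDispatcher _domainEventsDispatcher;

    public DomainEventsDispatcherNotificationHandlerDecorator(
        INotificationHandler<T> decorated,
        IDomainEventsDispatcher domainEventsDispatcher)
    {
        _decorated = decorated;
        _domainEventsDispatcher = domainEventsDispatcher;
    }

    public async Task Handle(T notification, CancellationToken cancellationToken)
    {
        await _decorated.Handle(notification, cancellationToken);
        await _domainEventsDispatcher.DispatchEventsAsync();  // only publish – no transaction commit here
    }
}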

The last thing to do is to configure our decorators in the IoC container (Autofac example):
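A sketch of the registration, assuming a recent Autofac version with the RegisterGenericDecorator API:

builder.RegisterGenericDecorator(
    typeof(UnitOfWorkCommandHandlerDecorator<>),
    typeof(ICommandHandler<>));

builder.RegisterGenericDecorator(
    typeof(DomainEventsDispatcherNotificationHandlerDecorator<>),
    typeof(INotificationHandler<>));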

Add Domain Event Notifications to Outbox

The second thing we have to do is to save notifications about Domain Events that we want to process outside of the transaction. To do this, we use the implementation of the Outbox Pattern:

As a reminder – the data for our Outbox is saved in the same transaction, which is why At-Least-Once delivery is guaranteed.

Implementing flow steps

At this point, we can focus only on the application logic and do not need to worry about infrastructural concerns. Now we only implement the particular flow steps:

a) When the Order is placed then create Payment:
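A sketch of this step (types assumed) – the handler runs inside the same transaction as the Order placement and only adds the new Payment; the decorator described above then re-publishes the Domain Events it raises (for example a PaymentCreatedEvent):

public class CreatePaymentWhenOrderPlacedHandler : INotificationHandler<OrderPlacedEvent>
{
    private readonly IPaymentRepository _paymentRepository;

    public CreatePaymentWhenOrderPlacedHandler(IPaymentRepository paymentRepository)
    {
        _paymentRepository = paymentRepository;
    }

    public async Task Handle(OrderPlacedEvent orderPlacedEvent, CancellationToken cancellationToken)
    {
        var payment = new Payment(orderPlacedEvent.OrderId, orderPlacedEvent.Value);
        await _paymentRepository.AddAsync(payment);
    }
}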

b) When the Order is placed then send an email:
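A sketch of this step (types assumed) – this handler processes the Domain Event Notification read from the Outbox, so it runs outside the original transaction:

public class SendEmailWhenOrderPlacedHandler : INotificationHandler<OrderPlacedNotification>
{
    private readonly IEmailSender _emailSender;

    public SendEmailWhenOrderPlacedHandler(IEmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    public Task Handle(OrderPlacedNotification notification, CancellationToken cancellationToken)
    {
        return _emailSender.SendOrderConfirmationEmail(notification.CustomerId, notification.OrderId);
    }
}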

c) When the Payment is created then send an email:

The following picture presents the whole flow:

Flow of processing

Summary

In this post I described how it is possible to process Commands and Domain Events in a Deep System in a reactive way.

Summarizing, the following concepts have been used for this purpose:

– Decorator Pattern for events dispatching and transaction boundary management
– Outbox Pattern for processing events in separate transaction
– Unit of Work Pattern
– Domain Events Notifications (public events) saved to the Outbox
– Basic DDD Building Blocks – Aggregates and Domain Events
– Eventual Consistency

Source code

If you would like to see full, working example – check my GitHub repository.

Additional Resources

The Outbox: An EIP Pattern – John Heintz
Domain events: design and implementation – Microsoft

Related posts

How to publish and handle Domain Events
Simple CQRS implementation with raw SQL and DDD
The Outbox Pattern

GRASP – General Responsibility Assignment Software Patterns Explained

Introduction

I recently noticed that a lot of attention is paid to the SOLID principles. And this is a very good thing, because they are the absolute basis of Object-Oriented Design (OOD) and programming. For developers of object-oriented languages, knowledge of the SOLID principles is a requirement for writing good-quality code. There are a lot of articles and courses on these rules, so if you do not know them yet, learn them as soon as possible.

On the other hand, there is another, less well-known set of rules regarding object-oriented programming. It is called GRASP – General Responsibility Assignment Software Patterns (or Principles). There are far fewer materials on the Internet about this topic, so I decided to bring it closer to you, because I think the principles described in it are as important as the SOLID principles.

Disclaimer: This post is inspired by and based on Craig Larman's awesome book: Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development. Although the last edition was released in 2004, in my opinion the book is still up-to-date and explains perfectly how to design systems using object-oriented languages. It is hard to find a better book on this subject, believe me. It is not a book about UML, but you can learn UML from it because that is well explained too. A must-have for every developer, period.

The Responsibility in Software

Responsibility in software is a very important concept that concerns not only classes but also modules and entire systems. Thinking in terms of responsibilities is a popular way to think about the design of software. We can always ask questions like:

– What is the responsibility of this class/module/component/system?
– Is it responsible for this or is it responsible for that?
– Is Single Responsibility Principle violated in this particular context?

But to answer these kind of questions we should ask one, more fundamental question: what does it mean that something is responsible in the context of the software?

Doing and Knowing

As proposed by Rebecca Wirfs-Brock in her book Object Design: Roles, Responsibilities, and Collaborations and her Responsibility-Driven Design (RDD) approach, a responsibility is:

an obligation to perform a task or know information

As we see from this definition we have here a clear distinction between behavior (doing) and data (knowing).

Doing responsibility of an object is seen as:
a) doing something itself – create an object, process data, do some computation/calculation
b) initiate and coordinate actions with other objects

Knowing responsibility of an object can be defined as:
a) private and public object data
b) related objects references
c) things it can derive

Let’s see an example:

If you want more information about responsibilities in software and want to dig into Responsibility-Driven Design, you can read about it directly in Rebecca Wirfs-Brock's book or this PDF.

Ok, now we know what the responsibility in context of software is. Let’s see how to assign this responsibility using GRASP.

GRASP

GRASP is a set of exactly 9 General Responsibility Assignment Software Patterns. As I wrote above, the assignment of object responsibilities is one of the key skills of OOD. Every programmer and designer should be familiar with these patterns and, what is more important, know how to apply them in everyday work (by the way, the same applies to the SOLID principles).

This is the list of 9 GRASP patterns (sometimes called principles but please, do not focus on naming here):

1. Information Expert
2. Creator
3. Controller
4. Low Coupling
5. High Cohesion
6. Indirection
7. Polymorphism
8. Pure Fabrication
9. Protected Variations

NOTE: All Problem/Solution paragraphs are quotes from Craig Larman's book. I decided that it would be best to stick to the original.

1. Information Expert

Problem: What is a basic principle by which to assign responsibilities to objects?
Solution: Assign a responsibility to the class that has the information needed to fulfill it.

In the following example, the Customer class has references to all of the customer's Orders, so it is a natural candidate to take responsibility for calculating the total value of the orders:
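A minimal sketch of this example (simplified; the Order class is assumed to expose its value):

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    public decimal GetOrdersTotalValue()
    {
        return _orders.Sum(order => order.Value);
    }
}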

This is the most basic principle, because the truth is – if we do not have the data we need, we would not be able to meet the requirement and assign responsibility anyway.

2. Creator

Problem: Who creates object A?
Solution: Assign class B the responsibility to create object A if one of these is true (more is better)
– B contains or compositely aggregates A
– B records A
– B closely uses A
– B has the initializing data for A

Going back to the example:
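A sketch of the Creator in code (simplified, assumed types):

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    public Order PlaceOrder(List<OrderProduct> orderProducts, string currency)
    {
        var order = new Order(orderProducts, currency);  // Customer creates the Order
        _orders.Add(order);                              // ...and records it
        return order;
    }
}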

As you can see above, the Customer class compositely aggregates Orders (there is no Order without a Customer), records Orders, closely uses Orders and has the initializing data passed in as method parameters. An ideal candidate for the "Order Creator". πŸ™‚

3. Controller

Problem: What first object beyond the UI layer receives and coordinates ("controls") a system operation?
Solution: Assign the responsibility to an object representing one of these choices:
– Represents the overall “system”, “root object”, device that the software is running within, or a major subsystem (these are all variations of a facade controller)
– Represents a use case scenario within which the system operation occurs (a use case or session controller)

The implementation of this principle depends on the high-level design of our system, but in general we always need to define an object which orchestrates our business transaction processing. At first glance, it would seem that the MVC Controller in web applications/APIs is a great example here (even the name is the same), but in my opinion it is not. Of course it receives input, but it shouldn't coordinate a system operation – it should delegate it to a separate service or Command Handler:
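A sketch of such a thin controller, assuming MediatR as the mediating component (the command and request types are illustrative):

[Route("api/customers")]
public class CustomerOrdersController : Controller
{
    private readonly IMediator _mediator;

    public CustomerOrdersController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost("{customerId}/orders")]
    public async Task<IActionResult> AddCustomerOrder(Guid customerId, [FromBody] CustomerOrderRequest request)
    {
        // the controller only receives the input and delegates the system operation
        await _mediator.Send(new AddCustomerOrderCommand(customerId, request.Products));
        return Ok();
    }
}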

4. Low Coupling

Problem: How to reduce the impact of change? How to support low dependency and increased reuse?
Solution: Assign responsibilities so that (unnecessary) coupling remains low. Use this principle to evaluate alternatives.

Coupling is a measure of how strongly one element is related to another. The higher the coupling, the greater the dependence of one element on another.

Low coupling means our objects are more independent and isolated. If something is isolated, we can change it without worrying that we have to change something else or whether we will break something (see Shotgun Surgery). Using the SOLID principles is a great way to keep coupling low. As you can see in the example above, the coupling between CustomerOrdersController and AddCustomerOrderCommandHandler remains low – they only need to agree on the command object structure. This low coupling is possible thanks to the Indirection pattern, which is described later.

5. High Cohesion

Problem: How to keep objects focused, understandable, manageable and as a side effect support Low Coupling?
Solution: Assign a responsibility so that cohesion remains high. Use this to evaluate alternatives.

Cohesion is a measure of how strongly all the responsibilities of an element are related. In other words, it is the degree to which the parts inside an element belong together.

Classes with low cohesion have unrelated data and/or unrelated behaviors. For example, the Customer class has high cohesion because now it does only one thing – it manages the Orders. If I added the responsibility of managing product prices to this class, its cohesion would drop significantly, because a price list is not directly related to the Customer itself.

6. Indirection

Problem: Where to assign a responsibility to avoid direct coupling between two or more things?
Solution: Assign the responsibility to an intermediate object to mediate between other components or services so that they are not directly coupled.

This is where the Mediator Pattern comes into play. Instead of direct coupling:

we can use the mediator object and mediate between objects:

One note here. Indirection supports low coupling but reduces readability and reasoning about the whole system. You don’t know which class handles the command from the Controller definition. This is the trade-off to take into consideration.

7. Polymorphism

Problem: How to handle alternatives based on type?
Solution: When related alternatives or behaviors vary by type (class), assign responsibility for the behavior (using polymorphic operations) to the types for which the behavior varies.

Polymorphism is a fundamental principle of Object-Oriented Design. In this context, the principle is strongly connected with (among others) the Strategy Pattern.

In our example, the constructor of the Customer class takes the ICustomerUniquenessChecker interface as a parameter:
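A sketch of this idea (simplified, assumed shape):

public class Customer
{
    public string Email { get; private set; }
    public string Name { get; private set; }

    public Customer(string email, string name, ICustomerUniquenessChecker customerUniquenessChecker)
    {
        if (!customerUniquenessChecker.IsUnique(email))
        {
            throw new InvalidOperationException("Customer with this email already exists.");
        }

        Email = email;
        Name = name;
    }
}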

We can provide different implementations of this interface there, depending on the requirements. In general, this is a very useful approach when we have different algorithms in our systems that have the same input and output (in terms of structure).

8. Pure Fabrication

Problem: What object should have the responsibility, when you do not want to violate High Cohesion and Low Coupling, but solutions offered by other principles are not appropriate?
Solution: Assign a highly cohesive set of responsibilities to an artificial or convenience class that does not represent a problem domain concept.

Sometimes it is really hard to figure out where a responsibility should be placed. This is why in Domain-Driven Design there is the concept of a Domain Service. Domain Services hold logic which is not related to one particular Entity.

For example, in e-commerce systems we often need to convert one currency to another. Sometimes it is hard to say where this behavior should be placed, so the best option is to create a new class and interface:
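A sketch of such a Pure Fabrication (the IForeignExchange name comes from the text; the rate provider is an assumption):

public interface IForeignExchange
{
    MoneyValue Convert(MoneyValue value, string targetCurrency);
}

public class ForeignExchange : IForeignExchange
{
    private readonly IExchangeRateProvider _exchangeRateProvider;

    public ForeignExchange(IExchangeRateProvider exchangeRateProvider)
    {
        _exchangeRateProvider = exchangeRateProvider;
    }

    public MoneyValue Convert(MoneyValue value, string targetCurrency)
    {
        var rate = _exchangeRateProvider.GetRate(value.Currency, targetCurrency);
        return MoneyValue.Of(value.Value * rate, targetCurrency);
    }
}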

This way we support both High Cohesion (we are only converting currencies) and Low Coupling (client classes depend only on the IForeignExchange interface). Additionally, this class is reusable and easy to maintain.

9. Protected Variations

Problem: How to design objects, subsystems and systems so that the variations or instability in these elements does not have an undesirable impact on other elements?
Solution: Identify points of predicted variation or instability, assign responsibilities to create a stable interface around them.

In my opinion, this is the most important principle, and it is indirectly related to all the other GRASP principles. Currently, one of the most important software metrics is the ease of change. As architects and programmers we must be ready for ever-changing requirements. This is not an optional, "nice to have" quality attribute – it is a "must have" and our duty.

Fortunately, we are armed with a lot of design guidelines, principles, patterns and practices to support change on different levels of abstraction. I will mention only a few (already beyond GRASP):

– SOLID principles, especially the Open-Closed Principle (but all of them support change)
– Gang of Four (GoF) Design Patterns
– Encapsulation
– Law of Demeter
– Service Discovery
– Virtualization and containerization
– Asynchronous messaging, Event-Driven Architectures
– Orchestration, Choreography

An iterative software development process is also more suitable today, because even if we are forced to change something once, we can draw conclusions and be prepared for future changes at a lower cost.

Fool me once shame on you. Fool me twice shame on me.

Summary

In this post I described one of the most fundamental sets of Object-Oriented Design patterns and principles – GRASP.

Skilful management of responsibilities in software is the key to creating good quality architecture and code. In combination with other patterns and practices, it is possible to develop well-crafted systems which support change and do not resist it. This is good, because the only thing that is certain is change. So be prepared.

Related posts

10 common broken rules of clean code

The Outbox Pattern

Introduction

Sometimes, when processing a business operation, you need to communicate with an external component in the Fire-and-forget mode. That component can be, for example:
– external service
– message bus
– mail server
– same database but different database transaction
– another database

Examples of this type of integration with external components:
– sending an e-mail message after placing an order
– sending an event about new client registration to the messaging system
– processing another DDD Aggregate in different database transaction – for example after placing an order to decrease number of products in stock

The question that arises is whether we are able to guarantee the atomicity of our business operation from a technical point of view. Unfortunately, not always – and even if we can (using the 2PC protocol), it limits our system in terms of latency, throughput, scalability and availability. For details about these limitations, I invite you to read the article titled It's Time to Move on from Two Phase Commit.

The problem I am writing about is presented below:
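The original listing is not included here; a minimal sketch of such a command handler's Handle method (types assumed) shows the issue:

public async Task Handle(RegisterCustomerCommand command)
{
    var customer = Customer.Create(command.Email, command.Name);
    _customerRepository.Add(customer);

    await _unitOfWork.CommitAsync();  // the transaction is committed here

    // if the process crashes now, or the bus is unavailable, the event is lost
    await _eventBus.Publish(new CustomerRegisteredEvent(customer.Id));
}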

After the transaction is committed we want to send an event to the event bus, but unfortunately two bad things can happen:
– our system can crash just after the transaction commit and before sending the event
– the event bus can be unavailable at this moment, so the event cannot be sent

Outbox pattern

If we cannot provide atomicity or we don’t want to do that for the reasons mentioned above, what could we do to increase the reliability of our system? We should implement the Outbox Pattern.


The Outbox Pattern is based on Guaranteed Delivery pattern and looks as follows:

Outbox pattern

When you save data as part of one transaction, you also save, as part of the same transaction, the messages that you later want to process. The list of messages to be processed is called an Outbox, just like in e-mail clients.

The second element of the puzzle is a separate process that periodically checks the contents of the Outbox and processes the messages. After processing each message, the message should be marked as processed to avoid resending. However, it is possible that we will not be able to mark the message as processed due to communication error with Outbox:

Outbox messages processing

In this case, when the connection with the Outbox is recovered, the same message will be sent again. What does all this mean for us? The Outbox Pattern gives us At-Least-Once delivery. We can be sure that the message will be sent at least once, but it can also be sent multiple times! That's why another name for this approach is Once-Or-More delivery. We should remember this and try to design the receivers of our messages to be idempotent, which means:

In Messaging this concepts translates into a message that has the same effect whether it is received once or multiple times. This means that a message can safely be resent without causing any problems even if the receiver receives duplicates of the same message.

OK, enough theory, let's see how we can implement this pattern in the .NET world.

Implementation

Outbox message

At the beginning, we need to define the structure of our OutboxMessage:
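A sketch of this structure:

public class OutboxMessage
{
    public Guid Id { get; private set; }
    public DateTime OccurredOn { get; private set; }
    public string Type { get; private set; }
    public string Data { get; private set; }

    public OutboxMessage(DateTime occurredOn, string type, string data)
    {
        Id = Guid.NewGuid();
        OccurredOn = occurredOn;
        Type = type;
        Data = data;
    }
}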

What is important, the OutboxMessage class is part of the Infrastructure and not the Domain Model! Try to talk with the business about an Outbox – they will think about the Outlook application instead of the messaging pattern. πŸ™‚ I didn't include a ProcessedDate property because this class is only needed to save the message as part of the transaction, so this property will always be NULL in this context.

Saving the message

For sure I do not want to write messages to the Outbox manually in each Command Handler – it is against the DRY principle. For this reason, the Notification Object described in the post about publishing Domain Events can be used. The following solution is based on the linked article with a small modification – instead of processing the notifications immediately, it serializes them and writes them to the database.

As a reminder, all Domain Events resulting from an action are processed as part of the same transaction. If the Domain Event should be processed outside of the ongoing transaction, you should define a Notification Object for it. This is the object which should be written to the Outbox. The code looks like this:
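A sketch of the relevant fragment (assumed types): during the Unit of Work commit, every Domain Event Notification is serialized with Json.NET and added as an OutboxMessage to the same DbContext, so it is saved in the same transaction as the business data:

foreach (var domainEventNotification in domainEventNotifications)
{
    var type = domainEventNotification.GetType().FullName;
    var data = JsonConvert.SerializeObject(domainEventNotification);

    var outboxMessage = new OutboxMessage(DateTime.UtcNow, type, data);
    _ordersContext.OutboxMessages.Add(outboxMessage);
}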

Example of Domain Event:
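A sketch (the DomainEventBase base class is an assumption):

public class CustomerRegisteredEvent : DomainEventBase
{
    public Guid CustomerId { get; }

    public CustomerRegisteredEvent(Guid customerId)
    {
        CustomerId = customerId;
    }
}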

And Notification Object:
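A sketch (the DomainNotificationBase base class is an assumption; note the two constructors described below):

public class CustomerRegisteredNotification : DomainNotificationBase<CustomerRegisteredEvent>
{
    public Guid CustomerId { get; }

    public CustomerRegisteredNotification(CustomerRegisteredEvent domainEvent) : base(domainEvent)
    {
        CustomerId = domainEvent.CustomerId;
    }

    [JsonConstructor]
    public CustomerRegisteredNotification(Guid customerId) : base(null)
    {
        CustomerId = customerId;
    }
}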

The first thing to note is the use of the Json.NET library. The second thing to note is the 2 constructors of the CustomerRegisteredNotification class. The first one is for creating a notification based on the Domain Event. The second one is for deserializing the message from a JSON string, which is presented in the following section about processing.

Processing the message

The processing of Outbox messages should take place in a separate process. However, instead of a separate process, we can also use the same process but another thread depending on the needs. Solution which is presented below can be used in both cases.

At the beginning, we need a scheduler that will periodically run the Outbox processing. I do not want to create the scheduler myself (it is a known and solved problem), so I will use one of the mature solutions in .NET – Quartz.NET. The configuration of the Quartz scheduler is very simple:
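A sketch of the setup (Quartz 3.x API, inside an async startup method; in the full example a custom IJobFactory resolving jobs from the IoC container is also plugged in):

var schedulerFactory = new StdSchedulerFactory();
var scheduler = await schedulerFactory.GetScheduler();
await scheduler.Start();

var processOutboxJob = JobBuilder.Create<ProcessOutboxJob>().Build();
var trigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(schedule => schedule.WithIntervalInSeconds(15).RepeatForever())
    .Build();

await scheduler.ScheduleJob(processOutboxJob, trigger);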

Firstly, the scheduler is created using the factory. Then, a new instance of the IoC container for resolving dependencies is created. The last thing to do is to configure our job execution schedule. In the case above it will be executed every 15 seconds, but its configuration really depends on how many messages you will have in your system.

This is what the ProcessOutboxJob looks like:
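A sketch of the job, assuming Dapper for data access and MediatR for dispatching (OutboxMessageDto with Id, Type and Data properties is an assumed helper):

[DisallowConcurrentExecution]
public class ProcessOutboxJob : IJob
{
    private readonly IMediator _mediator;
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public ProcessOutboxJob(IMediator mediator, ISqlConnectionFactory sqlConnectionFactory)
    {
        _mediator = mediator;
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        const string sql = "SELECT Id, Type, Data FROM app.OutboxMessages WHERE ProcessedDate IS NULL";
        var messages = await connection.QueryAsync<OutboxMessageDto>(sql);  // get all messages to process

        foreach (var message in messages)
        {
            var type = Assembly.GetAssembly(typeof(CustomerRegisteredNotification)).GetType(message.Type);
            var notification = (INotification)JsonConvert.DeserializeObject(message.Data, type);  // deserialize to the Notification Object

            await _mediator.Publish(notification);  // process the Notification Object

            await connection.ExecuteAsync(
                "UPDATE app.OutboxMessages SET ProcessedDate = @Date WHERE Id = @Id",
                new { Date = DateTime.UtcNow, message.Id });  // mark the message as processed
        }
    }
}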

The most important parts are:
– the [DisallowConcurrentExecution] attribute, which means that the scheduler will not start a new instance of the job if another instance of that job is still running. This is important because we don't want to process the Outbox concurrently.
– getting all unprocessed messages from the Outbox
– deserializing each message to its Notification Object
– processing the Notification Object (for example, sending an event to a bus)
– marking the message as processed

As I wrote earlier, if there is an error between processing the message and marking it as processed, the job will try to process it again in the next iteration.

Notification handler template looks like this:
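A sketch of such a handler (the actual side effect – publishing to a bus, sending an e-mail, calling an external service – is up to you):

public class CustomerRegisteredNotificationHandler : INotificationHandler<CustomerRegisteredNotification>
{
    public Task Handle(CustomerRegisteredNotification notification, CancellationToken cancellationToken)
    {
        // e.g. send an integration event to the bus
        return Task.CompletedTask;
    }
}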

Finally, this is view of our Outbox:

Outbox view

Summary

In this post I described the problems with ensuring the atomicity of a transaction during business operation processing. I raised the topic of the 2PC protocol and the motivation not to use it. I presented what the Outbox Pattern is and how we can implement it. Thanks to this, our system can be much more reliable.

Source code

If you would like to see full, working example – check my GitHub repository.

Additional Resources

Refactoring Towards Resilience: Evaluating Coupling – Jimmy Bogard
The Outbox: An EIP Pattern – John Heintz
Asynchronous message-based communication – Microsoft

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events

Domain Model Validation

Introduction

In the previous post I described how request input data can be validated on the Application Services layer. I showed the usage of the FluentValidation library in combination with the Pipeline Pattern and the Problem Details standard. In this post I would like to focus on the second type of validation, which sits in the Domain Layer – Domain Model validation.

What is Domain Model validation

We can divide the validation of the Domain Model into two types based on scope – Aggregates scope and Bounded Context scope

Aggregates scope

Let's recall what an Aggregate is by quoting a fragment from Vaughn Vernon's Domain-Driven Design Distilled book:

Each Aggregate forms a transactional consistency boundary. This means that within a single Aggregate, all composed parts must be consistent, according to business rules, when the controlling transaction is committed to the database.

The most important part of this quote in the context of validation is that all composed parts must be consistent when the transaction is committed. It means that under no circumstances can we persist to the database an Aggregate which has an invalid state or breaks business rules. These rules are often called "invariants" and are defined by Vaughn Vernon as follows:

… business invariants β€” the rules to which the software must always adhere β€” are guaranteed to be consistent following each business operation.

So in context of Aggregates scope, we need to protect these invariants by executing validation during our use case (business operation) processing.

Bounded Context scope

Unfortunately, validation of Aggregates invariants is not enough. Sometimes the business rule may apply to more than one Aggregate (they can be even aggregates of different types).

For example, assuming that we have a Customer Entity as an Aggregate Root, the business rule may be "Customer email address must be unique". To check this rule we need to check the emails of all Customers, which are separate Aggregate Roots. It is outside the scope of one Customer Aggregate. Of course, supposedly, we could create a new entity called CustomerCatalog as an Aggregate Root and aggregate all of the Customers in it, but this is not a good idea for many reasons. A better solution is described later in this article.

Let’s see what options we have to solve both validation problems.

Three solutions

Return Validation Object

This solution is based on Notification Pattern. We are defining special class called Notification/ValidationResult/Result/etc which “collects together information about errors and other information in the domain layer and communicates it”.

What does it mean for us? It means that every entity method which mutates the state of the Aggregate should return this validation object. The keyword here is entity, because we can have (and we likely will have) nested invocations of methods inside the Aggregate. Recall the diagram from the post about Domain Model encapsulation:

Domain model encapsulation

The program flow will look like:

Validation Object Flow

and the code structure (simplified):

However, if we don't want to return a ValidationResult from every method which mutates the state, we can apply a different approach which I described in the article about publishing Domain Events. In short, in this solution we need to add a ValidationResult property to every Entity (just like a Domain Events collection), and after Aggregate processing we have to examine these properties and decide whether the whole Aggregate is valid.

Deferred validation

The second solution for implementing validation is to execute the checks after the whole Aggregate method has been processed. This approach is presented, for example, by Jeffrey Palermo in his article. The whole solution is pretty straightforward:

Deferred validation

Always Valid

Last but not least, there is the solution called "Always Valid", which is simply about throwing exceptions inside Aggregate methods. It means that we finish processing of the business operation at the first violation of an Aggregate invariant. In this way, we are assured that our Aggregate is always valid:

Always Valid program flow

Comparison of solutions

I have to admit that I don’t like Validation Object and Deferred Validation approach and I recommend Always Valid strategy. My reasoning is as follows.

The Return Validation Object approach pollutes our method declarations, adds accidental complexity to our Entities and goes against the Fail-Fast principle. Moreover, the Validation Object becomes part of our Domain Model and it is certainly not part of the ubiquitous language. On the other hand, Deferred Validation implies a non-encapsulated Aggregate, because the validator object must have access to the Aggregate's internals to properly check the invariants.

However, both approaches have one advantage – they do not require throwing exceptions, which should be thrown only when something unexpected occurs. A broken business rule is not unexpected.

Nevertheless, I think this is one of the rare exceptions when we can break this rule. For me, throwing exceptions and having an always valid Aggregate is the best solution. "The ends justify the means", I would like to say. I think of this solution as an implementation of the Publish-Subscribe Pattern. The Domain Model is the publisher of broken-invariant messages and the Application is the subscriber to these messages. The main assumption is that after publishing a message the publisher stops processing, because this is how the exception mechanism works.

Always Valid Implementation

Exception throwing is built into the C# language, so practically we have everything we need. The only thing to do is to create a specific exception class; I called it BusinessRuleValidationException:
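A minimal sketch of such an exception:

public class BusinessRuleValidationException : Exception
{
    public string Details { get; }

    public BusinessRuleValidationException(string message, string details = null) : base(message)
    {
        Details = details;
    }
}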

Suppose we have a business rule which says that you cannot place more than 2 orders on the same day. This is what the implementation looks like:
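A simplified sketch (the Order.IsPlacedOn helper is an assumption):

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    public Order PlaceOrder(List<OrderProduct> orderProducts, string currency)
    {
        var ordersPlacedToday = _orders.Count(order => order.IsPlacedOn(DateTime.Today));
        if (ordersPlacedToday >= 2)
        {
            throw new BusinessRuleValidationException("You cannot order more than 2 orders on the same day.");
        }

        var order = new Order(orderProducts, currency);
        _orders.Add(order);
        return order;
    }
}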

What should we do with the thrown exception? We can use the approach from REST API Data Validation and return an appropriate message to the client as a Problem Details standard object. All we have to do is to add another ProblemDetails class and set up the mapping in Startup:

The result returned to client:

Problem details validation domain model

For simpler validation like checking for nulls, empty lists etc you can create library of guards (see Guard Pattern) or you can use external library. See GuardClauses created by Steve Smith for example.

BC scope validation implementation

What about validation which spans multiple Aggregates (Bounded Context scope)? Let’s assume that we have a rule that there cannot be 2 Customers with the same email address. There are two approaches to solve this.

The first way is to get required aggregates in CommandHandler and then pass them to aggregate’s method/constructor as arguments:

However, this is not always a good solution, because as you can see we need to load all Customer Aggregates into memory. This could be a serious performance issue. If we cannot afford it, then we need to use the second approach – create a Domain Service, which is defined as (source – DDD Reference):

When a significant process or transformation in the domain is not a natural responsibility of an entity or value object, add an operation to the model as a standalone interface declared as a service

So, for that case we need to create ICustomerUniquenessChecker service interface:
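A sketch of the interface (the method name is an assumption):

public interface ICustomerUniquenessChecker
{
    bool IsUnique(string customerEmail);
}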

This is the implementation of that interface:
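A sketch of an implementation, assuming Dapper and a SQL connection factory on the infrastructure side:

public class CustomerUniquenessChecker : ICustomerUniquenessChecker
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public CustomerUniquenessChecker(ISqlConnectionFactory sqlConnectionFactory)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public bool IsUnique(string customerEmail)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();
        const string sql = "SELECT COUNT(*) FROM app.Customers WHERE Email = @Email";

        var customersCount = connection.QuerySingle<int>(sql, new { Email = customerEmail });
        return customersCount == 0;
    }
}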

Finally, we can use it inside our Customer Aggregate:
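A sketch of the usage (simplified): the Domain Service is passed to the factory method and the rule is checked before the Aggregate is created.

public class Customer
{
    public Guid Id { get; private set; }
    public string Email { get; private set; }
    public string Name { get; private set; }

    private Customer(string email, string name)
    {
        Id = Guid.NewGuid();
        Email = email;
        Name = name;
    }

    public static Customer CreateRegistered(string email, string name, ICustomerUniquenessChecker customerUniquenessChecker)
    {
        if (!customerUniquenessChecker.IsUnique(email))
        {
            throw new BusinessRuleValidationException("Customer with this email already exists.");
        }

        return new Customer(email, name);
    }
}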

The question here is whether to pass the Domain Service as an argument to the Aggregate's constructor/method or to execute the validation in the Command Handler itself. As you can see above, I am a fan of the former approach, because I like to keep my command handlers very thin. Another argument for this option is that if I ever need to register a Customer from a different use case, I will not be able to bypass or forget about this uniqueness rule, because I will have to pass this service.

Summary

A lot was covered in this post in the context of Domain Model Validation. Let's summarize:
– We have two types of Domain Model validation – Aggregates scope and Bounded Context scope
– There are generally 3 methods of Domain Model validation – using a Validation Object, Deferred Validation or Always Valid (throwing exceptions)
– The Always Valid approach is preferred
– For Bounded Context scope validation there are 2 methods – passing all required data to the Aggregate's method or constructor, or creating a Domain Service (generally for performance reasons)

Source code

If you would like to see full, working example – check my GitHub repository.

Additional Resources

Validation in Domain-Driven Design (DDD) – Lev Gorodinski
Validation in a DDD world– Jimmy Bogard

Related posts

REST API Data Validation
Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events

REST API Data Validation

Introduction

This time I would like to describe how we can protect our REST API applications from requests containing invalid data (data validation process). However, validation of our requests is not enough, unfortunately. In addition to validation, it is our responsibility to return the relevant messages and statuses to our API clients. I wanted to deal with these two things in this post.

Data Validation

Definition of Data Validation

What is data validation really? The best definition I found is from UNECE Data Editing Group:

An activity aimed at verifying whether the value of a data item comes from the given (finite or infinite) set of acceptable values.

According to this definition, we should verify data items which come into our application from external sources and check whether their values are acceptable. How do we know that a value is acceptable? We need to define data validation rules for every type of data item which is processed in our system.

Data vs Business Rules validation

I would like to emphasize that data validation is a totally different concept than the validation of business rules. Data validation is focused on verifying an atomic data item. Business rules validation is a broader concept, closer to how the business works and behaves, so it is mainly focused on behavior. Of course, validating behavior depends on data too, but in a much wider range.

Examples of data validation:

– Product order quantity cannot be negative or zero
– Product order quantity should be a number
– Currency of order should be a value from currencies list

Examples of business rules validation

– Product can be ordered only when Customer age is equal or greater than product minimal age.
– Customer can place only two orders in one day.

Returning relevant information

If we find that rules have been broken during validation, we must stop processing and return an appropriate message to the client. We should follow these rules:

– we should return the message to the client as fast as possible (Fail-fast principle)
– the reason for the validation error should be well explained and understandable for the client
– we should not return technical details, for security reasons

Problem Details for HTTP APIs standard

The issue of returned error messages is so common that a special standard was created describing how to handle such situations. It is called "Problem Details for HTTP APIs" and its official description can be found here. This is the abstract of the standard:

This document defines a “problem detail” as a way to carry machine-readable details of errors in a HTTP response to avoid the need to define new error response formats for HTTP APIs.

Problem Details standard introduces Problem Details JSON object, which should be part of the response when validation error occurs. This is simple canonical model with 5 members:

– problem type
– title
– HTTP status code
– details of error
– instance (pointer to specific occurrence)

Of course we can (and sometimes we should) extend this object by adding new properties, but the base should be the same. Thanks to this our API is easier to understand, learn and use. For more detailed information about standard I invite you to read documentation which is well described.

Data validation localization

For the standard application we can put data validation logic in three places:

– GUI – the entry point for user input. Data is validated on the client side, for example using JavaScript in web applications
– Application logic/services layer – data is validated in a specific application service or command handler on the server side
– Database – this is the exit point of request processing and the last moment to validate the data

Data validation localization

In this article I am omitting GUI and Database components and I am focusing on the server side of the application. Let’s see how we can implement data validation on Application Services layer.

Implementing Data Validation

Suppose we have a command AddCustomerOrderCommand:
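A sketch of the command – its shape is inferred from the validation rules listed below (assuming MediatR's IRequest marker):

public class AddCustomerOrderCommand : IRequest
{
    public Guid CustomerId { get; }
    public List<ProductDto> Products { get; }

    public AddCustomerOrderCommand(Guid customerId, List<ProductDto> products)
    {
        CustomerId = customerId;
        Products = products;
    }
}

public class ProductDto
{
    public Guid Id { get; set; }
    public int Quantity { get; set; }
    public string Currency { get; set; }
}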

Suppose we want to validate 4 things:

1. CustomerId is not empty GUID.
2. Products list is not empty
3. Each product quantity is greater than 0
4. Each product currency is equal to USD or EUR

Let me show 3 solutions to this problem – from simple to the most sophisticated.

1. Simple validation on Application Service

The first thing that comes to mind is simple validation in the Command Handler itself. In this solution we need to implement a private method which validates our command and throws an exception if a validation error occurs. Enclosing this kind of logic in a separate method is better from the Clean Code perspective (see Extract Method too).

The result of invalid command execution:

This is not such a bad approach, but it has two disadvantages. Firstly, it involves writing a lot of simple, boilerplate code – comparing to nulls, defaults, values from a list, etc. Secondly, we lose part of the separation of concerns, because we are mixing validation logic with orchestration of our use case flow. Let's take care of the boilerplate code first.

2. Validation using FluentValidation library

We don't want to reinvent the wheel, so the best solution is to use a library. Fortunately, there is a great library for validation in the .NET world – FluentValidation. It has a nice API and a lot of features. This is how we can use it to validate our command:
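A sketch of the validators (FluentValidation; the messages are illustrative):

public class AddCustomerOrderCommandValidator : AbstractValidator<AddCustomerOrderCommand>
{
    public AddCustomerOrderCommandValidator()
    {
        RuleFor(x => x.CustomerId).NotEmpty().WithMessage("CustomerId cannot be empty.");
        RuleFor(x => x.Products).NotEmpty().WithMessage("Products list cannot be empty.");
        RuleForEach(x => x.Products).SetValidator(new ProductDtoValidator());
    }
}

public class ProductDtoValidator : AbstractValidator<ProductDto>
{
    public ProductDtoValidator()
    {
        RuleFor(p => p.Quantity).GreaterThan(0).WithMessage("Quantity must be greater than 0.");
        RuleFor(p => p.Currency)
            .Must(c => c == "USD" || c == "EUR")
            .WithMessage("Currency must be USD or EUR.");
    }
}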

Now, the Validate method looks like:

The result of the validation is the same as before, but now our validation logic is much cleaner. The last thing to do is to decouple this logic from the Command Handler completely…

3. Validation using Pipeline Pattern

To decouple validation logic and execute it before Command Handler execution we arrange our command handling process in Pipeline (see NServiceBus Pipeline also).

For the Pipeline implementation we can easily use MediatR Behaviors. The first thing to do is the behavior implementation:
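A sketch of the behavior (MediatR pipeline behavior signature as in MediatR 8.x/9.x; at this stage it still throws a plain Exception, which is changed later in the post):

public class CommandValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public CommandValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        var errors = _validators
            .Select(validator => validator.Validate(request))
            .SelectMany(result => result.Errors)
            .Select(failure => failure.ErrorMessage)
            .ToList();

        if (errors.Any())
        {
            throw new Exception("Command is invalid: " + string.Join(", ", errors));
        }

        return next();
    }
}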

Next thing to do is to register behavior in IoC container (Autofac example):

This way we achieved separation of concerns and Fail-fast principle implementation in nice and elegant way.

But this is not the end. Finally, we need to do something with returned messages to clients.

Implementing Problem Details standard

Just as in the case of the validation logic implementation, we will use a dedicated library – ProblemDetails. The principle of the mechanism is simple. Firstly, we need to create a custom exception:
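A minimal sketch of the exception:

public class InvalidCommandException : Exception
{
    public string Details { get; }

    public InvalidCommandException(string message, string details) : base(message)
    {
        Details = details;
    }
}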

Secondly, we have to create our own Problem Details class:

The last thing to do is to add the Problem Details Middleware with a definition of the mapping between the InvalidCommandException and the InvalidCommandProblemDetails class in Startup:
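A sketch of the wiring, assuming the Hellang.Middleware.ProblemDetails package:

public void ConfigureServices(IServiceCollection services)
{
    services.AddProblemDetails(options =>
    {
        options.Map<InvalidCommandException>(ex => new InvalidCommandProblemDetails(ex));
    });
    // ...other registrations
}

public void Configure(IApplicationBuilder app)
{
    app.UseProblemDetails();
    // ...rest of the request pipeline
}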

After a change in CommandValidationBehavior (throwing InvalidCommandException instead of Exception) we get returned content compatible with the standard:

Problem details

Summary

In this post I described:
– what data validation is and where it is located
– what Problem Details for HTTP APIs is and how it can be implemented
– 3 methods to implement data validation in the Application Services layer: without any patterns and tools, with the FluentValidation library, and lastly – using the Pipeline Pattern and MediatR Behaviors.

Source code

If you would like to see full, working example – check my GitHub repository

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events
10 common broken rules of clean code

Domain Model Encapsulation and PI with Entity Framework 2.2

Introduction

In the previous post I presented how to implement a simple CQRS pattern using raw SQL (Read Model) and Domain-Driven Design (Write Model). I would like to continue the presented example, focusing mainly on the DDD implementation. In this post I will describe how to get the most out of the newest version of Entity Framework Core (2.2) to support pure domain modeling as much as possible.

I decided that I will constantly develop my sample on GitHub. I will try to gradually add new functionalities and technical solutions. I will also try to extend domain so that the application will become similar to the real ones. It is difficult to explain some DDD aspects on trivial domains. Nevertheless, I highly encourage you to follow my codebase.

Goals

When we create our Domain Model we have to take many things into account. At this point I would like to focus on 2 of them: Encapsulation and Persistence Ignorance.

Encapsulation

Encapsulation has two major definitions (source – Wikipedia):

A language mechanism for restricting direct access to some of the object’s components

and

A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data

What does it mean for DDD Aggregates? It simply means that we should hide all the internals of our Aggregate from the outside world. Ideally, we should expose only the public methods which are required to fulfill our business requirements. This assumption is presented below:

Persistence Ignorance

Persistence Ignorance (PI) principle says that the Domain Model should be ignorant of how its data is saved or retrieved. It is very good and important advice to follow. However, we should follow it with caution. I agree with opinion presented in the Microsoft documentation:

Even when it is important to follow the Persistence Ignorance principle for your Domain model, you should not ignore persistence concerns. It is still very important to understand the physical data model and how it maps to your entity object model. Otherwise you can create impossible designs.

As described, we can't forget about persistence, unfortunately. Nevertheless, we should aim to decouple the Domain Model from the rest of our system as much as possible.

Example Domain

For a better understanding of the created Domain Model I prepared the following diagram:

It is a simple e-commerce domain. A Customer can place one or more Orders. An Order is a set of Products with quantity information (OrderProduct). Each Product has many prices defined (ProductPrice), depending on the Currency.

Ok, we know the problem, now we can go to the solution…

Solution

1. Create supporting architecture

The first and most important thing to do is to create an application architecture which supports both Encapsulation and Persistence Ignorance of our Domain Model. The most common examples are:
Clean Architecture
Onion Architecture
Ports And Adapters / Hexagonal Architecture

All of these architectures are good and used in production systems. For me, Clean Architecture and Onion Architecture are almost the same. Ports And Adapters / Hexagonal Architecture is a little bit different when it comes to naming, but the general principles are the same. The most important thing in the context of domain modeling is that in each architecture the Business Logic/Business Layer/Entities/Domain Layer 1) is in the center and 2) has no dependencies on other components/layers/modules. It is the same in my example:

What does this mean in practice for the code in our Domain Model?
1. No data access code.
2. No data annotations on our entities.
3. No inheritance from any framework classes – entities should be Plain Old CLR Objects.

2. Use Entity Framework in Infrastructure Layer only

Any interaction with the database should be implemented in the Infrastructure Layer. This means you have to put the Entity Framework context, entity mappings and repository implementations there. Only the interfaces of the repositories can be kept in the Domain Model.

3. Use Shadow Properties

Shadow Properties are a great way to decouple our entities from the database schema. They are properties which are defined only in the Entity Framework model. Using them, we often don't need to include foreign keys in our Domain Model, and that is a great thing.

Let's see the Order entity mapping, which in the example is defined as part of the CustomerEntityTypeConfiguration class:
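A simplified sketch of the idea (in the original example this configuration sits inside CustomerEntityTypeConfiguration together with the Customer mapping; here only the Order part is shown, and the exact builder calls may differ between EF Core versions):

public void Configure(EntityTypeBuilder<Order> builder)
{
    builder.ToTable("Orders", "orders");
    builder.HasKey(o => o.Id);

    builder.Property<Guid>("CustomerId");  // Shadow Property – it is not a member of the Order class

    builder.HasOne<Customer>()             // the relationship is configured through the shadow foreign key
        .WithMany()
        .HasForeignKey("CustomerId");
}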

As you can see, we define a "CustomerId" property which doesn't exist in the Order entity. It is defined only for the configuration of the relationship between Customer and Order. The same applies to the relationship between Order and OrderProduct.

4. Use Owned Entity Types

Using Owned Entity Types we can create better encapsulation because we can map directly to private or internal fields:

Owned types are a great solution for creating our Value Objects too. This is what MoneyValue looks like:
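A sketch of the Value Object (simplified):

public class MoneyValue
{
    public decimal Value { get; private set; }
    public string Currency { get; private set; }

    private MoneyValue()
    {
        // required by EF Core
    }

    private MoneyValue(decimal value, string currency)
    {
        Value = value;
        Currency = currency;
    }

    public static MoneyValue Of(decimal value, string currency)
    {
        return new MoneyValue(value, currency);
    }
}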

5. Map to private fields

We can map to private fields not only using EF owned types – we can map built-in types too. All we have to do is give the name of the field and of the column:
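A sketch of the idea (the field and column names are illustrative; the exact builder calls depend on the EF Core version):

builder.Property<string>("_name").HasColumnName("Name");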

6. Use Value Conversions

Value Conversions are the “bridge” between entity attributes and table column values. If we have incompatibility between types, we should use them. Entity Framework has a lot of value converters implemented out of the box. Additionally, we can implement custom converter if we need to.
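A sketch matching the description below (names assumed; HasConversion with two lambdas is the built-in EF Core mechanism):

builder.Property<OrderStatus>("_status")
    .HasColumnName("StatusId")
    .HasConversion(
        status => (byte)status,        // writing: enum -> byte column
        value => (OrderStatus)value);  // reading: byte column -> enum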

This converter simply converts the "StatusId" column of type byte to the private field _status of type OrderStatus.

Summary

In this post I briefly described what Encapsulation and Persistence Ignorance are (in the context of domain modeling) and how we can achieve them by:
– creating a supporting architecture
– putting all data access code outside our Domain Model implementation
– using Entity Framework Core features: Shadow Properties, Owned Entity Types, private field mapping and Value Conversions

Related posts

Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events
REST API Data Validation

Simple CQRS implementation with raw SQL and DDD

Introduction

I often come across questions about the implementation of the CQRS pattern. Even more often I see discussions about access to the database in the context of what is better – ORM or plain SQL.

In this post I wanted to show you how you can quickly implement a simple REST API application with CQRS using .NET Core. I immediately point out that this is CQRS in its simplest edition – an update through the Write Model immediately updates the Read Model, therefore we do not have eventual consistency here. However, many applications do not need eventual consistency, while the logical division of writing and reading using two separate models is recommended and more effective in most solutions.

Especially for this article I prepared a sample, fully working application – see the full source on GitHub.

My goals

These are my goals that I wanted to achieve by creating this solution:
1. Clear separation and isolation of Write Model and Read Model.
2. Retrieving data using Read Model should be as fast as possible.
3. Write Model should be implemented with DDD approach. The level of DDD implementation should depend on level of domain complexity.
4. Application logic should be decoupled from GUI.
5. Selected libraries should be mature, well-known and supported.

Design

High level flow between components looks like:

As you can see, the process for reads is pretty straightforward, because we should query the data as fast as possible. We don't need more layers of abstraction and sophisticated approaches here. Get the arguments from the query object, execute raw SQL against the database and return the data – that's all.

It is different in the case of write support. Writing often requires more advanced techniques because we need to execute some logic, do some calculations or simply check some conditions (especially invariants). With an ORM tool with change tracking and using the Repository Pattern we can do it leaving our Domain Model intact (well, almost).

Solution

Read model

The diagram below presents the flow between components used to fulfill a read request:

The GUI is responsible for creating Query object:

Then, the query handler processes the query:
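A sketch of a query handler, assuming MediatR and Dapper (names and SQL are illustrative):

public class GetCustomerOrdersQueryHandler : IRequestHandler<GetCustomerOrdersQuery, List<CustomerOrderDto>>
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public GetCustomerOrdersQueryHandler(ISqlConnectionFactory sqlConnectionFactory)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public async Task<List<CustomerOrderDto>> Handle(GetCustomerOrdersQuery query, CancellationToken cancellationToken)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        const string sql = "SELECT [Order].Id, [Order].Value, [Order].StatusCode " +
                           "FROM orders.v_CustomerOrders AS [Order] " +
                           "WHERE [Order].CustomerId = @CustomerId";

        var orders = await connection.QueryAsync<CustomerOrderDto>(sql, new { query.CustomerId });
        return orders.ToList();
    }
}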

The first thing is to get an open database connection, and this is achieved using the SqlConnectionFactory class. This class is resolved by the IoC container with an HTTP request lifetime scope, so we are sure that we use only one database connection while processing the request.

Second thing is to prepare and execute raw SQL against database. I try not to refer to tables directly and instead refer to database views. This is a nice way to create abstraction and decouple our application from database schema because I want to hide database internals as much as possible.

For SQL execution I use the Dapper micro ORM, because it is almost as fast as native ADO.NET and does not have a boilerplate API. In short, it does what it has to do and it does it very well.

Write model

The diagram below presents the flow for a write request:

Write request processing starts similarly to a read, but we create a Command object instead of a Query object:

Then, CommandHandler is invoked:

The command handler looks different than the query handler. Here, we use a higher level of abstraction, following the DDD approach with Aggregates and Entities. We need it because in this case the problems to solve are often more complex than simple reads. The command handler hydrates the Aggregate, invokes an Aggregate method and saves the changes to the database.

Customer aggregate can be defined as follows:

Architecture

Solution structure is designed based on well-known Onion Architecture as follows:

Only 3 projects are defined:
– API project with API endpoints and application logic (command and query handlers) using Feature Folders approach.
– Domain project with Domain Model
– Infrastructure project – integration with database.

Summary

In this post I tried to present the simplest way to implement the CQRS pattern using raw SQL scripts on the Read Model side and the DDD approach on the Write Model side. Doing so, we are able to achieve much better separation of concerns without losing development speed. The cost of introducing this solution is very low and it pays off very quickly.

I didn't describe the DDD implementation in detail, so I encourage you once again to check the repository of the example application – it can be used as a starter kit for your app, the same as for my applications.

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
How to publish and handle Domain Events
REST API Data Validation

Feature Folders

Introduction

Today I would like to suggest a less common but, in my opinion, much better way to organize our codebase. Meet Feature Folders.

Problem

For ages we have been used (at least in the .NET environment) to thinking about our code structure in terms of technical aspects. For example, MVC application project templates assume the division of our objects into separate directories – Controllers, Views, Scripts and so on. We can see the same in many tutorials. If we need to add a new feature, following this approach we should add its objects in different places.

This approach is found not only in the application layer but in others too. I have seen many times in the business logic layer big folders called Aggregates, Entities, Domain Events, with a lot of classes unrelated to each other. But what does "unrelated" mean?

We can specify 2 types of relationships between objects – technical and business.

Technical relationship tells whether two objects have similar meaning from technical perspective. That is, whether they have the same usage. Two controllers are technically related, but controller and application service are not related – their purpose is different.

Business relationship tells whether two objects support the fulfillment of the same use case. These can be definitely different objects from a technical point of view – for example: a validator and a command handler.

Let's see what technical folders can look like:
Technical folders

What is the problem with this design? As you can see, we have three modules: Commands, Handlers and Validators. Each of these modules has very low cohesion, because as Wikipedia says:

cohesion refers to the degree to which the elements inside a module belong together

For example, a new requirement appears and we need to add a new attribute which can be edited. We need to change all three objects associated with editing, so in this particular design we need to change all three modules as well. The same applies if we would like to move the whole functionality to a separate service. This is not good. We have to try to achieve as high cohesion as possible.

Solution

The solution is to stop thinking about the technical aspects of our objects and focus instead on the business relationship. It will provide high cohesion with all its benefits (maintainability, reusability, less complexity and so on).

For every feature/use case we should create a separate Feature Folder with all the objects related from the business perspective:

The same rule applies to the business logic layer, and I think it is even more important there. We should design our Domain Model per Aggregate:

This is a very simple approach, but it improves our design, and especially in projects with many features it makes work definitely easier.

There is an extra bonus. With this design we create templates for later requirements. For example, if we need to implement adding a new product, we can copy-paste the whole feature folder, rename the objects and we know exactly what we have to implement. Great!

Summary

In this post I showed what Feature Folders are. With good codebase organization we can achieve a better and more elegant design. If you still use “technical folders”, I encourage you to try this solution – you will not regret it. 🙂

Using Database Project and DbUp for database management

Introduction

In the previous post I described two popular ways to manage database changes.

The first one was state versioning, where you keep the whole current design of your database in the repository, and when you need to change something you only change this state. Later, when you want to deploy your changes, your schema and the target database are compared and a migration script is generated.

The second way is versioning transitions to the desired state, which means creating a migration script for every change.

In this post I want to show an implementation of these two approaches in the .NET environment combined together – which I think is the best way to manage database changes.

Step one – Database Project

The first thing to do is to create a Database Project. This type of project is available only when you have SQL Server Data Tools installed. It can be installed together with Visual Studio 2017 or separately – see this page for more information.

When you have SQL Server Data Tools, you can add a new Database Project in the standard way:

Now we can add database objects to our project in the form of SQL scripts. Each script should define one database object – a table, view, procedure, function and so on. It is common to create root folders named after the database schemas.

TIP: I do not recommend creating database objects in the “dbo” schema. I advise creating well-named schemas per module/purpose/functionality. Creating your own schemas also allows you to better manage your object namespaces.

The sample database project may look like this:
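The original screenshot is not reproduced here; assuming an orders schema like the one mentioned below, the structure could look more or less like this (object names are only examples):

/orders
    orders.sql            -- CREATE SCHEMA [orders], hypothetical example
    /Tables
        Order.sql
        OrderLine.sql
    /Views
        OrdersSummary.sql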

What is worth noticing is that the Build Action setting of every script is set to Build. This is the setting that makes Visual Studio distinguish database objects from ordinary scripts and build them together. If, for example, we remove the script defining the orders schema, VS will not be able to build our project:

This is great behavior because we get a compile-time check and can avoid many runtime errors.

When we have finished the database project, we can compare it to another project or database and create a migration script. But as I described in the previous post, this is not the optimal way to migrate databases. We will use the DbUp library instead.

Step two – DbUp

DbUp is an open source .NET library that provides a way to deploy changes to a database. Additionally, it tracks which SQL scripts have already been run, has many SQL script providers available and offers other interesting features like script pre-processing.

You may ask: why DbUp and not EF Migrations or Fluent Migrator? I have used all of them and I have to say that DbUp seems to me the purest solution. I don’t like C# “wrappers” generating SQL for me. DDL is an easy language and I think we don’t need a special tool to generate it.

DbUp is a library, so we can reference it from any application we want. What we need is a simple console application which can be executed both on the developer environment and on the CI build server. Firstly, we need to reference the DbUp NuGet package. Then we can add simple code to the Main method:
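A minimal sketch of such a Main method, assuming the standard DbUp fluent API and matching the directory layout and journal table described below (everything else is illustrative), could look like this:

using System;
using System.IO;
using DbUp;
using DbUp.Helpers;

public static class Program
{
    public static int Main(string[] args)
    {
        // args[0] – connection string to the target database
        // args[1] – file system path to the scripts directory
        var connectionString = args[0];
        var scriptsPath = args[1];

        // "Pre" deployment scripts – journaled to NullJournal, so they run every time.
        var preDeployment = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsFromFileSystem(Path.Combine(scriptsPath, "PreDeployment"))
            .JournalTo(new NullJournal())
            .LogToConsole()
            .Build();

        // Migrations – journaled to the app.MigrationsJournal table, so each script runs only once.
        var migrations = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsFromFileSystem(Path.Combine(scriptsPath, "Migrations"))
            .JournalToSqlTable("app", "MigrationsJournal")
            .LogToConsole()
            .Build();

        // "Post" deployment scripts – also run every time.
        var postDeployment = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsFromFileSystem(Path.Combine(scriptsPath, "PostDeployment"))
            .JournalTo(new NullJournal())
            .LogToConsole()
            .Build();

        // Execute the three phases in order and stop on the first failure.
        foreach (var upgrader in new[] { preDeployment, migrations, postDeployment })
        {
            var result = upgrader.PerformUpgrade();
            if (!result.Successful)
            {
                Console.WriteLine(result.Error);
                return -1;
            }
        }

        Console.WriteLine("Database upgraded successfully.");
        return 0;
    }
}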

This console application accepts two parameters: the connection string to the target database and the file system path to the scripts directory. It assumes the following directory layout:
/PreDeployment
/Migrations
/PostDeployment

For “pre” and “post” deployment scripts we define a NullJournal – in this way the scripts will be run every time.

We should keep the scripts directory in the Database Project created earlier. DbUp executes scripts in alphabetical order. It can look like this:
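Because the execution order is alphabetical, a simple numeric prefix keeps it deterministic. For example (script names are hypothetical):

/Migrations
    0001_CreateOrdersSchema.sql
    0002_CreateOrderTable.sql
    0003_AlterOrderTable_AddDescription.sql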

Finally, we run the migrations by running our console application:
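Assuming the console application builds to, say, DatabaseMigrator.exe (the name, connection string and path below are placeholders), the call could look like this:

DatabaseMigrator.exe "Server=(local);Database=MyApp;Integrated Security=True" "C:\MyApp\Database\Scripts"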

Executed scripts are listed in the app.MigrationsJournal table:

And that’s all! We can develop and change our database in an effective way now. 🙂

Summary

In this post I described how to implement both state and transitions versioning using a Database Project and the DbUp library. What has been achieved is:
– Compile-time checks (Database project)
– Ease of development (Both)
– History of definition of all objects (Database project)
– Quick access to schema definition (Database project)
– Ease of resolving conflicts (Database project)
– IDE support (Database project)
– Full control of defining transitions (DbUp)
– Pre and post deployment scripts execution (DbUp)
– Deployment automation (DbUp)
– The possibility of manual deployment (DbUp)
– History of applied transitions (DbUp).

Using this machinery, database development should be definitely easier and less error-prone.

Database change management

Introduction

Database change management is not an easy task. It is even more difficult when we are at the beginning of a project where the data model is constantly changing. More people in the team is another difficulty, because we have to coordinate our work and not interfere with each other. Of course, we can delegate database change management to one person, but this creates a bottleneck. In addition, we should take care of deployment automation, because without automation we cannot have truly continuous integration and delivery.

In this post I will describe:
– what we need for a good database change management mechanism
– two common approaches to solving this problem
– a recommendation on which approach should be taken

The ideal solution

I think we should stick to the same rules as when managing code changes, which means that we should implement the following practices:

Everything should be in the source control

We should be able to track what the change was, who made it and when. There is no better tool for this than source control. Every little change, every little script should be committed to the repository.

Every developer should be able to run migrations

Every developer should have a local database in their own environment and should always be able to upgrade that database after pulling the changes. Without it, a developer will have a new version of the application and an old version of the database, which will cause unpredictable behavior. Say no to shared development databases! What is also important, the deployment of changes should be very fast.

Easy conflict resolution

Conflicts on the same database objects should be easy to solve and should not require a lot of work.

Versioning

We should always know what version our database is in. In addition, we should know which migrations have already been deployed and which have not.

Deployment automation

Our mechanism should provide the ability to automate the deployment of changes to test and production environments. We should use the same mechanism during development to make sure everything works as expected.

The possibility of manual deployment

Sometimes changes can only be deployed manually by a human, due to procedures or regulations in the company. For this reason, we should be able to generate a set of scripts for manual deployment.

The ability to run “pre” and “post” deployment scripts

It is common to execute some logic before and/or after deployment. Sometimes we need to regenerate some data, sometimes we need to check something (constraint integrity, for example). This type of feature is very useful.

IDE support

It would be perfect to have full support from our IDE. Quick access to object schemas (without a connection to the database), prompts, compile-time checking – these are the things we need to be very productive during database development.

That is a lot of requirements; let’s see what solutions we have.

Versioning the State

The first approach to database change management is state versioning. We hold the whole database definition in our source control repository, and when we need to change something we change the object definitions. When the upgrade time comes, our tool compares our current definitions with the target database and generates a migration script for us (and it can execute it right away). The process looks as follows:

As I described, we only change object definitions. For example, we have an orders table and in our repository we have its definition:
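The original listing is a screenshot; a simplified, hypothetical version of such a definition could be:

-- hypothetical example table
CREATE TABLE [orders].[Order]
(
    [Id] INT NOT NULL PRIMARY KEY,
    [Name] NVARCHAR(50) NOT NULL
)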

When we need to change something, we just change the definition:
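For example, widening the Name column and adding a Description column (illustrative, matching the change described just below):

-- hypothetical changed definition
CREATE TABLE [orders].[Order]
(
    [Id] INT NOT NULL PRIMARY KEY,
    [Name] NVARCHAR(100) NOT NULL,
    [Description] NVARCHAR(1000) NULL
)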

I altered the Name column and added a Description column. That’s all – I changed the state of my schema. The transition to this state is on the tool’s side.

Pros

– Compile-time checks (whole schema in one place)
– Ease of development
– History of definition of all objects
– Quick access to schema definition
– Ease of resolving conflicts
– IDE support

Cons

The biggest downside of this approach is that sometimes the tool you are using is not able to generate a migration script based on the differences. This is the situation when you:
– try to add a new non-nullable column without a default value
– have a different order of columns in the project vs the target database – the tool then tries to change this order by creating temp tables, copying data and renaming, which we do not always want, because from a theoretical point of view the order of columns doesn’t matter
– try to rename an object or a column
– change the type of a column (how should the data be converted?)

All of these problems (despite the many advantages) mean that, on its own, this is not a good approach to database change management.

Versioning transitions

Another approach is versioning transitions instead of state. In this approach we create a sequence of migration scripts which lead us to the desired database state.

For the example with the orders table, instead of changing the definition we just add a migration script as follows:
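Assuming the same hypothetical change as in the state versioning example, such a script could look like this:

-- hypothetical migration script
ALTER TABLE [orders].[Order] ALTER COLUMN [Name] NVARCHAR(100) NOT NULL;
ALTER TABLE [orders].[Order] ADD [Description] NVARCHAR(1000) NULL;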

In this approach we should be aware of 2 things:
– the order of executing scripts does matter
– it is required to store which scripts have already been executed.

Pros

– Full control of defining transitions
– Executing scripts one by one in the correct order guarantees successful deployment to other environments
– Possibility to implement undo/downgrade features

Cons

– Not as easy as changing the state definition
– Lack of object history
– Lack of IDE support
– Conflicts are hard to spot (2 developers adding changes to the same table in 2 separate files)
– Need to maintain the order of scripts
– Need to track which scripts have been executed

So this is not a perfect solution either – it has some disadvantages – but it is still better than versioning the state, because we have more control and more possibilities. And now we are moving on to…

Best solution

The best solution is simple but not easy – versioning transitions AND versioning state. 🙂

If you look at the pros and cons of both solutions, you have to admit that they are complementary. By using both approaches you can get rid of all the cons and keep the advantages.

Well, almost… 🙁 Combining the two approaches costs a little more time and work. You always need to create a migration script and change the definition as well. In addition, you need to be careful about the consistency and integrity of the two models. Transitions should always lead to the defined state. If you have a mismatch, you do not know what the final schema should look like.

Fortunately, a little more work earlier saves you a lot of work later. I have worked with each approach separately in many projects. Now that I have combined them, I truly feel confident about the development and deployment of my databases. I wish I had known this before. 🙂

Summary

In this post I described:
– what an ideal database change management mechanism should look like
– the state versioning approach
– the transitions versioning approach
– the best solution, which is a combination of both

In the next post I will show how to set up database change management in .NET based on these assumptions.