Domain Model Validation

Introduction

In the previous post I described how request input data can be validated on the Application Services layer. I showed the FluentValidation library usage in combination with the Pipeline Pattern and the Problem Details standard. In this post I would like to focus on the second type of validation, which sits in the Domain Layer – Domain Model validation.

What is Domain Model validation

We can divide the validation of the Domain Model into two types based on scope – Aggregates scope and Bounded Context scope.

Aggregates scope

Let's recall what an Aggregate is by quoting a fragment from Vaughn Vernon's Domain-Driven Design Distilled book:

Each Aggregate forms a transactional consistency boundary. This means that within a single Aggregate, all composed parts must be consistent, according to business rules, when the controlling transaction is committed to the database.

I underlined the most important part of this quote in the context of validation. It means that under no circumstances can we persist an Aggregate to the database in an invalid state or one that breaks business rules. These rules are often called "invariants" and are defined by Vaughn Vernon as follows:

… business invariants — the rules to which the software must always adhere — are guaranteed to be consistent following each business operation.

So in the context of the Aggregates scope, we need to protect these invariants by executing validation during our use case (business operation) processing.

Bounded Context scope

Unfortunately, validation of Aggregate invariants is not enough. Sometimes a business rule may apply to more than one Aggregate (they can even be Aggregates of different types).

For example, assuming that we have the Customer Entity as an Aggregate Root, the business rule may be "Customer email address must be unique". To check this rule we need to check the emails of all Customers, which are separate Aggregate Roots. It is outside the scope of a single Customer Aggregate. Of course, supposedly, we could create a new entity called CustomerCatalog as an Aggregate Root and attach all of the Customers to it, but this is not a good idea for many reasons. A better solution is described later in this article.

Let’s see what options we have to solve both validation problems.

Three solutions

Return Validation Object

This solution is based on the Notification Pattern. We define a special class called Notification/ValidationResult/Result/etc. which "collects together information about errors and other information in the domain layer and communicates it".

What does it mean for us? It means that every entity method which mutates the state of the Aggregate should return this validation object. The keyword here is entity, because we can have (and we likely will have) nested invocations of methods inside the Aggregate. Recall the diagram from the post about Domain Model encapsulation:

Domain model encapsulation

The program flow will look like:

Validation Object Flow

and the code structure (simplified):
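The original snippet is not reproduced here, so below is a minimal sketch of the idea with illustrative names (ValidationResult, Customer, Order): every state-mutating method returns the validation object, and nested invocations must propagate it upwards.

using System;
using System.Collections.Generic;
using System.Linq;

public class ValidationResult
{
    public List<string> Errors { get; } = new List<string>();
    public bool IsValid => Errors.Count == 0;
}

public class Order
{
    public Guid Id { get; } = Guid.NewGuid();
    private int _quantity;

    public ValidationResult ChangeQuantity(int quantity)
    {
        var result = new ValidationResult();
        if (quantity <= 0)
        {
            result.Errors.Add("Quantity must be greater than zero.");
            return result;
        }

        _quantity = quantity;
        return result;
    }
}

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    // Every state-mutating method has to return the validation object,
    // also when it only forwards the call to a nested Entity.
    public ValidationResult ChangeOrderQuantity(Guid orderId, int quantity)
    {
        var order = _orders.Single(o => o.Id == orderId);
        return order.ChangeQuantity(quantity);
    }
}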

However, if we don't want to return a ValidationResult from every method which mutates the state, we can apply a different approach, which I described in the article about publishing Domain Events. In short, in this solution we add a ValidationResult property to every Entity (just like the Domain Events collection) and, after Aggregate processing, we examine these properties and decide whether the whole Aggregate is valid.

Deferred validation

The second solution is to execute checking after the whole Aggregate method has been processed. This approach is presented, for example, by Jeffrey Palermo in his article. The whole solution is pretty straightforward:

Deferred validation
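The flow above can be sketched as follows – a simplified illustration in which ICustomerRepository, ChangeOrderQuantityCommand, CustomerValidator and DomainValidationException are all hypothetical names, not the exact code from the article. The Aggregate is mutated first and a separate validator examines it afterwards.

using System.Threading.Tasks;

public class ChangeOrderQuantityCommandHandler
{
    private readonly ICustomerRepository _customerRepository;
    private readonly CustomerValidator _customerValidator;

    public ChangeOrderQuantityCommandHandler(
        ICustomerRepository customerRepository,
        CustomerValidator customerValidator)
    {
        _customerRepository = customerRepository;
        _customerValidator = customerValidator;
    }

    public async Task Handle(ChangeOrderQuantityCommand command)
    {
        var customer = await _customerRepository.GetByIdAsync(command.CustomerId);

        // 1. Execute the whole business operation first.
        customer.ChangeOrderQuantity(command.OrderId, command.Quantity);

        // 2. Check all invariants afterwards (deferred validation).
        var validationResult = _customerValidator.Validate(customer);
        if (!validationResult.IsValid)
        {
            throw new DomainValidationException(validationResult.Errors);
        }

        // 3. Persist only a valid Aggregate.
        await _customerRepository.SaveChangesAsync();
    }
}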

Always Valid

The last solution is called "Always Valid" and it is simply about throwing exceptions inside Aggregate methods. It means that we finish processing the business operation at the first violation of an Aggregate invariant. In this way, we are assured that our Aggregate is always valid:

Always Valid program flow

Comparison of solutions

I have to admit that I don't like the Validation Object and Deferred Validation approaches, and I recommend the Always Valid strategy. My reasoning is as follows.

The Validation Object approach pollutes our method declarations, adds accidental complexity to our Entities, and goes against the Fail-Fast principle. Moreover, the Validation Object becomes part of our Domain Model and it is certainly not part of the ubiquitous language. Deferred Validation, on the other hand, implies a non-encapsulated Aggregate, because the validator object must have access to the Aggregate's internals to properly check invariants.

However, both approaches have one advantage – they do not require throwing exceptions, which should be thrown only when something unexpected occurs. A broken business rule is not unexpected.

Nevertheless, I think this is one of the rare exceptions when we can break this rule. For me, throwing exceptions and having an always valid Aggregate is the best solution. "The ends justify the means", I would like to say. I think of this solution as an implementation of the Publish-Subscribe Pattern: the Domain Model is the Publisher of broken-invariant messages and the Application is the Subscriber to these messages. The main assumption is that after publishing a message the publisher stops processing, because this is how the exception mechanism works.

Always Valid Implementation

Exception throwing is built into the C# language, so we have practically everything we need. The only thing to do is to create a specific Exception class; I called it BusinessRuleValidationException:
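The class itself did not survive extraction, so here is a minimal sketch of what such an exception can look like – a plain exception carrying the broken rule's message:

using System;

public class BusinessRuleValidationException : Exception
{
    public BusinessRuleValidationException(string message)
        : base(message)
    {
    }
}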

Suppose we have a business rule defined that a Customer cannot place more than 2 orders on the same day. This is what the implementation can look like:
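Below is a hedged sketch of such a guarded Aggregate method – the Order/OrderProduct member names are assumptions, not the exact code from the repository. The important part is that the invariant is checked before any state changes.

using System.Collections.Generic;
using System.Linq;

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    public void PlaceOrder(List<OrderProduct> products)
    {
        // Protect the invariant before mutating any state.
        var ordersPlacedToday = _orders.Count(o => o.IsPlacedToday());
        if (ordersPlacedToday >= 2)
        {
            throw new BusinessRuleValidationException(
                "You cannot order more than 2 orders on the same day.");
        }

        _orders.Add(Order.Create(products));
    }
}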

What should we do with the thrown exception? We can use the approach from REST API Data Validation and return an appropriate message to the client as a Problem Details standard object. All we have to do is add another ProblemDetails class and set up the mapping in Startup:
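A minimal sketch of this wiring, assuming the Hellang.Middleware.ProblemDetails library used in the REST API Data Validation post (the status code and type URI are illustrative choices):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class BusinessRuleValidationExceptionProblemDetails : ProblemDetails
{
    public BusinessRuleValidationExceptionProblemDetails(BusinessRuleValidationException exception)
    {
        Title = "Business rule validation error";
        Status = StatusCodes.Status409Conflict;
        Detail = exception.Message;
        Type = "https://somedomain/business-rule-validation-error";
    }
}

// In Startup.ConfigureServices:
services.AddProblemDetails(options =>
{
    options.Map<BusinessRuleValidationException>(
        ex => new BusinessRuleValidationExceptionProblemDetails(ex));
});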

The result returned to client:

Problem details validation domain model

For simpler validations, like checking for nulls, empty lists, etc., you can create a library of guards (see the Guard Pattern) or use an external library – see GuardClauses created by Steve Smith, for example.

BC scope validation implementation

What about validation which spans multiple Aggregates (the Bounded Context scope)? Let's assume that we have a rule that there cannot be two Customers with the same email address. There are two approaches to solve this.

The first way is to get the required Aggregates in the Command Handler and pass them to the Aggregate's method/constructor as arguments:
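A sketch of this approach – the repository, command and factory method names are illustrative and may differ from the repository:

using System.Threading.Tasks;

public class RegisterCustomerCommandHandler
{
    private readonly ICustomerRepository _customerRepository;

    public RegisterCustomerCommandHandler(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public async Task Handle(RegisterCustomerCommand command)
    {
        // All existing Customers must be loaded to check the uniqueness rule.
        var allCustomers = await _customerRepository.GetAllAsync();

        var customer = Customer.CreateRegistered(command.Email, allCustomers);

        await _customerRepository.AddAsync(customer);
    }
}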

However, this is not always a good solution because, as you can see, we need to load all Customer Aggregates into memory. This could be a serious performance issue. If we cannot afford it, then we need the second approach – creating a Domain Service, which is defined as follows (source – DDD Reference):

When a significant process or transformation in the domain is not a natural responsibility of an entity or value object, add an operation to the model as a standalone interface declared as a service

So, for this case we need to create an ICustomerUniquenessChecker service interface:
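A minimal sketch of such an interface:

public interface ICustomerUniquenessChecker
{
    bool IsUnique(string customerEmail);
}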

This is the implementation of that interface:
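One possible implementation, assuming a raw SQL check with Dapper against a Customers table (the connection factory, table and column names are assumptions of this sketch):

using Dapper;

public class CustomerUniquenessChecker : ICustomerUniquenessChecker
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public CustomerUniquenessChecker(ISqlConnectionFactory sqlConnectionFactory)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public bool IsUnique(string customerEmail)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        const string sql = "SELECT TOP 1 1 FROM Customers WHERE Email = @Email";
        var exists = connection.QuerySingleOrDefault<int?>(sql, new { Email = customerEmail });

        return !exists.HasValue;
    }
}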

Finally, we can use it inside our Customer Aggregate:
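And a sketch of its usage inside the Aggregate – the Domain Service is passed into the operation that protects the rule (the factory method is an illustrative name):

public class Customer
{
    public string Email { get; private set; }

    private Customer(string email)
    {
        Email = email;
    }

    public static Customer CreateRegistered(
        string email,
        ICustomerUniquenessChecker customerUniquenessChecker)
    {
        if (!customerUniquenessChecker.IsUnique(email))
        {
            throw new BusinessRuleValidationException(
                "Customer with this email already exists.");
        }

        return new Customer(email);
    }
}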

The question here is whether to pass the Domain Service as an argument to the Aggregate's constructor/method or to execute the validation in the Command Handler itself. As you can see above, I am a fan of the former approach, because I like to keep my command handlers very thin. Another argument for this option is that if I ever need to register a Customer from a different use case, I will not be able to bypass or forget about the uniqueness rule, because I will have to pass this service.

Summary

A lot was covered in this post in the context of Domain Model validation. Let's summarize:
– We have two types of Domain Model validation – Aggregates scope and Bounded Context scope
– There are generally 3 methods of Domain Model validation – using a Validation Object, Deferred Validation or Always Valid (throwing exceptions)
– The Always Valid approach is preferred
– For Bounded Context scope validation there are 2 methods – passing all required data to the Aggregate's method or constructor, or creating a Domain Service (generally for performance reasons).

Source code

If you would like to see a full, working example – check my GitHub repository.

Additional Resources

Validation in Domain-Driven Design (DDD) – Lev Gorodinski
Validation in a DDD world – Jimmy Bogard

Related posts

REST API Data Validation
Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events

REST API Data Validation

Introduction

This time I would like to describe how we can protect our REST API applications from requests containing invalid data (the data validation process). However, validation of our requests is unfortunately not enough. In addition, it is our responsibility to return relevant messages and statuses to our API clients. I want to deal with these two things in this post.

Data Validation

Definition of Data Validation

What is data validation really? The best definition I found is from UNECE Data Editing Group:

An activity aimed at verifying whether the value of a data item comes from the given (finite or infinite) set of acceptable values.

According to this definition, we should verify data items which come into our application from external sources and check whether their values are acceptable. How do we know that a value is acceptable? We need to define data validation rules for every type of data item which is processed in our system.

Data vs Business Rules validation

I would like to emphasize that data validation is a totally different concept from business rules validation. Data validation is focused on verifying an atomic data item. Business rules validation is a broader concept, closer to how the business works and behaves, so it is mainly focused on behavior. Of course, validating behavior depends on data too, but to a much wider extent.

Examples of data validation:

– Product order quantity cannot be negative or zero
– Product order quantity should be a number
– Currency of order should be a value from currencies list

Examples of business rules validation:

– Product can be ordered only when Customer age is equal or greater than product minimal age.
– Customer can place only two orders in one day.

Returning relevant information

If we find out during validation that a rule has been broken, we must stop processing and return an appropriate message to the client. We should follow these rules:

– we should return the message to the client as fast as possible (Fail-fast principle)
– the reason for the validation error should be well explained and understandable to the client
– we should not return technical details, for security reasons

Problem Details for HTTP APIs standard

The issue of returned error messages is so common that a special standard was created describing how to handle such situations. It is called "Problem Details for HTTP APIs" and its official description can be found here. This is the abstract of the standard:

This document defines a "problem detail" as a way to carry machine-readable details of errors in an HTTP response to avoid the need to define new error response formats for HTTP APIs.

The Problem Details standard introduces the Problem Details JSON object, which should be part of the response when a validation error occurs. It is a simple canonical model with 5 members:

– problem type
– title
– HTTP status code
– details of error
– instance (pointer to specific occurrence)

Of course, we can (and sometimes we should) extend this object by adding new properties, but the base should stay the same. Thanks to this, our API is easier to understand, learn and use. For more detailed information about the standard, I invite you to read its documentation, which is well written.

Data validation location

For a standard application, we can put data validation logic in three places:

– GUI – the entry point for user input. Data is validated on the client side, for example using JavaScript in web applications
– Application logic/services layer – data is validated in a specific application service or command handler on the server side
– Database – the exit point of request processing and the last moment to validate the data

Data validation location

In this article I omit the GUI and Database components and focus on the server side of the application. Let's see how we can implement data validation on the Application Services layer.

Implementing Data Validation

Suppose we have a command AddCustomerOrderCommand:
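The command class itself is not shown above, so here is a plausible sketch – the property names follow the four validations listed below:

using System;
using System.Collections.Generic;
using MediatR;

public class AddCustomerOrderCommand : IRequest
{
    public Guid CustomerId { get; set; }

    public List<ProductDto> Products { get; set; }
}

public class ProductDto
{
    public Guid Id { get; set; }

    public int Quantity { get; set; }

    public string Currency { get; set; }
}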

Suppose we want to validate 4 things:

1. CustomerId is not an empty GUID.
2. Products list is not empty.
3. Each product quantity is greater than 0.
4. Each product currency is equal to USD or EUR.

Let me show you 3 solutions to this problem – from the simplest to the most sophisticated.

1. Simple validation on Application Service

The first thing that comes to mind is simple validation in the Command Handler itself. In this solution we need to implement a private method which validates our command and throws an exception if a validation error occurs. Enclosing this kind of logic in a separate method is better from the Clean Code perspective (see Extract Method):
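A sketch of such a handler with hand-rolled checks (at this stage a plain Exception is thrown; a dedicated exception type appears later in the post):

using System;
using System.Linq;
using System.Threading.Tasks;

public class AddCustomerOrderCommandHandler
{
    public async Task Handle(AddCustomerOrderCommand command)
    {
        Validate(command);

        // ... orchestrate the use case ...
    }

    private static void Validate(AddCustomerOrderCommand command)
    {
        if (command.CustomerId == Guid.Empty)
            throw new Exception("CustomerId cannot be empty.");

        if (command.Products == null || !command.Products.Any())
            throw new Exception("Products list cannot be empty.");

        if (command.Products.Any(x => x.Quantity <= 0))
            throw new Exception("Quantity must be greater than 0.");

        var allowedCurrencies = new[] { "USD", "EUR" };
        if (command.Products.Any(x => !allowedCurrencies.Contains(x.Currency)))
            throw new Exception("Currency must be USD or EUR.");
    }
}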

The result of invalid command execution:

This is not such a bad approach, but it has two disadvantages. Firstly, it requires us to write a lot of simple, boilerplate code – comparisons against nulls, defaults, values from lists, etc. Secondly, we lose part of the separation of concerns, because we mix validation logic with the orchestration of our use case flow. Let's take care of the boilerplate code first.

2. Validation using FluentValidation library

We don't want to reinvent the wheel, so the best solution is to use a library. Fortunately, there is a great validation library in the .NET world – FluentValidation. It has a nice API and a lot of features. This is how we can use it to validate our command:
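A sketch of the validators using the FluentValidation API (rule wording is illustrative):

using FluentValidation;

public class ProductDtoValidator : AbstractValidator<ProductDto>
{
    public ProductDtoValidator()
    {
        RuleFor(x => x.Quantity).GreaterThan(0);

        RuleFor(x => x.Currency)
            .Must(c => c == "USD" || c == "EUR")
            .WithMessage("Currency must be USD or EUR.");
    }
}

public class AddCustomerOrderCommandValidator : AbstractValidator<AddCustomerOrderCommand>
{
    public AddCustomerOrderCommandValidator()
    {
        RuleFor(x => x.CustomerId).NotEmpty();

        RuleFor(x => x.Products).NotEmpty();

        RuleForEach(x => x.Products).SetValidator(new ProductDtoValidator());
    }
}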

Now, the Validate method looks like:
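A sketch (still throwing a plain exception at this stage):

private static void Validate(AddCustomerOrderCommand command)
{
    var validationResult = new AddCustomerOrderCommandValidator().Validate(command);

    if (!validationResult.IsValid)
    {
        throw new Exception(validationResult.ToString(Environment.NewLine));
    }
}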

The result of the validation is the same as earlier, but now our validation logic is much cleaner. The last thing to do is to decouple this logic from the Command Handler completely…

3. Validation using Pipeline Pattern

To decouple the validation logic and execute it before Command Handler execution, we arrange our command handling process into a Pipeline (see the NServiceBus Pipeline as well).

For the Pipeline implementation we can easily use MediatR Behaviors. The first thing to do is the behavior implementation:
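A sketch of such a behavior, assuming the MediatR IPipelineBehavior contract from that period and FluentValidation validators resolved from the container:

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using FluentValidation;
using MediatR;

public class CommandValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public CommandValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public Task<TResponse> Handle(
        TRequest request,
        CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        var errors = _validators
            .Select(v => v.Validate(request))
            .SelectMany(result => result.Errors)
            .Where(error => error != null)
            .ToList();

        if (errors.Any())
        {
            // For now a plain Exception; later in this post it becomes InvalidCommandException.
            throw new System.Exception(errors.First().ErrorMessage);
        }

        // No validation errors - continue to the Command Handler.
        return next();
    }
}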

The next thing to do is to register the behavior in the IoC container (an Autofac example):
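A sketch of the Autofac registration – the open generic behavior plus all command validators from the assembly:

builder.RegisterGeneric(typeof(CommandValidationBehavior<,>))
    .As(typeof(IPipelineBehavior<,>));

builder.RegisterAssemblyTypes(typeof(Startup).Assembly)
    .AsClosedTypesOf(typeof(IValidator<>));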

This way we achieve separation of concerns and a Fail-fast principle implementation in a nice and elegant way.

But this is not the end. Finally, we need to do something with the messages returned to clients.

Implementing Problem Details standard

Just as in the case of the validation logic implementation, we will use a dedicated library – ProblemDetails. The principle of the mechanism is simple. Firstly, we need to create a custom exception:
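A minimal sketch of such an exception (the Details property is an assumption of this sketch):

using System;

public class InvalidCommandException : Exception
{
    public string Details { get; }

    public InvalidCommandException(string message, string details)
        : base(message)
    {
        Details = details;
    }
}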

Secondly, we have to create our own Problem Details class:
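A sketch, deriving from the framework's ProblemDetails base (the status and type URI are illustrative):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class InvalidCommandProblemDetails : ProblemDetails
{
    public InvalidCommandProblemDetails(InvalidCommandException exception)
    {
        Title = exception.Message;
        Status = StatusCodes.Status400BadRequest;
        Detail = exception.Details;
        Type = "https://somedomain/validation-error";
    }
}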

The last thing to do is to add the Problem Details Middleware with a definition of the mapping between the InvalidCommandException and the InvalidCommandProblemDetails class in Startup:
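Assuming the Hellang.Middleware.ProblemDetails package, the wiring can look like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddProblemDetails(options =>
    {
        options.Map<InvalidCommandException>(ex => new InvalidCommandProblemDetails(ex));
    });

    // ...
}

public void Configure(IApplicationBuilder app)
{
    app.UseProblemDetails();

    // ...
}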

After a change in CommandValidationBehavior (throwing InvalidCommandException instead of a plain Exception), the returned content is compatible with the standard:

Problem details

Summary

In this post I described:
– what data validation is and where it is located
– what Problem Details for HTTP APIs is and how it can be implemented
– 3 methods of implementing data validation in the Application Services layer: without any patterns and tools, with the FluentValidation library, and lastly – using the Pipeline Pattern and MediatR Behaviors.

Source code

If you would like to see a full, working example – check my GitHub repository.

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events
10 common broken rules of clean code

Domain Model Encapsulation and PI with Entity Framework 2.2

Introduction

In the previous post I presented how to implement a simple CQRS pattern using raw SQL (Read Model) and Domain-Driven Design (Write Model). I would like to continue with that example, focusing mainly on the DDD implementation. In this post I will describe how to get the most out of the newest version of Entity Framework Core (2.2) to support pure domain modeling as much as possible.

I decided that I will constantly develop my sample on GitHub. I will try to gradually add new functionalities and technical solutions. I will also try to extend the domain so that the application becomes similar to real ones. It is difficult to explain some DDD aspects using trivial domains. Nevertheless, I highly encourage you to follow my codebase.

Goals

When we create our Domain Model, we have to take many things into account. At this point I would like to focus on two of them: Encapsulation and Persistence Ignorance.

Encapsulation

Encapsulation has two major definitions (source – Wikipedia):

A language mechanism for restricting direct access to some of the object’s components

and

A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data

What does this mean for DDD Aggregates? It simply means that we should hide all the internals of our Aggregate from the outside world. Ideally, we should expose only the public methods required to fulfill our business requirements. This assumption is presented below:

Persistence Ignorance

The Persistence Ignorance (PI) principle says that the Domain Model should be ignorant of how its data is saved or retrieved. It is very good and important advice to follow. However, we should follow it with caution. I agree with the opinion presented in the Microsoft documentation:

Even when it is important to follow the Persistence Ignorance principle for your Domain model, you should not ignore persistence concerns. It is still very important to understand the physical data model and how it maps to your entity object model. Otherwise you can create impossible designs.

As described, we unfortunately can't forget about persistence. Nevertheless, we should aim at decoupling the Domain Model from the rest of our system as much as possible.

Example Domain

For a better understanding of the created Domain Model I prepared the following diagram:

It is a simple e-commerce domain. A Customer can place one or more Orders. An Order is a set of Products with quantity information (OrderProduct). Each Product has many prices defined (ProductPrice), depending on the Currency.

Ok, we know the problem, now we can go to the solution…

Solution

1. Create supporting architecture

The first and most important thing to do is to create an application architecture which supports both Encapsulation and Persistence Ignorance of our Domain Model. The most common examples are:
– Clean Architecture
– Onion Architecture
– Ports And Adapters / Hexagonal Architecture

All of these architectures are good and used in production systems. For me, Clean Architecture and Onion Architecture are almost the same. Ports And Adapters / Hexagonal Architecture is a little bit different when it comes to naming, but the general principles are the same. The most important thing in the context of domain modeling is that in each architecture the Business Logic/Business Layer/Entities/Domain Layer 1) is in the center and 2) has no dependencies on other components/layers/modules. It is the same in my example:

What does this mean in practice for our Domain Model code?
1. No data access code.
2. No data annotations on our entities.
3. No inheritance from any framework classes; entities should be Plain Old CLR Objects.

2. Use Entity Framework in Infrastructure Layer only

Any interaction with the database should be implemented in the Infrastructure Layer. This means you have to put the Entity Framework context, entity mappings and repository implementations there. Only repository interfaces can be kept in the Domain Model.

3. Use Shadow Properties

Shadow Properties are a great way to decouple our entities from the database schema. They are properties which are defined only in the Entity Framework model. Using them, we often don't need to include foreign keys in our Domain Model, and that is a great thing.

Let's see the Order Entity mapping, which is defined in the CustomerEntityTypeConfiguration class:
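The mapping did not survive extraction, so below is a minimal sketch of the shadow property idea, written as a standalone Order configuration (the repository nests this inside the Customer configuration and the details may differ):

using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

internal sealed class OrderEntityTypeConfiguration : IEntityTypeConfiguration<Order>
{
    public void Configure(EntityTypeBuilder<Order> builder)
    {
        builder.ToTable("Orders");
        builder.HasKey(o => o.Id);

        // Shadow property: "CustomerId" exists only in the EF model -
        // the Order class itself has no such member.
        builder.Property<Guid>("CustomerId");

        builder.HasOne<Customer>()
            .WithMany()
            .HasForeignKey("CustomerId");
    }
}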

As you can see, we define a CustomerId property which doesn't exist in the Order entity at all. It is defined only for the relationship configuration between Customer and Order. The same technique applies to the Order and OrderProduct relationship.

4. Use Owned Entity Types

Using Owned Entity Types we can achieve better encapsulation, because we can map directly to private or internal fields.

Owned types are a great solution for creating our Value Objects too. This is what MoneyValue can look like:
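A sketch of the Value Object and its owned type mapping (member and column names are assumptions):

public class MoneyValue
{
    public decimal Value { get; private set; }

    public string Currency { get; private set; }

    private MoneyValue()
    {
    }

    public static MoneyValue Of(decimal value, string currency)
    {
        return new MoneyValue { Value = value, Currency = currency };
    }
}

// Mapping - MoneyValue is owned by the entity and stored in its table:
builder.OwnsOne(p => p.Value, money =>
{
    money.Property(m => m.Value).HasColumnName("Value");
    money.Property(m => m.Currency).HasColumnName("Currency");
});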

5. Map to private fields

We can map to private fields not only using EF owned types; we can map to built-in types too. All we have to do is give the name of the field and the column:
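For example (a sketch; the field and column names are assumed):

// "_orderDate" is a private field of the Order entity - EF Core will
// read and write it directly, without any public property.
builder.Property<DateTime>("_orderDate").HasColumnName("OrderDate");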

6. Use Value Conversions

Value Conversions are the "bridge" between entity attributes and table column values. If we have an incompatibility between types, we should use them. Entity Framework has a lot of value converters implemented out of the box. Additionally, we can implement a custom converter if we need to:
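A sketch of such a conversion, assuming _status is a private field of an OrderStatus enum stored as a byte:

builder.Property<OrderStatus>("_status")
    .HasColumnName("StatusId")
    .HasConversion(
        status => (byte)status,             // to the database
        statusId => (OrderStatus)statusId); // from the database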

This converter simply converts the byte value of the "StatusId" column to the private field _status of type OrderStatus, and back.

Summary

In this post I briefly described what Encapsulation and Persistence Ignorance are (in the context of domain modeling) and how we can achieve these approaches by:
– creating supporting architecture
– putting all data access code outside our domain model implementation
– using Entity Framework Core features: Shadow Properties, Owned Entity Types, private fields mapping, Value Conversions

Related posts

Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events
REST API Data Validation

Simple CQRS implementation with raw SQL and DDD

Introduction

I often come across questions about the implementation of the CQRS pattern. Even more often I see discussions about access to the database in the context of what is better – ORM or plain SQL.

In this post I wanted to show you how you can quickly implement a simple REST API application with CQRS using .NET Core. I should immediately point out that this is CQRS in its simplest edition – an update through the Write Model immediately updates the Read Model, so we do not have eventual consistency here. However, many applications do not need eventual consistency, while the logical division of writing and reading using two separate models is recommended and more effective in most solutions.

Especially for this article I prepared a sample, fully working application – see the full source on GitHub.

My goals

These are the goals I wanted to achieve by creating this solution:
1. Clear separation and isolation of the Write Model and the Read Model.
2. Retrieving data using the Read Model should be as fast as possible.
3. The Write Model should be implemented with the DDD approach. The level of DDD implementation should depend on the level of domain complexity.
4. Application logic should be decoupled from the GUI.
5. Selected libraries should be mature, well-known and supported.

Design

The high-level flow between components looks like this:

As you can see, the process for reads is pretty straightforward, because we should query data as fast as possible. We don't need more abstraction layers and sophisticated approaches here. Get arguments from the query object, execute raw SQL against the database and return the data – that's all.

It is different in the case of write support. Writing often requires more advanced techniques, because we need to execute some logic, do some calculations or simply check some conditions (especially invariants). With an ORM tool with change tracking and the Repository Pattern, we can do it while leaving our Domain Model intact (ok, almost).

Solution

Read model

The diagram below presents the flow between components used to fulfill a read request operation:

The GUI is responsible for creating the Query object:
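A sketch of such a query object (names assumed; IRequest comes from MediatR, as in the rest of the codebase):

using System;
using System.Collections.Generic;
using MediatR;

public class CustomerOrderDto
{
    public Guid Id { get; set; }
    public decimal Value { get; set; }
}

public class GetCustomerOrdersQuery : IRequest<List<CustomerOrderDto>>
{
    public Guid CustomerId { get; }

    public GetCustomerOrdersQuery(Guid customerId)
    {
        CustomerId = customerId;
    }
}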

Then, the query handler processes the query:
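A sketch of the handler – open a connection via the factory and execute raw SQL with Dapper against a view (the SQL and the connection factory are illustrative):

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Dapper;
using MediatR;

public class GetCustomerOrdersQueryHandler : IRequestHandler<GetCustomerOrdersQuery, List<CustomerOrderDto>>
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public GetCustomerOrdersQueryHandler(ISqlConnectionFactory sqlConnectionFactory)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public async Task<List<CustomerOrderDto>> Handle(GetCustomerOrdersQuery query, CancellationToken cancellationToken)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        const string sql = "SELECT [Order].Id, [Order].Value " +
                           "FROM orders.v_CustomerOrders AS [Order] " +
                           "WHERE [Order].CustomerId = @CustomerId";

        var orders = await connection.QueryAsync<CustomerOrderDto>(sql, new { query.CustomerId });

        return orders.ToList();
    }
}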

The first thing is to get an open database connection, which is achieved using the SqlConnectionFactory class. This class is resolved by the IoC container with an HTTP request lifetime scope, so we are sure that we use only one database connection during request processing.

The second thing is to prepare and execute raw SQL against the database. I try not to refer to tables directly, referring to database views instead. This is a nice way to create an abstraction and decouple our application from the database schema, because I want to hide database internals as much as possible.

For SQL execution I use the Dapper micro-ORM, because it is almost as fast as native ADO.NET and does not have a boilerplate API. In short, it does what it has to do and it does it very well.

Write model

The diagram below presents the flow for a write request operation:

Write request processing starts similarly to the read side, but we create a Command object instead of a Query object:
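A sketch of such a command (names assumed):

using System;
using System.Collections.Generic;
using MediatR;

public class AddCustomerOrderCommand : IRequest
{
    public Guid CustomerId { get; }

    public List<ProductDto> Products { get; }

    public AddCustomerOrderCommand(Guid customerId, List<ProductDto> products)
    {
        CustomerId = customerId;
        Products = products;
    }
}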

Then, the Command Handler is invoked:
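A sketch of the handler – hydrate the Aggregate, invoke its method, persist (the repository members are illustrative):

using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class AddCustomerOrderCommandHandler : IRequestHandler<AddCustomerOrderCommand>
{
    private readonly ICustomerRepository _customerRepository;

    public AddCustomerOrderCommandHandler(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public async Task<Unit> Handle(AddCustomerOrderCommand command, CancellationToken cancellationToken)
    {
        // 1. Hydrate the Aggregate.
        var customer = await _customerRepository.GetByIdAsync(command.CustomerId);

        // 2. Invoke the Aggregate method (domain logic, invariants).
        customer.AddOrder(command.Products);

        // 3. Save changes to the database.
        await _customerRepository.SaveChangesAsync();

        return Unit.Value;
    }
}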

The command handler looks different from the query handler. Here, we use a higher level of abstraction with the DDD approach: Aggregates and Entities. We need it because in this case the problems to solve are often more complex than usual reads. The command handler hydrates the aggregate, invokes the aggregate method and saves changes to the database.

Customer aggregate can be defined as follows:
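A condensed sketch of the Aggregate (the real one in the repository is richer; the Order members and the exception type are assumed names):

using System;
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public Guid Id { get; private set; }

    private readonly List<Order> _orders = new List<Order>();

    private Customer()
    {
        // Required by Entity Framework.
    }

    public void AddOrder(List<ProductDto> products)
    {
        // Protect the invariants before changing any state.
        if (_orders.Count(o => o.IsPlacedToday()) >= 2)
        {
            throw new BusinessRuleValidationException(
                "You cannot order more than 2 orders on the same day.");
        }

        _orders.Add(Order.Create(products));
    }
}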

Architecture

The solution structure is based on the well-known Onion Architecture, as follows:

Only 3 projects are defined:
– API project with API endpoints and application logic (command and query handlers), using the Feature Folders approach
– Domain project with the Domain Model
– Infrastructure project – integration with the database.

Summary

In this post I tried to present the simplest way to implement the CQRS pattern, using raw SQL scripts as the Read Model side processing and the DDD approach as the Write Model side implementation. Doing so, we are able to achieve much more separation of concerns without losing development speed. The cost of introducing this solution is very low and it pays off very quickly.

I didn't describe the DDD implementation in detail, so I encourage you once again to check the repository of the example application – it can be used as a starter kit for your app, the same as for mine.

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
How to publish and handle Domain Events
REST API Data Validation

Feature Folders

Introduction

Today I would like to suggest a less common but, in my opinion, much better way to organize our codebase. Meet Feature Folders.

Problem

For ages we have been used (at least in the .NET environment) to thinking about our code structure in terms of technical aspects. For example, MVC application project templates assume dividing our objects into separate directories – Controllers, Views, Scripts and so on. We can see the same in many tutorials. If we need to add a new feature, following this approach we have to add objects in several different places.

This approach is found not only in the application layer but in other layers too. I have seen many times in the business logic layer big folders called Aggregates, Entities, Domain Events, with a lot of classes unrelated to each other. But what does "unrelated" mean?

We can specify two types of relationships between objects – technical and business.

A technical relationship tells whether two objects have a similar meaning from a technical perspective – that is, whether they have the same usage. Two controllers are technically related, but a controller and an application service are not – their purposes are different.

A business relationship tells whether two objects support the fulfillment of the same use case. These can definitely be different objects from a technical point of view – for example, a validator and a command handler.

Let's see what technical folders can look like:
Technical folders

What is the problem with this design? As you can see, we have three modules: Commands, Handlers and Validators. Each of these modules has very low cohesion because, as Wikipedia says:

cohesion refers to the degree to which the elements inside a module belong together

For example, a new requirement appears and we need to add a new attribute which can be edited. We need to change all three objects associated with editing, so in this particular design we need to change all three modules as well. The same applies if we would like to move the whole functionality to a separate service. This is not good. We have to try to achieve as high cohesion as possible.

Solution

The solution is to stop thinking about the technical aspects of our objects and focus instead on the business relationship. It will provide high cohesion with all its benefits (maintainability, reusability, less complexity and so on).

For every feature/use case we should create a separate Feature Folder with all objects that are related from the business perspective:

The same rule applies to the business logic layer, and I think it is even more important there. We should design our domain model per Aggregate:

This is a very simple approach, but it improves our design and, especially in projects with many features, makes work definitely easier.

There is an extra bonus. With this design we create templates for later requirements. For example, if we need to implement adding a new product, we can copy-paste a whole feature folder, rename the objects, and we know exactly what we have to implement. Great!

Summary

In this post I showed what Feature Folders are. With good codebase organization we can achieve a better and more elegant design. If you still use "technical folders", I encourage you to try this solution – you will not regret it. 🙂

Cache-Aside Pattern in .NET Core

Introduction

Often the time comes when we need to focus on optimizing the performance of our application. There are many ways to do this, and one of them is to cache some data. In this post I will briefly describe the Cache-Aside Pattern and its simple implementation in .NET Core.

Cache-Aside Pattern

This pattern is very simple and straightforward. When we need specific data, we first try to get it from the cache. If the data is not in the cache, we get it from the source, add it to the cache and return it. Thanks to this, in subsequent queries the data will be fetched from the cache. When adding data to the cache, we need to determine how long it should be stored there. Below is an algorithm diagram:

First implementation

Implementation of this pattern in .NET Core is just as easy as its theoretical part. Firstly, we need to register the IMemoryCache interface:
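In Startup.ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();

    // ...
}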

Afterwards, we need to add the Microsoft.Extensions.Caching.Memory NuGet package.

And that’s all. Assuming that we want to cache basic information about our users, the implementation of the pattern looks as follows:
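A sketch of this first implementation (UserDto, IUserRepository, the key naming and the expiration time are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class UserProvider
{
    private readonly IMemoryCache _cache;
    private readonly IUserRepository _userRepository;

    public UserProvider(IMemoryCache cache, IUserRepository userRepository)
    {
        _cache = cache;
        _userRepository = userRepository;
    }

    public async Task<UserDto> GetUser(Guid userId)
    {
        var cacheKey = "Users-" + userId;

        // Try to get the data from the cache first.
        if (!_cache.TryGetValue(cacheKey, out UserDto user))
        {
            // Cache miss - get the data from the source and cache it.
            user = await _userRepository.GetUser(userId);
            _cache.Set(cacheKey, user, TimeSpan.FromMinutes(15));
        }

        return user;
    }
}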

First of all, we inject the .NET Core framework's IMemoryCache interface implementation. Then we check whether the data is in the cache (TryGetValue). If it is not, we get it from the source (i.e. the database), add it to the cache and return it.

Code smells

You can find this way of implementation on the MSDN site. I could finish this post at this point, but I must admit that there are a few things I do not like about this code.

First of all, I think the IMemoryCache interface is not abstract enough. It suggests that the data is kept in the application's memory, but the client code should not care where it is stored. Moreover, if we want to keep the cache in the database in the future, the name of this interface will no longer be correct.

Secondly, client code should not be responsible for the cache key naming logic. That is a Single Responsibility Principle violation. It should only provide the data needed to create the key name.

Lastly, client code should not care about cache expiration. It should be configured in another place – the application configuration.

In the next section I will show how we can eliminate these three code smells.

Improved implementation

The first and most important step is to define a new, more abstract interface – ICacheStore:
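A sketch of the interface (the ICacheKey<TItem> type is defined next):

public interface ICacheStore
{
    void Add<TItem>(TItem item, ICacheKey<TItem> key);

    TItem Get<TItem>(ICacheKey<TItem> key) where TItem : class;
}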

Then we need to define an interface for our cache key classes:
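A minimal sketch:

public interface ICacheKey<TItem>
{
    string CacheKey { get; }
}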

This interface has a CacheKey string property which is used to resolve the cache key in our MemoryCacheStore implementation:
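A sketch of the store – looking up expiration times in a configuration dictionary by the cached item's type name is an assumption of this sketch:

using System;
using System.Collections.Generic;
using Microsoft.Extensions.Caching.Memory;

public class MemoryCacheStore : ICacheStore
{
    private readonly IMemoryCache _memoryCache;
    private readonly Dictionary<string, TimeSpan> _expirationConfiguration;

    public MemoryCacheStore(
        IMemoryCache memoryCache,
        Dictionary<string, TimeSpan> expirationConfiguration)
    {
        _memoryCache = memoryCache;
        _expirationConfiguration = expirationConfiguration;
    }

    public void Add<TItem>(TItem item, ICacheKey<TItem> key)
    {
        // The client provides only the key data; expiration comes from configuration.
        var expiration = _expirationConfiguration[typeof(TItem).Name];
        _memoryCache.Set(key.CacheKey, item, expiration);
    }

    public TItem Get<TItem>(ICacheKey<TItem> key) where TItem : class
    {
        return _memoryCache.Get<TItem>(key.CacheKey);
    }
}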

Finally, we need to configure the IoC container to resolve a MemoryCacheStore instance as ICacheStore, together with the expiration configuration taken from the application configuration:
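A sketch using Autofac and the configuration binder (the "CacheExpirations" section name is an assumption):

builder.Register(context =>
    {
        var memoryCache = context.Resolve<IMemoryCache>();
        var configuration = context.Resolve<IConfiguration>();

        var expirationConfiguration = configuration
            .GetSection("CacheExpirations")
            .Get<Dictionary<string, TimeSpan>>();

        return new MemoryCacheStore(memoryCache, expirationConfiguration);
    })
    .As<ICacheStore>()
    .SingleInstance();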

This is what the new implementation looks like:

After this setup we can finally use the implementation in our client code. For each new object type that we want to store in the cache we need to:

1) Add the expiration configuration

2) Add a class that defines the cache key:
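For example, a hypothetical key class for users:

using System;

public class UserCacheKey : ICacheKey<UserDto>
{
    private readonly Guid _userId;

    public UserCacheKey(Guid userId)
    {
        _userId = userId;
    }

    public string CacheKey => $"User-{_userId}";
}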

Finally, the new client code looks like this:
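A sketch of the rewritten client:

using System;
using System.Threading.Tasks;

public class UserProvider
{
    private readonly ICacheStore _cacheStore;
    private readonly IUserRepository _userRepository;

    public UserProvider(ICacheStore cacheStore, IUserRepository userRepository)
    {
        _cacheStore = cacheStore;
        _userRepository = userRepository;
    }

    public async Task<UserDto> GetUser(Guid userId)
    {
        var cacheKey = new UserCacheKey(userId);

        var user = _cacheStore.Get(cacheKey);
        if (user == null)
        {
            user = await _userRepository.GetUser(userId);

            // No key naming and no expiration here - the store takes care of both.
            _cacheStore.Add(user, cacheKey);
        }

        return user;
    }
}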

In the code above we use the more abstract ICacheStore interface, and we don't care about cache key creation or expiration configuration. It is a more elegant and less error-prone solution.

Summary

In this post I described the Cache-Aside Pattern and its basic implementation in .NET Core. I also proposed an improved design to achieve a more elegant solution with a small amount of work. Happy caching! 🙂

UPDATE 2019-02-26: I updated my sample codebase, so if you would like to see a full, working example – check my GitHub repository.

How to publish and handle Domain Events

2019-06-19 UPDATE: Please check the Handling Domain Events: Missing Part post, which is a continuation of this article.

Introduction

A Domain Event is one of the building blocks of Domain-Driven Design. It is something that happened in a particular domain, and it captures the memory of it. We create Domain Events to notify other parts of the same domain that something interesting happened, and these other parts can potentially react to it.

A Domain Event is usually an immutable data-container class named in the past tense. For example:
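A hypothetical example (the IDomainEvent marker interface is an assumption of this sketch):

using System;

public class OrderPlacedEvent : IDomainEvent
{
    public Guid OrderId { get; }

    public DateTime PlacedDate { get; }

    public OrderPlacedEvent(Guid orderId, DateTime placedDate)
    {
        OrderId = orderId;
        PlacedDate = placedDate;
    }
}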

Three ways of publishing domain events

I have seen mainly three ways of publishing domain events.

1. Using static DomainEvents class

This approach was presented by Udi Dahan in his Domain Events – Salvation post. In short, there is a static class named DomainEvents with a Raise method, and it is invoked immediately when something interesting occurs during aggregate method processing. The word immediately is worth emphasizing, because all domain event handlers start processing immediately too (even though the aggregate method has not finished processing).

2. Raise event returned from aggregate method

In this approach the aggregate method returns a Domain Event directly to the Application Service, which decides when and how to raise it. You can become familiar with this way of raising events by reading Jan Kronquist's Don't publish Domain Events, return them! post.

3. Add events to the Entity's Events collection.

In this approach, every entity which creates domain events has an Events collection. Every Domain Event instance is added to this collection during aggregate method execution. After execution, the Application Service (or another component) reads the Events collections from all entities and publishes them. This approach is well described in Jimmy Bogard's post A better domain events pattern.

Handling domain events

The way of handling domain events depends indirectly on the publishing method. If you use the DomainEvents static class, you have to handle events immediately. In the other two cases you control when events are published as well as handler execution – inside or outside the existing transaction.

In my opinion it is a good approach to always handle domain events in the existing transaction and to treat aggregate method execution and handler processing as an atomic operation. This is good because if you have a lot of events and handlers, you do not have to think about initializing connections and transactions, and about what should be treated in an "all-or-nothing" way and what not.

Sometimes, however, it is necessary to communicate with a 3rd party service (for example an e-mail or web service) based on a Domain Event. As we know, communication with 3rd party services is usually not transactional, so we need an additional generic mechanism to handle these types of scenarios. That is why I created Domain Events Notifications.

Domain Events Notifications

There is no such thing as domain events notifications in DDD terms. I gave them that name because I think it fits best – it is a notification that a domain event was published.

The mechanism is pretty simple. If I want to inform my application that a domain event was published, I create a notification class for it and as many handlers for this notification as I want. I always publish my notifications after the transaction is committed. The complete process looks like this:

1. Create database transaction.
2. Get aggregate(s).
3. Invoke aggregate method.
4. Add domain events to Events collections.
5. Publish domain events and handle them.
6. Save changes to DB and commit transaction.
7. Publish domain events notifications and handle them.

How do I know that a particular domain event was published?

First of all, I have to define a notification for the domain event using generics:
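A sketch of such generic notifications. They implement MediatR's INotification so they can be published through the mediator; the interface/base-class split is an assumption of this sketch:

using MediatR;

public interface IDomainEventNotification<out TEventType> : INotification
{
    TEventType DomainEvent { get; }
}

public abstract class DomainNotificationBase<TEventType> : IDomainEventNotification<TEventType>
    where TEventType : IDomainEvent
{
    public TEventType DomainEvent { get; }

    protected DomainNotificationBase(TEventType domainEvent)
    {
        DomainEvent = domainEvent;
    }
}

public class OrderPlacedNotification : DomainNotificationBase<OrderPlacedEvent>
{
    public OrderPlacedNotification(OrderPlacedEvent domainEvent)
        : base(domainEvent)
    {
    }
}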

All notifications are registered in the IoC container:
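For example, with Autofac we can scan an assembly and register all concrete notification classes (like OrderPlacedNotification above):

builder.RegisterAssemblyTypes(typeof(OrderPlacedNotification).Assembly)
    .AsClosedTypesOf(typeof(IDomainEventNotification<>))
    .InstancePerDependency();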

In the EventsPublisher we resolve the defined notifications using the IoC container, and after our unit of work is completed, all notifications are published:
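A simplified sketch of that last step – it assumes an Autofac lifetime scope in _container and MediatR in _mediator:

private async Task PublishDomainEventsNotifications(IEnumerable<IDomainEvent> domainEvents)
{
    foreach (var domainEvent in domainEvents)
    {
        // Resolve a notification registered for this concrete event type, if any.
        var notificationType = typeof(IDomainEventNotification<>)
            .MakeGenericType(domainEvent.GetType());

        var notification = _container.ResolveOptional(
            notificationType,
            new TypedParameter(domainEvent.GetType(), domainEvent));

        if (notification != null)
        {
            await _mediator.Publish((INotification)notification);
        }
    }
}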

This is what the whole process looks like, presented on a UML sequence diagram:

You may think that there are a lot of things to remember, and you are right! 🙂 But as you can see, the whole process is pretty straightforward, and we can simplify this solution using IoC interceptors, which I will try to describe in another post.

Summary

1. A Domain Event is information about something which happened in the past in the modeled domain, and it is an important part of the DDD approach.
2. There are many ways of publishing and handling domain events – using a static class, returning them, or exposing them in collections.
3. Domain events should be handled within the existing transaction (my recommendation).
4. For non-transactional operations, Domain Events Notifications were introduced.

Related posts

Handling Domain Events: Missing Part
The Outbox Pattern

Processing commands with Hangfire and MediatR

In the previous post about processing multiple aggregates of the same type, I suggested considering the eventual consistency approach. In this post I would like to present one way to do this.

Setup

In the beginning, let me introduce the stack of technologies/patterns:
1. Command pattern – I am using commands, but they do not look like those described in the GoF book. They are just simple classes with data, and they implement the IRequest marker interface of MediatR.

2. Mediator pattern – I am using this pattern because I want to decouple my client classes (command invokers) from command handlers. A simple but great library created by Jimmy Bogard, named MediatR, implements this pattern very well. Here is a simple usage:
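A sketch (the command name is illustrative):

// The client knows only the command and the mediator - not the handler.
await _mediator.Send(new RecalculateCustomerDiscountCommand(customerId));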

and the handler:
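A sketch (repository members are illustrative):

using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class RecalculateCustomerDiscountCommandHandler
    : IRequestHandler<RecalculateCustomerDiscountCommand>
{
    private readonly ICustomerRepository _customerRepository;

    public RecalculateCustomerDiscountCommandHandler(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public async Task<Unit> Handle(RecalculateCustomerDiscountCommand command, CancellationToken cancellationToken)
    {
        var customer = await _customerRepository.GetByIdAsync(command.CustomerId);

        customer.RecalculateDiscount();

        await _customerRepository.SaveChangesAsync();

        return Unit.Value;
    }
}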

3. Hangfire – a great open-source library for processing and scheduling background jobs, even with a GUI monitoring interface. This is where my commands are scheduled, executed and retried if an error occurs.

Problem

For some of my use cases, I would like to schedule command processing, execute commands in parallel with a retry option, and monitor them. Hangfire gives me all these kinds of features, but I have to have a public method which I can pass to a Hangfire method (for example BackgroundJob.Enqueue). This is a problem – with the Mediator pattern I cannot (and do not want to) pass a public method of a handler, because I have decoupled it from the invoker. So I need a special way to integrate MediatR with Hangfire without affecting these basic assumptions.

Solution

My solution is to have three additional classes:
1. CommandsScheduler – serializes commands and sends them to Hangfire.
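A sketch of the scheduler (the serialization details are assumptions; MediatorSerializedObject and CommandsExecutor are shown below):

using Hangfire;
using MediatR;
using Newtonsoft.Json;

public class CommandsScheduler
{
    public string Enqueue(IRequest command, string description = "")
    {
        var json = JsonConvert.SerializeObject(command);
        var mediatorSerializedObject = new MediatorSerializedObject(
            command.GetType().AssemblyQualifiedName, json, description);

        // Hangfire stores the job (with its serialized arguments) for execution.
        return BackgroundJob.Enqueue<CommandsExecutor>(x => x.ExecuteCommand(mediatorSerializedObject));
    }
}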

2. CommandsExecutor – responds to Hangfire job execution, deserializes commands and sends them to handlers using MediatR.
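A sketch:

using System;
using System.Threading.Tasks;
using MediatR;
using Newtonsoft.Json;

public class CommandsExecutor
{
    private readonly IMediator _mediator;

    public CommandsExecutor(IMediator mediator)
    {
        _mediator = mediator;
    }

    // Public method - this is what Hangfire invokes when the job runs.
    public Task ExecuteCommand(MediatorSerializedObject mediatorSerializedObject)
    {
        var commandType = Type.GetType(mediatorSerializedObject.FullTypeName);
        var command = (IRequest)JsonConvert.DeserializeObject(mediatorSerializedObject.Data, commandType);

        return _mediator.Send(command);
    }
}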

3. MediatorSerializedObject – a wrapper class for serialized/deserialized commands with additional properties – the command type and an additional description.
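A sketch:

public class MediatorSerializedObject
{
    public string FullTypeName { get; private set; }

    public string Data { get; private set; }

    public string AdditionalDescription { get; private set; }

    public MediatorSerializedObject(string fullTypeName, string data, string additionalDescription)
    {
        FullTypeName = fullTypeName;
        Data = data;
        AdditionalDescription = additionalDescription;
    }
}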

Finally, with this implementation, we can change our client classes to use the CommandsScheduler:
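For example:

_commandsScheduler.Enqueue(
    new RecalculateCustomerDiscountCommand(customerId),
    "Recalculate the customer's discount.");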

and our commands are scheduled, invoked and monitored by Hangfire. I sketched a sequence diagram which shows this interaction:

Processing commands with MediatR and Hangfire

Additionally, we can introduce an interface for the CommandsScheduler – ICommandsScheduler. A second implementation would not use Hangfire at all and would only execute MediatR requests directly – for example during development, when we do not want to start the Hangfire Server.

Summary

I presented a way of processing commands asynchronously using MediatR and Hangfire. With this approach we have:
1. Decoupled invokers and handlers of commands.
2. A command scheduling mechanism.
3. The invoker and the handler of a command may run in different processes.
4. Command execution monitoring.
5. A command execution retry mechanism.

These benefits are very important during development using the eventual consistency approach. We have more control over command processing and we can react quickly if a problem appears.

Processing multiple aggregates – transactional vs eventual consistency

When we use the Domain-Driven Design approach in our application, sometimes we have to invoke some method on multiple aggregate instances of the same type.

For example, in our domain we have customers, and when a big Black Friday campaign starts, we have to recalculate their discounts. So in the domain model there is a Customer aggregate with a RecalculateDiscount method, and in the Application Layer we have a DiscountAppService which is responsible for this use case.

There are two ways to implement this and similar scenarios:

1. Using transactional consistency

This is the simplest solution: we get all Customer aggregates and invoke the RecalculateDiscount method on every instance. We surround our processing with a TransactionScope, so afterwards we can be certain that either every customer has a recalculated discount or none of them do. This is transactional consistency – it provides us ACID and sometimes it is a sufficient solution, but in many cases (especially while processing multiple aggregates in DDD terms) it is a very bad approach.
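A sketch of this approach (the repository and unit-of-work names are illustrative):

using System.Threading.Tasks;
using System.Transactions;

public async Task RecalculateCustomerDiscounts()
{
    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        var customers = await _customerRepository.GetAllAsync();

        foreach (var customer in customers)
        {
            customer.RecalculateDiscount();
        }

        await _unitOfWork.CommitAsync();

        // All or nothing - either every customer gets the new discount or none do.
        scope.Complete();
    }
}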

First of all, all customers are loaded into memory, which can cause a performance issue. Of course, we can change the implementation a little: get only the customer identifiers and load customers one by one in a foreach loop. But we have a worse problem – our transaction holds locks on our aggregates until the end of processing, and other processes have to wait. For the record – the default TransactionScope isolation level is Serializable. We can change the isolation level, but we can't get rid of the locks. In this case the application becomes less responsive; we can have timeouts and deadlocks – things we should avoid as much as we can.

2. Using eventual consistency

In this approach we do not use one big transaction. Instead, we process every Customer aggregate separately. Eventual consistency means that for a certain time our system will be in an inconsistent state, but after a given time it will become consistent. In our example there is a period when some customers have recalculated discounts and some do not. Let's see the code:
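A sketch of this approach – scheduling a separate command per customer, e.g. with the CommandsScheduler from the Hangfire post (names are illustrative):

using System.Threading.Tasks;

public async Task RecalculateCustomerDiscounts()
{
    // Load identifiers only - not whole aggregates.
    var customerIds = await _customerRepository.GetAllIdsAsync();

    foreach (var customerId in customerIds)
    {
        // Each command is processed separately, in its own short transaction.
        _commandsScheduler.Enqueue(new RecalculateCustomerDiscountCommand(customerId));
    }
}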

In this case, at the beginning we get only the customer identifiers, and we process the Customer aggregates one by one asynchronously (and in parallel if applicable). We removed the problem of locking our aggregates for a long time. The simplest solution is to use Task.Run(), but with this approach we totally lose control of the processing. A better solution is to use some 3rd party library like Hangfire or Quartz.NET, or a messaging system.

Eventual consistency is a big topic in distributed computing, often encountered together with CQRS. In this article I only wanted to show another way of executing batch processing using this approach, and its benefits. Sometimes this approach is not a good choice – it can have an impact on the GUI, and users may see stale data for some time. That is why it is important to talk with domain experts, because often it is fine for the user to wait for the data to be updated, but sometimes it is unacceptable.

Summary

Transactional consistency – the whole processing is executed in one transaction. It is an "all or nothing" approach and can sometimes lead to decreased performance, scalability and availability of our application.

Eventual consistency – processing is divided and not executed in one big transaction. For some time the application will be in an inconsistent state. It leads to better scalability and availability of the application. On the other hand, it can cause problems with the GUI (stale data), and it requires supporting mechanisms which enable parallel processing, retries, and sometimes process monitors as well.