Strangling .NET Framework App to .NET Core

Introduction

Every technology becomes obsolete after some time, and it is no different with the .NET Framework – it can safely be said that since the appearance of the .NET Core platform, the old Framework has been slowly fading away. Few people write about it anymore, it has barely been mentioned at conferences for a long time, and nobody starts a new project in it. .NET Core is everywhere… except in our beloved legacy systems!

Well, despite the fact that new solutions are built on .NET Core, a huge number of systems still run on the old .NET Framework. If you are not working on a greenfield project but rather on maintenance, then you are very likely stuck with the old framework. Who likes to play with old toys? Nobody. Everyone would like to use new technology, especially if it is better, more efficient and better designed.

Is the only solution to change the project, or even the employer, in order to use the new technology? Certainly not. We have two other options: the “Big Bang Rewrite” or the “Strangler Pattern”.

Big Bang Rewrite

Big Bang Rewrite means rewriting the whole application to the new technology/system. This is usually the first thought that comes to our minds. After rewriting the entire system, we turn off the old system and start the new one. It sounds easy, but it is not.

First of all, users cannot use the new system until the whole system has been rewritten. This approach resembles running a project in the Waterfall methodology, with all its drawbacks.

Secondly, it is an “all or nothing” approach. If during the project it turns out that we have run out of budget, time or resources, the End User is left with nothing – we have not delivered any business value. Users keep using the old system, which is still difficult to maintain, and we can put our unfinished product on the shelf.

Big Bang Rewrite

Can we do it better? Yes, and here comes the so-called “Strangler Pattern”.

Strangler Pattern

The origin of this pattern comes from an article written by Martin Fowler called Strangler Fig Application. The name refers to the strangler figs of tropical forests, which grow on host trees and slowly “strangle” them.

Nature aside, the metaphor is quite interesting, and transferred to the context of rewriting a system it is described by Martin Fowler as follows:

gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled

The keyword here is “gradually”. So, unlike in the “Big Bang Rewrite”, we rewrite our system piece by piece. Functionalities implemented in the new system immediately replace the old ones. This approach has a great advantage because we deliver value and get feedback right away. It is definitely a more “agile” approach, with all its advantages.

Strangler Pattern

Motivation

Before proceeding to the implementation of this approach, I would like to present the motivations for rewriting a system to a new technology. Because, as with everything – “there is no such thing as a free lunch” – rewriting the system, even in an incremental approach, will still cost money, time and resources.

So below is a list of Architectural Drivers that may affect this decision:

  • Development will be easier and more productive
  • We can use new techniques, methods, patterns and approaches
  • The system will be more maintainable
  • The new technology gives us more tools, libraries and frameworks
  • The new technology is more efficient and we want to increase the performance of our solution
  • The new technology has more support from the vendor and the community
  • Support for the old technology has ended, no patches or upgrades are available
  • The old technology can have security vulnerabilities
  • Developers will be happy
  • It will be easier to hire new developers
  • We want to be innovative and strengthen Employer branding
  • We must meet certain standards and the old technology doesn’t make it possible

As you can see, the list is long and you could probably add more items. So if you think it’s worth it, here is one way to do it in the Microsoft ecosystem – rewriting a .NET Framework application to .NET Core using the “Strangler Pattern”.

Design

The main element of this pattern is the Proxy / Facade, which redirects requests to either the legacy system or the new one. The most common example is the use of a load balancer that knows where to direct the request.

If we have to rewrite an API, e.g. a Web API, the matter seems simple. We rewrite each Web API endpoint and implement the appropriate routing. There is nothing more to describe here.

Rewriting using a load balancer. FE and BE are separate.

However, in .NET many systems are written with Backend and Frontend combined. I am thinking of applications like WebForms, ASP.NET MVC with server-side rendering, WPF, Silverlight, etc. In this case, placing the facade in front of the entire application would require the new application to generate the Frontend as well.

Frontend and Backend combined

Of course, this can be done, but I propose a different solution – to rewrite only the Backend.

If we do not want to put a proxy in front of our old application, we can use the old application itself as the proxy. In this way, all requests still go through the old application, while some of them are processed by the new system.

Let’s see what the architecture of such a solution looks like:

Strangling .NET Framework App to .NET Core

The request process is as follows:

1. The client sends a request.

2. The old Frontend receives the request and forwards it either to the old Backend (the same application) or to the new Backend (the new .NET Core application).

3. Both Backends communicate with the same database.

Communication with the new Backend takes place only through the Gateway (see the Gateway pattern). This is very important because it highlights the synchronous API call (see Remote Procedure Call). Under no circumstances should we hide it! On the contrary – by using the Gateway we show clearly what is happening in our code.

The main goal is for our proxy to be as thin as possible. It should know as little as possible and have the least responsibility. To achieve this, when communicating with the new system we can use the CQRS architectural style, taking advantage of the fact that queries and commands can be serialized. In this way, our proxy is only responsible for invoking the appropriate query or command – nothing more. Let’s see how it can be implemented.

Implementation

Backend API

Let’s start by defining the API of the new system – the endpoints responsible for receiving queries and commands. Thanks to the MediatR and JSON.NET libraries this is very easy:
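
The original listing is not reproduced here, so below is a minimal sketch of what such endpoints might look like. It assumes MediatR’s Send(object) overload and Newtonsoft.Json; the controller name, routes and the MessageDto type (defined below) are illustrative only.

```csharp
using System;
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

[Route("api")]
[ApiController]
public class MessagesController : ControllerBase
{
    private readonly IMediator _mediator;

    public MessagesController(IMediator mediator)
    {
        _mediator = mediator;
    }

    // Receives a serialized command, recreates the strongly typed object and dispatches it.
    [HttpPost("commands")]
    public async Task<IActionResult> ExecuteCommand([FromBody] MessageDto message)
    {
        // The type name must be resolvable here, e.g. an assembly-qualified name
        // of a type from the shared contract assembly.
        var type = Type.GetType(message.Type);
        var command = JsonConvert.DeserializeObject(message.Data, type);

        await _mediator.Send(command);

        return Ok();
    }

    // Receives a serialized query, dispatches it and returns the result.
    [HttpPost("queries")]
    public async Task<IActionResult> ExecuteQuery([FromBody] MessageDto message)
    {
        var type = Type.GetType(message.Type);
        var query = JsonConvert.DeserializeObject(message.Data, type);

        var result = await _mediator.Send(query);

        return Ok(result);
    }
}
```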

We define a simple Data Transfer Object for these endpoints. It holds the type of the object and its data in serialized form (JSON):
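
A minimal sketch of such a DTO (the original class and property names may differ):

```csharp
// Illustrative DTO for transporting serialized queries and commands.
public class MessageDto
{
    // Type name of the query/command, resolvable by the receiving application
    // (e.g. an assembly-qualified name of a type from the shared contract assembly).
    public string Type { get; set; }

    // The query/command itself, serialized to JSON.
    public string Data { get; set; }
}
```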

At the moment, our API contract is just a list of queries and commands. For our old .NET Framework application to benefit from this contract with strong typing, we must put all of these objects into a separate assembly. We want both the .NET Framework and .NET Core applications to be able to reference it, so it must target .NET Standard.

.NET Standard Contract assembly sharing

The Gateway

As I wrote earlier, communication and integration with the new Backend must go through the Gateway. The Gateway’s responsibilities are:

1. Serialization of the request object
2. Providing additional metadata for the request, such as the user context. This is important because the new Backend should be stateless.
3. Sending the request to the appropriate address
4. Deserialization of the response / error handling

However, for the client of our Gateway we want to hide all these responsibilities behind an interface:
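
A sketch of what such an interface could look like; the name INewBackendGateway and the method signatures are assumptions:

```csharp
using System.Threading.Tasks;

// Illustrative Gateway abstraction used by the old application.
public interface INewBackendGateway
{
    // Sends a command to the new Backend.
    Task ExecuteCommandAsync(object command);

    // Sends a query to the new Backend and deserializes the result.
    Task<TResult> ExecuteQueryAsync<TResult>(object query);
}
```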

The implementation of the Gateway looks like this:
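
Below is an illustrative implementation covering the four responsibilities listed above. It assumes HttpClient and Newtonsoft.Json; the IUserContext abstraction, header name and routes are hypothetical, not the original article code.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical abstraction over the current user's identity (used for request metadata).
public interface IUserContext
{
    Guid UserId { get; }
}

public class NewBackendGateway : INewBackendGateway
{
    private readonly HttpClient _httpClient;
    private readonly IUserContext _userContext;

    public NewBackendGateway(HttpClient httpClient, IUserContext userContext)
    {
        _httpClient = httpClient;
        _userContext = userContext;
    }

    public Task ExecuteCommandAsync(object command)
    {
        return SendAsync("api/commands", command);
    }

    public async Task<TResult> ExecuteQueryAsync<TResult>(object query)
    {
        var response = await SendAsync("api/queries", query);

        // 4. Deserialization of the response.
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<TResult>(json);
    }

    private async Task<HttpResponseMessage> SendAsync(string route, object message)
    {
        // 1. Serialize the strongly typed command/query together with its type name.
        var dto = new MessageDto
        {
            Type = message.GetType().AssemblyQualifiedName,
            Data = JsonConvert.SerializeObject(message)
        };

        var content = new StringContent(
            JsonConvert.SerializeObject(dto), Encoding.UTF8, "application/json");

        var request = new HttpRequestMessage(HttpMethod.Post, route) { Content = content };

        // 2. Provide additional metadata - the new Backend is stateless, so pass the user context.
        request.Headers.Add("X-UserId", _userContext.UserId.ToString());

        // 3. Send the request to the appropriate address of the new Backend.
        var response = await _httpClient.SendAsync(request);

        // 4. Error handling - throw for non-success status codes.
        response.EnsureSuccessStatusCode();

        return response;
    }
}
```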

Finally, we come to the usage of our Gateway. Most often it will be a Controller (in GRASP terms):

Represents the overall “system”, “root object”, device that the software is running within, or a major subsystem (these are all variations of a facade controller)

In other words, it will be the place in the old application where processing of the request starts. In ASP.NET MVC applications this will be the MVC Controller, in WebForms applications – the code-behind class, and in WPF applications – the ViewModel (from the MVVM pattern).

For the purposes of this article, I have prepared a console application to keep the example as simple as possible:
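
A sketch of such a console application; the AddProductCommand, GetProductsQuery and ProductDto contract types and the Backend address are assumptions made for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal contract types for the example - in a real solution these would live
// in the shared .NET Standard contract assembly.
public class AddProductCommand { public string Name { get; set; } }
public class GetProductsQuery { }
public class ProductDto { public string Name { get; set; } }

internal class FakeUserContext : IUserContext
{
    public Guid UserId { get; } = Guid.NewGuid();
}

public static class Program
{
    public static async Task Main()
    {
        var httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:5000/") };
        INewBackendGateway gateway = new NewBackendGateway(httpClient, new FakeUserContext());

        // Add 3 products by executing 3 commands against the new Backend.
        await gateway.ExecuteCommandAsync(new AddProductCommand { Name = "Product 1" });
        await gateway.ExecuteCommandAsync(new AddProductCommand { Name = "Product 2" });
        await gateway.ExecuteCommandAsync(new AddProductCommand { Name = "Product 3" });

        // Get the list of added products by executing a query.
        var products = await gateway.ExecuteQueryAsync<List<ProductDto>>(new GetProductsQuery());

        foreach (var product in products)
        {
            Console.WriteLine(product.Name);
        }
    }
}
```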

This sample application adds 3 products by executing 3 commands and finally gets the list of added products by executing a query. Of course, in a real application the Gateway interface is injected in accordance with the Dependency Inversion Principle.

Example repository

The entire implementation can be found on my specially prepared GitHub repository: https://github.com/kgrzybek/old-dotnet-to-core-sample

Summary

In this article I discussed the following topics:

  • Approaches and motivations for rewriting the system
  • Strangler Pattern
  • Designing the system for rewriting
  • Implementation of incremental migration from .NET Framework applications to .NET Core

The decision to rewrite the system is often not easy because it is a long-term investment. At the very beginning, the development of a new system rarely brings any business value – only costs.

That is why it is always important to clearly explain to all stakeholders what benefits rewriting the system will bring and what risks may result from not doing it. This way it will be much easier to convince anyone to rewrite the system, and we will not look like people who only care about playing with new toys. Good luck!

Related Posts

1. Simple CQRS implementation with raw SQL and DDD
2. GRASP – General Responsibility Assignment Software Patterns Explained
3. Processing commands with Hangfire and MediatR

Domain Model Validation

Introduction

In the previous post I described how a request’s input data can be validated on the Application Services Layer. I showed the usage of the FluentValidation library in combination with the Pipeline Pattern and the Problem Details standard. In this post I would like to focus on the second type of validation, which sits in the Domain Layer – Domain Model validation.

What is Domain Model validation

We can divide the validation of the Domain Model into two types based on scope – Aggregates scope and Bounded Context scope.

Aggregates scope

Let’s recall what an Aggregate is by quoting a fragment of Vaughn Vernon’s Domain-Driven Design Distilled book:

Each Aggregate forms a transactional consistency boundary. This means that within a single Aggregate, all composed parts must be consistent, according to business rules, when the controlling transaction is committed to the database.

The most important part of this quote in the context of validation is the transactional consistency boundary. It means that under no circumstances can we persist to the database an Aggregate that is in an invalid state or breaks business rules. These rules are often called “invariants” and are defined by Vaughn Vernon as follows:

… business invariants — the rules to which the software must always adhere — are guaranteed to be consistent following each business operation.

So in the context of the Aggregates scope, we need to protect these invariants by executing validation during the processing of our use case (business operation).

Bounded Context scope

Unfortunately, validation of Aggregate invariants is not enough. Sometimes a business rule may apply to more than one Aggregate (they can even be Aggregates of different types).

For example, assuming that we have a Customer Entity as an Aggregate Root, the business rule may be “Customer email address must be unique”. To check this rule we need to check the emails of all Customers, which are separate Aggregate Roots – it is outside the scope of a single Customer Aggregate. Of course, we could create a new entity called CustomerCatalog as an Aggregate Root and aggregate all Customers under it, but this is not a good idea for many reasons. A better solution is described later in this article.

Let’s see what options we have to solve both validation problems.

Three solutions

Return Validation Object

This solution is based on the Notification Pattern. We define a special class called Notification/ValidationResult/Result/etc. which “collects together information about errors and other information in the domain layer and communicates it”.

What does it mean for us? It means that every Entity method which mutates the state of the Aggregate should return this validation object. The keyword here is Entity, because we can have (and we likely will have) nested invocations of methods inside the Aggregate. Recall the diagram from the post about Domain Model encapsulation:

Domain model encapsulation

The program flow will look like:

Validation Object Flow

and the code structure (simplified):
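
A simplified, illustrative sketch of that structure; the Order/OrderLine example and the rule messages are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simple validation object collecting error messages.
public class ValidationResult
{
    public List<string> Errors { get; } = new List<string>();

    public bool IsValid => !Errors.Any();

    public static ValidationResult Success() => new ValidationResult();

    public static ValidationResult Failure(string error)
    {
        var result = new ValidationResult();
        result.Errors.Add(error);
        return result;
    }
}

// Aggregate Root - every state-mutating method has to return the validation object.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public ValidationResult AddLine(string product, int quantity)
    {
        if (quantity <= 0)
        {
            return ValidationResult.Failure("Quantity must be greater than zero.");
        }

        var line = new OrderLine(product);

        // Nested Entity methods must return (and propagate) validation objects as well.
        var result = line.ChangeQuantity(quantity);
        if (!result.IsValid)
        {
            return result;
        }

        _lines.Add(line);
        return ValidationResult.Success();
    }
}

// Entity inside the Aggregate.
public class OrderLine
{
    public string Product { get; }
    public int Quantity { get; private set; }

    public OrderLine(string product)
    {
        Product = product;
    }

    public ValidationResult ChangeQuantity(int quantity)
    {
        if (quantity > 100)
        {
            return ValidationResult.Failure("Quantity cannot exceed 100.");
        }

        Quantity = quantity;
        return ValidationResult.Success();
    }
}
```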

However, if we don’t want to return a ValidationResult from every method which mutates the state, we can apply a different approach, which I described in the article about publishing Domain Events. In short, in this solution we add a ValidationResult property to every Entity (just like a Domain Events collection) and, after the Aggregate has been processed, we examine these properties and decide whether the whole Aggregate is valid.

Deferred validation

The second way to implement validation is to execute the checks after the whole Aggregate method has been processed. This approach is presented, for example, by Jeffrey Palermo in his article. The whole solution is pretty straightforward:

Deferred validation

Always Valid

The last solution is called “Always Valid” and it is simply about throwing exceptions inside Aggregate methods. It means that we finish processing the business operation at the first violation of an Aggregate invariant. In this way, we are assured that our Aggregate is always valid:

Always Valid program flow

Comparison of solutions

I have to admit that I don’t like the Validation Object and Deferred Validation approaches, and I recommend the Always Valid strategy. My reasoning is as follows.

The Validation Object approach pollutes our method declarations, adds accidental complexity to our Entities and goes against the Fail-Fast principle. Moreover, the Validation Object becomes part of our Domain Model, and it is certainly not part of the ubiquitous language. Deferred Validation, on the other hand, implies a non-encapsulated Aggregate, because the validator object must have access to the Aggregate’s internals to properly check the invariants.

However, both approaches have one advantage – they do not require throwing exceptions, which should be thrown only when something unexpected occurs. A broken business rule is not unexpected.

Nevertheless, I think this is one of the rare exceptions when we can break this rule. For me, throwing exceptions and having an always valid Aggregate is the best solution. “The ends justify the means”, I would say. I think of this solution as an implementation of the Publish-Subscribe Pattern: the Domain Model is the publisher of broken-invariant messages and the Application is the subscriber to these messages. The main assumption is that after publishing the message, the publisher stops processing, because this is how the exception mechanism works.

Always Valid Implementation

Exception throwing is built into the C# language, so we practically have everything we need. The only thing to do is to create a specific exception class; I called it BusinessRuleValidationException:
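
A minimal sketch of such an exception class (the original may carry more information, e.g. the broken rule itself):

```csharp
using System;

// Thrown whenever an Aggregate invariant (business rule) is violated.
public class BusinessRuleValidationException : Exception
{
    public BusinessRuleValidationException(string message)
        : base(message)
    {
    }
}
```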

Suppose we have a business rule stating that a Customer cannot place more than 2 orders on the same day. The implementation looks like this:
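
An illustrative sketch of the rule enforced inside the Aggregate; the Customer members shown here are assumptions, not the original article code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Aggregate Root enforcing the "max 2 orders per day" invariant.
public class Customer
{
    private readonly List<DateTime> _orderDates = new List<DateTime>();

    public void PlaceOrder()
    {
        var today = DateTime.UtcNow.Date;
        var ordersPlacedToday = _orderDates.Count(date => date.Date == today);

        if (ordersPlacedToday >= 2)
        {
            // Always Valid: processing stops at the first broken invariant.
            throw new BusinessRuleValidationException(
                "You cannot place more than 2 orders on the same day.");
        }

        _orderDates.Add(DateTime.UtcNow);
    }
}
```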

What should we do with the thrown exception? We can use the approach from REST API Data Validation and return an appropriate message to the client as a Problem Details object. All we have to do is add another Problem Details class and set up the mapping in Startup:
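
A sketch of such a class and mapping, assuming the Hellang.Middleware.ProblemDetails package used in the REST API Data Validation post; the title, type URI and status code are assumptions, and depending on the package version the mapping may be configured in UseProblemDetails instead:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;
using Hellang.Middleware.ProblemDetails;

// Problem Details representation of a broken business rule.
public class BusinessRuleValidationProblemDetails : Microsoft.AspNetCore.Mvc.ProblemDetails
{
    public BusinessRuleValidationProblemDetails(BusinessRuleValidationException exception)
    {
        Title = "Business rule broken";
        Detail = exception.Message;
        Status = StatusCodes.Status409Conflict;
        Type = "https://somedomain/business-rule-validation-error";
    }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddProblemDetails(options =>
        {
            // Map the domain exception to its Problem Details representation.
            options.Map<BusinessRuleValidationException>(
                ex => new BusinessRuleValidationProblemDetails(ex));
        });

        services.AddMvc();
    }
}
```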

The result returned to the client:

Problem details validation domain model

For simpler validation, like checking for nulls, empty lists etc., you can create a library of guards (see Guard Pattern) or use an external library. See, for example, GuardClauses created by Steve Smith.

BC scope validation implementation

What about validation which spans multiple Aggregates (the Bounded Context scope)? Let’s assume that we have a rule that there cannot be 2 Customers with the same email address. There are two approaches to solve this.

The first way is to get the required Aggregates in the Command Handler and then pass them to the Aggregate’s method/constructor as arguments:
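
An illustrative sketch of this approach; the RegisterCustomerCommand, ICustomerRepository and Customer.Register names are assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class RegisterCustomerCommand : IRequest<Unit>
{
    public string Email { get; set; }
}

public interface ICustomerRepository
{
    Task<List<Customer>> GetAllAsync();
    Task AddAsync(Customer customer);
}

public class RegisterCustomerCommandHandler : IRequestHandler<RegisterCustomerCommand, Unit>
{
    private readonly ICustomerRepository _customerRepository;

    public RegisterCustomerCommandHandler(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public async Task<Unit> Handle(RegisterCustomerCommand command, CancellationToken cancellationToken)
    {
        // All existing Customers have to be loaded to check the uniqueness rule...
        var existingCustomers = await _customerRepository.GetAllAsync();

        var customer = Customer.Register(command.Email, existingCustomers);

        await _customerRepository.AddAsync(customer);

        return Unit.Value;
    }
}

public class Customer
{
    public string Email { get; private set; }

    private Customer(string email)
    {
        Email = email;
    }

    public static Customer Register(string email, IEnumerable<Customer> existingCustomers)
    {
        if (existingCustomers.Any(c => c.Email == email))
        {
            throw new BusinessRuleValidationException("Customer with this email already exists.");
        }

        return new Customer(email);
    }
}
```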

However, this is not always a good solution, because as you can see we need to load all Customer Aggregates into memory. This could be a serious performance issue. If we cannot afford it, then we need the second approach – creating a Domain Service, which is defined as follows (source – DDD Reference):

When a significant process or transformation in the domain is not a natural responsibility of an entity or value object, add an operation to the model as a standalone interface declared as a service

So, for this case we need to create an ICustomerUniquenessChecker service interface:
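
A possible shape of this interface (the exact signature in the article may differ):

```csharp
// Domain Service checking a rule that spans all Customer Aggregates.
public interface ICustomerUniquenessChecker
{
    bool IsUnique(string customerEmail);
}
```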

This is the implementation of that interface:
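
A sketch of a possible implementation using Dapper and raw SQL (in the spirit of the Simple CQRS implementation with raw SQL and DDD post); the connection handling and table/column names are assumptions:

```csharp
using System.Data.SqlClient;
using Dapper;

public class CustomerUniquenessChecker : ICustomerUniquenessChecker
{
    private readonly string _connectionString;

    public CustomerUniquenessChecker(string connectionString)
    {
        _connectionString = connectionString;
    }

    public bool IsUnique(string customerEmail)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            const string sql = "SELECT COUNT(*) FROM Customers WHERE Email = @Email";

            var customersCount = connection.QuerySingle<int>(sql, new { Email = customerEmail });

            return customersCount == 0;
        }
    }
}
```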

Finally, we can use it inside our Customer Aggregate:
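
An illustrative usage inside the Aggregate, passing the Domain Service into the factory method (an alternative to the previous Customer sketch):

```csharp
public class Customer
{
    public string Email { get; private set; }

    private Customer(string email)
    {
        Email = email;
    }

    public static Customer Register(string email, ICustomerUniquenessChecker customerUniquenessChecker)
    {
        // The uniqueness rule cannot be bypassed - every caller must provide the service.
        if (!customerUniquenessChecker.IsUnique(email))
        {
            throw new BusinessRuleValidationException("Customer with this email already exists.");
        }

        return new Customer(email);
    }
}
```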

The question here is whether to pass the Domain Service as an argument to the Aggregate’s constructor/method or to execute the validation in the Command Handler itself. As you can see above, I am a fan of the former approach because I like to keep my command handlers very thin. Another argument for this option is that if I ever need to register a Customer from a different use case, I will not be able to bypass or forget about this uniqueness rule, because I will have to pass this service.

Summary

A lot was covered in this post in the context of Domain Model validation. Let’s summarize:
– We have two types of Domain Model validation – Aggregates scope and Bounded Context scope
– There are generally 3 methods of Domain Model validation – using a Validation Object, Deferred Validation or Always Valid (throwing exceptions)
– The Always Valid approach is preferred
– For Bounded Context scope validation there are 2 methods – passing all required data to the Aggregate’s method or constructor, or creating a Domain Service (generally for performance reasons)

Source code

If you would like to see a full, working example – check my GitHub repository.

Additional Resources

Validation in Domain-Driven Design (DDD) – Lev Gorodinski
Validation in a DDD world – Jimmy Bogard

Related posts

REST API Data Validation
Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events

REST API Data Validation

Introduction

This time I would like to describe how we can protect our REST API applications from requests containing invalid data (the data validation process). However, validating our requests is unfortunately not enough. In addition to validation, it is our responsibility to return relevant messages and statuses to our API clients. I want to deal with these two things in this post.

Data Validation

Definition of Data Validation

What is data validation really? The best definition I found is from UNECE Data Editing Group:

An activity aimed at verifying whether the value of a data item comes from the given (finite or infinite) set of acceptable values.

According to this definition, we should verify data items coming into our application from external sources and check whether their values are acceptable. How do we know that a value is acceptable? We need to define data validation rules for every type of data item which is processed in our system.

Data vs Business Rules validation

I would like to emphasize that data validation is a totally different concept from business rules validation. Data validation is focused on verifying an atomic data item. Business rules validation is a broader concept, closer to how the business works and behaves, so it is mainly focused on behavior. Of course, validating behavior depends on data too, but in a wider scope.

Examples of data validation:

– Product order quantity cannot be negative or zero
– Product order quantity should be a number
– Currency of order should be a value from currencies list

Examples of business rules validation:

– Product can be ordered only when Customer age is equal or greater than product minimal age.
– Customer can place only two orders in one day.

Returning relevant information

If during validation we detect that a rule has been broken, we must stop processing and return an appropriate message to the client. We should follow these rules:

– we should return the message to the client as fast as possible (Fail-fast principle)
– the reason for the validation error should be well explained and understandable to the client
– we should not expose technical details, for security reasons

Problem Details for HTTP APIs standard

The issue of returned error messages is so common that a special standard was created describing how to handle such situations. It is called “Problem Details for HTTP APIs” (RFC 7807) and its official description can be found here. This is the abstract of the standard:

This document defines a “problem detail” as a way to carry machine-readable details of errors in a HTTP response to avoid the need to define new error response formats for HTTP APIs.

The Problem Details standard introduces the Problem Details JSON object, which should be part of the response when a validation error occurs. It is a simple canonical model with 5 members:

– problem type
– title
– HTTP status code
– details of error
– instance (pointer to specific occurrence)

Of course we can (and sometimes should) extend this object by adding new properties, but the base should stay the same. Thanks to this, our API is easier to understand, learn and use. For more detailed information about the standard, I invite you to read its documentation, which is well written.

Data validation localization

For a standard application, we can put data validation logic in three places:

  • GUI – the entry point for user input. Data is validated on the client side, for example using JavaScript in web applications
  • Application logic/services layer – data is validated in a specific Application Service or Command Handler on the server side
  • Database – the exit point of request processing and the last moment to validate the data

Data validation localization

In this article I am omitting the GUI and Database components and focusing on the server side of the application. Let’s see how we can implement data validation on the Application Services layer.

Implementing Data Validation

Suppose we have a command AddCustomerOrderCommand:
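
A sketch of what the command might look like; the property names and the ProductDto type are assumptions based on the validation rules listed below:

```csharp
using System;
using System.Collections.Generic;
using MediatR;

public class AddCustomerOrderCommand : IRequest<Unit>
{
    public Guid CustomerId { get; set; }

    public List<ProductDto> Products { get; set; }
}

public class ProductDto
{
    public Guid ProductId { get; set; }

    public int Quantity { get; set; }

    public string Currency { get; set; }
}
```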

Suppose we want to validate 4 things:

1. CustomerId is not an empty GUID.
2. The Products list is not empty.
3. Each product quantity is greater than 0.
4. Each product currency is either USD or EUR.

Let me show 3 solutions to this problem – from the simplest to the most sophisticated.

1. Simple validation on Application Service

The first thing that comes to mind is simple validation in the Command Handler itself. In this solution we implement a private method which validates our command and throws an exception if a validation error occurs. Enclosing this kind of logic in a separate method is better from the Clean Code perspective (see Extract Method).
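
A sketch of such a handler with a hand-written private Validate method; the exception messages and the omitted persistence logic are illustrative:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class AddCustomerOrderCommandHandler : IRequestHandler<AddCustomerOrderCommand, Unit>
{
    public Task<Unit> Handle(AddCustomerOrderCommand command, CancellationToken cancellationToken)
    {
        Validate(command);

        // ... orchestrate the use case (load customer, add order, persist) ...

        return Task.FromResult(Unit.Value);
    }

    private static void Validate(AddCustomerOrderCommand command)
    {
        if (command.CustomerId == Guid.Empty)
        {
            throw new Exception("CustomerId cannot be empty.");
        }

        if (command.Products == null || !command.Products.Any())
        {
            throw new Exception("Products list cannot be empty.");
        }

        if (command.Products.Any(p => p.Quantity <= 0))
        {
            throw new Exception("Product quantity must be greater than 0.");
        }

        var allowedCurrencies = new[] { "USD", "EUR" };
        if (command.Products.Any(p => !allowedCurrencies.Contains(p.Currency)))
        {
            throw new Exception("Product currency must be USD or EUR.");
        }
    }
}
```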

The result of invalid command execution:

This is not such a bad approach, but it has two disadvantages. Firstly, it requires us to write a lot of tedious boilerplate code – comparing against nulls, defaults, values from a list and so on. Secondly, we lose some separation of concerns because we mix validation logic with the orchestration of our use case flow. Let’s take care of the boilerplate code first.

2. Validation using FluentValidation library

We don’t want to reinvent the wheel, so the best solution is to use a library. Fortunately, there is a great validation library in the .NET world – FluentValidation. It has a nice API and a lot of features. This is how we can use it to validate our command:
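
An illustrative validator for the command; the rule messages are examples:

```csharp
using FluentValidation;

public class AddCustomerOrderCommandValidator : AbstractValidator<AddCustomerOrderCommand>
{
    public AddCustomerOrderCommandValidator()
    {
        RuleFor(x => x.CustomerId)
            .NotEmpty().WithMessage("CustomerId cannot be empty.");

        RuleFor(x => x.Products)
            .NotEmpty().WithMessage("Products list cannot be empty.");

        // Validate every product on the list with a dedicated validator.
        RuleForEach(x => x.Products).SetValidator(new ProductDtoValidator());
    }
}

public class ProductDtoValidator : AbstractValidator<ProductDto>
{
    public ProductDtoValidator()
    {
        RuleFor(x => x.Quantity)
            .GreaterThan(0).WithMessage("Product quantity must be greater than 0.");

        RuleFor(x => x.Currency)
            .Must(c => c == "USD" || c == "EUR")
            .WithMessage("Product currency must be USD or EUR.");
    }
}
```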

Now, the Validate method looks like this:
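
A sketch of the reworked private method inside the Command Handler, now delegating to the FluentValidation validator:

```csharp
// Inside AddCustomerOrderCommandHandler - the hand-written checks are replaced
// by running the FluentValidation validator (requires System and System.Linq).
private static void Validate(AddCustomerOrderCommand command)
{
    var validator = new AddCustomerOrderCommandValidator();

    var result = validator.Validate(command);

    if (!result.IsValid)
    {
        var errors = string.Join(Environment.NewLine, result.Errors.Select(e => e.ErrorMessage));

        throw new Exception(errors);
    }
}
```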

The result of validation is the same as before, but now our validation logic is much cleaner. The last thing to do is to decouple this logic from the Command Handler completely…

3. Validation using Pipeline Pattern

To decouple the validation logic and execute it before the Command Handler runs, we arrange our command handling process in a Pipeline (see also the NServiceBus Pipeline).

For the Pipeline implementation we can easily use MediatR Behaviors. The first thing to do is to implement the behavior:
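
A sketch of such a behavior; note that the Handle signature shown matches older MediatR versions (newer versions take the CancellationToken as the last parameter), and the plain Exception thrown here is replaced later in this post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using FluentValidation;
using MediatR;

public class CommandValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public CommandValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public Task<TResponse> Handle(
        TRequest request,
        CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        // Run all registered validators for this request type and collect the failures.
        var errors = _validators
            .Select(validator => validator.Validate(request))
            .SelectMany(result => result.Errors)
            .Where(failure => failure != null)
            .ToList();

        if (errors.Any())
        {
            // Fail fast - the Command Handler is never executed for an invalid command.
            throw new Exception(string.Join(", ", errors.Select(e => e.ErrorMessage)));
        }

        return next();
    }
}
```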

The next thing to do is to register the behavior in the IoC container (Autofac example):
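
An illustrative Autofac module registering the validators and the behavior; the lifetime scopes are a judgment call:

```csharp
using Autofac;
using FluentValidation;
using MediatR;

public class ValidationModule : Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register all FluentValidation validators from this assembly.
        builder.RegisterAssemblyTypes(typeof(ValidationModule).Assembly)
            .AsClosedTypesOf(typeof(IValidator<>))
            .InstancePerLifetimeScope();

        // Plug the validation behavior into the MediatR pipeline.
        builder.RegisterGeneric(typeof(CommandValidationBehavior<,>))
            .As(typeof(IPipelineBehavior<,>))
            .InstancePerLifetimeScope();
    }
}
```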

This way we have achieved separation of concerns and implemented the Fail-fast principle in a nice and elegant way.

But this is not the end. Finally, we need to do something with the messages returned to clients.

Implementing Problem Details standard

Just as with the validation logic, we will use a dedicated library – ProblemDetails. The principle of the mechanism is simple. Firstly, we need to create a custom exception:
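
A minimal sketch of such an exception; the Details member is an assumption:

```csharp
using System;

// Thrown by the validation behavior when a command fails data validation.
public class InvalidCommandException : Exception
{
    public InvalidCommandException(string message, string details)
        : base(message)
    {
        Details = details;
    }

    public string Details { get; }
}
```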

Secondly, we have to create our own Problem Details class:
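
An illustrative class deriving from ASP.NET Core’s ProblemDetails; the title, type URI and status code are assumptions:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class InvalidCommandProblemDetails : ProblemDetails
{
    public InvalidCommandProblemDetails(InvalidCommandException exception)
    {
        Title = exception.Message;
        Detail = exception.Details;
        Status = StatusCodes.Status400BadRequest;
        Type = "https://somedomain/validation-error";
    }
}
```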

The last thing to do is to add the Problem Details Middleware with the definition of the mapping between the InvalidCommandException and InvalidCommandProblemDetails classes in Startup:
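
A sketch of the Startup wiring, assuming the Hellang.Middleware.ProblemDetails package; depending on the package version, the mapping may be configured in UseProblemDetails instead of AddProblemDetails:

```csharp
using Hellang.Middleware.ProblemDetails;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddProblemDetails(options =>
        {
            // Map the custom exception to its Problem Details representation.
            options.Map<InvalidCommandException>(ex => new InvalidCommandProblemDetails(ex));
        });

        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // The middleware catches the exception and writes the Problem Details response.
        app.UseProblemDetails();

        app.UseMvc();
    }
}
```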

After a change in CommandValidationBehavior (throwing InvalidCommandException instead of Exception), the returned content is compatible with the standard:

Problem details

Summary

In this post I described:
– what data validation is and where it is located
– what Problem Details for HTTP APIs is and how it can be implemented
– 3 methods to implement data validation in the Application Services layer: without any patterns and tools, with the FluentValidation library, and lastly – using the Pipeline Pattern and MediatR Behaviors.

Source code

If you would like to see a full, working example – check my GitHub repository.

Related posts

Domain Model Encapsulation and PI with Entity Framework 2.2
Simple CQRS implementation with raw SQL and DDD
How to publish and handle Domain Events
10 common broken rules of clean code