Domain Driven Design has proven to be a powerful technique for designing complex software systems for many years. The publication of Eric Evans's book Domain-Driven Design: Tackling Complexity in the Heart of Software marked the start of a period in which this design technique has been named Domain Driven Design explicitly, but in fact it has been a "secret" of master modellers for many years. This article, divided into several chapters, will introduce a strategy for using DDD for legacy migration.
Domain Driven Design, or DDD, factors the design of software systems into two main aspects or domains:
- Business domain
- Technology domain
Based on the simple and well-known architectural strategy of loose coupling, it divides a complex solution into parts. In this case the two main parts are
- first, the solution for the business or problem domain itself, and
- second, the way we enable users or other agents to access that solution domain.
DDD has shown that it is worthwhile to separate these two main concerns quite strictly. In fact one can say that in a properly DDD-designed system the two aspects do not overlap at all.
This has many advantages, many of which have been described in other articles on this site. We will summarise a few of them.
- A dedicated team can focus on modelling and building the business part of the solution. This team can consist of persons with an intimate understanding of the business, without the need for them to be fluent in software technology or ICT architecture.
- Another team can focus on linking the domain part with the necessary technology, without any need for them to have intimate knowledge of the business domain itself. They will provide the domain component with the necessary user interfaces, network capabilities, persistence facilities et cetera to be usable in the enterprise.
This is all very well, but to many people adopting DDD it seems suitable only for greenfield software development. This article will attempt to show that DDD is uniquely suited to legacy situations as well.
The legacy problem
Software solutions are legacy as soon as they are deployed. In fact, already before that. Not only do the business and its needs change constantly, the very act of deploying a software solution introduces an agent of change in itself. Software systems vary in their agility, their ability to cope with this change, but most if not all eventually fail the ultimate legacy litmus test: does the system help or hinder the business?
There comes a time when the negative impact of a legacy solution is larger than the cost of migrating the system. Migration in this context means the modification of an existing system, or its more or less complete replacement by another, new, system.
The decision to migrate is not an easy one to make. It may involve considerable investments, it will always involve capital destruction, and it may even threaten business continuity when done badly. Yet legacy migration is a constant in modern enterprises. We need to deal with it as best we can.
For some reason this very important problem has never been placed high on the agenda of software developers and computer scientists. There are many books on modelling, user interface design, architecture. But woefully few on legacy migration.
The DDD model of legacy migration
We will start by summarising the steps to be taken for a legacy migration done the DDD way, and then explain this deceptively simple strategy.
- Build a domain model of the current solution, or of a part of the current solution that has been selected as a candidate for migration (see illustration below for an example of what such a model looks like).
- Within this model, demarcate a suitable area for migration.
- Mark all elements that correspond to dependencies or interactions of the demarcated area in the model with those outside this area.
- Identify all elements in the current legacy solution corresponding to the elements marked in the previous step.
- Implement the demarcated area in a new solution.
- Modify the existing system by going through all elements identified in step 4: change each element so that it tests whether it is called or activated by the existing system (in which case it behaves as it did before) or by the new system (in which case it does nothing).
- Deploy the new solution and activate the modifications of the previous step.
- This is not really a step but a maintenance rule: when, after having migrated a part of your system, you need to do maintenance on a part of the legacy that has been migrated (but has not yet been phased out), you need to follow these rules:
- When the change impacts business functionality, never touch the legacy, but implement the change in the new component.
- When the change concerns a technical problem or bug you can safely modify the system.
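The caller-sensitive "switches" of steps 6 and 7 can be sketched in a few lines of code. This is an illustrative sketch only (all names are invented, and how the caller's origin is detected will differ per system); it shows the essential idea of a reversible switch wrapped around an identified boundary element.

```python
# Illustrative sketch of steps 6 and 7: each identified legacy element is
# wrapped so that it keeps its old behaviour for legacy callers, but becomes
# a no-op when the call originates from the new component. The switch can
# be flipped back at any time, giving the fall-back scenario for free.

class MigrationSwitch:
    """Routes a boundary element's behaviour based on the call's origin."""

    def __init__(self, legacy_behaviour):
        self._legacy_behaviour = legacy_behaviour
        self._migrated = False  # deactivating restores the old path entirely

    def activate(self):
        self._migrated = True

    def deactivate(self):
        self._migrated = False

    def __call__(self, *args, caller="legacy", **kwargs):
        # Legacy callers (or a deactivated switch) get the original behaviour.
        if caller == "legacy" or not self._migrated:
            return self._legacy_behaviour(*args, **kwargs)
        # Calls from the new system are swallowed: the new component
        # already handles this responsibility itself.
        return None


# Usage: wrap an existing boundary function without touching its body.
def post_invoice(amount):
    return f"legacy posted {amount}"

post_invoice_switch = MigrationSwitch(post_invoice)
post_invoice_switch.activate()

print(post_invoice_switch(100, caller="legacy"))  # legacy posted 100
print(post_invoice_switch(100, caller="new"))     # None
```

Note that the legacy behaviour itself is never edited; only the thin switch around it is, which is what makes the rollback in the deployment step trivial.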
The following sketch illustrates this approach:
The red line is the demarcation of the wrapped component. The purple lines show the links that cross the boundary.
Below is an example from a real project, where the dotted areas depict the "Barbapapas" that were identified as more-or-less independent subdomains (domain model in Dutch, alas). Since the way classes collaborate across domain boundaries is quite clear and specifiable from the model, we know exactly what to look for in our "big ball of mud". There may very well be hundreds of places.
This may seem like a simple bullet list or recipe, and that is exactly what it is. There is minimal complexity in this approach, which has proven extremely effective in several implementations we have done in the past. Of course your mileage may vary, and it is impossible to predict the effectiveness of an approach across the many legacy migration situations that may occur. But we think that in the majority of situations this approach will pay off, and even pay off handsomely.
Some of the steps may seem complex or like a lot of work. For example, identifying all elements in the current system (step 4) could be overwhelmingly complex. This is very probably not so. In our experience the software engineers that were given the task of identifying these elements in the current solution were able to do so in a very short amount of time, because the task has been made simpler by the clear and unambiguous description of where they need to look. This description is distilled from the domain model which is the starting point for the whole exercise. It is a semantic definition of what to look for.
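To make the "semantic definition of what to look for" concrete, a minimal sketch: the domain model yields the names of the classes that sit on the demarcation boundary, and step 4 then reduces to scanning the legacy code base for every place those names occur. All class and file names below are invented for illustration.

```python
# Hypothetical sketch of step 4: the boundary classes are read off the
# domain model; the legacy code base is then scanned for every place
# where one of those classes is referenced.

import re

# Names taken from the (hypothetical) domain model's demarcated area.
boundary_classes = ["Invoice", "PaymentOrder"]

# Stand-in for the legacy code base: file name -> source text.
legacy_source = {
    "billing.py": "def post(inv: Invoice): ...",
    "reports.py": "total = sum_orders()",
    "payments.py": "order = PaymentOrder(acct)",
}

pattern = re.compile("|".join(re.escape(name) for name in boundary_classes))
hits = sorted(f for f, text in legacy_source.items() if pattern.search(text))
print(hits)  # ['billing.py', 'payments.py']
```

A real project would of course use proper cross-reference tooling rather than a regular expression, but the point stands: the search terms come straight from the model, which is why the task turns out to be quick.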
Demarcation strategies and the Business Capability Model
The strategy to find the demarcation boundaries described above is simple, but effective. Since it is based on an intuitive understanding of the dependencies between the business objects or information elements (note: this takes into account both the state and the behaviour of these elements!) these boundaries are usable and effective.
However, many enterprises are currently employing a very effective modelling technique called Business Capability Modelling. Over the past few years this technique has matured dramatically, mostly based on the good work done within the Business Architecture Guild. This work has also been adopted by both The Open Group (and is in the process of being incorporated into TOGAF and ArchiMate, for example) and the OMG (to be incorporated into the UAF).
Below is a top-level Capability Model developed by the Business Architecture Guild in its reference models.
A particular ability or capacity that a business may possess or exchange to achieve a specific purpose or outcome.
Example: Investment Management
Ability to identify, develop, analyse, valuate, exchange, acquire, dispose of, and report on any type of monetary asset purchased with the idea that the asset will provide income in the future or will be sold at a higher price for a profit.
Business Capabilities are a very powerful segmentation strategy for various applications, one of which is partitioning your legacy migration effort. Since capability models can be decomposed to arbitrary depth, you can decompose them to exactly the size required, for example so that an agile team can migrate a specific capability within two weeks.
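The decomposition idea can be sketched as a simple tree: capabilities are refined into sub-capabilities until the leaves are small enough to serve as individual migration units. The sub-capability names below are invented for illustration; only "Investment Management" comes from the Guild's example above.

```python
# Illustrative sketch: a capability model as a tree that is decomposed
# until each leaf is small enough to be a single migration unit.

from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Yield the finest-grained capabilities: candidate migration units."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

model = Capability("Investment Management", [
    Capability("Asset Valuation"),
    Capability("Asset Acquisition", [
        Capability("Order Placement"),
        Capability("Settlement"),
    ]),
])

print([c.name for c in model.leaves()])
# ['Asset Valuation', 'Order Placement', 'Settlement']
```

Each leaf then gets its own demarcated area in the domain model, and the migration recipe above is applied to it in isolation.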
What are the advantages of this approach?
The first advantage is of course the simplicity of the approach. Although the steps outlined above certainly need some explanation, in essence they are really simple and offer a clear-cut way to deal with a very complex problem.
- You are free to decide which speed you will take in migrating. Start slow, and go faster as the effectiveness of the approach proves itself, or synchronise your speed with changing budgets. Or implement the migration in one big bang.
- The risk of migrating is minimised by the deployment strategy which always offers a fall-back scenario to the existing solution, because the existing solution is effectively not changed at all. The modifications to the existing solution are essentially nothing but switches that you can switch back at your convenience.
- You can take advantage of a much improved architecture without the need to throw away existing investments.
- Because of the DDD approach you can combine your legacy migration effort with an outsourcing strategy. The new component can be built by a third party based on the domain functionality demarcated in the domain model, and the dependencies of this component on the legacy are well defined.
- Demarcations in the domain model are exquisitely suited as candidates for services, especially microservices (MSA).
Building the domain model itself may be an expensive and hard-to-control process, especially since that effort requires considerable skill to do properly, not to mention solid business knowledge. However, once the model is in place, maintaining it takes minimal effort. And we also have information on how to do that effectively: CRC Cards!
What about BPM?
It is crucial to understand that the basis for the slicing strategy for the existing legacy (the "big ball of mud") is a special kind of model. Since many organisations analyse their business with process models in one form or another, the question often arises whether those could serve as the basis for slicing decisions. For example, we could use the process boundaries of the top-level sub-processes.
The answer is that in most cases process models are not the ideal source for slicing decisions. There are actually several reasons for this:
- Most process models in existence are not very well designed. If you have a set of models, especially the top three levels if you use the Method and Style decomposition, and these models are Method and Style compliant (clean models), then you can proceed to use them, although we still think the DDD approach is more powerful.
- Process models do not properly isolate responsibility areas within the enterprise. This is not because the models themselves are bad, but because isolation never was the main reason for process modelling. Consequently these models cross boundaries all over the place. Well-designed process models that mitigate this problem are, by the way, designed in parallel with a solid domain model using the DDD approach. This uses a concept called Bounded Context, and the guideline is to limit one (sub-)process to within one defined Bounded Context.
Even if you go the DDD path, process models remain essential: not as a basis for designing your solution, but as validation tools for the solution you eventually come up with.
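The Bounded Context guideline mentioned above can be made concrete with a tiny sketch. All names here are invented: the point is that each context owns its own model of a shared business term, and a sub-process only ever touches the model of its own context.

```python
# Illustrative sketch: two Bounded Contexts each define their own model
# of "Customer", and a sub-process stays entirely within one context.

class SalesContext:
    """Bounded Context where 'Customer' means a buyer with order history."""
    class Customer:
        def __init__(self, name):
            self.name = name
            self.orders = []

class SupportContext:
    """Bounded Context where 'Customer' means a holder of support tickets."""
    class Customer:
        def __init__(self, name):
            self.name = name
            self.tickets = []

def take_order(customer, item):
    # Sub-process confined to the Sales context: it only manipulates the
    # Sales model of Customer and never reaches into the Support context.
    customer.orders.append(item)
    return customer.orders

buyer = SalesContext.Customer("Acme")
print(take_order(buyer, "gadget"))  # ['gadget']
```

A sub-process that needed both notions of "Customer" would be a signal that it crosses a context boundary and should be split, which is exactly what the guideline is meant to prevent.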