Top Domain Model

It looks like a new series is going to be published on this blog!

I received some feedback related to the recent series on making your event sourcing functional. Posts in this new series will be loosely coupled and more or less self-contained. The topic connecting them is modelling your domain, aggregates and processes towards models that are useful, not necessarily "correct". We'll go through different domains and different requirements, and we'll try to focus not on the resulting model (it's just an exercise) but rather on the modelling approaches and practices.

Put on your modeller’s gloves and safety glasses. It’s about time to crunch a few domains!

 

Event stores and event sourcing: some not so practical disadvantages and problems

TL;DR

This post is a kind of answer to the article mentioned in a tweet by Greg Young. The author's blog has no comment section, and this answer contains too much information to send as an email or a DM, so I'm posting it here instead.

Commits

Typically, an event store models commits rather than the underlying event data.

I don't know what a typical event store is. I do know, though, that:

  1. EventStore, built by Greg Young's company, is a standalone event database that fully separates these two concerns, providing granularity at the event level
  2. StreamStone, which provides support for Azure Table Storage, works at the event level as well
  3. Marten, a PostgreSQL-based document & event database, also works at the level of single events

For my statistical sample, the quoted statement does not hold true.

Scaling with snapshots

One problem with event sourcing is handling entities with long and complex lifespans.

and later

Event store implementations typically address this by creating snapshots that summarize state up to a particular point in time.

and later

The question here is when and how should snapshots be created? This is not straightforward as it typically requires an asynchronous process to create snapshots in advance of any expected query load. In the real world this can be difficult to predict.

First and foremost, if you have aggregates with long and complex lifespans, that's your responsibility, because you chose a model with aggregates like that. Remember that there are no right or wrong models, only useful or crappy ones.

Second, let me provide an algorithm for snapshotting: if you retrieved 1000 events to build up an aggregate, snapshot it (serialize it, put it into an in-memory cache, possibly store it in a database). Easy and simple; I see no need for fancy algorithms.
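Here is a minimal sketch of that rule, assuming a simple repository on top of an event store; the IEventStore and ISnapshotCache abstractions and the PaymentState type are illustrative, not the API of any particular product.

```csharp
using System;
using System.Collections.Generic;

// A sketch of the "snapshot after ~1000 events" rule described above.
// The store/cache abstractions and PaymentState are illustrative assumptions.
public interface IEventStore
{
    IEnumerable<object> ReadStream(Guid id, long fromVersion);
}

public interface ISnapshotCache
{
    (PaymentState State, long Version)? TryGet(Guid id);
    void Put(Guid id, PaymentState state, long version);
}

public class PaymentState
{
    public void Apply(object @event) { /* update the state with the event */ }
}

public class SnapshottingRepository
{
    const int SnapshotThreshold = 1000;

    readonly IEventStore store;
    readonly ISnapshotCache cache;

    public SnapshottingRepository(IEventStore store, ISnapshotCache cache)
    {
        this.store = store;
        this.cache = cache;
    }

    public PaymentState Load(Guid id)
    {
        // Start from the latest snapshot (if any) and replay only the newer events.
        var snapshot = cache.TryGet(id);
        var state = snapshot?.State ?? new PaymentState();
        var version = snapshot?.Version ?? 0;
        var replayed = 0;

        foreach (var @event in store.ReadStream(id, version))
        {
            state.Apply(@event);
            replayed++;
        }

        // The rule from the post: if rebuilding took ~1000 events,
        // snapshot the result so the next read is cheap.
        if (replayed >= SnapshotThreshold)
            cache.Put(id, state, version + replayed);

        return state;
    }
}
```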

Visibility of data

In a generic event store payloads tend to be stored as agnostic payloads in JSON or some other agnostic format. This can obscure data and make it difficult to diagnose data-related issues.

If you, as an architect or developer, know your domain and know that you need a strong schema, because you want to use it as a published interface, but you still persist data as JSON instead of using a schema-aware serialization such as protobuf (a binary, schema-aware serialization format from Google), it's not the event store's fault. Additionally,

  1. EventStore
  2. StreamStone

both handle binary payloads just fine (yes, with binary payloads you can't write JS projections in EventStore, but you can still subscribe).

Handling schema change

If you want to preserve the immutability of events, you will be forced to maintain processing logic that can handle every version of the event schema. Over time this can give rise to some extremely complicated programming logic.

It has been shown that instead of cluttering your model with different versions (which, admittedly, is sometimes easier to do), you can provide a mapping that is applied to the event stream before events are returned to the model. That way you handle versioning in one place and can move forward with schema changes (again, as long as the events are not your published interface). It doesn't cover every case, but this pattern can be used to reduce the clutter.
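A rough sketch of such a mapping (often called upcasting); the event versions and the EventUpcaster class are made up purely for illustration:

```csharp
using System.Collections.Generic;

// A sketch of a mapping applied to the stream before it reaches the model.
// The event shapes below are made up for illustration.
public class PaymentIssued_V1 { public decimal Amount; }
public class PaymentIssued_V2 { public decimal Amount; public string Currency; }

public static class EventUpcaster
{
    // All versioning knowledge lives here, in one place; the model sees only V2.
    public static IEnumerable<object> Upcast(IEnumerable<object> stream)
    {
        foreach (var @event in stream)
        {
            if (@event is PaymentIssued_V1 v1)
                yield return new PaymentIssued_V2 { Amount = v1.Amount, Currency = "USD" /* assumed default */ };
            else
                yield return @event;
        }
    }
}

// Usage: apply the mapping right after reading the stream,
// e.g. var events = EventUpcaster.Upcast(store.ReadStream(id, 0));
```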

Dealing with complex, real world domains

Given how quickly processing complexity can escalate once you are handling millions of streams, it’s easy to wonder whether any domain is really suitable for an event store.

EventStore and StreamStone are designed to handle exactly these millions of streams.

The problem of explanation fatigue

Event stores are an abstract idea that some people really struggle with. They come with a high level of “explanation tax” that has to be paid every time somebody new joins a project.

You could say the same about messaging and delivery guarantees, fast serializers like protobuf, or dependency injection. Is there a project where a newcomer joins and immediately knows what to do and how to do it? Nope.

Summary

Whether to use event sourcing or not is your decision; it's not a silver bullet. Nothing is. I wanted to clarify some of the misunderstandings I found in the article. Hopefully, this will help my readers choose their tooling (and opinions) wisely.

Event sourcing: making it functional (7)

TL;DR

This is the seventh chapter of making event sourcing functional. In the last post we introduced a base class for aggregates that captures events and applies them to the state. In this entry we question that choice and take the final step to make our code more functional. Let's go!

All entries in this series:

Question your choices

Let's again take a look at the method responsible for receiving the gateway response:

(code shown as an image in the original post)

This is a void method. It's not a pure function, though. It "returns" its result by passing the emitted events to the apply function. We could rewrite it in the following manner:

(code shown as an image in the original post)

Now you can probably see that there are two methods in there: one that emits the event, possibly taking the state into consideration, and a second one that applies the event. We can make the first one even more generic and have it return an IEnumerable of events.

(code shown as an image in the original post)
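Since the original snippets were shown as images, here is a rough sketch of that split; the GatewayResponse shape and the event classes are assumptions carried over from the earlier posts:

```csharp
using System.Collections.Generic;

// Illustrative shapes; the real code was shown as images in the original post.
public class GatewayResponse { public bool Success; public string Description; }
public class PaymentProcessedSuccessfully { }
public class PaymentProcessedWithFailure { public string Description; }
public class PaymentState { /* status, description, ... */ }

public class Payment
{
    public PaymentState State { get; } = new PaymentState();

    // The original void method: not pure, it "returns" its result
    // by pushing the emitted events through Apply.
    public void ReceiveGatewayResponse(GatewayResponse response)
    {
        foreach (var @event in ReceiveImpl(State, response))
            Apply(@event);
    }

    // The extracted part: given the current state and the action's parameter,
    // decide which events to emit. A pure function returning IEnumerable of events.
    public static IEnumerable<object> ReceiveImpl(PaymentState state, GatewayResponse response)
    {
        if (response.Success)
            yield return new PaymentProcessedSuccessfully();
        else
            yield return new PaymentProcessedWithFailure { Description = response.Description };
    }

    void Apply(object @event) { /* apply the event to State and record it */ }
}
```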

Final revelation

Now you can see that ReceiveImpl is the true implementation of the aggregate action, yet it does not require the aggregate class at all! It gets the state and the action parameter and returns events! ReceiveGatewayResponse is now just infrastructure code that applies events, and as such it is not needed at all! We no longer have the Payment aggregate! All we have is a set of functions that act on the state, accept some parameters and return events. We can even make it an extension method to make it easier to call.

(code shown as an image in the original post)
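The extension-method form could look roughly like this, reusing the illustrative types from the sketch above (the names are assumptions):

```csharp
using System.Collections.Generic;

public static class PaymentOperations
{
    // The aggregate "action" is now just a function: state and parameters in, events out.
    public static IEnumerable<object> ReceiveGatewayResponse(this PaymentState state, GatewayResponse response)
    {
        if (response.Success)
            yield return new PaymentProcessedSuccessfully();
        else
            yield return new PaymentProcessedWithFailure { Description = response.Description };
    }
}

// Usage: var events = state.ReceiveGatewayResponse(response);
```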

This pure function is very easy to test. You can test it using Given-When-Then with events, but you can test it with regular unit tests as well.
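For example, a plain unit test over the pure function might look like this; xUnit is used here only as an example, and the types come from the sketches above:

```csharp
using System.Linq;
using Xunit;

public class PaymentTests
{
    [Fact]
    public void Failed_gateway_response_emits_a_failure_event()
    {
        // Given a state and a failed gateway response
        var state = new PaymentState();
        var response = new GatewayResponse { Success = false, Description = "card declined" };

        // When the pure function is called
        var events = state.ReceiveGatewayResponse(response).ToArray();

        // Then a single failure event is emitted
        var failure = Assert.IsType<PaymentProcessedWithFailure>(Assert.Single(events));
        Assert.Equal("card declined", failure.Description);
    }
}
```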

Now you can see that we were able to split the Payment aggregate into a set of functions that accept a state and other needed parameters and return an enumerable of events. Is there anything easier to test and to work with? Sure, you need to pay some tax by introducing storing and applying events on the infrastructure side, but it's still worth it, as we turn aggregates from classes into sets of simple functions.

Summary

I hope you enjoyed this series and that I was able to encourage you to look at aggregates and event sourcing from a slightly more functional point of view. To all the commenters: thank you for providing valuable feedback. See you soon!

Event sourcing: making it functional (6)

TL;DR

In the last entry we changed the Payment aggregate to modify its state by raising events. The events weren't captured, though. In this post, we'll change the aggregate to enable recording of these changes.

All entries in this series:

Capturing events

The easiest way to capture events is to make them pass through one additional method. We could provide a single Apply method in the aggregate itself that, besides applying the event, would also record it. To make it usable in all aggregates, we can put this method in a base class. Let's apply it to the Payment first:

(code shown as an image in the original post)

Now we need the aggregate base class that captures the applied events.

The base aggregate

We use the base class to track the changes and to provide a single point of entry for applying events. dynamic is used to implement the dispatch quickly. If you wanted to optimize, you could remove it by writing a custom dispatcher that calls a method on the derived aggregate directly.

(code shown as an image in the original post)
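Since the original code was shown as an image, here is a rough sketch of such a base class and its use in Payment; the name When for the typed handlers and the event shapes are assumptions:

```csharp
using System.Collections.Generic;

// A sketch of the base class: it records every applied event and uses dynamic
// to dispatch to the matching typed handler on the derived aggregate.
public abstract class AggregateRoot
{
    readonly List<object> changes = new List<object>();

    // The single point of entry: capture the event, then apply it to the aggregate.
    protected void Apply(object @event)
    {
        changes.Add(@event);
        // dynamic picks the most specific When overload on the derived type;
        // a hand-written dispatcher could replace this if performance matters.
        ((dynamic)this).When((dynamic)@event);
    }

    // Called by the handler once the action ends, to extract the captured events.
    public IEnumerable<object> GetEvents() => changes;
}

// Illustrative shapes, reusing the ideas from the previous posts in the series.
public class GatewayResponse { public bool Success; public string Description; }
public class PaymentProcessedSuccessfully { }
public class PaymentProcessedWithFailure { public string Description; }

public class PaymentState
{
    public string Status { get; private set; }
    public string Description { get; private set; }

    public void Apply(PaymentProcessedSuccessfully e) => Status = "Processed";

    public void Apply(PaymentProcessedWithFailure e)
    {
        Status = "Failed";
        Description = e.Description;
    }
}

public class Payment : AggregateRoot
{
    public PaymentState State { get; } = new PaymentState();

    public void ReceiveGatewayResponse(GatewayResponse response)
    {
        if (response.Success)
            Apply(new PaymentProcessedSuccessfully());
        else
            Apply(new PaymentProcessedWithFailure { Description = response.Description });
    }

    // Typed handlers invoked via the dynamic dispatch in the base class;
    // they delegate to the state class extracted in the previous post.
    public void When(PaymentProcessedSuccessfully e) => State.Apply(e);
    public void When(PaymentProcessedWithFailure e) => State.Apply(e);
}
```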

When the action ends, the handler executing it calls GetEvents and gets all the events that were raised during this session. This lets you write simple tests with a Given-When-Then approach and easily extract the changes for persistence purposes. On the other hand, we introduced a base class for the aggregate. What more could be done?

Summary

By introducing a base AggregateRoot class we enabled capturing all the raised events. This simplified the Payment aggregate itself a little, but it introduced a base class. It's time to move on to the final part, where we make it functional by removing some of the code we've written so far.

Event sourcing: making it functional (5)

TL;DR

After defining the events of the Payment aggregate, it's time to move on and work on applying these events.

All entries in this series:

State

To apply events, the state of Payment will be extracted into a separate class. This should be a familiar pattern to all event sourcing practitioners, but let me show the code for a clear understanding of our approach:

(code shown as an image in the original post)
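A sketch of what the extracted state could look like (the original was shown as an image; the event shapes are assumptions carried over from the previous post in the series):

```csharp
using System;

// Event classes from the previous post in this series; exact shapes are assumed.
public class PaymentIssued { public Guid UserId; public string PaymentMethod; public decimal Amount; }
public class PaymentProcessedSuccessfully { }
public class PaymentProcessedWithFailure { public string Description; }

// The extracted state: no public setters, changes happen only through Apply,
// and no Apply ever throws.
public class PaymentState
{
    public Guid UserId { get; private set; }
    public string PaymentMethod { get; private set; }
    public decimal Amount { get; private set; }
    public string Status { get; private set; }
    public string Description { get; private set; }

    public void Apply(PaymentIssued e)
    {
        UserId = e.UserId;
        PaymentMethod = e.PaymentMethod;
        Amount = e.Amount;
        Status = "Issued";
    }

    public void Apply(PaymentProcessedSuccessfully e) => Status = "Processed";

    public void Apply(PaymentProcessedWithFailure e)
    {
        // The event has already happened; the state only accumulates it.
        Status = "Failed";
        Description = e.Description;
    }
}
```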

How to construct the state

  1. There are no public setters. The state has readonly properties.
  2. All events are applied with an Apply method.
  3. The only way to change the state is to apply an event.
  4. No Apply throws an exception. The event has already happened and the state is just an accumulator of the changes.

New Payment Aggregate

The Payment aggregate can now be transformed to use this state by removing all direct state changes and raising/applying events in those places:

(code shown as an image in the original post)
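A rough reconstruction of the reworked aggregate, reusing the PaymentState and event classes sketched above; the GatewayResponse shape is an assumption:

```csharp
using System;

// Illustrative shape of the gateway response.
public class GatewayResponse { public bool Success; public string Description; }

public class Payment
{
    public PaymentState State { get; } = new PaymentState();

    public Payment(Guid userId, string paymentMethod, decimal amount)
    {
        // Instead of assigning fields directly, raise an event and apply it to the state.
        State.Apply(new PaymentIssued { UserId = userId, PaymentMethod = paymentMethod, Amount = amount });
    }

    public void ReceiveGatewayResponse(GatewayResponse response)
    {
        if (response.Success)
            State.Apply(new PaymentProcessedSuccessfully());
        else
            State.Apply(new PaymentProcessedWithFailure { Description = response.Description });
    }

    // Note: the events are applied to the state but not stored or exposed anywhere,
    // which is exactly the limitation described in the summary below.
}
```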

Summary

We know how to extract a state from the aggregate and apply all the events. Although Payment uses events, it does not store them in any form, nor does it allow accessing them for processing. In its current shape, Payment isn't much different from the previous version. We extracted a few event classes and the state, but we still can't treat events as first-class citizens. We need to move one step further, and we will do it in the next blog post.

Event sourcing: making it functional (4)

TL;DR

In the last entry we defined the aggregate implementation that we’ll work on. Let’s move forward and make it an event sourced aggregate.

All entries in this series:

Events

The first and most important event is the fact of issuing the payment itself. Let's define it in the following way:

(code shown as an image in the original post)

The event contains all the data provided to the constructor of the payment aggregate in the past.

The next one is raised and applied when the payment is processed successfully. Let’s make it a class without any members. We just want to record a notion of success.

(code shown as an image in the original post)

Last but not least is the event raised in case of an error. We make errors explicit here, as we'd like to react to payment failures. One could model it differently, providing a single event class, but then again, just follow this take on the payment problem.

(code shown as an image in the original post)
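A sketch of the three events described above; since the original definitions were shown as images, the exact properties are assumptions:

```csharp
using System;

// All the data that used to go into the aggregate's constructor.
public class PaymentIssued
{
    public Guid UserId { get; set; }
    public string PaymentMethod { get; set; }
    public decimal Amount { get; set; }
}

// Just a marker recording the notion of success; no members needed.
public class PaymentProcessedSuccessfully { }

// The explicit failure case, so that payment failures can be reacted to.
public class PaymentProcessedWithFailure
{
    public string Description { get; set; }
}
```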

Summary

We discovered three meaningful events:

  1. PaymentIssued
  2. PaymentProcessedSuccessfully
  3. PaymentProcessedWithFailure

In the next entry we will start rewriting the aggregate to use these.

Event sourcing: making it functional (3)

TL;DR

We're on a journey from aggregate-oriented event sourcing to a more functional approach. In the first entry, we went through some of the DDD building blocks. In the second, we defined the interface of the aggregate that we'll work with. Before moving to the event sourced approach, let's go through the aggregate's implementation, somewhat related to the Blue Book approach, without event sourcing.

All entries in this series:

Payment aggregate!

Let's dive straight into the code:

(code shown as an image in the original post)

We can see that a payment is issued for a user, with a specified payment method and a given amount of money. Yes, it’s simplistic, but bear with me, and follow its implementation for a while.

The other method is responsible for receiving the gateway response and applying it to the aggregate. The status and the description are updated accordingly.
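A rough reconstruction of such a classical, non-event-sourced aggregate; the original code was shown as an image, so the names and types below are assumptions:

```csharp
using System;

// Illustrative shape of the gateway response.
public class GatewayResponse { public bool Success; public string Description; }

public class Payment
{
    public Guid UserId { get; private set; }
    public string PaymentMethod { get; private set; }
    public decimal Amount { get; private set; }
    public string Status { get; private set; }
    public string Description { get; private set; }

    // A payment is issued for a user, with a payment method and an amount of money.
    public Payment(Guid userId, string paymentMethod, decimal amount)
    {
        UserId = userId;
        PaymentMethod = paymentMethod;
        Amount = amount;
        Status = "Issued";
    }

    // Receives the gateway response and mutates the aggregate's state directly.
    public void ReceiveGatewayResponse(GatewayResponse response)
    {
        Status = response.Success ? "Processed" : "Failed";
        Description = response.Description;
    }
}
```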

So far so good? In the next entry we’ll make this aggregate event sourced!