Different forms of drag

Have you heard about this new library called ABC? If not, you don’t know what you’re missing! It enables your app to do all these things! I’ll send you the links to the tutorial so that you can become a fan as well. Have I tested it thoroughly? Yeah, I clicked through the demo. And got it working on my dev machine. What? What do you mean by handling moderate or high traffic? I don’t get it. I’m telling you, I was able to spin up an app within a few minutes! It was so easy!

Drag (physics) is a very interesting phenomenon. It’s the resistance of a fluid, and it behaves much differently from regular, dry friction. Instead of being a constant force, it grows stronger the faster an object moves. Let’s take a look at what kinds of drag we can find in the modern IT world.

 

Performance drag

The library you chose works on your dev machine. Will it work for 10 concurrent users? Will it work for another 100 or 1000? Or, let me rephrase the question: how much RAM and CPU will it consume? 10% of your application’s resources, or maybe 50%? A simple choice of a library is not that simple at all. Sometimes your business has the money to just spin up 10 more VMs in the cloud, or to pay 10x more because you prefer JSON over everything else; sometimes it does not. Choose wisely and use resources properly.

Technical drag

You have probably heard about technical debt. With every shortcut you take, just to deliver this week rather than the next, there’s a non-zero chance of introducing parts of your solution that aren’t a perfect fit. Moreover, in a month or two they can slow you down, because the postponed issues will need to be solved eventually. Recently it has been proposed to use the word drag instead of debt. You can keep moving with a debt, but moving with a drag will surely make you slower.

Environment drag

So you chose your library wisely. You know that it will consume a specific amount of resources. But you also know that it has a configuration parameter that allows you to cut down on data processing, RAM usage or data storage costs. One example that immediately comes to mind is logging libraries. You can use the logging level as a threshold for deciding whether data gets stored at all. How many times have these levels been changed only to store less data on those poor production servers? When this happens, the scenario for a failure is simple:

  1. the logged data is cut down
  2. an error happens
  3. there are no traces besides the final catch clause
  4. the logging level is turned back up for an hour
  5. we end up begging users to trust us again and click one more time

I have heard this and similar stories far too many times.
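
As an illustration, here is a hypothetical snippet using Microsoft.Extensions.Logging with the console provider (the post does not name a specific library; the category name and message are made up). It shows the kind of one-line threshold that gets tightened to save storage and then hides the very traces you need:

    using System;
    using Microsoft.Extensions.Logging;

    var orderId = Guid.NewGuid();

    using var loggerFactory = LoggerFactory.Create(builder => builder
        .SetMinimumLevel(LogLevel.Warning)   // raised from Information to store less data
        .AddConsole());

    var logger = loggerFactory.CreateLogger("Orders");
    logger.LogInformation("Payment {Id} accepted", orderId); // silently dropped below the threshold
    logger.LogError("Payment {Id} failed", orderId);         // the only trace left when things go wrong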

Summary

There are different forms of drag. None of them is pleasant. When choosing approaches, libraries and tools, choose wisely. Don’t let them drag you down.

Shallow and deep foundations of your architecture

TL;DR

This entry addresses some ideas related to the various types of foundations one can use to create a powerful architecture. I see this post as somewhat resonating with Gregor Hohpe’s approach of treating architecture as selling options.

Deep foundations

The deep/shallow foundations analogy came to me after running my workshop about event sourcing in .NET. One of the main properties of an event store was whether it was able to provide a linearized view of all of its events. Having this property or not was vital for providing a simple marker for projections. After all, if all the events have a position, one can track just this one number to determine whether an event has been processed or not.

This property laid a strong foundation for simplicity and process management. Having it or not dictated one design or another. This was a deep foundation, doing a lot of heavy lifting for the design of processes and views. Opting out of a store that provides this property later on wouldn’t be that easy.
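
To make this concrete, here is a minimal sketch of a projection that leans on that deep foundation. The interface and names are illustrative, not taken from any specific event store API:

    using System.Collections.Generic;

    // A hypothetical event store contract exposing the linearized position,
    // and a projection that tracks its progress with that single number.
    public interface ILinearizedEventStore
    {
        IEnumerable<(long Position, object Event)> ReadFrom(long exclusivePosition);
    }

    public class Projection
    {
        long checkpoint; // everything up to and including this position has been processed

        public void CatchUp(ILinearizedEventStore store)
        {
            foreach (var (position, @event) in store.ReadFrom(checkpoint))
            {
                Apply(@event);
                checkpoint = position; // remembering one number is the whole bookkeeping
            }
        }

        void Apply(object @event) { /* update the view */ }
    }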

You could rephrase this as having strong requirements for a component. On the other hand, I like to think of it as a component providing deep foundations for the emerging design of a system.

Shallow foundations

The other part of the solution was based on something I called a Dummy Database: a store that has only two operations, PUT and GET, with no transactions, no optimistic versioning, etc. With a good design of a process that can record its progress by storing a single marker, one can easily serialize that marker together with a partition of a view and store it in a database. What kind of database would it be? Probably any. Any SQL database, Cassandra or Azure Storage Tables is sufficient to make it happen.
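
A minimal sketch of what such a store’s contract might look like (the names are mine, not any particular product’s API):

    using System.Collections.Generic;

    // The whole contract of the "Dummy Database": no transactions, no optimistic
    // concurrency, just PUT and GET of opaque values.
    public interface IDummyDatabase
    {
        void Put(string key, byte[] value);
        byte[] Get(string key);
    }

    // The progress marker and the view partition travel together, so a single PUT
    // persists both and the process can always resume from the stored checkpoint.
    public class ViewPartition
    {
        public long Checkpoint { get; set; }
        public Dictionary<string, string> Rows { get; set; } = new Dictionary<string, string>();
    }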

Moving up and down

With these two types of foundations you have some room to move your structure naturally. The deep foundations impose constraints that can’t be changed easily, but the rest, built on the shallow ones, can be changed with little effort. Potentially, one could swap out a whole component or adjust it to an already existing infrastructure. It’s no longer a long list of requirements to satisfy for a brand new shiny architecture of your system. The list is much shorter and it ends with the question: “the rest? we’ll take whatever you have already installed”.

Summary

When designing, split the parts that need to be rock solid and can’t be changed easily from the ones that can. Keep this in mind when moving forward, and do not pour too much concrete under parts where a regular stone would do.

Converging processes

TL;DR

Completing an order is not an easy process. A payment gateway may not accept payments for some time. Coffee can be spilled all over the book you ordered, and a new copy needs to be taken from storage. Failures may occur at different points of the pipeline, but your order should still go through. Is it one process or multiple? If one, does anyone take every single piece into consideration? How do you ensure that eventually a client will get their product shipped?

Process managers and sagas

There are two terms used for these processors/processes: process managers and sagas. I don’t want to go through the differences or the marketing behind using one or the other. I want you to focus on a process that handles an order and reacts to three events:

  • PaymentTimedOut – raised when the response to the payment request was not delivered before the specified timeout
  • PaymentReceived – the payment was received in time
  • OrderCancelled – the order for which we requested this payment was cancelled

What messages will this process receive and in which order? Before answering, take into consideration:

  • the scheduling system that is used to dispatch timeouts,
  • the external gateway system (that has its own SLA),
  • the messaging infrastructure

What’s the order then? Is there a predefined one? For sure there isn’t.

Convergence

Your aim is to provide a convergent process. You should review all the permutations of the process inputs. Given 3 types of input, you should consider 3! = 6 possible orderings. That’s why building a few small processes instead of one big one is easier. You can think of them as sub-state machines that can later be composed into a bigger whole. This isn’t only code complexity, but a real mental overhead that you need to design against.
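
A minimal sketch of such a process, written so that every ordering of the three events leads to a sensible terminal state (the types, states and the refund compensation are illustrative choices, not tied to any particular messaging framework):

    // A hypothetical order-payment process. Each handler checks the current state,
    // so any arrival order of the three events converges on a defined outcome.
    public enum OrderPaymentState { AwaitingPayment, Paid, TimedOut, Cancelled }

    public class OrderPaymentProcess
    {
        public OrderPaymentState State { get; private set; } = OrderPaymentState.AwaitingPayment;

        public void Handle(PaymentReceived e)
        {
            if (State == OrderPaymentState.Cancelled)
                RequestRefund();                     // the order is gone, give the money back
            else
                State = OrderPaymentState.Paid;      // covers AwaitingPayment and a racing TimedOut
        }

        public void Handle(PaymentTimedOut e)
        {
            if (State == OrderPaymentState.AwaitingPayment)
                State = OrderPaymentState.TimedOut;  // a late PaymentReceived can still flip this to Paid
        }

        public void Handle(OrderCancelled e)
        {
            if (State == OrderPaymentState.Paid)
                RequestRefund();
            State = OrderPaymentState.Cancelled;
        }

        void RequestRefund() { /* ask the payment gateway to return the money */ }
    }

    public class PaymentReceived { }
    public class PaymentTimedOut { }
    public class OrderCancelled { }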

Summary

Designing multiple small processes as sub-state machines is easier. When dealing with many events to react to, try to extract smaller state machines and aggregate them on a higher level.

 

Service kata with Business Rules

TL;DR

In the previous post we started working on a code kata and discovered that instead of creating a new monolithic giant, we could tackle the complexity of a process by modelling it right in its natural boundaries: contexts. In this post we continue this journey.

Requesting payment

Let’s spend some time modelling the process of ordering a membership. We have said that it requires a payment and that, as soon as the payment is made, the membership is activated. We introduced the PaymentReceived event as an asynchronous response to the payment request.

Consider a membership order with the following identifier:

11112222-3333-4444-5555-666677778888

When accepting the request for a membership, Membership sends a request to Payments with the following information:

payment_request

It is important to note that the caller generates the identifier, which has the following properties:

  • In this case it reuses its own identifier in a different context, following the snowy identifiers approach to create snowflake entities
  • As the caller generates the id and stores it, in case of a failure when requesting a payment the request can be POSTed again, as it’s idempotent (any HTTP status indicating that the id already exists means that the previous call was accepted and processed).

Using this approach in a service-oriented architecture enables idempotence (everyone knows the id upfront).
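
A rough sketch of what the caller’s side might look like. The endpoint URL, types and the use of HTTP are assumptions for illustration; the post only prescribes that the caller generates and stores the id before making the call:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    // Hypothetical caller code in Membership: the payment id is the membership order id,
    // generated and stored up front, so the request can be safely retried.
    public class PaymentRequester
    {
        readonly HttpClient client = new HttpClient();

        public async Task RequestPayment(Guid membershipOrderId, decimal amount)
        {
            var response = await client.PostAsJsonAsync(
                $"https://payments.internal/payments/{membershipOrderId}",  // caller-generated id
                new { amount });

            // "already exists" means the previous attempt was accepted and processed,
            // so a retry is treated as a success.
            if (response.StatusCode == HttpStatusCode.Conflict)
                return;

            response.EnsureSuccessStatusCode();
        }
    }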

Events strike back

The result of the payment, once the money has been received, is a PaymentReceived event, which is published to all the interested parties. One of them will be Membership, which simply takes the paymentId and checks whether there’s a membership order with the same identifier. If there is, it can be marked as paid. Simple and easy. The same applies to other rules in other contexts.
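
A minimal sketch of the Membership side of this (again, the types and the tiny order store are illustrative placeholders):

    using System;

    // Hypothetical subscriber in the Membership context. Because the payment id is the
    // membership order id, matching the event to an order is a single lookup by key.
    public class PaymentReceivedHandler
    {
        readonly IMembershipOrders orders;

        public PaymentReceivedHandler(IMembershipOrders orders) => this.orders = orders;

        public void Handle(PaymentReceived e)
        {
            var order = orders.TryGet(e.PaymentId);
            if (order == null)
                return;                // not ours; another context will react to this payment

            order.MarkAsPaid();
            orders.Save(order);
        }
    }

    public class PaymentReceived { public Guid PaymentId { get; set; } }

    public interface IMembershipOrders
    {
        MembershipOrder TryGet(Guid id);
        void Save(MembershipOrder order);
    }

    public class MembershipOrder
    {
        public bool Paid { get; private set; }
        public void MarkAsPaid() => Paid = true;
    }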

There’s really no point in making this ONE BIG APP TO RULE THEM ALL. You can separate services according to business units and design towards integrating them.

Again, depending on the tools used, you can have events delivered by a bus to all the subscribers, or use an ATOM feed to publish events and have other services consume them by polling.

Summary

These two posts show that raising modelling questions is important and that it can help you reuse existing structures and applications when creating new, robust systems. They do not cover transactions, retries and more. You can use tools that solve these for you, like a messaging bus, or you’ll need to handle them on your own. Whichever path you choose, the modelling techniques will be generally the same, and you can use them to bring real value into the existing ecosystem instead of creating a single new shiny application that will rule them all.

 

Code kata with Business Rules

TL;DR

How many times have you been given an implicit requirement that you’d create one application or two services? How many times were the architecture and design predetermined before any modelling with business stakeholders? Let’s take a dive into a code kata that will reveal much more than code.

Kata

The kata we’ll be working on is presented here. It covers writing a tool for a set of business rules gathered from across the whole company. The business rules depend on the payment (the fact that it has been made) and some other conditions, for example:

  • If the payment is for a physical product, generate a packing slip for shipping.
  • If the payment is for a book, create a duplicate packing slip for the royalty department.
  • If the payment is for a membership, activate that membership.

The starting point for every rule is an accepted payment. Another observation is that these rules are scattered across the whole company (as the author mentions, Carol on the second floor handles that kind of order). Do you think that a single new application gathering all the rules is the way to go?

Contexts

If the mythical Carol is responsible for some part of the rules, maybe another department/team is responsible for membership? What about payments? Is the bookstore part of your organization really interested in whether a payment was made with a credit card or a transfer? Is a membership rule really valid outside of the membership context? Does anyone without membership-specific knowledge need to be able to say when a membership is activated?

I hope you see where these questions lead. There are multiple contexts that are somehow dependent on each other, but they are not truly one:

  • payments – the part responsible for accepting money and transferring it to the company’s account
  • membership – taking care of (possibly) accounts, monitoring activity, activating/deactivating accounts
  • bookstore/videostore or simply store – the sales part
  • shipping – for physical products

Are these areas connected? Of course they are!
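
To make the connection concrete, here is a rough sketch (the class names are mine, purely illustrative) of the kata’s rules expressed as independent reactions living in their own contexts, rather than as one rule engine:

    using System;

    // Hypothetical: each context owns its own rule and reacts to the same fact, an
    // accepted payment, instead of one application gathering every rule in one place.
    public class PaymentReceived { public Guid PaymentId { get; set; } }

    // shipping context
    public class GeneratePackingSlip     { public void Handle(PaymentReceived e) { /* physical products only */ } }

    // bookstore context
    public class DuplicateSlipForRoyalty { public void Handle(PaymentReceived e) { /* books only */ } }

    // membership context
    public class ActivateMembership      { public void Handle(PaymentReceived e) { /* activate the ordered membership */ } }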

PaymentReceived

The first visible connector is the payment. To be precise, the fact of receiving it, which can be described in the passive voice as PaymentReceived. You can imagine that when requesting a membership, a payment is required. This can be perceived as one whole process, but it could be split into the following phases:

  • gathering membership data
  • requesting a payment
  • receiving a payment
  • completing the membership order

This is the Membership point of view. As you can see, it requests a payment but does not handle it. We will see in the next post how this can be solved.

 

Why persistent memory will change your world

TL;DR

If you haven’t heard, non-volatile RAM is coming to town, and it will surely change the persistence patterns of databases, queues and loggers. Want to know more about this new wave of hardware? Read on.

API

The first and most important aspect is that persistent memory on Windows reuses already existing APIs. If you want to use the drive just as block storage, you can: you’ll be able to create files, write to them, etc. There’s another, much faster way of using it, called DAX.

Direct access enables you to use the non-volatile memory directly. What do I mean by directly? I mean accessing the memory with a raw pointer. How do you obtain the pointer? Through the old-fashioned memory-mapped file API: first create a file, then map it, and there it is! No FlushFileBuffers, no fsync. Just a raw pointer to the memory. Can you imagine writing to a mapped file and simply having it persisted?
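
To give a feel for it, here is a sketch using the standard .NET memory-mapped file API. The path is made up; the point is that on a DAX-formatted volume these very same calls hand out a pointer backed directly by the persistent memory:

    using System.IO;
    using System.IO.MemoryMappedFiles;

    class DaxSample
    {
        static unsafe void Main()
        {
            using (var file = MemoryMappedFile.CreateFromFile(
                @"C:\dax-volume\data.bin", FileMode.OpenOrCreate, null, 4096))
            using (var accessor = file.CreateViewAccessor())
            {
                byte* pointer = null;
                accessor.SafeMemoryMappedViewHandle.AcquirePointer(ref pointer);
                try
                {
                    pointer[0] = 42;   // a plain store; on a DAX volume this is the persisted write
                }
                finally
                {
                    accessor.SafeMemoryMappedViewHandle.ReleasePointer();
                }
            }
        }
    }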

Speed

Non-volatile memory is really fast. You can write 4 GB per second. Yes, that’s 4 GB per second to persistent memory. The latency is extremely low. It’s so low that using any form of asynchronous programming (raw completion ports, async-await) brings more overhead than simply waiting for the memory to be written. Yes, this means that your methods hitting files memory-mapped with DAX will not need async signatures. Of course, you’ll be able to keep them just for compatibility.

Ordering matters

It looks like tech heaven. No more data loss during power outages, right? That’s not entirely true. Persistent memory acts like memory: there is an order in which data is transported to it. Now imagine writing the following string:

BLAH

If the power went down after copying the first three letters, you’d be left with

BLA

Which, although it conveys the same attitude, is not what we wish for when thinking about persistence. This example shows that good old-fashioned I/O access patterns, like write-ahead logging or copy-on-write, will still be important. But let me remind you again: they will come almost for free, with no synchronization and no flushing required.
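
A sketch of the ordering discipline this implies, with a made-up record layout (not any real database’s format): copy the payload first and publish the length marker last, so a record torn by a power loss has no marker and is simply skipped on recovery. A real implementation would also need to order the stores themselves (cache flush and fence instructions), which this sketch glosses over:

    public static unsafe class Log
    {
        public static void Append(byte* log, ref long position, byte[] payload)
        {
            long payloadStart = position + sizeof(int);

            for (int i = 0; i < payload.Length; i++)
                log[payloadStart + i] = payload[i];    // 1. write the data

            *(int*)(log + position) = payload.Length;  // 2. commit by writing the length last
            position = payloadStart + payload.Length;
        }
    }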

Adopters

Persistent memory will change the world. SQL Server 2016 has already adopted it, as you can see here. Some databases are already well positioned, like LMDB, which uses memory-mapped files (same API for the win!) with the ability to run as non-durable. Guess what: now it’s durable. More databases will follow.

Summary

Persistent memory is here. You probably won’t rewrite or rethink your application, as the majority of apps do not deal with I/O directly, but simply applying it to your database or other I/O-bound systems will be a real game changer.

Snowy identifiers

TL;DR

When using the snowflake entities pattern, it’s quite easy to forget about the external identifiers that we need in order to communicate with external systems. This post provides an easy way to address this concern.

Identity revisited

The identifier of a snowflake entity was presented as a GUID. We use an artificial, non-colliding, client-generated identifier to ensure that any part of the system can generate one without checking whether a specific value has been used before. This enables storing different pieces of data, belonging to different contexts, in different services of our system. No system lives in a vacuum though, and sometimes it needs to communicate with the rest of the world.

Gate away!

A common concern handled by an external system is payments. When you consider credit cards, native bank applications, PayPal, Bitcoin and all the rest, providing that kind of service on your own is not a reasonable option. That’s why external services are used – the price of using one is much lower than the cost of delivering one. Let’s stick with the payments example. How would you approach this? Would you call the external payment service from each of your services? I hope you wouldn’t. A better approach is to create a gateway that will act as a translator between your system and the external one.

How many ids do I need?

Using a gateway provides a really interesting property. As the payment gateway is part of your system, it can use the snowflake identifier. In other words, if there’s an order, it’s OK (under the given circumstances) to use its identifier as the identifier of the payment as well, provided, of course, that you want to model the two as parts of a snowflake entity spanning services. It would be the payment gateway’s responsibility to correlate the system’s snowflake identifier with the external system’s id (an integer, some string, whatever). This creates a coherent view of an entity within your system boundaries, closing the mapping in a small, dedicated area of the payment gateway.
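
A minimal sketch of such a gateway (the provider interface and the in-memory mapping are illustrative placeholders; a real gateway would persist the correlation):

    using System;
    using System.Collections.Generic;

    // A hypothetical payment gateway: the rest of the system only ever sees the snowflake
    // Guid; the mapping to the provider's own identifier lives here and nowhere else.
    public class PaymentGateway
    {
        readonly IExternalPaymentProvider provider;              // the third-party client
        readonly Dictionary<Guid, string> externalIds = new Dictionary<Guid, string>();

        public PaymentGateway(IExternalPaymentProvider provider) => this.provider = provider;

        public void RequestPayment(Guid snowflakeId, decimal amount)
        {
            var externalId = provider.CreatePayment(amount);     // whatever id the provider returns
            externalIds[snowflakeId] = externalId;               // correlated only inside the gateway
        }
    }

    public interface IExternalPaymentProvider
    {
        string CreatePayment(decimal amount);
    }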

An integration with an external system, closed in a small component, leaving the rest of your system agnostic to it? Do we need anything more?

Summary

As you can see, closing off the external dependency behind a gateway provides value not only by separating the external provider’s interface from your system components, but also by preserving a coherent (though distributed) view of your entities.