Anomalies: Listening to your secondaries with Service Fabric

This is the second post in the series describing different anomalies you may run into using modern databases or other storage systems.

Just turn this on

This story has a similar beginning to the last one. It starts when one of the developers working on a project built with Service Fabric finds the ListenOnSecondary property and enables it. After all, if every node in my cluster can now answer queries sent by other parts of the system, that should be good, right? I mean, it's even better than good! We're faster now!
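Before answering, it's worth seeing how little code this takes. Below is a minimal sketch of a stateful service flipping the switch; MyCommunicationListener is a placeholder for whatever ICommunicationListener your service really uses.

    using System.Collections.Generic;
    using System.Fabric;
    using Microsoft.ServiceFabric.Services.Communication.Runtime;
    using Microsoft.ServiceFabric.Services.Runtime;

    internal sealed class QueryableService : StatefulService
    {
        public QueryableService(StatefulServiceContext context)
            : base(context)
        {
        }

        protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
        {
            // listenOnSecondary: true is the single switch this post is about.
            // With it, secondary replicas open the listener too and start
            // answering queries, potentially with slightly stale data.
            yield return new ServiceReplicaListener(
                ctx => new MyCommunicationListener(ctx), // placeholder listener
                name: "queryListener",
                listenOnSecondary: true);
        }
    }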

Replication

To answer this, we need to dive a bit deeper and learn how Service Fabric's internal storage works. Service Fabric provides clustered storage. To ensure that your data are properly copied, it uses a replication protocol. At any given moment there is only one active master: the copy accepting all the write and read operations and replicating its data to all the secondary replicas. For various reasons, the replicas that data are copied to are not always up to date. To give an example, imagine that we sent three commands to Service Fabric, each writing a different piece of data. Let's take a look at the state:

  • master: cmd1, cmd2, cmd3
  • replica2: cmd1, cmd2
  • replica3: cmd1, cmd2, cmd3

Eventually, replica2 will receive the missing cmd3, but depending on your hardware (disks, network), there can be a constant small lag during which some of the operations have not been replicated yet.
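If you want to play with this lag without spinning up a cluster, here's a toy model of the state above. This is plain C#, not Service Fabric code, just the lists from the example:

    using System;
    using System.Collections.Generic;

    // A toy model of the replication state above.
    var primary  = new List<string> { "cmd1", "cmd2", "cmd3" };
    var replica2 = new List<string> { "cmd1", "cmd2" };          // lagging secondary
    var replica3 = new List<string> { "cmd1", "cmd2", "cmd3" };

    // A read routed to the primary sees the latest write...
    Console.WriteLine(primary.Contains("cmd3"));   // True

    // ...but the same read routed to replica2 does not, yet.
    Console.WriteLine(replica2.Contains("cmd3"));  // False: a stale read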

Now, after seeing this example of how replication works and noticing that the state on replicas might be occasionally stale, can we turn on ListenOnSecondary that easily?

It depends (TM)

There is no straight answer to this. If your user first calls an action that results in a write and then, almost immediately, queries for the data, they might not see their own write, because it is replicated with some lag.

If your writes are not followed by reads, or if you always cheat by updating the user's view as if the data had been read from the store, then you might not run into a problem.
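A sketch of that cheat, with all the names made up (Post, IPostStore and PostsPage stand for whatever write path and UI state your app really has):

    using System.Collections.Generic;
    using System.Threading.Tasks;

    public sealed record Post(string Title);

    public interface IPostStore
    {
        Task WriteAsync(Post post);
    }

    public sealed class PostsPage
    {
        private readonly IPostStore store;         // whatever writes to the primary
        public List<Post> Posts { get; } = new();  // the view the user sees

        public PostsPage(IPostStore store) => this.store = store;

        public async Task AddPostAsync(Post post)
        {
            await store.WriteAsync(post);  // the write goes to the primary replica
            Posts.Add(post);               // update the view locally instead of
                                           // reading back from a possibly lagging secondary
        }
    }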

Unfortunately, before switching on this small flag, you should think through the concerns raised above.

Wrapping up

Unfortunately for us, we've been given a very powerful option, configured with a single call to a method. Now we can enable reading potentially stale data to gain bigger query throughput. It's still up to us whether we want to do it, and whether we can do it, given the environment and the architecture our solution lives in.

Anomalies: Snapshot Isolation

This post starts a short series about different anomalies you may run into using modern databases or other storage systems.

Snapshot Isolation to the rescue

Imagine a system that frequently deals with database locks and transactions that run much too long because of the locks being taken. Imagine that someone applies a magic fix, simply changing the isolation level to snapshot isolation. Finally the app is working, throwing an exception only from time to time. The owners are happy, until they find that somehow users are able to write more data than they are allowed to. The investigation starts.

What are you made of, Snapshot Isolation?

If you wonder what Snapshot Isolation means, the description is quite simple. Instead of taking locks on rows and checking whether or not a row can be locked, updated, etc., every row is now versioned. To simplify, imagine that a date is added to every row whenever it is modified. Now, whenever a transaction starts, it is assigned a date, which creates a boundary for seeing newer records. Consider the following example:

  1. BEGIN TX1
  2. BEGIN TX2
  3. TX2: INSERT row1 INTO tableA
  4. COMMIT TX2
  5. TX1: SELECT * from tableA

The last statement won't return row1, as it was committed after transaction TX1 started. Simple and easy, right? You can read only rows that were committed before your transaction began. What can go wrong then?
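Translated into SQL Server terms, the example could look like the sketch below. It assumes snapshot isolation was already enabled with ALTER DATABASE … SET ALLOW_SNAPSHOT_ISOLATION ON; the connection string and table are placeholders. One caveat: SQL Server takes the snapshot at the first data access, not at BEGIN, hence the initial read in TX1.

    using System.Data;
    using Microsoft.Data.SqlClient;

    // Placeholder table: CREATE TABLE tableA (name nvarchar(100)).
    const string cs = "<connection string>";  // placeholder

    using var conn1 = new SqlConnection(cs);
    using var conn2 = new SqlConnection(cs);
    conn1.Open();
    conn2.Open();

    // BEGIN TX1; read once to pin TX1's snapshot of the data.
    using var tx1 = conn1.BeginTransaction(IsolationLevel.Snapshot);
    new SqlCommand("SELECT COUNT(*) FROM tableA", conn1, tx1).ExecuteScalar();

    // TX2 inserts row1 and commits while TX1 is still open.
    using (var tx2 = conn2.BeginTransaction(IsolationLevel.Snapshot))
    {
        new SqlCommand("INSERT INTO tableA (name) VALUES ('row1')", conn2, tx2)
            .ExecuteNonQuery();
        tx2.Commit();
    }

    // TX1 still sees the table as it was when its snapshot was taken: no row1.
    var count = (int)new SqlCommand("SELECT COUNT(*) FROM tableA", conn1, tx1)
        .ExecuteScalar();

    tx1.Commit();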

Write skew

Now imagine a blogging service that allows only 5 posts per user. Let's consider a situation where a user has two employees entering posts for them. Additionally, let's assume that there are already 4 posts.

  1. BEGIN TX1
  2. BEGIN TX2
  3. TX1: SELECT COUNT(*) FROM Posts returns 4
  4. TX2: SELECT COUNT(*) FROM Posts returns 4
  5. TX1: INSERT post5a INTO Posts
  6. TX2: INSERT post5b INTO Posts
  7. COMMIT TX1
  8. COMMIT TX2

As you can see, both transactions read the same number of posts, 4, and both were able to add one more. Unfortunately for the owners of the portal, their users now know that by issuing multiple requests at the same time, they can publish much more without paying for additional entries.

This anomaly is called write skew.
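Here is the same sequence as a sketch, against the same placeholder database as before. Both transactions count 4 posts under their own snapshots and, since they insert different rows, neither write conflicts with the other:

    using System.Data;
    using Microsoft.Data.SqlClient;

    const string cs = "<connection string>";  // placeholder, as before

    using var conn1 = new SqlConnection(cs);
    using var conn2 = new SqlConnection(cs);
    conn1.Open();
    conn2.Open();

    using var tx1 = conn1.BeginTransaction(IsolationLevel.Snapshot);
    using var tx2 = conn2.BeginTransaction(IsolationLevel.Snapshot);

    // Both transactions see 4 posts in their snapshots...
    var c1 = (int)new SqlCommand("SELECT COUNT(*) FROM Posts", conn1, tx1).ExecuteScalar();
    var c2 = (int)new SqlCommand("SELECT COUNT(*) FROM Posts", conn2, tx2).ExecuteScalar();

    // ...so both pass the "fewer than 5" check and insert a new row.
    if (c1 < 5)
        new SqlCommand("INSERT INTO Posts (title) VALUES ('post5a')", conn1, tx1).ExecuteNonQuery();
    if (c2 < 5)
        new SqlCommand("INSERT INTO Posts (title) VALUES ('post5b')", conn2, tx2).ExecuteNonQuery();

    // Different rows were written, so there is no write-write conflict:
    // both commits succeed and the user ends up with 6 posts.
    tx1.Commit();
    tx2.Commit();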

Mitigations

The first mitigation you might think about is a simple update on the row holding the number of posts already published. Once a conflicting write is found, the database engine will abort one of the transactions. Another option is replacing a record with itself; this still qualifies as a conflict and, again, will kill the transaction that tries to commit afterwards.
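A sketch of the first mitigation, using a hypothetical PostQuota table with one row per user. Both transactions now write the same row, which snapshot isolation does treat as a conflict:

    using System.Data;
    using Microsoft.Data.SqlClient;

    // Hypothetical quota row per user:
    // CREATE TABLE PostQuota (userId int PRIMARY KEY, version int).
    const string cs = "<connection string>";  // placeholder

    using var conn = new SqlConnection(cs);
    conn.Open();

    using var tx = conn.BeginTransaction(IsolationLevel.Snapshot);

    // Touch the shared row first. If a concurrent transaction has already
    // modified it, SQL Server aborts this one with error 3960
    // ("Snapshot isolation transaction aborted due to update conflict").
    new SqlCommand(
        "UPDATE PostQuota SET version = version + 1 WHERE userId = 42",
        conn, tx).ExecuteNonQuery();

    var count = (int)new SqlCommand(
        "SELECT COUNT(*) FROM Posts WHERE userId = 42", conn, tx).ExecuteScalar();

    if (count < 5)
        new SqlCommand(
            "INSERT INTO Posts (userId, title) VALUES (42, 'post5')",
            conn, tx).ExecuteNonQuery();

    tx.Commit();

Are there any other tools?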

Yes, there are, but they are not available in every database. There's a special isolation level called Serializable Snapshot Isolation (SSI) that is less than 10 years old. It's capable of automatically checking whether or not two transactions overlap in a way that one could impact the other. One of the databases capable of doing this is PostgreSQL. Another one is the open source Spanner clone called CockroachDB. Interestingly, it defaults to SSI, as described here.
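With PostgreSQL this could look like the sketch below, using the Npgsql driver (table and connection string are placeholders again). SERIALIZABLE maps to SSI there since 9.1, and one of two dangerously overlapping transactions fails with a serialization error that the application simply retries:

    using System.Data;
    using Npgsql;

    const string cs = "<connection string>";  // placeholder

    using var conn = new NpgsqlConnection(cs);
    conn.Open();

    // PostgreSQL's SERIALIZABLE is implemented as SSI: no manual conflict
    // rows needed, the engine detects dangerous overlaps itself.
    using var tx = conn.BeginTransaction(IsolationLevel.Serializable);
    try
    {
        var count = (long)new NpgsqlCommand(
            "SELECT COUNT(*) FROM posts WHERE user_id = 42", conn, tx).ExecuteScalar()!;

        if (count < 5)
            new NpgsqlCommand(
                "INSERT INTO posts (user_id, title) VALUES (42, 'post5')",
                conn, tx).ExecuteNonQuery();

        tx.Commit();
    }
    catch (PostgresException e) when (e.SqlState == "40001")
    {
        // Serialization failure: one of the two overlapping transactions
        // was aborted. Retry the whole transaction from the start.
    }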

Wrapping up

As always, don't apply things automagically, especially when you deal with isolation levels. If you select one, learn how it works and what anomalies are possible. When thinking about Snapshot Isolation, consider databases that support you with Serializable Snapshot Isolation, which removes the burden of updating rows "just in case" and can actually prove the correctness of your operations.