Heavy cloud but no rain

Recently I’ve been playing with Azure Functions. Probably I should use a bigger word than “playing”, because I implemented a fully working app using only functions. $4, that was all I needed to pay for the whole month after running some synthetic load through the app. I then spent a few additional hours just to make it $3 the next month. You could ask what the reason was. Read along.

Heavy cloud

Moving to the cloud is getting easier and easier. With the new backendless approach (let’s stop calling it serverless) you can actually chop your app into pieces and pay only when they are run. More than this: you’ve got everything monitored, so you can effectively see where you spend your money. If you’re crazy enough, you could even modify the workflow of your app to do the heavy work at the end of a chain, postponing it until a user really needs it. Still, these optimizations and this way of thinking don’t seem to be popular these days (or at least I haven’t seen them popping up that frequently).

But no rain

The synthetic load I used to stress the app simulated a single, not-that-active user. Real usage would probably be much higher, with the price being much bigger. Effectively, instead of treating this optimization as a mere $1, I could say that I cut the cost by 25%. Now, this was only an experiment, but think about it again. A dummy, fast implementation was cheap, but with some additional work I could have made it more profitable. If the price of the cheapest option were $5, these would be some real gains. These are the differences that can make you either profitable or bankrupt.

Make it rain

In past years developers weren’t dealing with money. Servers were there, sometimes faster, sometimes slower. Databases were there, spending countless hours on running our unoptimized queries. That time is ending now. Our apps will be billed and it’ll be our responsibility to earn money by making them thinner and faster. Welcome to the cost-aware era of software engineering.

Cloudy cost awareness

TL;DR

Our industry was forgiving, very forgiving. You could forget to put an index on a table, let a query run for a minute, and only some users of your app would be disappointed. If you were the only one on the market or delivered banking systems, that was just fine, as you wouldn’t lose clients because of it. The public cloud changes this, and if you can’t embrace it, you will pay. You will pay a lot.

Pay as you crawl

If you issue a query scanning a table in Azure Table Storage, every entity you access adds to your billed storage transactions. Run millions of such queries and your bill will grow. Maybe not by much, but it will.
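To make the difference concrete, here’s a minimal sketch using the classic WindowsAzure.Storage SDK; the table name, keys and connection string are made up. The scan pages through the whole table, issuing a billable request per page, while the point lookup addressing a single PartitionKey/RowKey pair is a single transaction.

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class TransactionCostDemo
{
    static async Task RunAsync(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("users");

        // Full scan: walks the whole table, issuing many billable requests.
        var scan = new TableQuery<DynamicTableEntity>();
        TableContinuationToken token = null;
        do
        {
            var segment = await table.ExecuteQuerySegmentedAsync(scan, token);
            token = segment.ContinuationToken;
            // ... process segment.Results ...
        }
        while (token != null);

        // Point lookup: PartitionKey + RowKey, a single storage transaction.
        var retrieve = TableOperation.Retrieve<DynamicTableEntity>("PL", "user-42");
        var result = await table.ExecuteAsync(retrieve);
        Console.WriteLine(result.Result != null ? "found" : "missing");
    }
}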

If you deploy a set of services as Azure Cloud Services, each of them consuming just 100 MB of memory, your VMs will be undersaturated. You’ll pay for memory you don’t use and for CPU that just sits in the rack hosting your VM.

Design is money

Before the public cloud, all these inefficiencies could be more or less tolerated, but they were not that easy to spot. Nowadays, with a public cloud, it’s the provider, the host, that will notice them and charge you for them. If you don’t design your systems with awareness of the environment, you will pay more.

Mitigations

This is not a black-or-white situation. It never is. You’ll probably be able to dockerize some parts of your app and host them inside a Service Fabric cluster. You’ll probably be able to use CosmosDB and its autoindexing feature to just fix the performance of lookups in your Azure Storage Tables. There are a lot of ways to mitigate these effects, but still, I consider a good, appropriate design the most valuable tool for making your systems not only well performing and effective but, eventually, cheap.

Summary

Don’t throw your app against the wall of clouds to check if it sticks. Design it properly. Otherwise, it may stick in a very painful and cost-ineffective way.

Hot or not? Data inside of Service Fabric

TL;DR

When calculating the space needed for your Service Fabric cluster, especially in Azure, one can hit machine limits. After all, a D2 instance has only 100 GiB of local disk, and this is the disk used by Service Fabric to store data. 100 GiB might not be that small, but if you use your cluster for more than one application, you can hit the wall.

Why local?

There’s a reason behind using a local, ephemeral disk for Service Fabric storage. The reason is locality. As Service Fabric replicates the data, you don’t need to store it in highly available storage, as the cluster provides that on its own. Storing data in multiple copies by using Azure Storage Services is not needed. Additionally, using local SSD drives is much faster. It’s a truly local disk after all.

Saturation

Service Fabric is designed to run many applications with many partitions. After all, you want to keep your cluster (almost) saturated, as using a few VMs just to run an app that is needed once a month would be wasteful. Again, if you run many applications, you need to think about capacity. Yes, you might be running stateless services, which don’t require storage capacity, but not using stateful services would be a waste. They provide an efficient, transactional, replicated database built right into them. So what about the data? What if you saturate the cluster not in terms of CPU but of storage?

Hot or not

One of the approaches you could use is a separation between hot and cold data. For instance, users who haven’t logged in for a month could have their data considered cold. These data could be offloaded from the cluster to Azure Storage Services, leaving more space for the data that is needed. When writing applications that use an append-only model (for instance, ones based on event sourcing) you could think about offloading events older than X days, at the same time ensuring that they can still be accessed. Yes, the access will be slower, but it’s unlikely that you’ll need them on a regular basis. A sketch of such an offload is shown below.
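Here’s a rough sketch of what such an offload could look like for an event-sourced stateful service. The ColdOffloader type, the collection name and the blob layout are my own inventions; a real implementation would also need to make the blob upload idempotent, since it happens outside the reliable transaction’s guarantees.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.WindowsAzure.Storage.Blob;

class ColdOffloader
{
    readonly IReliableStateManager stateManager;
    readonly CloudBlobContainer coldStore; // blob container holding cold events

    public ColdOffloader(IReliableStateManager stateManager, CloudBlobContainer coldStore)
    {
        this.stateManager = stateManager;
        this.coldStore = coldStore;
    }

    // Moves every event with a sequence number below the cutoff to blob storage,
    // freeing local disk on the cluster while keeping the events accessible.
    public async Task OffloadAsync(long cutoff, CancellationToken ct)
    {
        var events = await stateManager
            .GetOrAddAsync<IReliableDictionary<long, byte[]>>("events");

        using (var tx = stateManager.CreateTransaction())
        {
            var cold = new List<long>();
            var enumerable = await events.CreateEnumerableAsync(tx);
            var enumerator = enumerable.GetAsyncEnumerator();
            while (await enumerator.MoveNextAsync(ct))
            {
                var pair = enumerator.Current;
                if (pair.Key >= cutoff)
                    continue;

                // Write the cold copy first, only then forget the hot one.
                var blob = coldStore.GetBlockBlobReference($"events/{pair.Key}");
                await blob.UploadFromByteArrayAsync(pair.Value, 0, pair.Value.Length);
                cold.Add(pair.Key);
            }

            foreach (var key in cold)
                await events.TryRemoveAsync(tx, key);

            await tx.CommitAsync();
        }
    }
}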

Summary

When designing your Service Fabric apps and planning your cluster capacity, think through the hot/cold approach as well. This could lower your storage space requirements and enable you to use the same cluster for more applications, which is effectively what Service Fabric is for.

How does Service Fabric host your services?

TL;DR

Service Fabric provides amazing, fully automated hosting for any number of services with any number of instances each (up to the physical limits of your cluster). But how are these hosted? What if you have more partitions than nodes?

Structure recap

When building an app that is meant to be hosted in Service Fabric, you build an… app. This application might consist of multiple stateful and stateless services. The application is packaged into a… package that, when uploaded to the cluster as an image, provides an application type. From this, you can instantiate multiple application instances. Let me give you an example.

Application “Bank” consists of two services:

  1. “Users”
  2. “Accounts”

When built with version “1.0.0” and packaged, it can be uploaded to the SF cluster and is registered as “Bank 1.0.0”. From now on you can instantiate as many banks as you want within your cluster, as sketched below. Each will be composed of two sets of services: “Users” and “Accounts”.
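For illustration, this is roughly how two independent “Bank” instances could be created from the same registered application type with FabricClient; the application names are made up and error handling is omitted.

using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class BankProvisioning
{
    // Instantiates two independent applications from the already registered
    // application type "Bank", version "1.0.0".
    static async Task CreateBanksAsync()
    {
        using (var client = new FabricClient())
        {
            await client.ApplicationManager.CreateApplicationAsync(
                new ApplicationDescription(new Uri("fabric:/Bank1"), "Bank", "1.0.0"));
            await client.ApplicationManager.CreateApplicationAsync(
                new ApplicationDescription(new Uri("fabric:/Bank2"), "Bank", "1.0.0"));
        }
    }
}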

Services, stateful services

When defining stateful services, the ones that have a built-in kind-of database (reliable collections, or SewingSession provided by my project SewingMachine), you need to define how many partitions they will have. You can think of partitions as separate databases. Additionally, you define the number of replicas every partition will have. That’s done to ensure high availability. Let me give you an example.

  1. “Users” have the number of partitions set to 100 and every partition is replicated to 5 replicas (let’s say P=100, R=5)
  2. “Accounts” are configured with P=1000, R=7

Imagine that it’s hosted on a cluster that has only 100 nodes. This means that on every node (on average) the system will place 5 replicas of “Users” (100 partitions × 5 replicas / 100 nodes) and 70 replicas of “Accounts” (1000 × 7 / 100). It’s a valid scenario. Once some nodes are added to the cluster, replicas will be automatically moved to the new nodes, lowering the saturation of the previously existing ones.

What if a node hosts more than one replica of one service, how are they hosted? Moreover, how do they communicate, as there’s only one port assigned to do so?

Cohosting to the rescue

Currently, all the replicas are hosted within the same process. Yes, although 5 “Users” instances will be created, they will all be sitting in the same AppDomain of the same process. The same goes for the 70 “Accounts”. You can check it on your own by obtaining the current process ID (PID) and AppDomain.CurrentDomain and comparing them across replicas. This reduces the hosting overhead, as all static resources (assemblies loaded, static fields, types) are shared across replicas.
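A minimal sketch of such a check inside a stateful service; the Users class is just for illustration and Console is used for brevity (a real service would log through ServiceEventSource).

using System;
using System.Diagnostics;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class Users : StatefulService
{
    public Users(StatefulServiceContext context) : base(context) { }

    protected override Task RunAsync(CancellationToken cancellationToken)
    {
        // Every replica of this service placed on the same node should report
        // the same process id and the same AppDomain id, confirming cohosting.
        var pid = Process.GetCurrentProcess().Id;
        var domainId = AppDomain.CurrentDomain.Id;
        Console.WriteLine($"PID: {pid}, AppDomain: {domainId}");
        return Task.CompletedTask;
    }
}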

One port to rule them all

By default, when using the native Service Fabric communication listener, only one port is used by an endpoint. How is it possible that the infrastructure knows how to route messages to the right partition and replica? Under the hood, when opening a communication listener, the replica registers the identifier of the partition it belongs to and its replica number. That’s how, when a message arrives, the Service Fabric infrastructure is capable of sending it to the right communication listener and, therefore, to the right service instance.
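From the caller’s perspective this routing is transparent. A sketch of addressing a concrete partition through the remoting stack could look like this; the IUsers contract and the partitioning scheme are assumed for illustration.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// A hypothetical remoting contract exposed by the "Users" service.
public interface IUsers : IService
{
    Task<string> GetName(long userId);
}

class Caller
{
    static async Task<string> GetUserName(long userId)
    {
        // The partition key selects the partition; the infrastructure resolves
        // the endpoint and dispatches to the listener registered by the right
        // partition/replica pair.
        var proxy = ServiceProxy.Create<IUsers>(
            new Uri("fabric:/Bank/Users"),
            new ServicePartitionKey(userId % 100)); // 100 partitions assumed

        return await proxy.GetName(userId);
    }
}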

Summary

Now you know that all the replicas of partitions of the same service on one node are cohosted in the same process and that the Service Fabric infrastructure dispatches messages according to the registered partition/replica pair.

Orchestrating processes with full recoverability

TL;DR

Do you call a few services in a row as part of a bigger process? What if one of the calls fails? What if your hosting application fails? Do you provide a reliable way of finishing your process successfully? If not, I might have a solution for you.

Anatomy of a process

A process can be defined as at least two calls to different services. When using a client library of some sort and the C# async-await feature, one could write the following process:


var id = await invoiceService.IssueInvoice(invoiceData);
await notificationService.NotifyAboutInvoice(id);

It’s easy and straightforward. First, we want to issue an invoice. Once it’s done, a notification should be sent. Both calls, although they are async, should be executed step by step. Now, what if the process is halted after issuing the invoice? When we rerun it, there’s no notion of it having been stopped in the middle. One could hope for good logging, but what if that fails as well?

Store and forward

Here comes the solution: the DurableTask library from the Azure team. The library provides the capability of recording all the responses and replaying them without execution. All you need is to create proxies to the services using a special orchestration context. A minimal sketch of the invoice process written against it is shown below.
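The sketch uses the framework’s TaskOrchestration and TaskActivity types from the DurableTask namespace (newer packages moved to DurableTask.Core); InvoiceData, the activity classes and their bodies are placeholders of mine, not part of the library.

using System.Threading.Tasks;
using DurableTask;

public class InvoiceData { public string Customer; public decimal Amount; }

// Hypothetical activities wrapping the two service calls.
public class IssueInvoice : TaskActivity<InvoiceData, long>
{
    protected override long Execute(TaskContext context, InvoiceData input)
    {
        // call invoiceService.IssueInvoice(input) here
        return 42;
    }
}

public class NotifyAboutInvoice : TaskActivity<long, bool>
{
    protected override bool Execute(TaskContext context, long invoiceId)
    {
        // call notificationService.NotifyAboutInvoice(invoiceId) here
        return true;
    }
}

// The orchestration: every ScheduleTask call is recorded in the history,
// so on replay completed calls are fed from the store instead of re-executed.
public class InvoiceProcess : TaskOrchestration<bool, InvoiceData>
{
    public override async Task<bool> RunTask(OrchestrationContext context, InvoiceData input)
    {
        var id = await context.ScheduleTask<long>(typeof(IssueInvoice), input);
        return await context.ScheduleTask<bool>(typeof(NotifyAboutInvoice), id);
    }
}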

With a process like the above, the following state is captured during execution:

  1. The initial parameters of the process instance are stored.
  2. invoiceData is stored when the first call is made.
  3. invoiceService returns and the response is recorded as well.
  4. The returned id is stored as a parameter to the second call.
  5. notificationService returns and this is marked in the state as well.

As you can see, every execution step is stored, and so is its result. OK, but what does it mean if my process fails?

When failure occurs

What happens when a failure occurs? Let’s consider some of the possibilities.

If an error occurs between 1 and 2, the process can be restarted with the same parameters. Nothing has really happened yet.

If an error occurs between 2 and 3, the process is restarted. The parameters to the call were stored, but there’s no notion of the call to the first service. It’s called again (yes, the delivery guarantee is at-least-once).

If an error occurs between 3 and 4, the process is restarted. The response of the call to the invoice service is restored from the state (no real call is made). The parameters are established on the basis of previous values.

And so on and so forth.

Deterministic process

Because the whole process is based either on the input data or on already received calls’ results, it’s fully deterministic. It can be safely replayed when needed. What non-deterministic calls might you need? DateTime.Now immediately comes to mind. You can address it by using the deterministic time provided by context.CurrentUtcDateTime, as shown below.
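For illustration, a sketch of the deterministic variant inside an orchestration; the TimeAwareProcess type is made up.

using System;
using System.Threading.Tasks;
using DurableTask;

public class TimeAwareProcess : TaskOrchestration<string, string>
{
    public override Task<string> RunTask(OrchestrationContext context, string input)
    {
        // DateTime.UtcNow would yield a new value on every replay, breaking
        // determinism. The context records the value in the history instead,
        // so every replay sees the same timestamp.
        DateTime deterministicNow = context.CurrentUtcDateTime;
        return Task.FromResult(deterministicNow.ToString("O"));
    }
}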

What’s next

You can build truly powerful and reliable processes on top of it. Currently, the implementation provided is based on Azure Storage and Azure Service Bus. In a branch you can find an implementation for Service Fabric, which enables you to use it in a cluster run on your development machine, on premises, or in the cloud.

Summary

Ensuring that a process can be run to a successful end isn’t an easy task. It’s good to see a library that takes a well-known and stable language construct, async-await, and lifts it to the next level, making it an important tool for writing resilient orchestrations.


Much Better Stateful Service Fabric

TL;DR

In the last post we found the COM+ interface that is the foundation of KeyValueStoreReplica persistence. It’s time to describe how the underlying performance can be unleashed for the managed .NET world.

Internal or not, I don’t care

The sad part of this COM+ layer of Service Fabric is that it’s internal. The good part of .NET is that, when running in Full Trust, you can easily overcome this obstacle with the Reflection.Emit namespace, emitting helper methods that wrap internal types. After all, there is a public surface you can start with. The more you know about MSIL and the internals of the CLR, and the more you love them, the fewer tears and less pain will be caused by the following code snippet. Sorry, it’s time for some IL madness.


var nonNullResult = il.DefineLabel();

// if (result == null)
//{
//    return null;
//}
il.Emit(OpCodes.Ldloc_0);
il.Emit(OpCodes.Brtrue_S, nonNullResult);
il.Emit(OpCodes.Ldnull);
il.Emit(OpCodes.Ret);

il.MarkLabel(nonNullResult);

// GC.KeepAlive(result);
il.Emit(OpCodes.Ldloc_0);
il.EmitCall(OpCodes.Call, typeof(GC).GetMethod("KeepAlive"), null);

// nativeItemResult.get_Item()
il.Emit(OpCodes.Ldloc_0);
il.EmitCall(OpCodes.Callvirt, InternalFabric.KeyValueStoreItemResultType.GetMethod("get_Item"), null);

il.EmitCall(OpCodes.Call, ReflectionHelpers.CastIntPtrToVoidPtr, null);
il.Emit(OpCodes.Stloc_1);

// empty stack, processing metadata
il.Emit(OpCodes.Ldloc_1);   // NativeTypes.FABRIC_KEY_VALUE_STORE_ITEM*
il.Emit(OpCodes.Ldfld, InternalFabric.KeyValueStoreItemType.GetField("Metadata")); // IntPtr
il.EmitCall(OpCodes.Call, ReflectionHelpers.CastIntPtrToVoidPtr, null); // void*
il.Emit(OpCodes.Stloc_2);

il.Emit(OpCodes.Ldloc_2); // NativeTypes.FABRIC_KEY_VALUE_STORE_ITEM_METADATA*
il.Emit(OpCodes.Ldfld, InternalFabric.KeyValueStoreItemMetadataType.GetField("Key")); // IntPtr

il.Emit(OpCodes.Ldloc_2); // IntPtr, NativeTypes.FABRIC_KEY_VALUE_STORE_ITEM_METADATA*
il.Emit(OpCodes.Ldfld, InternalFabric.KeyValueStoreItemMetadataType.GetField("ValueSizeInBytes")); // IntPtr, int

il.Emit(OpCodes.Ldloc_2); // IntPtr, int, NativeTypes.FABRIC_KEY_VALUE_STORE_ITEM_METADATA*
il.Emit(OpCodes.Ldfld, InternalFabric.KeyValueStoreItemMetadataType.GetField("SequenceNumber")); // IntPtr, int, long

il.Emit(OpCodes.Ldloc_1); // IntPtr, int, long, NativeTypes.FABRIC_KEY_VALUE_STORE_ITEM*
il.Emit(OpCodes.Ldfld, InternalFabric.KeyValueStoreItemType.GetField("Value")); // IntPtr (char*), int, long, IntPtr (byte*)

The part above is just a small snippet, partially responsible for reading a value. If you’re interested in more, here are over 200 lines of emit that bring the COM+ interface to the public surface. You don’t need to read it, though. SewingMachine delivers a much nicer interface on top of it.

RawAccessorToKeyValueStoreReplica

RawAccessorToKeyValueStoreReplica is a new high-level API provided by SewingMachine. It’s not as high level as the original KeyValueStoreReplica, as it accepts IntPtr parameters, but still, it removes a lot of layers, leaving the performance, serialization and memory management decisions to the end user of the library. You can use your own serializer, you can use stackalloc to allocate on the stack (if values are small), and much, much more. This accessor is a foundation for another feature provided by SewingMachine, called KeyValueStatefulService, a new base class for your stateful services.
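To give a feeling of what programming against such a raw surface could look like, here’s an illustrative, unsafe sketch; the Add method and its signature are my assumptions for the sake of the example, not SewingMachine’s actual API.

using System;

public static class RawUsage
{
    // Illustrative only: `Add` and its parameters are hypothetical. The point
    // is that an IntPtr-based surface lets the caller pass stack-allocated
    // memory, with no managed heap allocation for the value at all.
    public static unsafe void AddSmallValue(RawAccessorToKeyValueStoreReplica accessor, object tx)
    {
        const int size = 16;
        byte* value = stackalloc byte[size];

        // ... serialize the payload into `value` with a serializer of your choice ...

        fixed (char* key = "user/42")
        {
            accessor.Add(tx, (IntPtr)key, (IntPtr)value, size); // hypothetical call
        }
    }
}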

Summary

We saw how KeyValueStoreReplica is implemented. We took a look at the COM+ interface call sites. Finally, we observed how, by emitting IL, one can expose an internal interface, wrap it in a better abstraction and expose it to the caller. It’s time to take a look at the new stateful service.


Better Stateful Service Fabric

TL;DR

This is a follow-up entry to Stateful Service Fabric, which introduced all the kinds of statefulness you can get out of the box from the Fabric SDK. It’s time to move on to providing a better stateful service. First, we need to define what better means.

KeyValueStoreReplica implementation details

The underlying actors’ persistence is based on KeyValueStoreReplica. This class provides a high-level API over the low-level interop interfaces provided by Service Fabric. Yep, you heard it: the runtime of the Fabric is unmanaged and is exposed via COM+ interfaces. These are then wrapped in nice .NET classes just to be delivered to you. The question is: how nice are these classes? Let’s take a look at a decompiled part of a method.


private void UpdateHelper(TransactionBase transactionBase, string key, byte[] value, long checkSequenceNumber)
{
    using (PinCollection pin = new PinCollection())
    {
        IntPtr key1 = pin.AddBlittable((object) key);
        Tuple<uint, IntPtr> nativeBytes = NativeTypes.ToNativeBytes(pin, value);
        this.nativeStore.Update(transactionBase.NativeTransactionBase, key1, (int) nativeBytes.Item1, nativeBytes.Item2, checkSequenceNumber);
    }
}


pinning + Interop + pointers = fun

As you can see above, when you pass a string key and a byte[] value, a lot must be done to execute the update:

  1. value needs to be allocated every time

    As there’s no interface accepting Stream or ArraySegment<byte> that would allow you to reuse the bytes used for allocations, you always need to ToArray the payload.

  2. key and value are pinned for the duration of the update.

    Pinning is nothing more or less than prohibiting an object from being moved by the GC, the same effect you get when using the fixed keyword or GCHandle.Alloc (see the sketch after this list). When handling not that many requests, it’s OK to do it. When the GC kicks in frequently, this might be a problem.
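For illustration, both flavors of pinning mentioned above in one self-contained sketch; the fixed variant requires compiling with /unsafe.

using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        var payload = new byte[] { 1, 2, 3, 4 };

        // Explicit pinning: the array cannot be moved by the GC until Free().
        var handle = GCHandle.Alloc(payload, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            // ... pass `address` to native code ...
        }
        finally
        {
            handle.Free();
        }

        // Scoped pinning with `fixed`: same effect, limited to the block.
        unsafe
        {
            fixed (byte* p = payload)
            {
                // ... pass `p` to native code ...
            }
        }
    }
}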

The interesting part is the nativeStore field that provides the COM+ seam to the Fabric’s internal interface. This is the interface that is closest to the Fabric surface and that allows squeezing performance out of it.

Summary

You can probably see where this leads us. Seeing that underneath the .NET wrapper there is a COM+ interface that is much more low-level and allows using raw memory, we can try to access it directly, skipping KeyValueStoreReplica altogether, and write a custom implementation that will maximize performance.