TL;DR I'm currently working on SewingMachine, an OSS project of mine aimed at unleashing the ultimate performance of your stateful services written in/for Service Fabric (more posts: here). In this post I'm testing whether it would be beneficial to write a custom unmanaged writer for protobuf-net, instead of using some kind of object … Continue reading ThreadStatic vs stackalloc
This is a follow-up post about Marten's performance. It shows that saved allocations are not only about allocations and memory. They are also about your CPU ticks, and hence the speed of your library.
When using Event Sourcing as a foundation for your solution, the command part is a solved problem. Just take an aggregate version and a command, apply the command onto a state and try to append the created events to a store, checking the version again. There is a read part of this as well, called views, which is nothing more … Continue reading Views’ warm up for Event Sourcing
If you want to write a performant multi-threaded application, which is actually an aim of RampUp, you have to deal with padding. The gains can be pretty big, considering that working with threads means you need to give them their own spaces to work in. False sharing False sharing is nothing … Continue reading StructLayoutKind.Sequential not
There are multiple articles describing the performance of Azure Table Storage. You've probably read Troy Hunt's entry, Working with 154 million records on Azure Table Storage.... You may have invested your time in reading How to get the most out of Windows Azure Tables as well. My question is: have you really considered the … Continue reading The cost of scan queries in Azure Table Storage
Applications have layers. It's still pretty common to see an enterprise application built with layers like DAL, Business Logic (or Domain), Services, etc. Let's not discuss this abomination itself. Let us rather consider the flow of the data within the application. SELECT * FROM That's where the data are stored. Let us consider a … Continue reading Do we really need all these data transformations?
I hope you're aware of the LMAX tool for fast in-memory processing called the Disruptor. If not, it's a must-see for today's architects. It's nice to see your process eating messages at speeds of ~10 million/s. One of the problems addressed in the latest release was a fast multi-producer, allowing one to instantiate multiple tasks … Continue reading Disruptor with MultiProducer