Meritocracy: all-in or all-out

There are books that are powerful. One of them is surely Principles by Ray Dalio.

One of the most interesting ideas presented in this book was the meritocratic approach to decision making: using weighted voting and gathering this data over and over again to improve the whole system. Noticing and measuring. Doing it over and over again. The interesting thing was the ability to veto any decision made by this system. Even more interesting was the fact that this ability (as the author claims) was never used, not even once. I think this is true to the core of meritocracy. Imagine vetoing or changing one decision, then another one, and then another one. How would that support the proposed approach? Once you play the veto card, it's all out. It's either all-in or all-out. There's no middle ground.

DotNetos – ingredients for The Secret Serialization Sauce

Today is the first day of our first DotNetos tour across Poland. I thought it would be useful to gather all the links and repositories in one post and share them with you in a nicely wrapped burrito preserving the spiciness of The Secret Serialization Sauce. All right, here go the links:

  1. XML – sorry, no links for this
  2. Jil – the fastest JSON serializer for .NET. Seriously, it's crazy fast. If you're interested in why it's that fast, this pearl from its design can reveal some of the ideas behind it.
  3. protobuf-net – my favorite implementation of Google standard for protocol buffers, for .NET. In the recent version it supports discriminated unions and much more. Yummy!
  4. Hyperion/Wire – a new player in the serialization area. Started as Wire, later forked by the Akka.NET team. I invested some time to make it faster.
  5. Simple Binary Encoding – we’re getting soooo close to the wire. It’s simple, binary and uses a special field ordering to allow reading values of a constant length with ease.
  6. NServiceBus monitoring – it uses a custom, extremely efficient protocol for reporting all metrics from a monitored endpoint to the monitoring instance. All the optimizations, the whole approach behind it is described in here.
  7. New wave of serializers – my prediction is that within a few months, a year maybe, we'll see a new wave of serializers, or at least a wave of refurbished ones, based on Span and Memory (read this in-depth article by Adam Sitnik for more info). I'm working on something related to this area and the results are truly amazing.

There you have it! All the needed ingredients for The Secret Serialization Sauce. Enjoy 🌶🌶🌶

On playing (long) game

So you heard that this company used this awesome tool and was able to ship their product in 3 months? So you heard that this book helped somebody to optimize their time spent on X in some way? So you heard that he/she dropped 10kg in one month?

With every success story comes a peril. It’s easy to celebrate a success. It’s even easier to celebrate it if you don’t mention some of the dimensions you were optimizing for.

A fast-shipping company could be a software house that doesn't care about the maintainability of its product. Ship fast, ship cheap, earn fast. That's the background of the story.

The time optimization could have been measured over one month. What about the following five? Could it be maintained? Maybe the book was about drinking more coffee and doing more?

Dropping 10 kg in a month is not a problem; you can just starve yourself. What about the following months? Are they OK? Can you maintain it?

Every single time you hear such awesome news, this miraculous solution to the problem, ask yourself what kind of game it is. A long game or a short one? Then act accordingly.

Semantic logging unleashed

Who hasn't used printf or Console.WriteLine to just get something logged? Possibly, you were a bit more advanced and used a custom library that prints these lines to a separate file, or even a rolling file. What's printed is just text. If you're aware enough, you'd probably put some markers around the printed values. In a moment of doubt, like a downtime or a serious client claiming they lost money, you'd use these logs and the state of your app to find out what's going on.

You may be on the other end of the spectrum, using approaches like event sourcing, where every single business decision is captured in a well-named, explicitly modeled object called an event. Your storage, providing a continuous log of all the events, can be treated as the source of truth. Is there anything in the middle of the spectrum?

Semantic logging is an approach where you model your log entries to be more explicit. You separate the template from the actual values passed into the specific entry. See the following example:

log.Error("Failed to log on user {UserId}", userId);

The logging template is constant. It informs about an error that occurred; in this case, a failed logon. Is there a schema? Of course there is. It's not strict, as it may change whenever a developer augments the statement. Nevertheless, the first part:

"Failed to log on user {UserId}"

provides a schema. The value passed as the second parameter is the value for this occurrence of the event. Depending on the storage system, it can be indexed and searched for. The same goes for templates. If you have this part separated (and we do have it in the semantic approach), it's easy to index against it and search for all entries of failed user logons.

I'm not a fan of implicitness; I'd say I'm totally on the opposite side of the spectrum. Still, I find the semantic logging approach explicit enough to capture important information without all the schema ceremony. Eventually, the only consumer of the schema and the payload is the logging tool. If it's smart enough to make sense of it without spinning up tens of VMs or using tens of GB of RAM, I'm fine with this semi-strict approach.
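The idea of keeping the template separate from the values can be sketched in a few lines. The post's example is C#, but here is a minimal Python illustration of the same mechanism; the `SemanticLogger` class and its method names are made up for this sketch, not a real library:

```python
import re

class SemanticLogger:
    """Toy semantic logger: stores (template, values) pairs instead of flat text."""

    def __init__(self):
        self.entries = []  # each entry: (template, {property_name: value})

    def error(self, template, *values):
        # extract property names like {UserId} from the template
        names = re.findall(r"\{(\w+)\}", template)
        self.entries.append((template, dict(zip(names, values))))

    def find(self, template):
        # "index" by template: all occurrences of one kind of event
        return [props for tpl, props in self.entries if tpl == template]

log = SemanticLogger()
log.error("Failed to log on user {UserId}", 42)
log.error("Failed to log on user {UserId}", 7)
log.error("Disk {Disk} is full", "C:")

failed_logons = log.find("Failed to log on user {UserId}")
print(failed_logons)  # [{'UserId': 42}, {'UserId': 7}]
```

Because the template never gets interpolated into the values, querying "all failed user logons" is an exact match on the template, not a fragile text search over rendered strings.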

On saying “Yes, and…”

“Yes, and…” is one of the rules of improvisational theater. It's so simple and powerful. You acknowledge what has already been said, adding more and building up the narrative. It's not only for theaters, though.


Supporting and adding new things to the idea you've just heard. A positive snowball? Why not?


So you're presenting something or doing a workshop with a colleague? There's nothing more supportive and encouraging than saying “Yes, and”. You can use different phrases like “as X mentioned before” or even “as X awesomely presented”. Sometimes I call it “high fives”. It works for presenters, it works for the group.


Having a serious conversation? How about saying “Yes, and” instead of “No, but” and stating the same thing slightly rephrased? You make a positive move; the other side makes one as well. It's a win-win.

Yes, and…

it’s your turn to acknowledge and pass it forward. The positiveness of the affirmation of yes, and…

Pearls: the protocol for monitoring NServiceBus

It’s time for another pearl of design, speed and beauty at the same time. Today, I’m bringing you a protocol used by NServiceBus to efficiently report its measurements to a monitoring endpoint. It’s really cool. Take a look! Not that I co-authored it or something… 😉

Measure everything

One of the assumptions behind monitoring NServiceBus was the ability to measure everything. By everything, I mean a few values like Processing Time, Critical Time, etc. This, multiplied by the number of messages an endpoint processes, can easily add up. Of course, if your endpoint processes 1 or 2 messages per second, the way you serialize data won't make a difference. Now, imagine processing 1000 messages a second. How would you record and report 1000 messages every single second? Do you think that “just use JSON” would work in this case? Nope, it would not.

How to report

NServiceBus is all about messages, and given that the messaging infrastructure is already in place, using messages for reporting messaging performance was the easiest choice. Yes, it gets a bit meta (sending messages about messages), but this was also the easiest for clients to use. As I mentioned, everything was already in place.


As you can imagine, a custom protocol for custom needs like this could help. There were several items that needed to be sent for every item being reported:

  1. the reporting time
  2. the value of a metric (depending on a metrics type it can have a different meaning)
  3. the message type

This triple enables a lot of aggregations and allows dealing with out-of-order messages (temporal ordering) if needed. How to report these three values? Let's first consider a naive approach and estimate the sizes needed:

  1. the reporting time – DateTime (8 bytes)
  2. the value of a metric – long (8 bytes)
  3. the message type (N bytes using UTF8 encoding)

You can see that, besides the 16 bytes, we're paying a huge tax for sending the message type over and over again. Sending it 1000 times a second does not make sense, does it? What we could do is send every message type once per message and assign an identifier to be reused within that message. This would prefix every message with a dictionary of the message types used in that specific message (Dictionary<string, int>) and leave the tuple in the following shape:

  1. the reporting time – DateTime (8 bytes)
  2. the value of a metric – long (8 bytes)
  3. the message type id – int (4 bytes)

20 bytes for a single measurement is not a big number. Can we do better? You bet!

As measurements are done in temporal proximity, the difference between reporting times won't be that big. If we extracted the minimal date to the header, we could send just the difference between the starting date and the date of the entry. This would make the tuple look like:

  1. the reporting time difference – int (4 bytes)
  2. the value of a metric – long (8 bytes)
  3. the message type – int (4 bytes)

16 bytes per measurement? Even if we're recording 1000 messages a second, this gives just 16 KB. It's not that big.

Final protocol

The final protocol consists of:

  1. the prefix
    1. the minimum date for all the entries in a message (8 bytes)
    2. the dictionary of message types mapped to ints (variable length)
  2. the array of
    1. tuples each having
      1. the reporting time difference – int (4 bytes)
      2. the value of a metric – long (8 bytes)
      3. the message type – int (4 bytes)

With this schema written in binary, we can measure everything.
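Putting the prefix and the tuple array together, the whole layout can be sketched as follows. This is a Python illustration with a guessed framing (little-endian, length-prefixed type names, explicit counts); the actual NServiceBus wire format may frame things differently:

```python
import struct

def write_report(entries):
    """entries: list of (timestamp_ms, value, message_type_name).
    Layout: min date (8 bytes), type dictionary, entry count,
    then 16-byte (time delta, value, type id) tuples."""
    base = min(t for t, _, _ in entries)
    types = {}  # message type name -> int id, built in order of first use
    for _, _, name in entries:
        types.setdefault(name, len(types))

    out = struct.pack("<q", base)            # minimum date for all entries
    out += struct.pack("<i", len(types))     # dictionary size
    for name, type_id in types.items():
        encoded = name.encode("utf-8")
        out += struct.pack("<i", len(encoded)) + encoded + struct.pack("<i", type_id)
    out += struct.pack("<i", len(entries))   # number of tuples
    for t, value, name in entries:
        out += struct.pack("<iqi", t - base, value, types[name])
    return out

report = write_report([
    (1_000_000, 5, "OrderPlaced"),
    (1_000_120, 7, "OrderShipped"),
    (1_000_250, 6, "OrderPlaced"),
])
# the header and the dictionary are paid once per report;
# each measurement then costs only 16 bytes
```

The message type names here (`OrderPlaced`, `OrderShipped`) are hypothetical. The key property carries over regardless of framing details: the variable-length strings are amortized over the whole report, so the per-measurement cost stays constant.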


Writing measurements is fun. One part, not mentioned here, is doing it in a thread/task-friendly manner. The other, even better, is designing protocols that can deal with a flood of values and won't break because someone pushed the pedal to the metal.




On structures that last

Last year I read over 20 books. One of them was Antifragile by Nassim Nicholas Taleb. One of the ideas I found intriguing was the following statement (I'm quoting from memory): things that have been around for some time are much more likely to stay than all the new and shiny.

Herds & ownership

This is my herd. You and me, we're in the same herd. That person is from another herd. In herd we trust. We, the herd, share secrets, stories and fun. The herd lasts, building its strength over time. Support, knowing each other, help – you get it for free. No matter what you call this herd – a team, a group – people have not changed that much. We need herds.

This is mine, that is yours. We own things. Collectively (we, the herd) or individually (“don't you dare touch MY phone”). We care about things we own. We care less about things we don't. We need ownership.

Say Conway’s law, one more time

If you haven’t heard about this law, here it is:

organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations

This one sentence has been the reason for never-ending debates, tooooo many presentations, and many people nodding and murmuring “Yeah, this is because of Conway's law”.

Now, think again about structures, Conway's law and things that last. Is it a valid approach to organize things in a different way? If it is, what things are included in the new order and what is excluded? Is there any chance that by designing a new approach, the proven approaches are being thrown away? Whatever you do, don't throw the baby out with the bathwater.