DotNetos – a summary

12.03.2018 – 16.03.2018 is a period that will be written in golden bytes on the disk of history. Together with Konrad Kokosa and Łukasz Pyrzyk, over 5 days we visited 5 cities in Poland, presenting content related to performance in the .NET world. Time for a short summary.


A first look at the surveys, and at the feedback we keep receiving not only through them, shows that it was an event that:

  1. brought value – you took knowledge away from it
  2. was enjoyable – in terms of the main theme and the promotional campaign
  3. stood out from other events

I'm extremely happy with this positive reception of our undertaking. We put quite a lot of work into it, but it's your feedback, the fruit, that shows whether it worked out or not. Judging by that fruit: it worked out very well.

A week with DotNetos

It was an extremely intense week. Every day: wake up, breakfast, travel plus work, presentations. An extremely interesting and not at all spicy detail is that we did not have a single conflict. The alignment aboard DotNetos was at 300% of the norm and it was a truly unique experience. Konrad, Łukasz, thank you! With amigos like these, even a Carolina Reaper is nothing to fear!


Gigantic kudos go to our sponsor, 7N. They took a whooole lot of work off our shoulders: logistics, finding venues, contacting hotels. I can only imagine how much effort that was. Thanks a lot!!!

What's next? What about my city? How do you not allocate, how?

Both during the tour itself and after it, we received questions about whether we will show up in a particular city and what's next for DotNetos. Right now we are recovering, so that next week we can run a retrospective of the trip. We have plenty of enthusiasm and ideas; now it's time for plans, and then for execution that matches the bar we set for ourselves. DotNetos have not said their last word yet!


  1. Meetup
  2. Twitter
  3. Facebook
  4. Website

Serverless & calling no functions at all

If you've ever used a serverless approach, you know that limiting the number of executions can save you money. This is especially true when you run your app under a consumption plan. Once the free budget is exceeded, you start paying for every execution and for every precious gigabyte-second consumed. Is there a way to minimize this cost?

A cheaper option

One approach that is often overlooked by people starting out in a serverless environment is not calling a function at all. If the only thing your function does is fetch a blob from storage or read a single entity from a table, there is a cheaper option: SAS tokens, which stands for Shared Access Signature tokens. Let's take a look at the following example, which generates a URL that allows you to list and read blobs in a specific storage account.
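
What follows is a minimal sketch of such a snippet, using the classic WindowsAzure.Storage SDK; the connection string and the container name ("reports") are placeholders, not taken from any real setup.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasExample
{
    public static string CreateListAndReadUrl(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("reports");

        // Allow listing and reading blobs in this container for the next hour.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.List | SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        };

        // GetSharedAccessSignature returns the signed query string ("?sv=..."),
        // which, appended to the container URI, forms a directly usable URL.
        return container.Uri + container.GetSharedAccessSignature(policy);
    }
}

A client holding this URL can list and download blobs straight from the storage account, without a single function execution.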

A Shared Access Signature is a signed token that enables its bearer to perform specific actions. For instance, you can encode into a URL values that allow a user to get a specific blob or to add messages to an Azure Storage queue. The bearer will only be able to perform the set of actions that was specified when the URL was created (you can find more about SAS tokens in the Azure docs). How can we use this to lower our costs?

Instead of returning the payload of a blob, a function can return a properly scoped token that enables a set of operations. This way, it becomes the client's responsibility to call the Azure services directly, without going through the function as a proxy. It may not only lower your serverless bill but also decrease latency, as the value is not copied to the function first and then sent to the client; it is accessed directly, with no proxy at all.

The idea above looks great, but there's a single catch. What if the token is stolen and another party uses it?

Time and address

The first option to address possible leakage is to use time-limited tokens. Instead of issuing a token that never expires, issue one for a specific time window and refresh it from the client side before the previous token expires. Another option is to use another feature of SAS tokens: most of the methods for obtaining them let you pass an IPAddressOrRange. With this, you can specify that the token is valid only if the caller of a specific operation meets the specified criteria. If you issue a token for a single IP, you greatly limit the surface of a potential attack.
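
Here is a sketch of both options combined, again assuming the classic WindowsAzure.Storage SDK; clientIp is assumed to come from the incoming request, and the five-minute window is an arbitrary choice.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ScopedTokens
{
    public static string IssueScopedToken(CloudBlobContainer container, string clientIp)
    {
        // Short-lived token: the client is expected to ask for a fresh one before it expires.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(5)
        };

        // Pin the token to the caller's IP and require HTTPS,
        // so a leaked URL is useless from anywhere else.
        return container.GetSharedAccessSignature(
            policy,
            null,                               // no stored access policy
            SharedAccessProtocol.HttpsOnly,     // reject plain HTTP
            new IPAddressOrRange(clientIp));    // valid only from this address
    }
}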


The good old saying that doing nothing is cheaper than doing anything applies to serverless as well. Not calling a function may not only be cheaper, but also much faster, as there's no additional step just for copying the data. It's about time to renew your SAS token, so you'd better hurry up!

Meritocracy: all-in or all-out

There are books that are powerful. One of them is, for sure, Principles by Ray Dalio.

One of the most interesting ideas presented in this book is the meritocratic approach to decision making: using weighted voting and gathering that data over and over again to improve the whole system. Noticing and measuring. Doing it over and over again. The interesting thing was the ability to veto any decision made by this system. Even more interesting was the fact that this ability (as the author claims) was never used, not a single time. I think that's true to the very core of meritocracy. Imagine vetoing or changing one decision, then another one, and then another. How would that support the proposed approach? Once you play the veto card, it's all out. It's either all-in or all-out. There's no middle ground.

DotNetos – ingredients for The Secret Serialization Sauce

Today is the first day of our first DotNetos tour across Poland. I thought it would be useful to gather all the links and repositories in one post and share them with you in a nicely wrapped burrito, preserving the spiciness of The Secret Serialization Sauce. All right, here go the links:

  1. XML – sorry, no links for this
  2. Jil – the fastest JSON serializer for .NET; seriously, it's crazy fast. If you're interested in why it is that fast, this pearl from its design can reveal some of the ideas behind it.
  3. protobuf-net – my favorite .NET implementation of Google's Protocol Buffers standard. The recent version supports discriminated unions and much more. Yummy! (A small usage sketch follows this list.)
  4. Hyperion/Wire – a new player in the serialization area. It started as Wire and was later forked by the Akka.NET team. I invested some time in making it faster.
  5. Simple Binary Encoding – we're getting soooo close to the wire. It's simple, it's binary, and it uses a special field ordering that allows reading constant-length values with ease.
  6. NServiceBus monitoring – it uses a custom, extremely efficient protocol for reporting all metrics from a monitored endpoint to the monitoring instance. All the optimizations and the whole approach behind it are described here.
  7. New wave of serializers – my prediction is that within a few months, a year maybe, we'll see a new wave of serializers, or at least a wave of refurbished ones, based on Span<T> and Memory<T> (read this in-depth article by Adam Sitnik for more info). I'm working on something related to this area and the results are truly amazing.
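
As a taste of item 3, here is a minimal protobuf-net sketch; the Order type and its fields are invented purely for illustration.

using System.IO;
using ProtoBuf;

[ProtoContract]
public class Order
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Customer { get; set; }
}

public static class SerializationSample
{
    public static Order RoundTrip(Order order)
    {
        using (var stream = new MemoryStream())
        {
            // Field numbers, not names, go on the wire, which keeps the payload compact.
            Serializer.Serialize(stream, order);
            stream.Position = 0;
            return Serializer.Deserialize<Order>(stream);
        }
    }
}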

Here you have it! All the ingredients you need for The Secret Serialization Sauce. Enjoy 🌶🌶🌶

On playing (long) game

So you heard that some company used this awesome tool and was able to ship their product in 3 months? So you heard that this book helped somebody optimize the time they spend on X? So you heard that someone dropped 10 kg in one month?

With every success story comes a peril. It’s easy to celebrate a success. It’s even easier to celebrate it if you don’t mention some of the dimensions you were optimizing for.

A company that ships fast could be a software house that doesn't care about the maintainability of its product. Ship fast, ship cheap, earn fast. That's the background of that story.

The time optimization could have been measured over 1 month. What about the following 5? Can it be sustained? Maybe the book was just about drinking more coffee and doing more?

Dropping 10 kg in a month is not a problem; you can just starve yourself. What about the following months? Are they OK? Can you keep it up?

Every single time you hear this awesome news, this miraculous solution to a problem, ask yourself what kind of game it is. A long game or a short one? Then act accordingly.

Semantic logging unleashed

Who hasn't used printf or Console.WriteLine just to get something logged? Possibly you were a bit more advanced and used a custom library that prints such lines to a separate file, or even a rolling file. What's printed is just text. If you're aware enough, you'd probably put some quotes around the printed values. In a moment of doubt, like downtime or a serious client claiming they lost money, you'd try to use these logs and the state of your app to find out what's going on.

You may be on the other end of the spectrum, using approaches like event sourcing, where every single business decision is captured in a well-named, explicitly modeled object called an event. Your storage, providing a continuous log of all the events, can be treated as the source of truth. Is there anything in the middle of the spectrum?

Semantic logging is an approach where you model your log entries to be more explicit. You separate the template from the actual values passed into a specific entry. See the following example:

log.Error("Failed to log on user {UserId}", userId);

The logging template is constant. It informs about an error that occurred; in this case, a failed logon. Is there a schema? Of course there is. It's not strict, as it may change whenever a developer augments the statement. Nevertheless, the first part:

"Failed to log on user {UserId}"

provides a schema. The value passed as the second parameter is the value for that particular occurrence of the event. Depending on the storage system, it can be indexed and searched for. The same goes for templates. If you have this part separated out (and in the semantic approach we do), it's easy to index against it and search for all entries of failed user logons.
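
For a concrete picture, here is a sketch assuming Serilog as the logging library and its console sink with the JSON formatter; it shows that the template and the property travel as separate fields.

using Serilog;
using Serilog.Formatting.Json;

public static class SemanticLoggingSample
{
    public static void Main()
    {
        // Emit events as JSON so the template and the properties stay separate.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console(new JsonFormatter())
            .CreateLogger();

        var userId = 42;
        Log.Error("Failed to log on user {UserId}", userId);

        // The emitted entry contains, among other fields:
        //   "MessageTemplate": "Failed to log on user {UserId}"
        //   "Properties": { "UserId": 42 }
        // so a sink can index the template and the value independently.

        Log.CloseAndFlush();
    }
}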

I'm not a fan of implicitness; I'd say I'm totally on the opposite side of the spectrum. Still, I find the semantic logging approach explicit enough to capture important information without all the schema ceremony. In the end, the only consumer of the schema and the payload is the logging tool. If it's smart enough to make sense of it without spinning up tens of VMs or using tens of GB of RAM, I'm fine with this semi-strict approach.

On saying “Yes, and…”

"Yes, and…" is one of the rules of improvisational theater. It's so simple and powerful. You acknowledge what has already been said, add more, and build up the narrative. It's not only for theaters, though.


Supporting and adding new things to the idea you've just heard. A positive snowball? Why not?


So you're presenting something or running a workshop with a colleague? There's nothing more supportive and encouraging than saying "Yes, and". You can use different phrases like "as X mentioned before" or even "as X awesomely presented". Sometimes I call it "high fives". It works for the presenters, and it works for the group.


Having a serious conversation? How about saying "Yes, and" instead of "No, but", stating the same thing but slightly rephrased? You make a positive move, and the other side does as well. It's a win-win.

Yes, and…

it's your turn to acknowledge and pass it forward. The positivity of the affirmation of yes, and…