Bounded design

If you wake up a Domain Driven Design fan and scream the word “Bounded”, the answer will be immediate and always the same: “Context”. It’s funny that this word has so much trouble leaving the DDD context. I’d like to encourage you to broaden it a little, into the design and architecture space.

Reality strikes back

It’s quite interesting that, often, when we hit a limitation of a service of any kind, the first reaction is negative. You’ve probably heard statements similar to these:

  • Why did they set the value at this level?
  • I want to be able to put a transaction on top of anything
  • I don’t need to change my design. It’s their fault that they created a service that sucks

I’m not writing about malicious providers claiming that they do something when they don’t. I’m talking about well-described limitations that you can find in the documentation of a tool that you’re using. You can try to kick the wall, but still, the limitation will be there. The better question to ask is: why?

Limit to profit

Every limitation that you put on your service provides more space for designing it. Consider the supported data types of a custom database. If all values and keys could only be of type int (4 bytes), then I would not need to care about variable-length buffers for storing them! Every key-value pair would be written in 8 bytes! And it would be memory-aligned as well! And so on. Let’s see another example.
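To make the fixed-layout argument concrete, here is a minimal sketch (the storage format is invented purely for illustration): with int-only keys and values, every pair occupies exactly 8 bytes, so the i-th pair can be located by plain arithmetic instead of scanning length prefixes.

```python
import struct

# Fixed layout: little-endian 4-byte key followed by a 4-byte value.
PAIR = struct.Struct("<ii")

def write_pairs(pairs):
    """Serialize (key, value) int pairs into one contiguous buffer."""
    return b"".join(PAIR.pack(k, v) for k, v in pairs)

def read_pair(buffer, index):
    """Random access: jump straight to pair `index` -- offset is index * 8."""
    return PAIR.unpack_from(buffer, index * PAIR.size)

buf = write_pairs([(1, 10), (2, 20), (3, 30)])
assert len(buf) == 24            # 3 pairs * 8 bytes each
assert read_pair(buf, 1) == (2, 20)
```

Nothing here needs to track buffer lengths at runtime, which is exactly the design space the limitation opens up.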

Often, when using cloud databases, there’s a notion of a partition or a shard. You are promised some kind of transactionality within one partition, but you can’t use one transaction across partitions. Again, you may ask all the blaming questions, or you may think about a partition as a unit of scalability. Imagine that you had to store all the data of a single database on one machine. That would be highly inefficient, at best. Now think. You create partitions from the very beginning. They can be moved to other machines, as all the data of one partition resides on one machine (this is an implication, so multiple partitions can still reside on the same machine). This could be a game changer, especially if you’re writing a cloud service like Azure Storage Tables.
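A minimal sketch of the idea, assuming a hypothetical hash-based placement scheme (no particular database is being described): a key deterministically maps to one partition, so everything for that key can move between machines as a unit, and transactions stay partition-local.

```python
import hashlib

# Illustrative partition count; real systems pick this for their own reasons.
NUM_PARTITIONS = 8

def partition_of(key: str) -> int:
    """Deterministically route a key to one of NUM_PARTITIONS partitions."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# All operations for one key land on the same partition...
assert partition_of("order-123") == partition_of("order-123")
# ...so a transaction spanning two keys is only guaranteed to work when
# both keys map to the same partition.
```

This is why the cross-partition transaction limitation exists: honoring it lets the provider relocate whole partitions freely.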


You can see the pattern now. Behind the majority of these limitations are design decisions. They were made to make the service work. Of course, some of them are probably sloppy, but still, whenever you see a limitation like this, think about it. Then design your solution accordingly.

On getting things done

It’s the 6th month of me using one of the task management tools. I must say that, even though I’ve raised several cases on how the tool could be improved, the methodology of putting things in the inbox, grooming them and then executing has already proved itself. Sure, from time to time a ball gets dropped, but it’s so much better than it was in the past. Let me share how I approach it.


I use an inbox. Sometimes it’s easier to put a task there than to think about where it belongs. These get clarified later.


I keep different projects for different parts of my activities. There’s a “Life” project (I hope that this one will last for some time), which covers my personal things. There’s a “House” project that holds all the items that should be done in-da-house. There are different initiatives, like DotNetos, that are projects of their own. There’s a list of movies to watch and a list of presentations/courses to take. For books, I use my trustworthy goodreads account.

On grooming and setting dates

I try not to spend a lot of time selecting things to do. My preferred approach is to put a date on a task (if I know it). On the given day, it will magically appear among the things that need to be done. Once a week I review the lists and select what to do next. Sometimes, because of setting dates, there’s not that much to select – things are already assigned to the forthcoming day/week.

Burning tasks

Of course there are still tasks that are not closed fast enough. I spent over 30 years of my life practicing getting things done without this system, so the change won’t happen in one day 🙂 Still, the number of dropped balls keeps getting lower and lower (with the new approach, they are not forgotten, but sometimes they keep dragging on).


It looks like there’s no perfect tool for my needs and every single one requires some bending. I might consider writing an extension on top of the public API if I truly need it, but for sure, I won’t be spending time trying “all these other apps”.


The future is bright 🙂 I’m doing more, I’m forgetting less and there’s still room for some improvement! Let me just check the tasks for today… Oh yes, there’s one about improving the process itself 😉

On morning routines that work

It took me some time, which included reading, trying things out and testing myself, to arrive at the rule that works for me and allows me to do more, especially in the morning. Let me share it. Hopefully it will change something for you.

Start small. The most important thing was to start small, with something I could keep up with. It could be something as simple as drinking a glass of water. All I needed to do was my first thing every morning. Once it became a habit, I moved on to another one. This created a spiral of self-reinforcement, helping me move more and more things into the very morning.

It looks like it worked pretty nicely. Currently the morning set includes:

  1. meditation
  2. preparing meals for family
  3. reading
  4. reviewing things to do
  5. a morning journal (in a very very short form)

This brings me to 7:15 – 7:30 AM, when “the real day” starts. The inspiring thing is that, before it starts, I’ve already got a lot of important things done.

Next time you fail at building a habit, start small. After achieving a small success, you’ll be able to pick something bigger.


DotNetos – a summary

12.03.2018 – 16.03.2018 is a period that will be written in golden bytes on the disk of history. Together with Konrad Kokosa and Łukasz Pyrzyk, over 5 days we visited 5 cities in Poland, presenting content related to performance in the .NET world. Time for a short summary.


A first look at the surveys, and at the feedback we keep receiving through other channels as well, shows that it was an event that:

  1. delivered value – you took knowledge away from it
  2. was well liked – in terms of its theme and the promotional campaign
  3. stood out from other events

This positive reception of our undertaking makes me extremely happy. We put a considerable amount of work into it, but it is your feedback that bears the fruit showing whether it succeeded or not. Judging by these fruits: it succeeded big time.

A week with the DotNetos

It was an extremely intensive week. Every day: wake up, breakfast, travel plus work, presentations. An extremely interesting and entirely unspicy detail is that we did not have a single conflict situation. The alignment aboard DotNetos was at 300% of the norm, and it was a truly unique experience. Konrad, Łukasz, thank you! With amigos like these, even a Carolina Reaper is nothing to fear!


Gigantic kudos go to our sponsor, 7N. They took a whooole lot of work off our shoulders related to logistics, finding venues, and dealing with hotels. I can imagine what an effort it was. Thanks a lot!!!

What’s next? What about my city? How not to allocate, how?

Both during the tour itself and after it, we received questions about whether we will appear in a particular city and what’s next for DotNetos. We are currently recovering, so that next week we can run a retrospective of the trip. We have plenty of enthusiasm and ideas; now it’s time for plans, and then for execution that lives up to the bar we set for ourselves. The DotNetos have not said their last word yet!


  1. Meetup
  2. Twitter
  3. Facebook
  4. Website

Serverless & calling no functions at all

If you’ve ever used the serverless approach, you know that limiting the number of executions can save you money. This is true especially when you run your app under a consumption plan. Once the free budget is exceeded, you’ll start paying for every execution and for the consumption of precious gigabyte-seconds. Is there a way to minimize this cost?
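The trade-off can be sketched with back-of-the-envelope arithmetic. The rates below are hypothetical placeholders, not a quote of any provider’s pricing; plug in your provider’s actual numbers.

```python
# Assumed, illustrative rates in USD -- NOT real pricing.
PRICE_PER_MILLION_EXECUTIONS = 0.20
PRICE_PER_GB_SECOND = 0.000016

def monthly_cost(executions, avg_duration_s, memory_gb):
    """Cost of a consumption plan: per-execution fee + gigabyte-seconds fee."""
    execution_cost = executions / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
    gb_seconds = executions * avg_duration_s * memory_gb
    return execution_cost + gb_seconds * PRICE_PER_GB_SECOND

# 10M calls/month at 200 ms and 0.5 GB: the gigabyte-seconds component
# dominates, so every call you avoid saves on both components at once.
cost = monthly_cost(10_000_000, 0.2, 0.5)
```

Under these assumed rates, both terms scale linearly with the number of executions, which is exactly why not calling the function at all is the cheapest optimization available.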

A cheaper option

One of the approaches often overlooked by people starting out with serverless is not calling a function at all. If the only thing your function does is obtain a blob from storage, or get a single entity from a table, there is a cheaper option. The option is to use SAS tokens, which stands for Shared Access Signature tokens. Imagine a URL that allows its holder to access blobs in a specific storage account, list them and read them.

A Shared Access Signature is a signed token that enables a bearer to perform specific actions. For instance, you can encode into a URL values enabling the user to get a specific blob or to add messages to an Azure Storage Queue. The bearer will only be able to perform the set of actions that was specified when the URL was created (you can find more about SAS tokens in the Azure docs). How can we use this to lower our costs?
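To illustrate the mechanism, here is a simplified, self-contained sketch of signed, scoped tokens in general; this is not Azure’s actual SAS format, and the key and field layout are invented. The service signs the resource, the permissions and an expiry with its secret key, and can later verify them without storing any state.

```python
import hashlib
import hmac

SECRET = b"account-key-known-only-to-the-service"  # hypothetical key

def issue_token(resource, permissions, expires_at):
    """Sign the scope so the client cannot tamper with it."""
    payload = f"{resource}|{permissions}|{expires_at}"
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_token(token, requested_permission, now):
    """Reject forged, expired, or out-of-scope requests."""
    resource, permissions, expires_at, signature = token.split("|")
    payload = f"{resource}|{permissions}|{expires_at}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # tampered or forged token
    return now < int(expires_at) and requested_permission in permissions

token = issue_token("container/blob.txt", "rl", expires_at=2_000_000_000)
assert verify_token(token, "r", now=1_000_000_000)      # read: granted
assert not verify_token(token, "w", now=1_000_000_000)  # write: never granted
assert not verify_token(token, "r", now=3_000_000_000)  # expired
```

The key property is that verification is stateless: anyone holding the secret can validate the token, which is what makes handing it straight to the client safe for the service.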

Instead of returning the payload of a blob, a function can return a properly scoped token that enables a set of operations. This way, it becomes the client’s responsibility to call Azure services directly, without going through the function as a proxy. It may not only lower your serverless bill, but also decrease latency, as the value is not copied to the function first and then sent to the client; it’s accessed directly, with no proxy at all.

The idea above looks great, but there’s a single “if”. What if the token is stolen and another party uses it?

Time and address

The first option to address possible leakage is to use time-based tokens. Instead of issuing an infinite token, issue a token for a specific time window and refresh it from the client side before the previous token expires. Another option is to use another feature of SAS tokens: the majority of methods for obtaining them let you pass an IPAddressOrRange. With this, you can specify that the token is valid only if the caller of a specific operation meets the specified criteria. If you issue a token for a single IP, you greatly limit the surface of a potential attack.
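The IP restriction can be illustrated with a small membership check. This is our own sketch of the concept; Azure performs the equivalent validation on its side when a SAS token carries an IPAddressOrRange.

```python
import ipaddress

def caller_allowed(allowed_range, caller_ip):
    """True when the caller's IP falls within the range baked into the token."""
    return ipaddress.ip_address(caller_ip) in ipaddress.ip_network(allowed_range)

assert caller_allowed("203.0.113.0/24", "203.0.113.42")      # inside the range
assert not caller_allowed("203.0.113.0/24", "198.51.100.7")  # stolen-token use
assert caller_allowed("203.0.113.42/32", "203.0.113.42")     # single-IP token
```

Combined with a short expiry, a stolen token is only usable from the permitted address and only until it expires, which shrinks the attack window considerably.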


The good, old saying that doing nothing is cheaper than doing anything also applies to serverless. Not calling a function might not only be cheaper, but also much faster, as there’s no additional step just for copying the data. It’s about time to renew your SAS token, so you’d better hurry up!

Meritocracy: all-in or all-out

There are books that are powerful. One of them is, for sure, Principles by Ray Dalio.

One of the most interesting ideas presented in this book was the meritocratic approach to decision making: using weighted voting and gathering this data over and over again to improve the whole system. Noticing and measuring. Doing it over and over again. The interesting thing was the ability to veto any decision made by this system. Even more interesting was the fact that this ability (as the author claims) was never used, not a single time. I think that is true to the core of meritocracy. Imagine vetoing or changing one decision, then another one, and then another one. How would that support the proposed approach? Once you play the veto card, it’s all out. It’s either all-in or all-out. There’s no middle ground.
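The weighted voting described above can be sketched as follows; the voters, options and weights are invented for illustration and are not taken from the book.

```python
def weighted_decision(votes, weights):
    """Pick the option with the highest total believability weight.

    votes:   voter -> chosen option
    weights: voter -> believability weight (track record on such decisions)
    """
    totals = {}
    for voter, option in votes.items():
        totals[option] = totals.get(option, 0.0) + weights[voter]
    return max(totals, key=totals.get)

votes = {"alice": "ship", "bob": "wait", "carol": "wait"}
weights = {"alice": 0.9, "bob": 0.3, "carol": 0.4}  # hypothetical weights
# One highly believable voter outweighs two less believable ones: 0.9 > 0.7.
assert weighted_decision(votes, weights) == "ship"
```

The point of the sketch is the all-in property: the system only improves if its outputs are actually followed, so a veto anywhere undermines the data being gathered everywhere.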

DotNetos – ingredients for The Secret Serialization Sauce

Today is the first day of our first DotNetos tour across Poland. I thought it would be useful to gather all the links and repositories in one post and share it with you, nicely wrapped in a burrito preserving the spiciness of The Secret Serialization Sauce. All right, here go the links:

  1. XML – sorry, no links for this
  2. Jil – the fastest JSON serializer for .NET; seriously, it’s crazy fast. If you’re interested in why it is that fast, this pearl from its design can reveal some of the ideas behind it.
  3. protobuf-net – my favorite implementation of Google standard for protocol buffers, for .NET. In the recent version it supports discriminated unions and much more. Yummy!
  4. Hyperion/Wire – a new player in the serialization area. It started as Wire and was later forked by the Akka.NET team. I invested some time in making it faster.
  5. Simple Binary Encoding – we’re getting soooo close to the wire. It’s simple, binary and uses a special field ordering to allow reading values of a constant length with ease.
  6. NServiceBus monitoring – it uses a custom, extremely efficient protocol for reporting all metrics from a monitored endpoint to the monitoring instance. All the optimizations and the whole approach behind it are described here.
  7. New wave of serializers – my prediction is that within a few months, a year maybe, we’ll see a new wave of serializers, or at least a wave of refurbished ones, based on Span and Memory (read this in-depth article by Adam Sitnik for more info). I’m working on something related to this area and the results are truly amazing.
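The constant-length-fields-first trick that Simple Binary Encoding relies on can be sketched like this; the message, field names and layout are invented for illustration and are not SBE’s actual schema format.

```python
import struct

# Fixed-size fields come first at known offsets; variable-length data goes
# at the tail: id (int64) at 0, price (double) at 8, name length (int32) at 16.
HEADER = struct.Struct("<qdi")

def encode(order_id, price, name):
    """Pack the fixed-size header, then append the variable-length name."""
    payload = name.encode("utf-8")
    return HEADER.pack(order_id, price, len(payload)) + payload

def read_price(buffer):
    """Jump straight to offset 8 -- no need to parse or decode the name."""
    return struct.unpack_from("<d", buffer, 8)[0]

msg = encode(42, 19.99, "chili-sauce")
assert read_price(msg) == 19.99
```

Because every fixed-size field sits at a constant offset, readers that only care about one field never touch the rest of the message, which is a large part of where such formats get their speed.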

Here you have it! All the ingredients needed for The Secret Serialization Sauce. Enjoy 🌶🌶🌶