Azure Functions: processing 2 billion items per day (4)

Here comes the last but not least entry in a series in which I'm describing a few patterns that enabled me to process 2 billion items per day using Azure Functions. The goal was to do it in a cost-aware and cost-effective manner, enabling fast processing with a small amount of money spent on it.

  1. part 1
  2. part 2
  3. part 3
  4. part 4

The first part was all about batching on the sender side. The second part was all about batching on the receiver side. The third showed a way to use Azure services without paying for function executions. This last part is about costs and money.

How much do I get for free?

When running under the Consumption Plan, you get something for free. What you get is the following:

  • 400,000 GB-s, where a GB-s means running with 1 GB of memory consumed for 1 s
  • 1 million executions

The first meter is measured with 1 ms accuracy. Unfortunately for the runner,

The minimum execution time and memory for a single function execution are 100 ms and 128 MB respectively.

This means that even if your function could run in under 100 ms, you'd pay for the full 100 ms anyway. Fortunately for me, with all the batching techniques from the previous entries, that's not the case: I was able to run a function for much longer, removing the tax of the minimal run time.

Now the second meter. A month has roughly 2.6 million seconds, so as long as your function executes, on average, less often than once every 2-3 seconds, the free million executions should be enough.
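A quick back-of-the-envelope check of both free meters, using the minimum billing units from above (my arithmetic, not official numbers):

minimal_run = 0.125 GB * 0.1 s = 0.0125 GB-s
runs_covered_by_the_GB-s_grant = 400,000 / 0.0125 = 32,000,000
seconds_in_a_month ≈ 30 * 24 * 3,600 = 2,592,000

In other words, for minimal runs the 1 million free executions run out long before the GB-s grant does, and staying within them means averaging no more than one execution every ~2.6 seconds.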

How much did I pay?

Not much at all. Below, you can find a table from Cost Management. The bill includes the writer used for the synthetic tests, so the cost of the processing itself was even lower.

[image: the Cost Management price table]

This would mean that I was able to process 60 billion items per month, using my optimized approach, for $3.

Is it a free lunch?

Nope, it's not. There's no such thing as a free lunch. You'd need to add all the ingredients, like Azure Storage account operations (queues, tables, blobs) and a few more (CosmosDB, anyone?). Still, you must admit that the price for the computation itself is unbelievably low.

Summary

In this series we saw that by using cloud-native approaches like SAS tokens, and by treating functions a bit differently (as batch computation), we were able to run under the Consumption Plan and process loads of items. As always, entering a new environment and embracing its rules brought a lot of goodness. Next time, before writing "just a function that will be processed a few million times per month", we need to think and think again. We may pay much less if our approach truly embraces the new backendless reality of Azure Functions.


Azure Functions: processing 2 billion items per day (3)

Here comes the third entry in a series in which I'm describing a few patterns that enabled me to process 2 billion items per day using Azure Functions. The goal was to do it in a cost-aware and cost-effective manner, enabling fast processing with a small amount of money spent on it.

  1. part 1
  2. part 2
  3. part 3

The first part was all about batching on the sender side. The second part was all about batching on the receiver side. In this part we'll move to truly backendless processing.

No backend no cry

I truly admire how solutions are migrated to the serverless world. The most interesting part is observing the one-to-one parity between the components that were there before and the functions that are created now, a.k.a. "Just make it a func!". If you see this one-to-one mapping, there's a chance that you're migrating code without changing the approach at all. Let me give you an example.

Imagine that you need to accept users' requests. These requests are extremely unlikely to fail (there are ways to model services towards that), and if they do fail, there's a natural compensating action. You could think that using a queue to store them is a perfect way of accepting a request that can be processed later on. OK, but we need a component that will accept these requests and write them to one of the Azure Storage Queues, right? Wrong.

Tokenzzzzz

Fortunately for FaaS, Azure Storage Queues have a very interesting capability: they can be accessed directly with a limited scope of rights. This functionality is provided by SAS tokens, which can grant access to Add, Update and/or Process operations, and more. You can give somebody access to only add messages, and you can limit this access to 5 minutes (re-validating whether the user may continue after that period). The options are limitless.
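A minimal sketch of a function issuing such a token, assuming the classic WindowsAzure.Storage SDK and a hypothetical "requests" queue (an illustration of the capability, not the exact code from this project):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class IssueEnqueueToken
{
    // Issues a SAS token that only allows adding messages to the queue
    // and expires after 5 minutes.
    [FunctionName("IssueEnqueueToken")]
    public static async Task<string> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("requests");
        await queue.CreateIfNotExistsAsync();

        var policy = new SharedAccessQueuePolicy
        {
            // Add only: the holder can enqueue, but not read, update or delete.
            Permissions = SharedAccessQueuePermissions.Add,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(5)
        };

        // The caller appends the token to the queue URI and enqueues directly.
        return queue.Uri + queue.GetSharedAccessSignature(policy);
    }
}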

If we can limit the access to a queue to just adding messages, why would we need a function to accept them? Yes, we might need a function to issue a few tokens at the beginning, but there's no need to consume a regular request just to move it to a queue. No need at all. Your user can use the storage service directly, with no code of yours for putting data in there.

To put it even more bluntly: you don't need a user to call a func to put a message in a queue. A user can just put the message there directly.
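On the client side, with such a token in hand, there's nothing left to write a backend for. A sketch, assuming sasQueueUri holds the URI returned by the function above:

// No function involved: the user talks straight to the storage service.
var queue = new CloudQueue(new Uri(sasQueueUri)); // SAS embedded in the URI
await queue.AddMessageAsync(new CloudQueueMessage("the actual request payload"));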

Cloud native

This moves us towards being cloud native: towards fully embracing the different services and understanding that using them no longer requires writing code around them. Your functions can easily move to a higher level, assigning permissions and returning tokens, shifting from a regular app that "was just migrated to functions" to a set of "cloud-native functions", from "using services" to "orchestrating their usage".

Where's the cherry?

We've got the cake; we need a cherry. In the last part, I'll briefly describe the costs and numbers. See you soon.

Azure Functions: processing 2 billion items per day (2)

This is the second blog post in a series in which I'm describing a few patterns that enabled me to process 2 billion items per day using Azure Functions. The goal was to do it in a cost-aware and cost-effective manner, enabling fast processing with a small amount of money spent on it.

  1. part 1
  2. part 2

In the first part you saw that batching can greatly lower the number of messages you need to send, and that it can actually broaden the selection of tools you can use to deliver the same value. My choice was to stick to good old-fashioned Azure Storage Queues, as with the newly estimated number of messages I could simply use a single queue.

Serverless side

The initial code responsible for dispatching messages was simple. It was a single function using a QueueTrigger, dispatching messages as fast as they arrived. Running under the Consumption Plan, all the scaling was done automatically. I could see a flood of log entries informing me that functions were executing properly.

The test ran for a week. I checked the amount of money spent in the new Cost Management tool and refactored the code a little. I was paying too much for doing lookup after lookup, spending too much time on finding the data needed to process a message. The new version was a bit faster and a bit cheaper. But it made me think.

If a single Table Storage operation takes ~30-40 ms and I need a few of them for a single function run, what am I paying for? Also, I knew that the data were coupled temporally: if one entry from a table was used for this message, it was highly likely to be used again within the next few seconds. And I did not care about latency; there was already a queue in front of the processing, and I was fine with the result being available within 1 s or 5 s. I asked myself: how can I use all these constraints in my favor?

Processing batches in batches

The result of my searches was as simple as this: why not process the messages, already containing batched entries, in batches as well? I could use a TimerTrigger to run the function every 5-10 s and grab all the pending messages with GetMessages, the batched read operation of Azure Storage Queues. Once they were fetched, I could either prefetch all the required data using parallel async operations with Task.WhenAll or use a local cache for the execution.
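A minimal sketch of that timer-driven drain, assuming the classic WindowsAzure.Storage SDK and a hypothetical "items" queue (the prefetch and processing bodies are placeholders):

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class DrainQueue
{
    // Runs every 10 s and drains the queue in batches of 32,
    // the maximum a single GetMessages call can return.
    [FunctionName("DrainQueue")]
    public static async Task Run([TimerTrigger("*/10 * * * * *")] TimerInfo timer)
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("items");

        while (true)
        {
            // One storage operation fetches up to 32 messages.
            var batch = (await queue.GetMessagesAsync(32)).ToArray();
            if (batch.Length == 0)
                break; // the queue is drained

            // Prefetch the lookup data for the whole batch in parallel,
            // amortizing the ~30-40 ms cost of each Table Storage operation.
            await Task.WhenAll(batch.Select(PrefetchAsync));

            foreach (var message in batch)
            {
                await ProcessAsync(message);
                await queue.DeleteMessageAsync(message);
            }
        }
    }

    static Task PrefetchAsync(CloudQueueMessage m) => Task.CompletedTask; // placeholder
    static Task ProcessAsync(CloudQueueMessage m) => Task.CompletedTask;  // placeholder
}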

Any side effects of dispatching messages on my own? I had to provide good poison-message handling and do some other work that QueueTrigger used to handle internally.
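For the poison-message part, a guard like the following could sit inside the batch loop above (the limit of 5 and the poisonQueue reference are my assumptions):

// Messages repeatedly dequeued and never deleted are likely poison;
// move them aside instead of retrying them forever.
if (message.DequeueCount > 5)
{
    await poisonQueue.AddMessageAsync(new CloudQueueMessage(message.AsString));
    await queue.DeleteMessageAsync(message);
    continue; // skip processing
}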

The outcome? A single function running every x seconds, draining the queue till it’s empty and dispatching loads of messages.

Was it worth it? The total time spent previously by functions could have been estimated as

total_time = number_of_messages * single_message_processing_time

where single_message_processing_time would include all the lookups.
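In the same notation, the batched approach could be sketched (my back-of-the-envelope formula) as

total_time = number_of_runs * (amortized_lookup_time + messages_per_run * pure_processing_time)

where amortized_lookup_time is paid once per run, thanks to the prefetching and the local cache, instead of once per message.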

With the updated approach, the number of executions was stable (~15k per day) with varying processing times, depending on the number of messages in the queue. The most important factor was the amortized cost of lookups and storage operations. The final answer: yes, it was definitely worth it, as it lowered the price greatly.

Moving on

In this part we saw the batching idea leak into the serverless side, beautifully lowering both the time and the money spent on function executions. In the next part we'll see the power of backendless.

Azure Functions: processing 2 billion items per day (1)

In this series I'll describe a few patterns that enabled me to process 2 billion items per day using Azure Functions. Yes, 2 billion items per day. The aim of this trial was not to check whether you can do it with Azure Functions; you can, easily. The goal was to do it in a cost-aware and cost-effective manner, enabling fast processing with a small amount of money spent on it.

Initial phase

The starting point was simple: have a single queue (in my case an Azure Storage Queue), simply enqueue items to it, and run the processing on a Consumption Plan. This looked pretty nice. If you ever try Azure Functions, you'll see them scale up instances when needed, just to get your workload processed in a timely manner.

I must admit that I skipped that part. When you calculate the number of operations that a single queue can handle, it won't be enough to cope with 2 billion items per day. Yes, you could scale out to multiple queues or use a different kind of queue, but that was not the point of my experiment.
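To put rough numbers behind that claim (assuming the ~2,000 messages per second scalability target documented for a single Azure Storage Queue):

single_queue_daily_capacity ≈ 2,000 msg/s * 86,400 s ≈ 173,000,000 messages

That's an order of magnitude short of 2 billion per day, if every item were its own message.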

It comes in batches

The important part that I intentionally didn't mention was the fact that the number of item producers was limited. Also, they were able to batch items and flush them once in a while. With this assumption, I was able to use a dense serialization protocol (a big no-no for JSON) and fill every single message being sent with hundreds, sometimes thousands, of items to get them processed.

In my case this lowered the number of messages greatly, by a factor of 1000, leaving the whole thing working as it was supposed to. Yes, the receiving part became a bit different, as it was required to deserialize the densely packed payload properly.
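A minimal sketch of such a batching producer, assuming the classic WindowsAzure.Storage SDK and a made-up item shape of an int and a long (the real protocol and items were different):

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Queue;

public sealed class BatchingSender
{
    // A queue message is limited to 64 KB; leave headroom for the
    // Base64 encoding applied to the binary payload.
    const int MaxPayloadBytes = 48 * 1024;
    const int BytesPerItem = sizeof(int) + sizeof(long);

    readonly CloudQueue queue;
    readonly List<(int Id, long Value)> buffer = new List<(int, long)>();

    public BatchingSender(CloudQueue queue) => this.queue = queue;

    public async Task AddAsync(int id, long value)
    {
        buffer.Add((id, value));
        // Flush once the densely packed payload would approach the limit.
        if ((buffer.Count + 1) * BytesPerItem >= MaxPayloadBytes)
            await FlushAsync();
    }

    // Packs the buffered items with a dense binary format (no JSON)
    // and sends them as a single queue message.
    public async Task FlushAsync()
    {
        if (buffer.Count == 0)
            return;

        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            writer.Write(buffer.Count);
            foreach (var (id, value) in buffer)
            {
                writer.Write(id);
                writer.Write(value);
            }
            writer.Flush();
            await queue.AddMessageAsync(
                new CloudQueueMessage(Convert.ToBase64String(stream.ToArray())));
        }
        buffer.Clear();
    }
}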

You may ask: why not Event Hubs? Being able to pack the data on my own, having the option of a delayed write, and comparing prices at the scale I'm talking about, Azure Storage Queues with a properly selected serializer still won in my calculations.

Seeing opportunities

This was the first opportunity I used to make the processing faster and cheaper. We saw that batching (smart batching in this case) greatly lowered the number of moving pieces while still delivering the same value. In the following entry, we'll move a bit deeper into the solution I built.

Unboxing yourself with books

20 books. That's the number of books I challenged myself to read during 2017. One week ago I finished my challenge with a positive score: 20 out of 20. 20 books per year? It's not a small number, and it wasn't that easy to achieve either.

There was a tricky part in this challenge: every book that I read was followed by a short review. Also, my goal was to read non-technical books. Not only non-technical, but also no novels and no poetry. This leaves areas like economics, psychology, philosophy and presentation skills; things that the developers' brotherhood is often afraid of, or not willing to invest time in.

I must admit that this year provided me with an experience that I could name unboxing. Reading 20 books from areas far from my profession or, for example, from science fiction novels, really changed my perspective. Let's not dive into details now, but rather focus on the outcome of this experiment: I truly perceive some things in a different way now.

It's your turn. If I can encourage you to do anything, please set a goal for 2018 and reach out for some non-technical, non-fiction, non-prose books.

Appendix

For the sake of reference, these are the books I read in 2017.

Yes, I put one technical book in there as it was really good.


Pricing SaaS in the clouds

Why is it so pricey? This is a question that might have popped into your head too many times, especially when looking at the pricing pages of SaaS applications. The questions that might have followed are: Why? What's behind this pricing model? How did they come up with it? Of course, one answer could be I don't care. They did it to get rich. With my money!, but this isn't very constructive, is it? What would be the minimum price you'd charge for a single user, or a single account of your app? These are much better questions to ask.

Recently, I've been playing with Azure Functions. They provide a beautiful FaaS (Function as a Service) environment, where you pay for what you use. In the Consumption Plan, you don't even pay for your app sitting in there as long as nobody uses it. Not paying when you have no users is a good thing; having users and paying something is a much better situation though. Imagine now that you have your first account registered. Putting aside the cost of staff/work/development: how much money do you need to handle this single account? How would you estimate the costs?

I think that using the word estimation here would really undersell it. It's so easy to set up a single Function App with a single storage account and just run a synthetic workload: a single account for one month. Then, using Azure Cost Management, just take a look at your bill. See the numbers. No guessing, no estimation: real costs, real money. Now, with these numbers, you can go back to the pricing model and put something on top of it, just to make it work for you. And for clouds' sake, remember to make it rain!

A missing image of a manager

You must have seen this meme: a group of people pulling a stone, and the one at the front of the group is labeled the leader. There's also another picture, showing a person sitting on the stone and doing nothing; this person is labeled the boss. If you are a software engineer, this image probably resonates with you. In my opinion, this resonance is a result of confirmation bias kicking in, conveniently proving that technical leadership is the only kind that's needed. In my opinion, there's one more image that should have been added, but was omitted.

The missing picture came to my mind when I was on my book-reading quest, consuming First, Break All the Rules. The book is based on the Gallup institute's research into how people are managed and how this changes the way they work. At the beginning, the authors dissect the poll they used for their tests. They also show a very important difference between outward and inward thinking. They describe a leader as a person who looks outward, pulling the line in a new direction, helping others to conquer new territories. Interestingly, they don't discuss the boss figure at all; they discuss the manager, looking inward.

The third picture, the one that is missing, would show the manager role. It's not about looking outward; it's about looking inward, at the people, at the team: asking them about their goals and their needs, and motivating them. I wrote manager role, as this is just a role. Maybe in your organization you'll find people having two, or even three, roles (startups, anyone?). Maybe, unfortunately for you, you'll find none (complex organizations, government-related companies ruled by policies).

At the end, I'd like to ask you for one thing. Next time you see this extremely fitting or soothing presentation slide or meme, think again: why does it suit you? Maybe it's just confirmation bias kicking in? It's popular to question authorities. Unfortunately for us, it's still not popular to question ourselves.