You're reading the Ruby/Rails performance newsletter by Speedshop.

If you're working on a web app that does more than ~5 requests per second, Lambda will probably increase your costs

I was talking to a friend of mine yesterday, and he mentioned that his company had a Django app deployed on AWS Lambda. I asked why, and one of his main reasons was that it's cheap.

That got me intrigued - a Rails-like webapp, on Lambda, and it's cheap? Could that really be the case?

To get my biases out front: I think most developers choose Lambda when what they really wanted was autoscaling on EC2 or Heroku. However, I'd like to lay out the case here that servers can be cheaper than serverless, and why that's the case.

Our example scenario will be a typical web application:
  • 250 millisecond response times
  • 10 requests per second (600 requests/minute)
  • Uses about 256MB of memory per process
Since Lambda only does your "computing", we'll compare computing costs alone. So, we're talking about Lambda costs versus Heroku web dyno costs.

Also, since a key feature of AWS Lambda is autoscaling, we'll have to compare against an application on Heroku that has autoscaling properly set up via add-on.

To get a quick Heroku configuration, we can use Little's Law to calculate this application's average long-term parallelism:

Average response time * average request arrival rate = average number of requests being processed in parallel

That's 0.25 seconds * 10, or 2.5 requests in flight on average. Arrivals aren't perfectly even, though, so let's budget for 4 in flight. Now, to get a system which is running at 50% utilization, we'd need double that amount of capacity. That means we need 8 processes to handle this amount of load.
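That arithmetic is compact enough to sketch in a few lines of Ruby. Note that the burst budget of 4 is a judgment call on my part, not something Little's Law gives you:

```ruby
# Little's Law: in-flight requests = arrival rate * time in system.
arrival_rate  = 10.0  # requests per second
response_time = 0.25  # seconds

avg_in_flight = arrival_rate * response_time  # => 2.5 on average

# Arrivals are bursty, so budget for 4 in flight, then double it
# to target roughly 50% utilization.
burst_budget     = 4
processes_needed = burst_budget * 2

puts processes_needed  # => 8
```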

If this math is confusing, The Rails Performance Workshop will include a section on scaling that goes in depth on this.

So for this app to have about 8 processes, we'd probably need 3 2X dynos, running 3 processes on each dyno. That costs you $150/month. With an autoscaling add-on, it will cost you another $50, so let's say that's $200/month in computing cost.
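As a quick check on those dollar figures (assuming, as above, $50/month per 2X dyno and roughly $50/month for the autoscaling add-on):

```ruby
processes_needed   = 8
processes_per_dyno = 3  # three 256MB processes fit in a 1GB 2X dyno

dynos = (processes_needed.to_f / processes_per_dyno).ceil  # => 3

dyno_cost       = dynos * 50  # $50/month per 2X dyno
autoscaler_cost = 50          # autoscaling add-on, roughly

puts dyno_cost + autoscaler_cost  # => 200
```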

Now, what about Lambda?

Lambda functions get fractional CPU time - that is, they do not get 100% unfettered access to the underlying vCPU:

"Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of one full vCPU"

So, in order to maintain our backend response times, we have to buy a 1,792MB Lambda function instead of what we actually need (256MB). This is what my Django friend does. (His response times are still really bad, but I'll give Lambda the benefit of the doubt and say that if we pay for a full vCPU, response times will be the same.)
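To put a rough number on what happens if you don't pay for the full vCPU: if CPU share scales linearly with memory, a 256MB function gets about a seventh of a vCPU. A sketch of the worst case, assuming the 250ms request is fully CPU-bound (real requests also wait on I/O, so the real penalty is somewhat smaller):

```ruby
FULL_VCPU_MB  = 1792.0  # per the AWS docs quoted above
app_memory_mb = 256

cpu_share = app_memory_mb / FULL_VCPU_MB  # ~0.14 of a vCPU

# A fully CPU-bound 250ms request would stretch to roughly:
puts 0.25 / cpu_share  # ~1.75 seconds
```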

We'll also have to use provisioned concurrency (essentially just a Lambda function that "lives" permanently, so we can avoid cold boot times) to provision for our base load.

Unlike our Heroku setup, which had to slightly "overprovision" concurrency to avoid request queueing, we can probably provision Lambda concurrency at exactly 4. Since request arrival rates are not constant, though, we'll incur additional on-demand charges whenever traffic spills over that provisioned concurrency and new Lambdas have to boot.

So, we just plug our parameters (response duration, memory allocated, and number of requests) into the Lambda calculator:

[Screenshot: the AWS Lambda pricing calculator's monthly estimate for these parameters]
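If the screenshot doesn't come through in your mail client, the two biggest line items are easy to reproduce by hand. The per-GB-second and per-request rates below are my assumptions (the us-east-1 on-demand prices as of this writing), so check the current pricing page; note that provisioned concurrency and data transfer are billed on top of this:

```ruby
# Assumed us-east-1 on-demand rates circa 2020; verify against the
# current Lambda pricing page before trusting the total.
GB_SECOND_RATE       = 0.0000166667  # per GB-second of duration
PER_MILLION_REQUESTS = 0.20

seconds_per_month = 60 * 60 * 24 * 30       # 2,592,000
requests          = 10 * seconds_per_month  # 10 req/s -> 25.92M/month
memory_gb         = 1792.0 / 1024           # 1.75GB, for a full vCPU

gb_seconds    = requests * 0.25 * memory_gb  # 250ms per request
duration_cost = gb_seconds * GB_SECOND_RATE
request_cost  = (requests / 1_000_000.0) * PER_MILLION_REQUESTS

# Roughly $194/month before provisioned concurrency and data transfer.
puts format("$%.2f", duration_cost + request_cost)
```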
So, it's $20 more expensive per month before data transfer costs, which would probably add at least another $20. Add in the additional concurrency required to deal with traffic spikes, and there's another $20 a month. So: at least $60/month more expensive.

With this math, I'm willing to go out on a limb and say that the typical web application is more expensive to deploy on AWS Lambda than it is on Heroku.

Now, the Lambda advocates who read my newsletter are probably screaming at the screen right now because of the assumptions I made around memory provisioning. I could reduce memory provisioning to what the function actually needs (256MB) and it would only cost about $75 a month. Yes, I could do that, if I were willing to accept response times of a second or worse. Most web applications cannot accept that.

With the way the math works out, Heroku's advantage increases as throughput increases. The two break even somewhere around 3 to 5 requests per second.

Even if it was $100/month cheaper, why leave Heroku's green pastures for Lambda's JSON config hell?

And if the traffic is lower, at some point, why not just put it on a $7/month hobby dyno and call it a day?

It all makes sense, and it's what you've always known since you were a kid: if you want to save money, buy in bulk, not à la carte.

-Nate
You can share this email with this permalink: https://mailchi.mp/railsspeed/when-is-aws-lambda-more-expensive-than-heroku?e=[UNIQID]

Copyright © 2020 Nate Berkopec, All rights reserved.

