You're reading the Ruby/Rails performance newsletter by Speedshop.

Looking for an audit of the performance of your Rails app? I've partnered with Ombu Labs to do just that.

Do you speak Japanese? Subscribe to the Japanese edition of this newsletter.

All performance metrics exist on a spectrum from low to high level.

For web applications, the low-level metrics tracked by teams usually include things like database query time, request queue time, and CPU load. Mid-level metrics include indicators like server response time and time to first byte. At the highest level, metrics include Largest Contentful Paint (LCP) or browser load time.

We have some intuitive understanding that low-level usually means close-to-the-metal latency numbers: small, short measurements of individual operations. But, perhaps surprisingly, high-level latency metrics contain many lower-level ones. Largest Contentful Paint typically comprises server response time, time to download and execute JS, and time to render and paint the page. In this way, high-level metrics are made up of various mid- and low-level metrics.

I like to think of these indicators as existing on a chain, where high-level metrics are connected to lower-level measures. For example, database query time is related to Largest Contentful Paint through a chain of metrics: database query time per response, server response time, time to first byte, and finally Largest Contentful Paint. Each higher-level metric comprises time spent in the lower-level metrics beneath it.

Teams get into trouble by concentrating on low- or mid-level metrics at the expense of high-level ones. The most typical example is a Rails shop that becomes excessively focused on its backend, tracking and optimizing the response time of its Rails server, when its goal should have been an improved experience for its customers. In the typical scenario, Rails server response times are around 250 milliseconds, while page load times are 2.5 seconds. Even in a magical world where this shop rewrote its entire backend in Rust and responses took 25 milliseconds, page load times would be reduced by only 9% (225 milliseconds). That's appealing, but certainly not worth the amount of effort it would require.
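The back-of-the-envelope math in that scenario can be sketched in a few lines of Ruby (the numbers are the hypothetical ones above, not real measurements):

```ruby
# Hypothetical scenario: a 2.5 second page load that includes a
# 250 millisecond server response.
page_load_ms = 2500.0
response_ms  = 250.0

# A 10x backend speedup (250 ms -> 25 ms) removes only 225 ms.
optimized_response_ms = 25.0
savings_ms = response_ms - optimized_response_ms

# Express the savings as a share of what the customer actually waits for.
percent_improvement = (savings_ms / page_load_ms) * 100
puts "Page load improves by #{savings_ms.round} ms (#{percent_improvement.round}%)"
# Prints: Page load improves by 225 ms (9%)
```

The point of the sketch: the denominator matters. Dividing the savings by server response time would look like a 90% win; dividing by what the human actually experiences shows a 9% one.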

We do performance work because it improves the high-level metrics we care about. No one really cares how fast your Rails server responds, because your customers are not HTML parsers. Your customers are human beings who have to wait for their web browser to do all the work necessary to create a webpage: download CSS and JS, compile and execute JS, hydrate the React app with backend data, and so on. If your customers are humans, the metrics you track should reflect that.

That's not to say you should never focus on mid- or low-level metrics, but that focus should come from moving down the chain to find out why high-level metrics are not performing as well as you would like. For example, if your business has a target for Largest Contentful Paint (LCP), consider the links of the chain that make it up: time to first byte (TTFB), time to download blocking CSS and JS, time to run blocking JS, and time to layout and paint. Each of these sub-steps is a lower-level metric in the chain, and added together, they equal the LCP metric.
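That decomposition can be modeled as a simple sum. A minimal Ruby sketch, using made-up numbers and illustrative step names (not values from any real monitoring tool), shows how sorting the links by size points you at the next thing to optimize:

```ruby
# Hypothetical sub-metrics (milliseconds) that add up to LCP.
# Names and numbers are illustrative only.
lcp_chain = {
  time_to_first_byte:       300,
  download_blocking_css_js: 400,
  run_blocking_js:          500,
  layout_and_paint:         300,
}

lcp_ms = lcp_chain.values.sum
puts "LCP: #{lcp_ms} ms" # 1500 ms in this example

# Sort the links largest-first: the biggest link is where
# moving down the chain should start.
lcp_chain.sort_by { |_step, ms| -ms }.each do |step, ms|
  puts format("%-26s %4d ms (%2.0f%%)", step, ms, 100.0 * ms / lcp_ms)
end
```

In this made-up example, blocking JS is the largest link, so a team chasing its LCP target would look there before touching the Rails backend at all.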

Take a look at your performance dashboards today, and, for each metric, consider the following: what part of the metrics chain does this number occupy? Are we tracking all of the high-level metrics we need to, or did we just fill these dashboards up with low-level metrics, unsure of their importance to the larger picture?

The most important metrics chain for most Rails applications that serve HTML will start with Largest Contentful Paint. This metric, now officially part of Google's search ranking algorithm, is an accurate indicator of user experience and is easily tracked and reported by almost all performance measurement tools. However, I hardly ever see it as the front-and-center indicator on performance dashboards.

Largest Contentful Paint only tracks the initial page load in the browser. Additional clicks that do not trigger a full browser navigation - React route changes, Turbo Drive or Hotwire clicks, etc. - will not count towards this metric. Clients with SPAs tend to tell me that the LCP metric doesn't matter because "most users load the application only once." Well, that's great - where are your metrics for what happens after the user has loaded the application? Crickets!

Start with the human being who interacts with your software, and work your way down the metrics chain to decide what to track - don't let the pre-made dashboards and metrics of your performance data provider guide your strategy instead!

Until next time,

Nate
I am looking to put on one or two more private Ruby/Rails performance workshops for companies in 2023.

If your company is interested in improving the performance or scalability of its Rails application, this is a great way to do it. I just wrapped a workshop in Singapore with Sephora Digital, and it went quite well. What could it do for your team?

You can reply directly to this email to contact me.
You can share this email with this permalink: https://mailchi.mp/railsspeed/track-the-right-things-metrics-chains?e=[UNIQID]

Copyright © 2023 Nate Berkopec, All rights reserved.


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.