One of the things that is so challenging about the conversation around memory usage on the web right now is the sheer number of unknowns.
We haven't historically had ways of accurately determining how much memory a page is using in the real world, which means we haven't been able to draw a connection between memory usage and business or user engagement metrics to determine what "good" looks like. So at the moment, we have no idea how problematic memory is, other than anecdotal stories that crop up here and there.
We also haven't seen much in the way of at-scale synthetic testing to at least give us a comparison point to see how our pages might stack up with the web as a whole. This means we have no goal posts, no way to tell if the amount of memory we use is even ok when compared to the broader web. To quote Michael Hablich of the V8 team: "There is no clear communication for web developers what to shoot for."
Because we don't have data about the business impact, nor data for benchmarking, there is minimal interest in memory from the broader web development community. And, because we don't have that broader interest, browsers have very little incentive to focus on leveling up memory tooling and metrics on the web the same way they have around other performance-related areas. (Though we are seeing a few improvements here and there.)
And because we don't have better tooling or metrics...well, you can probably see the circular logic here. It's a chicken or the egg problem.
The first issue, not knowing the business impact, is going to require a lot of individual sites doing the work of adding memory tracking to their RUM data and connecting the dots. The second problem, not having benchmarks, is something we can start to fix.
Chrome introduced a new API for collecting memory related information, performance.measureUserAgentSpecificMemory. (At the moment, there's been no forward momentum from Safari on adopting this, and Mozilla was still fine-tuning some details in the proposed specification.)
An article by Ulan, who spearheaded a lot of the work for the API, provides a sample return object.
The response provides a top-level bytes property, which contains the total JS and DOM related memory usage of the page, and a breakdown object that lets you see where that memory allocation comes from.
With this information, not only can we see the total JS and DOM related memory usage of a page, but we can also see how much of that memory is from first-party frames vs third-party frames and how much of that is globally shared.
That's pretty darn interesting information, so I wanted to run some tests and see if we could start to help establish some benchmarks around memory usage.
Setting up the tests
The measureUserAgentSpecificMemory API is a promise-based API, and obtaining the result is fairly straightforward.
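A minimal sketch of calling it might look like the following (the helper names here are mine, not part of the API; the function is Chrome-only, and in production it also requires the security setup discussed next):

```javascript
// A sketch of calling the API. Chrome-only; feature-detect before use.
async function getMemoryUsage() {
  if (!performance.measureUserAgentSpecificMemory) {
    return null; // unsupported browser
  }
  // Resolves with an object like { bytes: 1000000, breakdown: [...] }
  return performance.measureUserAgentSpecificMemory();
}

// Small helper to make the top-level bytes value readable.
function toMB(bytes) {
  return (bytes / (1024 * 1024)).toFixed(1) + ' MB';
}
```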
Unfortunately, using it in production is challenging due to necessary security considerations. Synthetically, we can get around that though.
The first thing we need to do is pass a pair of Chrome flags to bypass the security restrictions, specifically, the ominously named --disable-web-security flag, as well as the --no-site-isolation flag to accurately catch cross-origin memory usage. We can pass those through with WebPageTest so all we need now is a custom metric to return the measureUserAgentSpecificMemory breakdown.
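The custom metric isn't reproduced here, but its logic can be sketched as follows. This is a reconstruction, not the article's actual code: the function name is made up, and the measurement call is injected as a parameter so the timeout logic can run outside a browser. In WebPageTest, the injected function would be `() => performance.measureUserAgentSpecificMemory()`.

```javascript
// Hypothetical reconstruction of the custom-metric logic: race the
// measurement against a 20 second timeout, then stringify the result so
// it can be stored as a plain metric value.
function memoryBreakdown(measureFn, timeoutMs = 20000) {
  const timeout = new Promise(resolve =>
    setTimeout(() => resolve('"timed out"'), timeoutMs)
  );
  const measure = measureFn().then(result => JSON.stringify(result));
  return Promise.race([measure, timeout]);
}
```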
That snippet will set up a promise to wait for measureUserAgentSpecificMemory to return a result (it currently has a 20 second timeout), then grab the full result, convert it to a string, and return it so we can dig in.
To try to come up with some benchmarks, we set up the test with that metric and the Chrome flags, and then ran it on the top 10,000 URLs (based on the Chrome User Experience Report's popularity rank) on Chrome for desktop and again on an emulated Moto G4. Some tests didn't complete due to sites being down, or blocking the test agents, so we ended up with 9,548 test results for mobile and 9,533 for desktop.
Let's dig in!
How much JS & DOM Memory Does the Web Use?
Let's start by looking at the memory usage of the top 10k URLs by percentile.
I don't know exactly what I expected to see, but I know they weren't numbers of this size.
It's well worth noting, again, that without context around the business impact, it's a bit hard to definitively say how much is too much here.
It's also unclear to me exactly how much memory is available for JS related work in the first place. The legacy API (performance.memory) provides a jsHeapSizeLimit value that is supposedly the maximum size of the JS heap available to the renderer (not just a single page), but those values are proprietary and poorly specified, so it doesn't look like we could rely on that to find our upper-bound.
Still, we can use the results from our tests as rough benchmarks for now, similar to what has been done for other metrics where we don't have good field data to help us judge the impact. Adopting the good/needs improvement/bad levels that Google has popularized around Core Web Vitals, we'd get something like the following:
That itself feels like a helpful gauge, but I'm a big believer that the closer you look at a thing, the more interesting it becomes. So let's dig deeper and see if we can get a bit more context about memory usage.
Correlation between memory and other perf metrics
As we noted earlier, measuring memory in the wild is pretty tough to pull off right now. There are a lot of security mechanisms that need to be in place to be able to accurately collect the data, which means not all sites would be able to get meaningful data today. That's a big challenge, as performance data always becomes more interesting if we can put it in the context of what the impact is on the business and overall user experience.
Where does it all come from?
Next up, let's take a closer look at what that memory is made up of.
In the memory breakdown, there's an allocation property that we can use to determine if the memory is related to first-party content, cross-origin content or a shared or global memory pool.
Any cross-origin memory will have a scope of cross-origin-aggregated. Any memory allocation that has no scope designated is shared or global memory. Finally, any bytes with a designated scope other than cross-origin-aggregated are first-party memory usage.
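Based on that description, a small helper to bucket the breakdown bytes by attribution could look like this (a sketch only; the exact entry shape is an assumption drawn from the description above):

```javascript
// Bucket breakdown entries into first-party, cross-origin, and shared
// totals, following the scope rules described above.
function classifyBreakdown(breakdown) {
  const totals = { firstParty: 0, crossOrigin: 0, shared: 0 };
  for (const entry of breakdown) {
    const scopes = (entry.attribution || [])
      .map(a => a.scope)
      .filter(Boolean);
    if (scopes.length === 0) {
      totals.shared += entry.bytes; // no scope: shared or global memory
    } else if (scopes.includes('cross-origin-aggregated')) {
      totals.crossOrigin += entry.bytes; // aggregated cross-origin frames
    } else {
      totals.firstParty += entry.bytes; // any other scope: first-party
    }
  }
  return totals;
}
```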
If we breakdown byte usage by where it's attributed, we see that 83.9% of that memory is attributed to first-party frames, 8.2% is attributed to cross-origin frames, and 7.9% is shared or global for desktop.
Mobile is very similar with 84.6% of memory attributed to first-party frames, 7.5% attributed to cross-origin frames and 7.9% being shared or global memory.
What does memory usage look like across frameworks?
I hesitated on this, but I feel like we kind of have to look at what memory usage looks like when popular frameworks are being used. The big caveat here is that this doesn't mean all that memory is for the framework itself—there are a lot more variables in play here.
We have to look, though, because many frameworks maintain their own virtual DOM. The virtual DOM is quite literally an in-memory representation of an interface that frameworks use to sync up with the real DOM and handle changes.
So naturally, we'd expect memory usage to be higher when a framework that uses this concept is in place. And, unsurprisingly, that's exactly what we see.
There's another big caveat here—this data is memory usage based on the initial page load. While that's interesting in and of itself, it doesn't really tell us anything about potential memory leaks. Running a few one-off tests, memory leaks were very common in single-page applications (throw a dart at a group of them and odds are you'll land on one with a leak)—but that's a topic for another post.
Summing it up
While there may be challenges in getting this data for your site today using real-user monitoring, the same approach I took for the tests here—some Chrome flags paired with a custom metric—makes it possible for you to start pulling memory related data into your test results today. I would love to see folks doing just that, so we can learn more about how we're doing today, what the implications are, and how we can start to improve.
- I'm guessing that images are a large part of the remaining 55%. I've written about images and memory in the past, but the basic gist is that to find the amount of memory each image requires, you take the height of the image multiplied by the width of the image multiplied by 4 bytes. So, in other words, a 500px by 500px image takes up 1,000,000 bytes of memory (500x500x4).
- Thanks to Ulan Degenbaev and Yoav Weiss for being incredibly patient with me while I was trying to set these tests up and understand the results.
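The image-memory math from the first footnote is simple enough to sketch directly:

```javascript
// Rough decoded-image memory: width × height × 4 bytes (one RGBA byte
// quartet per pixel), per the footnote above.
function imageMemoryBytes(width, height) {
  return width * height * 4;
}
```

So a 500px by 500px image works out to 500 × 500 × 4 = 1,000,000 bytes, matching the footnote's example.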