Our third party metrics and dashboard have had an exciting revamp. With new metrics like blocking CPU, you can now see exactly who is really to blame for a crappy user experience. We've also given you the ability to monitor individual third parties over time and create performance budgets for them.
Or is it really you, and not me? We now automatically group all the requests in our third party waterfall chart, letting you easily identify all the third party services used on your website.
For each third party, you get the number of requests and size for each content type. There's also a first party comparison you can toggle on/off to see what proportion of your requests come from first party vs third party.
To help focus our attention on CPU, several new performance metrics have been defined and evangelized over the last year or three. In this post I'm going to focus on these:
Here's a figure to help visualize these metrics.
SpeedCurve now has different chart sizes and a special TV Mode to help you build a performance culture in your organisation.
From its inception, SpeedCurve has always been designed to look awesome on the big screen. We see SpeedCurve as not just a tool for debugging web performance, but as a communication tool to rally your organisation around the importance of web performance. SpeedCurve helps bring together the development, design, and management teams, and gets everyone focused on turning your product into a fast and joyous experience for your users.
<!-- A preload hint: fetch main.js early, without executing it yet -->
<link rel="preload" href="main.js" as="script">
This blog post has a simple conclusion: Load scripts asynchronously! Simple, and yet the reality is that most scripts are still loaded synchronously. Understanding the importance of loading scripts asynchronously might help increase adoption of this critical performance improvement, so we're going to walk through the evolution of async script loading starting way back in 2007. Here's what loading 14 scripts looked like in Internet Explorer 7:
The Internet really is a complicated series of tubes. As a result, any time-based metrics we capture can have variations as those tubes wobble a bit as we shove data down them. To help reduce that variation, when we do synthetic tests, we always load a page at least three times and take the median result. But even then you'll find that, over time, your charts will still show plenty of variation.
All that variation can make it difficult to see if your metrics are getting better or worse over time. We recently released a couple of new features in your Synthetic and LUX charts that make it easier for you to visualize trends and compare discrete time periods within your historical data.
To make it easier for you to see which direction your metrics are heading, we've added an option to all your charts to show a trend line which helps you visualize how a particular metric is changing over the timespan of the chart. You can hover over the legend to highlight a trend line or hover over any point on the trend to see the estimated value at that point.
The number of performance metrics is large and increases every year. It's important to understand what the different metrics represent and pick metrics that are important for your site. Our Evaluating rendering metrics post was a popular (and fun) way to compare and choose rendering metrics. Recently I created this timeline of performance metric medians from the HTTP Archive for the world's top ~1.3 million sites:
This week we've made some pretty exciting new changes to your Favorites dashboards. Aside from a brand-new chart editor interface, you'll also notice that we've introduced two new chart types: histograms and correlations.
In this post, I'm going to talk through some of the features in our new chart editor. I'll also explain in detail why I think histograms are such an important tool in your performance toolkit, and how you can get some fascinating insights by correlating other metrics on top of a histogram.
Here at SpeedCurve, we are continually gathering detailed performance data from tens of thousands of web pages. This gives us a fairly rare opportunity to analyse and aggregate performance metrics to gain some interesting insights. In this post, I'm going to analyse some browser-based paint timing metrics: First Paint & First Contentful Paint (defined in the Paint Timing spec and implemented in Chromium). I'm also going to analyse First Meaningful Paint (defined in a draft spec and implemented as a Chromium trace metric).
The aim of almost any performance optimisation on the web is to improve the user experience. The folks at Google have been pushing this sentiment with a focus on user-centric performance metrics, which aim to answer four questions about users' experiences:
First Paint (FP) measures the point at which pixels are first rendered to the screen after navigating to a new page. First Contentful Paint (FCP) is slightly more specific, in that it measures the point at which text or graphics are first rendered to the screen. Both of these metrics are available in Chromium browsers (Chrome, Opera, Samsung Internet, etc) via the Performance API:
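If you want to pull these numbers yourself, here's a minimal sketch (assuming a browser that implements the Paint Timing spec):

// Paint timings are exposed as PerformanceEntry objects of type "paint"
const paints = performance.getEntriesByType('paint');
for (const entry of paints) {
  // entry.name is either "first-paint" or "first-contentful-paint"
  console.log(entry.name + ': ' + Math.round(entry.startTime) + ' ms');
}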
The paint timing metrics are important because they aim to answer the first question: is it happening? My analysis will look at performance data from some popular websites in an attempt to figure out whether the paint timing metrics really do answer that question.
One of the longest-standing items on my performance monitoring tool wishlist is an automatically-generated performance improvement checklist, with the improvements ordered by the impact that they will have on the website. In a previous life, I spent countless hours writing performance reports and conducting A/B performance tests to figure out which change would have the biggest impact on a website's performance.
So I'm understandably very excited that today we're announcing the new Improve dashboard – a prioritised performance improvement checklist that aggregates Lighthouse and Google PageSpeed results across all the URLs in your site to identify the most impactful performance changes you can make.
In the year since Google rolled out Lighthouse, it's safe to say that "Will you be adding Lighthouse scoring?" is one of the most common questions we've fielded here at SpeedCurve HQ. And since Google cranked up the pressure on sites to deliver better mobile performance (or suffer the SEO consequences) earlier this month, we've been getting that question even more often.
We take a rigorous approach to adding new metrics. We think the best solution is always to give you the right data, not just more data. So we're very happy to announce that after much analysis and consideration, we've added Lighthouse scores to SpeedCurve. Here's why – as well as how you can see your scores if you're already a SpeedCurve user.
When looking to improve the performance and user experience of our sites we often start by looking at the network:
What's the time to first byte?
How many requests are we making and how long are they taking?
What's blocking the browser from rendering my precious pixels?
While these are entirely valid questions, over the last few years we've seen a growing number of web performance issues that are caused, not by the network, but by the browser's main thread getting clogged up by excessive CPU usage.
We're excited to announce the availability of the First Input Delay metric as part of LUX, SpeedCurve's RUM product.
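First Input Delay measures the gap between a user's first interaction and the moment the browser's main thread is free to start handling it. As a rough sketch of how that can be observed in the browser (this is the standard Event Timing pattern, not necessarily how LUX collects it):

new PerformanceObserver(function (list) {
  for (const entry of list.getEntries()) {
    // Delay between the interaction happening and its handler starting to run
    const fid = entry.processingStart - entry.startTime;
    console.log('First Input Delay: ' + Math.round(fid) + ' ms');
  }
}).observe({ type: 'first-input', buffered: true });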
This may sound counter-intuitive, but we don't want you to spend countless hours using SpeedCurve. In fact, our goal is to make your web performance data so accessible, understandable, and actionable that you can get everything you need from us in just a few minutes.
That's why we're so excited to announce the brand-new Status dashboard – a visualization that lets you see at a glance all your web performance budgets, as well as which budgets have been violated.
Keep reading to find out how to start using your Status dashboard to diagnose and fix your performance pains. But first, let's talk about why we built this feature.
Part of building a strong performance culture in an organisation is lowering the barrier to getting people excited about performance. One of the most effective ways I've found to do this is to send around a performance report every week that can, at a glance, answer an important question: did performance get better or worse?
That was the motivation behind our new Weekly Report feature. Now you can configure any of your Favorites dashboards to be summarized in a weekly email, like this one:
It's exciting working at SpeedCurve and pushing the envelope on performance monitoring to better measure the user's experience. We believe that when it comes to web performance it's important to measure what the user sees and experiences when they interact with your site. A big part of our focus on metrics has been around rendering, including comparing TTI to FMP, Hero Rendering Times, and critical blocking resources.
The main bottleneck when it comes to rendering is the browser main thread getting blocked. This is why we launched CPU charts for synthetic testing over a year ago. Back then it wasn't possible to gather CPU information using real user monitoring (RUM), but the Long Tasks API changes that. Starting today, you can track how CPU impacts your users with SpeedCurve's RUM product, LUX.
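Under the hood, the Long Tasks API reports any main-thread task that runs for longer than 50ms. Here's a minimal sketch of collecting those entries yourself (LUX takes care of aggregating and beaconing them for you):

new PerformanceObserver(function (list) {
  for (const entry of list.getEntries()) {
    // Each entry is a main-thread task that blocked for more than 50ms
    console.log('Long task: ' + Math.round(entry.duration) + ' ms at ' + Math.round(entry.startTime) + ' ms');
  }
}).observe({ entryTypes: ['longtask'] });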
We're excited to announce that we've launched Last Painted Hero as an official metric. Last Painted Hero is a synthetic metric that shows you when the last piece of critical content is painted. Keep reading to learn how Last Painted Hero works, why (and how) we created it, and how it can help you understand how your users perceive the speed of your pages.
When choosing the right performance metric, my soapbox for the last few years has been "not every pixel has the same value". In other words, rather than chase dozens of different performance metrics, focus on the metrics that measure what's critical in your page.
Here at SpeedCurve, we think it's good to focus on rendering metrics, because they're a closer approximation to what the user experiences. There are some good rendering metrics out there, like start render and Speed Index, but the downside to these metrics is that they give every pixel the same value. For example, if the background renders and some ads render, that could improve your start render time and Speed Index score, but it might not have a big impact on the user's experience. Instead, it's better to measure the parts of the page that matter the most to users. We call those parts of the page the "hero elements".
We're excited to announce that SpeedCurve is partnering with Tim Kadlec to provide consulting services to our customers!
Tim is a recognized expert when it comes to web performance. He has spoken at numerous conferences including Velocity, Fluent, QCon, SmashingConf, Beyond Tellerand, and WebStock. He wrote High Performance Images and Implementing Responsive Design, as well as contributing to other books. Mark, Tammy and I have each collaborated with Tim on side projects. We're full of gleeful anticipation as we look forward to this opportunity to work together again.
In the first sentence I mentioned that this is a partnership. Here's what that means: Tim will continue to do consulting outside of SpeedCurve, and if you're not a SpeedCurve customer we encourage you to contact him directly. Tim will also be running SpeedCurve's consulting services. This partnership brings special advantages to SpeedCurve customers:
SpeedCurve comes with a great set of dashboards for synthetic and RUM. But we know that one size does not fit all when it comes to data charts, which is why we've invested so much work in the Favorites dashboards. For customers who use LUX, Favorites provides a place to create custom charts that combine metrics from synthetic and RUM.
We just added some new RUM metrics from LUX in Favorites to allow for even more customized monitoring:
We’re kicking off 2018 with a bang and a donut. We’ve been busy preparing all-new faster test agents and they’re now available for you to switch to – at no extra cost!
Go to your Settings to switch to faster test agents.
All our agents now run on much faster AWS c4.large instances, which better approximate the hardware of a current desktop machine. We've also switched to the latest WebPageTest agent running on Linux.
How much faster? 20-30%
When you switch to the new agents, some of your metrics will get 20-30% faster, so pick a time to switch that suits your team. Between projects or at the start of the month are good times to set a new baseline.
One of the best parts of my job is getting to talk to so many people from so many different companies about web performance. Every company is different, and I learn a ton from talking to each one. But one question that almost every person asks me – regardless of what industry they're in or the size of their organization – is this:
How can I create a stronger culture of web performance at my company?
Creating a performance culture means creating a feedback loop in your company or team that looks like this:
In other words: Get people to care, show them what they can do to help, and then give them positive reinforcement when you get results. It's basic human psychology, and it might seem obvious when you see it in a super-simple graphic. But it's surprisingly easy to miss these steps and instead skip ahead to the part where you invest in awesome performance tools, and then wonder why all your performance efforts feel like such a painful uphill slog.
In this post, I'm going to share some proven tips and best practices to help you create a healthy, happy, celebratory performance culture.
At SpeedCurve, we're fond of the phrase "a joyous user experience". Creating this joy requires delivering what users want as quickly as possible. It's important that the critical content is downloaded and rendered before users get frustrated.
Network metrics have been around for decades, but rendering metrics are newer. Speed Index. Start Render. Time to First Interactive. First Meaningful Paint. These are a few of the rendering metrics that currently exist. What do they mean? How do they compare? Which are best for you? Let's take a look.
Rolling out new features is always a blast, and it's extra rewarding when the new feature is a response to a customer request. We've had many conversations with SpeedCurve users who've told us that multiple Favorite dashboards would be a huge benefit for their teams.
Today, we're very excited to announce that multiple Favorites dashboards are now available. Here's why you need them and how to create them.
One of the best – and worst – things about real user monitoring is that it gives you unparalleled access to massive amounts of user data. The problem is when all this data leads to data indigestion. How do you know where to begin? And how do you know what to leave out in order to present a clear case for performance?
At SpeedCurve, we care about more than just showing you all your data. We want to show you the most important data. And we want to make it easy for you to share that data with people throughout your organization. That’s why we’re excited about the newest addition to our family of visualizations: engagement charts.
Being able to track which changes have an impact – either positive or negative – on your site’s performance is an important part of performance monitoring that can provide valuable feedback to your team. We wanted to make it easier to see at a glance all the changes to your site, without the cognitive overhead of interpreting charts. That’s why we created the new Changes dashboard, which gives you a newsfeed-style overview of recent activity in your SpeedCurve account.
Your Changes dashboard shows your performance budget alerts, deploys, site notes, and SpeedCurve product updates:
I’m at Shop.org this week, having really interesting conversations with online retailers. What I love about talking with this crowd is that – like me – they're super focused on user-perceived performance. Not surprisingly, we have a lot to talk about.
Making customers happy is the not-so-secret secret to retail success. Delivering a fast, consistent online experience has been proven to measurably increase every metric retailers care about – from conversions and revenue to retention and brand perception. (In fact, there's so much research in this area, I dedicated an entire chapter in my book to it. You can also find a number of great studies on WPO Stats.)
Delivering great, fast online experiences starts with asking two questions:
The good news is that most of the issues that are making pages slow for your shoppers are right on your pages, which means you have control over them. Here's an overview of the most common performance issues on retail sites, and how you can track them down and fix them.
I’m super excited to be able to say that I’ve joined Mark, Steve, and Tammy at SpeedCurve!
I’ve watched how Mark has shown over the last couple of years that performance monitoring doesn’t have to be dry and data-heavy; it can be insightful, interactive, and actionable. I’ve also been a follower of Steve’s work for many years. In fact, I should probably thank Steve for providing me with the knowledge that got me interested in web performance in the first place! Tammy’s work has been really interesting to follow, too - her focus on real people and how web performance impacts the way they use our websites is something that resonates strongly with me.
Joseph making BBC News way faster for all users.
We've improved our already fantastic interactive waterfall chart with a new collapsed mode that highlights all the key browser events. This lets you quickly scan all the events that happen as the page loads, and if you scrub your mouse across the waterfall you can easily correlate each event with what the user could see at that moment.
Along with all the browser metrics you also get to see our new hero rendering times in context. Click on any event to see a large version of that moment in the filmstrip.
The key to a good user experience is quickly delivering the content your visitors care about the most. This is easy to say, but tricky to do. Every site has unique content and user engagement goals, which is why measuring how fast critical content renders has historically been a challenging task.
That's why we're very excited to introduce Hero Rendering Times, a set of new metrics for measuring the user experience. Hero Times measure when a page's most important content finishes rendering in the browser. These metrics are available right now to SpeedCurve users.
More on how Hero Rendering Times work further down in this post. But first, I want to give a bit of back story that explains how we got here.
A couple of months ago, someone asked if I'd written a page bloat update recently. The answer was no. I've written a lot of posts about page bloat, starting way back in 2012, when the average page hit 1MB. To my mind, the topic had been well covered. We know that the general trend is that pages are getting bigger at a fairly consistent rate of growth. It didn't feel like there was much new territory to cover.
Also: it felt like Ilya Grigorik dropped the mic on the page bloat conversation with this awesome post, where he illustrated why the "average page" is a myth. Among the many things Ilya observed after analyzing HTTP Archive data for desktop sites: when you have outliers that weigh in at 30MB+ and more than 90% of your pages are under 5MB, an "average page size" of 2227KB (back in 2016) doesn't mean much.
The mic dropped. We all stared at it on the floor for a while, then wandered away. And now I want to propose we wander back. Why? Because the average page is now 3MB in size, and this seems like a good time to pause, check our assumptions, and ask ourselves:
Is there any reason to care about page size as a performance metric? And if we don't consider page size a meaningful metric, then what should we care about?
Being able to monitor and measure the performance of your pages is crucial. You know that already. You also know that the next step is to quickly find out what’s hurting your pages so you can stop the pain.
You want to know:
We’re super excited to announce that you can now use SpeedCurve to answer these questions.
If Mark and Steve invited you to work with them, what would you say? Exactly.
Okay, I have to elaborate a bit more about why I’m ridiculously excited about working with Mark and Steve. My first foray into the performance space was at the Velocity Conference in 2009. If you had told me then that someday I’d be working with that tall guy rocking the main stage, I would’ve thanked you for the kind words… while secretly thinking you were nuts. But here I am!
Tammy at International Women's Day Tech Talks in Toronto
SpeedCurve is a SPA (Single Page App) so we construct the charts dynamically using JSONP. It works great, but we're always looking for ways to make the dashboards faster. One downside to making requests dynamically is that the browser preloader isn't used. This isn't a factor for later SPA requests, but on the first page view the preloader might still bring some benefits. Or maybe not. We weren't sure, so we ran an A/B test. Long story short, doing the first JSONP request via markup caused charts to render 300 milliseconds faster.
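To make that concrete, here's roughly what the A/B test compared (the endpoint and callback names below are made up for illustration):

<!-- Before: the first JSONP request is injected from JavaScript,
     so the browser's preloader can't discover it while parsing the HTML -->
<script>
var s = document.createElement('script');
s.src = '/api/chart-data?callback=renderCharts';
document.head.appendChild(s);
</script>

<!-- After: the first JSONP request is in the markup,
     so the preloader can start fetching it straight away -->
<script src="/api/chart-data?callback=renderCharts"></script>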
We've improved the "Favorites" dashboard which now lets you build your own charts which:
Here's a walkthrough showing you some of the new features:
SpeedCurve reports the number of critical blocking resources in the page. These are the resources that block rendering. Since it's important that users see your content as quickly as possible, it's important to know what might be causing your page to render slowly. We recently enhanced the way we measure blocking resources and wanted to share those improvements with our customers as well as the performance community at large.
The main culprits that block rendering are scripts and stylesheets that are loaded synchronously. A great way to avoid this blocking problem is to load your scripts and stylesheets asynchronously. You can do that for scripts by using the async and defer attributes, plus other programmatic techniques. Loading stylesheets asynchronously is less popular but is still possible using techniques like loadCSS.
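For example (main.js and styles.css are placeholder filenames, and the last snippet assumes the loadCSS helper is already on the page):

<!-- Parser-blocking: the browser stops to fetch and execute this script -->
<script src="main.js"></script>

<!-- async: fetched in parallel, executed as soon as it arrives -->
<script src="main.js" async></script>

<!-- defer: fetched in parallel, executed after the document has been parsed -->
<script src="main.js" defer></script>

<!-- Stylesheets can be loaded asynchronously too, e.g. with the loadCSS library -->
<script>loadCSS('styles.css');</script>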
We're excited to announce SpeedCurve's RUM product, LUX.
The name LUX is a play on "Live User eXperience" and reflects how we've taken a different approach compared to other Real User Monitoring products. SpeedCurve's mission is to help designers and developers create joyous, fast user experiences. To do that, we focus on metrics that do a better job of revealing what the user's experience is really like.
In addition to standard RUM metrics like page load time and total size, LUX includes innovative new metrics that have more to do with the user experience like start render time, number of critical blocking resources, images above the fold, and viewport size. LUX's RUM metrics help you figure out which design and development improvements will make your users happier and your business more successful.
We put a lot of thought into curating a thematic set of dashboards that help you understand the performance of your front-end, but sometimes you just want to play with the data yourself and slice 'n' dice the data in all sorts of different ways. We've added a new "Favorites" dashboard that lets you do just that. You can explore the data and build your own charts, then rearrange them and share them with the team to help demonstrate the performance issues you're focused on right now.
Here's a walkthrough showing you how to slice the available data in different ways:
We also measure the CPU usage to different key events in the rendering of the page. SpeedCurve's focus is on the user experience and getting content in front of people as fast as possible, so we show you what the CPU is doing up till the page starts to render. This reflects CPU usage during the browser critical rendering path and can highlight various issues. If there's lots of CPU idle time then you're not delivering your resources efficiently. You want to get the CPU busy nice and early rendering the page, rather than sitting idle waiting for slow resources.
In the test below we see in the first pie chart that the CPU is spending a lot of time on layout up to the start render event, which is quite a different picture from the Fully Loaded CPU usage.
If you're a performance engineer, then you're familiar with waterfall charts. They are found in browser dev tools as well as other performance services. I use multiple waterfall tools every day, but the waterfall chart I love the most is the one we've built at SpeedCurve:
Progressive Web Apps (PWAs) combine the best and newest features of the Web to deliver an experience that rivals native applications on mobile. Even better, they work on desktop, too. In fact, they work everywhere that the Web works! "Ah", you say, "that's not true! They require features that don't exist in all browsers." Because PWAs are "progressive", they can adapt to older browsers to deliver the best experience possible given the features that are available.
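That progressive fallback is usually plain feature detection. For example, a service worker (one of the features PWAs rely on) is only registered where the browser supports it; the /sw.js path here is just a placeholder:

// Older browsers skip this block and get a normal web page;
// newer browsers get offline support, faster repeat views, etc.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}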
Given these winning attributes, it's no surprise that PWAs are the most popular movement in web development today. And while there are already numerous conferences, videos, and evangelists for PWAs, there's less focus on testing and tracking the performance of PWAs. Mark and I discussed this gap in PWA performance monitoring. In response, we added some new features to SpeedCurve that, coupled with some existing features, provide a great toolkit for evaluating the performance of PWAs.
SpeedCurve’s sweet spot is the intersection of design and performance - where the user experience lives. Other monitoring services focus on network behavior and the mechanics of the browser. Yet users rarely complain that “the DNS lookups are too slow” or “the load event fired late”. Instead, users get frustrated when they have to wait for the content they care about to appear on the screen.
The key to a good user experience is quickly delivering the critical content.
SpeedCurve users may have noticed some changes recently. At the beginning of March we released a major redesign of our Settings UI that gives users more flexibility to get the exact test results they want. The two biggest changes were how we emulate devices and the ability to use different testing configurations for different sites.
At SpeedCurve, we focus on metrics that capture the user experience. A big part of the user experience is when content actually appears in front of the user. Since stylesheets and synchronous scripts are the culprits when it comes to blocking rendering, we've rolled out some new metrics that focus on these critical blocking resources.
The most helpful innovation we made is to highlight the critical blocking stylesheets and synchronous scripts in our waterfall charts. In the following example waterfall chart for ESPN, notice how the critical stylesheets (green) and synchronous scripts (orange) have a red hash pattern. Not surprisingly, the Start Render metric is delayed until after the last of these critical blocking resources is done loading. The "scrubber" at the bottom of the waterfall (showing the screenshot at that point in time) confirms that rendering has been blocked up to this point. Explore an example of an interactive waterfall chart.
If you want to improve performance, you must start by measuring performance. But what should you measure?
Across the performance industry, the metric that's used the most is "page load time" (i.e., "window.onload" or "document complete"). Page load time was pretty good at approximating the user experience in the days of Web 1.0, when pages were simpler and each user action loaded a new web page (multi-page websites). In the days of Web 2.0 and single-page apps, page load time no longer correlates well with what the user sees. A great illustration is found by comparing Gmail to Amazon.
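For reference, page load time as most tools report it is just the time from the start of navigation to the load event, which you can read from the Navigation Timing API:

window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd has been populated
  setTimeout(function () {
    var t = performance.timing;
    console.log('Page load time: ' + (t.loadEventEnd - t.navigationStart) + ' ms');
  }, 0);
});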
In the last few years some better alternatives to page load time have gained popularity, such as start render time and Speed Index. But these metrics suffer from the same major drawback as page load time: they are ignorant of what content the user is most interested in on the page.
Any performance metric that values all the content the same is not a good metric.
Users don't give equal value to everything in the page. Instead, users typically focus on one or more critical design elements in the page, such as a product image or navbar. In searching for a good performance metric, ideally we would find one that measures how long the user waits before seeing this critical content. Since browsers don't know which content is the most important, it's necessary for website owners to put these performance metrics in place. The way to do this is to create custom metrics with User Timing.
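For example, a hero image can report its own User Timing mark as soon as it has loaded (a reasonable proxy for when it's displayed); the filename and mark name here are just illustrations:

<!-- The hero element marks itself via the User Timing API -->
<img src="product-hero.jpg" onload="performance.mark('hero_product_image')">

<script>
// The mark is then available to RUM and synthetic tools as a custom metric
var heroMarks = performance.getEntriesByName('hero_product_image');
</script>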
Performance budgets are an important tool for ensuring your site is delivering a great user experience. Steve first experienced performance budgets while Head Performance Engineer at Google. The practice of using budgets to track performance took off with Tim Kadlec's blog post Setting a Performance Budget. The idea is to identify your performance goals and track the metrics that help you achieve your goals.
At SpeedCurve, we give performance budgets first-class status by tracking them in the Site dashboard. Here's an example of tracking a budget for image size.
Before setting your performance budgets, you first have to be monitoring your user experience. Only then can you set budgets and thresholds for improving your baseline user experience. This also allows you to quantify the improvements you're making and share success stories across the organization like "We just improved start render by 32% by reducing image requests to half the budgeted amount".
SpeedCurve now provides a visual diff of every deploy. A full-resolution PNG is captured for each URL, and each pixel is diffed against the previous deploy, allowing you to easily spot any visual changes you may or may not have expected.
The key to practising safe continuous deployment is to have a robust set of tools that give you immediate feedback on how your code has changed between deploys and its effect on the user experience. It's often very hard to spot all the visual changes in each deploy, especially in fast moving teams where a lot of the focus is on unit tests and other automated pass/fail systems. Visual diffs bring an increased level of tracking and confidence when you're able to compare any two deploys and see exactly what has visually changed.
We do a visual diff every time you click "Test Now" on the Deploy dashboard or you use the SpeedCurve API to trigger a round of deploy testing. Integrating with the Deploy API is super easy and provides a robust set of metrics and before/after screenshots, visual diffs, waterfall charts, filmstrips and videos for each deploy. You can then compare any two deploys to see exactly what's changed over time.
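As a rough sketch, triggering a round of deploy testing from a CI script looks something like this (check the API docs for the exact endpoint and fields; the API key, note, and detail values are made up for illustration):

// Runs in any environment with fetch and btoa available (e.g. Node 18+)
const API_KEY = 'your-speedcurve-api-key';
fetch('https://api.speedcurve.com/v1/deploys', {
  method: 'POST',
  headers: { 'Authorization': 'Basic ' + btoa(API_KEY + ':x') },
  // note/detail describe the deploy so it shows up alongside the visual diffs
  body: new URLSearchParams({ note: 'v2.4.0', detail: 'Shipped the new checkout flow' })
});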