Comparing Perceived Performance Metrics 

May 24, 2021 · by Will Smithee

I have been asked numerous times if the mPulse Time to Interactive (TTI) and Time to Visually Ready (TTVR) timers are the same as those from Lighthouse, WebPageTest, and/or SpeedCurve. The short answer is no — there are some significant differences to keep in mind. In this article, I will contrast how mPulse captures and calculates these metrics with the methodologies of other tools.

The primary reason for the differences is related to the way synthetic and real user monitoring (RUM) measurements work:  

  • Synthetic measurements allow for more advanced analysis of content in the browser, including performing pixel deltas from one moment to the next 

  • RUM is limited to what the browser makes available to the RUM tag (i.e., third-party JavaScript code)

mPulse looks for other timers and events that the browser produces in order to determine how visually ready the page is and how long it takes to reach TTVR. For example, TTI in mPulse uses TTVR as its minimum bound and therefore depends on that timer's criteria, whereas the other tools calculate TTI independently of TTVR, using First Contentful Paint as their minimum bound instead.

Another limitation for RUM is deciding how long to wait for things to settle down in the browser. Google's TTI timer waits a full five seconds; this inherent pause isn't an issue for a synthetic measurement. If mPulse were to wait a full five seconds, it could lose measurements, as some end users may navigate away before the data (the beacon) can be posted back to mPulse. As you can see in the tables below, the mPulse tag waits for only 500 ms. One important note: when the "Collect Perceived Performance" checkbox is selected in the mPulse app configuration and the page's onload event occurs before TTI, the beacon may be held for up to 500 ms before it is sent to mPulse.
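The onload-versus-TTI interplay above can be sketched as a tiny scheduling rule. This is a hypothetical helper of my own, not the mPulse tag's actual code: if TTI has already been observed when onload fires, the beacon can go out immediately; otherwise it is held for at most 500 ms.

```javascript
// Hypothetical helper: how long (in ms) to hold the beacon after onload.
// ttiTime is the observed TTI timestamp, or undefined if TTI has not
// been reached yet; onloadTime is the page's onload timestamp.
const MAX_TTI_WAIT_MS = 500;

function beaconHoldTime(onloadTime, ttiTime) {
  if (ttiTime !== undefined && ttiTime <= onloadTime) {
    return 0; // TTI already reached: send the beacon right away
  }
  return MAX_TTI_WAIT_MS; // otherwise wait up to 500 ms for TTI
}
```

This captures the trade-off described above: a longer wait would catch more late TTIs, but every extra millisecond increases the chance the user navigates away before the beacon is posted.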

The tables below compare mPulse RUM TTVR and TTI with the other tools’ Visually Complete and TTI calculations. They provide a high-level analysis of these algorithms with links to more detailed documentation to help clarify certain points. 

mPulse TTVR Calculation

Determine the highest Visually Ready timestamp from the following (unsupported or unimplemented timers are omitted from the calculation):

First Paint (if available)

Wait at least for the first paint on the page — for example, Internet Explorer's msFirstPaint or Chrome's firstPaintTime. These might just be paints of white, so they're not the only signal we should use.


First Contentful Paint (if available)

Via PaintTiming, which is an API that can be used to capture a series of key moments (e.g., First Paint, First Contentful Paint) during the page load process.
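In a browser, these paint moments can be read from the entries the PaintTiming API exposes via performance.getEntriesByType('paint'). A minimal sketch, where the small selection helper is mine rather than mPulse code:

```javascript
// Pick a named paint timestamp out of a list of PerformanceEntry-like
// objects, e.g. the result of performance.getEntriesByType('paint').
function paintTime(entries, name) {
  const entry = entries.find((e) => e.name === name);
  return entry ? entry.startTime : undefined;
}

// In a browser you would feed it real entries:
//   const paints = performance.getEntriesByType('paint');
//   const fp  = paintTime(paints, 'first-paint');
//   const fcp = paintTime(paints, 'first-contentful-paint');

// Synthetic example entries for illustration:
const paints = [
  { name: 'first-paint', startTime: 812.4 },
  { name: 'first-contentful-paint', startTime: 901.7 },
];
```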


DOM Content Loaded Event

The DOMContentLoaded event fires when the initial HTML document has been completely loaded and parsed, without waiting for cascading style sheets (CSS), images, and subframes to finish loading. This happens after domInteractive. The timestamp is available directly in browsers that support the Navigation Timing API, and in all other browsers only if the mPulse snippet is loaded on the page in time to listen for readyState change events.
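Where Navigation Timing is available, the DOMContentLoaded timestamp can be read straight off the navigation entry; otherwise a listener has to be registered early enough. A rough sketch (the extractor name is mine):

```javascript
// Read DOMContentLoaded from a Navigation Timing-style entry, e.g.
// performance.getEntriesByType('navigation')[0] in a browser.
function domContentLoadedTime(navEntry) {
  return navEntry ? navEntry.domContentLoadedEventStart : undefined;
}

// Fallback for browsers without Navigation Timing: a listener, which
// only works if this code runs before the event fires:
//   document.addEventListener('DOMContentLoaded', () => {
//     dclTime = Date.now() - navigationStartEpoch;
//   });

// Synthetic example entry:
const nav = { domContentLoadedEventStart: 1450.2 };
```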

Hero Images (if defined)

Instead of tracking all above-the-fold images, it can be useful to know which specific images are important to the site owner, as defined via a simple CSS selector (e.g., .hero-images). These are measured via Resource Timing. To add Hero Images Ready (c.tti.hi) to the beacon, go to the Beacons tab in the mPulse configuration.
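A Hero Images timer like this can be approximated from Resource Timing: collect the URLs the selector matches, then take the latest responseEnd among those entries. A sketch with made-up URLs; the helper name and the selector-to-URL step are my assumptions, not mPulse internals:

```javascript
// Given Resource Timing-style entries and a list of hero image URLs
// (e.g. gathered from document.querySelectorAll('.hero-images')),
// return the time at which the last hero image finished loading.
function heroImagesReady(resourceEntries, heroUrls) {
  const ends = resourceEntries
    .filter((e) => heroUrls.includes(e.name))
    .map((e) => e.responseEnd);
  return ends.length ? Math.max(...ends) : undefined;
}

// Synthetic example data:
const resources = [
  { name: 'https://example.com/hero-1.jpg', responseEnd: 1320.5 },
  { name: 'https://example.com/hero-2.jpg', responseEnd: 1710 },
  { name: 'https://example.com/footer.png', responseEnd: 2400 },
];
const heroes = [
  'https://example.com/hero-1.jpg',
  'https://example.com/hero-2.jpg',
];
```

Note that the footer image is ignored: only the selector-matched images hold the timer back.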

Framework Is Ready (if defined)

A catch-all for measurements that browsers can't track automatically: an event or callback from the page author saying the page is ready. This can be used for any important page milestone, such as when all of a page's click handlers have registered, by adding Framework Ready to the beacon. This capability cannot be set in the mPulse graphical user interface and must be implemented on your site; see the Visually Ready documentation for details.


Once all of the above has happened, Visually Ready has occurred.
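The "highest available timestamp" rule described above can be sketched directly. This is a simplified illustration, not the mPulse tag's actual implementation:

```javascript
// mPulse-style Visually Ready: the highest of whichever candidate
// timers are available; undefined (unsupported) timers are skipped.
function visuallyReady(candidates) {
  const present = Object.values(candidates).filter((t) => t !== undefined);
  return present.length ? Math.max(...present) : undefined;
}

// Example: Hero Images is not configured on this page, so the latest
// remaining timer (Framework Ready) determines TTVR.
const ttvr = visuallyReady({
  firstPaint: 800,
  firstContentfulPaint: 950,
  domContentLoaded: 1400,
  heroImages: undefined,
  frameworkReady: 1650,
});
// ttvr is 1650
```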

Lighthouse / WebPageTest / SpeedCurve Visually Complete Calculation

Visually Complete

The time at which all the content in the viewport has finished rendering and nothing changed in the viewport after that point as the page continued to load. It's a great measure of the user experience as the user should now see a full screen of content and be able to engage with the content of your site.

Visually Complete is calculated by taking page screenshots and conducting pixel analysis of those screenshots, applying the Speed Index algorithm.

Visually Complete can be skewed significantly depending on site construction. Rotating carousels and non-white page background colors can affect the measurement to the point at which it becomes meaningless.

Speed Index

The Speed Index calculation looks at each 0.1 s interval and computes IntervalScore = Interval * (1.0 - Completeness/100), where Completeness is the percentage of the viewport that is Visually Complete for that video frame and Interval is the elapsed time for that frame in ms (100 in this case). The overall score is the sum of the individual interval scores: SpeedIndex = SUM(IntervalScore).
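The sum above can be written out directly. Assuming frames sampled every 100 ms, each with a visual-completeness percentage:

```javascript
// Speed Index over fixed-length frames: each frame contributes
// interval * (1 - completeness/100); the score is the sum, so time
// spent visually incomplete is what drives the score up.
function speedIndex(completenessPerFrame, intervalMs = 100) {
  return completenessPerFrame.reduce(
    (sum, completeness) => sum + intervalMs * (1 - completeness / 100),
    0
  );
}

// A page that is 0% complete for 3 frames, 80% for 2, then 100%:
speedIndex([0, 0, 0, 80, 80, 100]); // ≈ 340
```

A page that renders everything immediately scores near 0; the longer it stays visually incomplete, the higher (worse) the score.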



mPulse TTI Calculation

After TTVR, mPulse calculates TTI by finding the first period of 500 ms during which all of its interactivity criteria are met (for example, no Long Tasks occur and the page is not busy).

Lighthouse / WebPageTest / SpeedCurve TTI Calculation

  1. Start looking for TTI at First Contentful Paint

  2. Find the first interactive window that fully contains a contiguous five-second period with no more than two in-flight network requests

  3. TTI is the start of the interactive window from step 2, or First Meaningful Paint / DOM Content Loaded, whichever is later
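The quiet-window search can be sketched with a simplified model: given the FCP time and a list of network request [start, end] intervals, find the first time at or after FCP that begins a full five-second span with no more than two requests in flight. This sketch ignores long tasks and main-thread work, which the real algorithms also check:

```javascript
// Simplified Lighthouse-style quiet-window search.
// requests is a list of [start, end] pairs in ms.
function findInteractiveStart(fcp, requests, windowMs = 5000) {
  // 1. Sweep request starts/ends to find periods with >2 in flight.
  const events = [];
  for (const [start, end] of requests) {
    events.push([start, +1], [end, -1]);
  }
  // Sort by time; at equal times, process ends (-1) before starts (+1).
  events.sort((a, b) => a[0] - b[0] || a[1] - b[1]);

  const busy = []; // non-overlapping periods with >2 requests in flight
  let inFlight = 0;
  let busyStart = null;
  for (const [time, delta] of events) {
    inFlight += delta;
    if (inFlight > 2 && busyStart === null) busyStart = time;
    if (inFlight <= 2 && busyStart !== null) {
      busy.push([busyStart, time]);
      busyStart = null;
    }
  }

  // 2. The first candidate is FCP; each busy period that interrupts the
  // five-second window pushes the candidate forward to the period's end.
  let candidate = fcp;
  for (const [start, end] of busy) {
    if (start >= candidate + windowMs) break; // quiet window fits first
    if (end > candidate) candidate = end;
  }
  return candidate;
}

// Three requests in flight from 1000-4000 ms block the window, so the
// interactive window starts at 4000:
findInteractiveStart(1000, [[1000, 4000], [1000, 4000], [1000, 4000]]); // 4000
```

Per step 3 above, the tools then report TTI as the later of this window's start, First Meaningful Paint, and DOM Content Loaded.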

Final Takeaway

The bottom line for these perceived performance metrics is that RUM and synthetic measurements have different capabilities: a RUM tag is limited to what the browser exposes to JavaScript, while synthetic tools run at the operating-system level and can capture much more.

The mPulse perceived performance metrics provide the most complete picture of real user interaction with your website. This is a still-evolving technical area, and we expect better metrics to emerge over time. Even today, these metrics offer far more value than the page onload event, which is losing relevance as web pages grow more complicated and more sites load visually important content after the load event has fired.

Additional Resources

Two great write-ups of how mPulse calculates these metrics can be found at:

Links to additional information for Lighthouse, WebPageTest, and SpeedCurve perceived performance metrics:

A lovely video discussing the current state of the various Google timers can be found here: