How-To: Using Server-Timing to Get Better Performance Analytics

June 7, 2017 · by Charles Vazac ·

Whether you care about clicks, likes, or page views, monitoring the performance of your web property is critical to the success of your online presence. The only way to make sure that your actual users are having the best possible experience with your content is with RUM (real user monitoring).

In the old days of RUM, this meant firing off a "pixel" to your backend server when the onload event of the page occurred. But today, thanks to modern browser APIs, we can get very granular performance insight into the events between the moment a user requests your page and the moment your content is ready for their eyeballs.

Along with navigation-timing, resource-timing, and user-timing, the addition of server-timing "completes the circuit" of RUM APIs by being the ultimate catch-all performance analytics tool.

Using the newest *-timing API (just landed in Chrome Canary Version 60), website owners can monitor their server-side performance by writing named timers into the response header of any object (basepage or subresource), from anywhere in their back-end stack, and the browser will make that data available to the JavaScript running in the page.
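Concretely, the mechanism is just a response header. A basepage or subresource response might carry something like the following, where the metric name, value, and description are illustrative:

```
Server-Timing: cache=42; Read from memcached
```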

Let's take the simple example of serving an avatar image, where we verify that the requesting user has access to the image, and then we check our cache to decide if we need to make a database call.

function serveImage(response, currentUser, imageId) {
  if (checkACL(currentUser, imageId)) {
    if (!foundInCache(imageId)) {
      loadFromDatabase(imageId)
    }
    return serveImageFromCache(response, imageId)
  }
}

In this example, checkACL() and loadFromDatabase() are potentially costly operations. As we all know, you can't optimize a process for performance until it can be measured, so let's use server-timing!

function wrapServerTiming(response, func, metric, description) {
  const t1 = Date.now() // Date.now() for simplicity; process.hrtime() offers finer resolution
  const returnValue = func()
  const t2 = Date.now()

  setServerTimingHeader(response, metric, t2 - t1, description)
  return returnValue
}

function setServerTimingHeader(response, metric, duration, description) {
  duration = typeof duration === 'undefined' ? '' : `=${duration}`
  description = typeof description === 'undefined' ? '' : `; ${description}`
  response.set('Server-Timing', `${metric}${duration}${description}`)
}

function serveImage(response, currentUser, imageId) {
  if (wrapServerTiming(response, function () {
        return checkACL(currentUser, imageId)
      }, 'acl')) {
    if (!foundInCache(imageId)) {
      wrapServerTiming(response, function () {
        loadFromDatabase(imageId)
      }, 'db')
    }
    setServerTimingHeader(response, 'serverName', undefined, 'web-42') // illustrative server name
    return serveImageFromCache(response, imageId)
  }
}

The code above might produce headers like this:

Server-Timing: acl=10
Server-Timing: db=125
Server-Timing: serverName; web-42

Back in the browser, those entries are available using PerformanceObserver:

let entries = [], done = false
new PerformanceObserver(function(list, observer) {
  entries = entries.concat(list.getEntries())
  if (done) {
    // flip `done` when you are finished collecting (at beacon time, for example)
    observer.disconnect()
  }
}).observe({
  entryTypes: ['server']
})

Entries of type server have the following attributes:

  • name - String value representing the url of the basepage or subresource request
  • metric - String value representing the user-defined metric name
  • duration - Number value representing the user-defined duration, zero if not specified
  • description - String value representing the user-defined description, empty-string if not specified

Now that we've collected our server-timing entries, we should beacon them back to our collector server for analysis. HTTPArchive tells us that the average webpage has more than 100 linked resources. If, for example, we wanted to leverage server-timing to collect two metrics per resource, that would lead to more than 200 server-timing entries for every page load.
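One way to get those entries off the page is sketched below; the `/collect` endpoint, the function name, and the array-of-arrays payload shape are my own assumptions, not part of any API:

```javascript
// Serialize collected server-timing entries and beacon them to a collector.
// navigator.sendBeacon queues the POST reliably, even during page unload.
function beaconServerTiming(entries) {
  const payload = JSON.stringify(entries.map(function ({name, metric, duration, description}) {
    return [name, metric, duration, description]
  }))
  return navigator.sendBeacon('/collect', payload)
}
```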

To save bytes on the wire, I recommend using trie compression in conjunction with an array of the metric names, which are likely to be repeated. A server-timing entry for a resource would reference its metric name by position, instead of by fully resolved word. Look for my PR submission to @nicj's resourcetiming-compression library coming soon! :)
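As a sketch of the positional idea (simpler than a full trie, and the helper name is my own), repeated metric names can be swapped for indexes into a shared lookup array:

```javascript
// Dictionary-compress server-timing entries: each repeated metric name is
// replaced by its position in a shared metricNames array, so entries carry
// a small integer instead of the full string.
function compressServerTiming(entries) {
  const metricNames = []
  const compressed = entries.map(function ({name, metric, duration}) {
    let index = metricNames.indexOf(metric)
    if (index === -1) {
      index = metricNames.push(metric) - 1
    }
    return [name, index, duration]
  })
  return {metricNames, compressed}
}
```

The collector reverses the mapping by looking each index back up in `metricNames`.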


If an object is cached in the browser, then its headers (including Server-Timing) will be cached as well. Performance analytics code needs to be aware of this to decide which data should be reported.

For example, cached server-timing entries that report on actual back-end timers should probably be ignored, lest they skew the overall results. But, if server-timing is being used to communicate metadata about the resource (image dimensions, for example), then cached server-timing entries are still true and meaningful.

Using the resource-timing API, it's fairly safe to identify resource requests that never actually left the browser, like this:

function wasServedFromBrowserCache(url) {
  var entry = performance.getEntriesByName(url)[0]
  return entry && !entry.transferSize && entry.duration < 30
}
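Building on that heuristic, here is a sketch of filtering out server-timing entries for cached responses; the function name is mine, and transferSize alone is used as the cache signal here:

```javascript
// Keep only server-timing entries whose resource actually hit the network
// this page load. Entry names are resource URLs in both the server-timing
// and resource-timing APIs, so they can be cross-referenced directly.
function freshServerTimingEntries() {
  return performance.getEntriesByType('server').filter(function (serverEntry) {
    const resourceEntry = performance.getEntriesByName(serverEntry.name)[0]
    return !resourceEntry || resourceEntry.transferSize > 0
  })
}
```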

Cross-Origin Resource Sharing (CORS)

If you write server-timing data on resources that your infrastructure serves and want to make that data available to third-party consumers, you will need to add a Timing-Allow-Origin response header that grants those origins access; otherwise, the same-origin policy will hide the timing data from them.
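For example, to expose a resource's timing data (Server-Timing entries included) to pages on any origin, the serving infrastructure would respond with the header below; the wildcard is illustrative, and listing specific origins is the tighter choice:

```
Timing-Allow-Origin: *
```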

Bonus Bits

Excepting cookies, server-timing marks the first time that response headers of a basepage are accessible via JavaScript. Because of this, server-timing allows web developers to pass an arbitrary number of name/value pairs from the webserver to the browser, without using templating - something I have personally wanted for at least 15 years. This added bonus, yours for no extra charge, allows web developers to write the following code:

// expressjs webserver

const username = myAPI.userName()
res.set('Server-Timing', `username; ${username}`)

// in browser

function findBasePageValue(valueName) {
  let value
  window.performance.getEntriesByType('server').find(function ({name, metric, description}) {
    if (name === document.location.href && metric === valueName) {
      value = description
      return true
    }
  })
  return value
}

const username = findBasePageValue('username')

// webdevs rejoice!

Charles Vazac is a senior architect at Akamai Technologies.