
Understanding UI Performance

Let’s say that you have built a shiny new website and launched it for the world to use. Within the first few days, you keep hearing that the application is slow, that it takes forever to load simple pop-ups, yada yada yada. How do you go about analysing and fixing these issues?

In this post, we will look at multiple ways to dissect a performance issue related to the user interface, and understand what you could have done better during your development and testing phases to avoid a situation like this.

As a baseline, we are going to take the example of my blog’s home page. If people reported that my blog was loading slowly, the first thing I would do is go to Pingdom and run a page load test. The results look like this:

[Screenshot: Pingdom test summary] This seems fairly okay, with a load time of about 2 seconds and a page size of just about 1 MB for the home page of my blog.

To dig a bit deeper, I would scroll down the results page to the waterfall view of the requests, where I see something like this:

[Screenshot: Pingdom waterfall view] Take a look at the first request to my server. There is a solid one-second wait before my server responds with the first byte.

So, this could be our issue #1

#1 The hosting servers may not be up to the mark

I have hosted my blog on BigRock, which provides 4 shared servers based in the US. The load is pretty well balanced most of the time, but there are moments when a server might be stuck responding to another site on the same shared host.

To validate that, we will run the test 2 more times. This time I see:

[Screenshots: second Pingdom run, summary and waterfall]

And one last time

[Screenshots: third Pingdom run, summary and waterfall]

Looks like my assumption is correct. In the final iteration, the page load time was under a second.

So the solution for #1 would be to either move my hosting out of a shared environment to a VPS, or look for a new hosting service.

Also, if your backend is powered by other programming languages, make sure to run a trace on how much time is spent in your application layer. Something like a slow DB response can ruin an otherwise awesome website.
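Even crude timestamps can narrow down where the server time goes. A sketch, assuming a Node.js backend; `queryDatabase` and `renderPage` are hypothetical stand-ins for your real application code:

```javascript
// Stand-ins for real work (assumptions for this sketch):
const queryDatabase = () =>
  new Promise((resolve) => setTimeout(() => resolve(['post1', 'post2']), 120)); // fake 120 ms DB call

const renderPage = (rows) =>
  `<ul>${rows.map((r) => `<li>${r}</li>`).join('')}</ul>`;

async function handleRequest() {
  const t0 = Date.now();
  const rows = await queryDatabase(); // is the DB the bottleneck?
  const t1 = Date.now();
  const html = renderPage(rows);      // or the template rendering?
  const t2 = Date.now();
  console.log(`db: ${t1 - t0} ms, render: ${t2 - t1} ms, total: ${t2 - t0} ms`);
  return html;
}

handleRequest();
```

If the `db` number dominates the `total`, you know the fix lives in your queries, not your markup.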

The next thing I would do is check the Performance Grade tab. And this is what I see

[Screenshot: Pingdom performance grade] Our next issue is

#2 Not leveraging browser caching

Since I am using a WordPress blog, I can either update my .htaccess file to add Expires headers for each resource type, or use a WordPress plugin like Leverage Browser Caching Ninjas to counter it. Once I do that and run the test again:

[Screenshot: updated performance grade] I see that the performance grade has increased by 6 points. But now the “Remove query strings from static resources” score has gone down. This is because of our plugin: it has added query params to the resources to control caching.

Now, you can either leave it like this, or fix it using the Remove Query Strings From Static Resources plugin. If you test again:

[Screenshot: performance grade after removing query strings]

Voila! The score has gone up to 91. But unfortunately, stripping query strings broke a couple of plugins, so I had to undo it.
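If you are not on WordPress, or prefer the manual route, the browser-caching fix from #2 boils down to a few Expires rules in .htaccess. A sketch, assuming Apache’s mod_expires is available; the durations here are illustrative, not recommendations:

```apache
# Hypothetical .htaccess fragment: let the browser cache static assets
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg             "access plus 1 month"
  ExpiresByType image/png              "access plus 1 month"
  ExpiresByType text/css               "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```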

But you get the drift right?

Let us move to the Page Analysis tab. Here we will analyse the resources.

First, the response codes:

[Screenshot: response code breakdown]

Looks good, no issues here. Next, we will check the load time analysis. The awesome part of Pingdom is that if you run consecutive tests and save them, it will show the analysis of the last few runs together:

[Screenshot: load time analysis across runs]

As you can see, the connect and wait phases are taking a lot of time, as discussed in issue #1. Next, the time taken to load each content type:

Roughly 42–43% of the load time goes to scripts. If you are performing these steps on your own web application, check whether you can cut down a few scripts, and make sure you are minifying them and not including unused code. Since most of the scripts in my blog come from plugins, I really don’t (want to) know what happens if I start cleaning them up.

Next, we will take a look at the requests per content type:

[Screenshot: requests per content type] As you can see from above, there were 24 requests made to fetch all the script files. This is issue #3.

#3 Loading the same content type in multiple files

The solution for #3 is concatenation. Bundle all the CSS into one file and all the JS into one file, and use a sprite image to load the images.

If we achieve the above religiously, we will be loading 3 resources instead of ~60. They may take a few extra milliseconds to load initially, but because of caching, as discussed in #2, they will be saved on the client for future pages.

And now, let us see how many requests are made on a per domain basis

[Screenshot: requests per domain] ~85% of the requests are targeted at thejackalofjavascript.com. And this is issue #4.

#4 Loading a lot of files from the same domain

In case you did not know, a browser can make only a limited number of concurrent requests to the same domain (typically 6–8, depending on the browser). So if you are loading all the resources from the same domain, your waterfall view will be very long, i.e. the page takes a longer time to load the last resource.

So the solution for #4 is to use a CDN, or to distribute your static resources across multiple domains. This way, your browser can download the requested resources in parallel.

The above process gave us a good picture of the overall performance. Now, let us analyse the performance from a User Experience perspective.

The next tool we are going to use is webpagetest.org. I will spin up a test with the settings below:

[Screenshot: webpagetest.org test settings] Depending on the traffic, your test will be queued and then run. Once it has executed, the first thing you will notice is the site grade:

[Screenshot: webpagetest.org grades]

I got an ‘F’ in Time to First Byte (we all know why) and another ‘F’ for not using progressive images.

Progressive Image

A simple or “baseline” JPEG file is stored as one top-to-bottom scan of the image. Progressive JPEG divides the file into a series of scans. The first scan shows the image at the equivalent of a very low quality setting, and therefore it takes very little space. Following scans gradually improve the quality. Each scan adds to the data already provided, so that the total storage requirement is roughly the same as for a baseline JPEG image of the same quality as the final scan. (Basically, progressive JPEG is just a rearrangement of the same data into a more complicated order.)

This is essential for a good UX. The user sees the image right away and watches it sharpen, unlike the baseline way, where the image takes time to appear at all. This is issue #5.
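You can check whether an image is already progressive from the JPEG markers themselves: progressive files carry an SOF2 marker (bytes 0xFF 0xC2), while baseline files carry SOF0 (0xFF 0xC0). A rough Node.js sketch, which ignores edge cases in exotic JPEG variants:

```javascript
// Rough check: scan the JPEG byte stream for the first Start-Of-Frame marker.
// SOF2 (0xFF 0xC2) => progressive DCT; SOF0 (0xFF 0xC0) => baseline DCT.
function isProgressiveJpeg(buf) {
  for (let i = 0; i < buf.length - 1; i++) {
    if (buf[i] === 0xff && buf[i + 1] === 0xc2) return true;
    if (buf[i] === 0xff && buf[i + 1] === 0xc0) return false;
  }
  return false; // no SOF marker found: treat as not progressive
}

// e.g. isProgressiveJpeg(require('fs').readFileSync('photo.jpg'))
```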

#5 Not using progressive JPEGs over baseline

I can solve this with image optimization (refer to this post). Then, as we move down the page, we will see a table that compares page load times:

[Screenshot: load time comparison table] As you can see, the website loaded in 7.5 seconds the first time and in ~2 seconds the second time, thanks to caching. But this table tells us more than that. Take a look at the Start Render column: this is the time when all the required resources had been downloaded and the page rendering began. As per the table, it took 6.2 seconds on the first view and 1.8 seconds on the repeat view.

What does this mean? It means the user did not see any output for 6 seconds. Want proof? Click on ‘Filmstrip View’ for the first load and you will see, frame by frame, how the user experienced the page load. If you generate a video, it will look like this:

The user sees a blank screen for ~6 seconds, with no clue whether the site is still loading or the code broke. If the user is kind enough, s/he hits the stop button and tries reloading again. And this is what s/he will see:

All our hard work optimizing the page earlier paid off. So how do we deal with issue #6?

#6 Poor user experience while page loads

What could be the reasons for this behavior?

  1. Render-blocking CSS/JS in the header, which blocks UI rendering till those resources are completely loaded.
  2. A rarer but plausible reason: scripts/styles added above or in between the markup that are unavailable or respond slowly will also cause rendering to stop.
    Scenario: when using a third-party script/style to add features to your page, make sure you add it at the bottom. If the third-party service is down, the browser will wait for the request to time out (~30 seconds). Till then, everything below that script/style is blocked.

My blog has this issue because all the script tags are inside the header, which creates the blocked-rendering scenario.
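A sketch of the fix, assuming nothing about the actual plugin scripts (the file names below are placeholders): keep only critical CSS in the head, and mark scripts with `defer` (or move them to the bottom of the body) so parsing and rendering are not blocked:

```html
<head>
  <link rel="stylesheet" href="critical.css">
  <!-- defer: downloads in parallel, executes only after the document is parsed -->
  <script defer src="plugins.js"></script>
</head>
<body>
  <!-- page content renders without waiting for the scripts above -->
  <!-- async: for independent third-party scripts that may respond slowly -->
  <script async src="https://third-party.example/widget.js"></script>
</body>
```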

You can test this with Google PageSpeed as well. Once you have run the test, you will see the error/warning below if you have issue #6. [Screenshots: PageSpeed warnings] PageSpeed tells us that there is render-blocking JS and/or CSS above the fold. For a mobile device, the above-the-fold area would be:

[Screenshot: mobile above-the-fold area] and for the desktop it would be: [Screenshot: desktop above-the-fold area]

Now you can decide where to load each script/style. If the user can see some output from your page within the first 1–2 seconds, he will be happy; you can load the remaining resources later on. This directly affects your bounce rate.

So far we have examined UI performance with respect to servers, network speed, resource size, and where resources sit on the page. Now let us take a step further and look at what contributes to page load from a code perspective.

Fire up Google Chrome and open the dev tools (Ctrl + Shift + I or Cmd + Shift + I). Next, click on the settings icon and enable the “Disable cache (while DevTools is open)” option:

[Screenshot: DevTools settings panel]

Next, close the settings panel and navigate to the Network tab. Right click anywhere inside it and select “Clear browser cache”:

[Screenshot: Network tab context menu]

Now, open the page you are testing. The Network tab will start populating with the requests and their details. Once the page has loaded, at the very end of the Network tab, it will report something like:

79 requests | 739 KB transferred | 16.14 s (load: 16.14 s, DOMContentLoaded: 9.29 s)

Cool. So it took 9 seconds to construct the DOM. Now, let us take a look at how the page is constructed. Click on the Timeline tab, hit the record button (first row, extreme left, the round icon) and reload the page.

Once the page is loaded, you will be presented with a view like

[Screenshot: Timeline recording] As you can see from the pie chart in the bottom right corner, it took:

  • 10.773 ms Loading
  • 164.455 ms Scripting
  • 322.583 ms Rendering
  • 63.468 ms Painting
  • 143.495 ms Other
In general, from a browser’s perspective, rendering and painting take more time than JavaScript, contrary to a common misconception. If you want to check your JavaScript code’s performance, you can always come here and see how long your methods actually take to run.
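For a quick check outside the Timeline, you can also time a suspect function directly with `performance.now()`, which is a global in browsers and recent Node.js; `expensive()` here is a made-up stand-in for your own code:

```javascript
// A deliberately heavy function, standing in for a suspect method.
function expensive() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += Math.sqrt(i);
  return sum;
}

// Sub-millisecond-resolution timing around the call.
const t0 = performance.now();
const result = expensive();
const t1 = performance.now();
console.log(`expensive() took ${(t1 - t0).toFixed(2)} ms`);
```

If a method consistently shows up here and in the Timeline’s Scripting slice, that is your optimization target.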
Next, click on the “Frames” tab on the top left and you will see a chart indicating how much of the processing time is used by each kind of task:
[Screenshot: Frames view]
  • Purple – Rendering
  • Green – Painting
  • Orange/Yellow – Scripting
  • Blue – Loading

At the bottom, you can click on a particular frame and check out the code that triggered the Scripting, Painting, Rendering or Loading tasks.

So, now you have identified that a certain script takes too long to run, or that a certain piece of code performs too many redraws. How do you go about solving such issues?

Paul Irish has created a video that showcases how to approach such issues:

With this, we conclude how to debug your website in case of performance issues. This is not a thorough check, but it will give you a ballpark idea of where the issue might be.

Thanks for reading! Do comment.