
Performance optimization a key theme at 2014 O'Reilly Fluent Conference


Enterprise architects must balance technology, time and money to achieve the greatest gains in performance. At the 2014 O'Reilly Fluent Conference in San Francisco, a key theme revolved around various approaches for doing so.

Performance means different things to different people, said Ilya Grigorik, developer advocate at Google. Engineers think about parsers, lexers, and the technical limitations of RAM and energy usage in mobile devices. Developers think about JavaScript speed and garbage collection efficiency. User interface designers think about selector matching, layout and style calculations. Meanwhile, the business is wondering how all of this relates to visits, engagement and conversion.


To better understand how end users perceive performance, it is important to think about task-oriented flow, which is highly context-dependent. For example, when interacting with a website, a delay of 100 milliseconds (msec) feels instant; anything longer starts to become perceptible.

But in other contexts, 100 msec is too slow. With visual processing, video starts to feel choppy when it is rendered at rates lower than 24 frames per second (fps), while a great visual experience demands 60 fps, which leaves a budget of only about 16 msec per frame. With audio, we can detect as little as 1 msec of jitter. "The performance is not just milliseconds, frames per second and megabytes, but how those translate into how the user perceives the application," Grigorik said.
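The frame-rate thresholds above follow from simple arithmetic: the per-frame time budget is just the inverse of the target frame rate. A minimal sketch:

```python
# Convert a target frame rate into a per-frame time budget.
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to produce each frame at the given rate."""
    return 1000.0 / fps

# The thresholds discussed above:
print(round(frame_budget_ms(24), 1))  # ~41.7 msec before video feels choppy
print(round(frame_budget_ms(60), 1))  # ~16.7 msec per frame at a smooth 60 fps
```

At 60 fps, everything needed to produce a frame -- script, style, layout and paint -- has to fit inside that roughly 16 msec window.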

In some cases, a site might be blazing fast, but the perception of performance is limited by the way the user needs to interact with it. Grigorik pointed to the popular Hacker News site, which is technically fast and reliably renders in under 5 msec. But the experience is poor on his mobile device because he has to go through several steps to resize the text for reading. It ultimately takes him 5 to 7 seconds to start reading.

Grigorik recommended developers and designers put a high priority on the concept of user task. Ask yourself, "What are the primary user tasks and how long does it take to complete those tasks?"

Increasing the perception of speed

Steve Souders, who has published considerable research on technical performance optimization and who recently signed on as chief performance officer at Fastly, a Web performance optimization tool vendor, discussed other ways developers could increase the perception of speed.

One of the key techniques is keeping the user's mind busy while the back end is taking care of business. He calls these techniques "performance illusions," and they have long been used in the physical world to reduce anxiety. For example, non-functional pedestrian crosswalk buttons in New York and door-close buttons on U.K. subway trains give users the illusion that they are controlling their environment.

When Souders first looked at Web optimization in 2001, he came across a study showing that Amazon was perceived as the fastest website, even though it actually took longer to render the whole page. The reason for this discrepancy was that Amazon did a good job of rendering the content used most often -- such as the product summary and the "buy" button -- while content viewed less often, which users had to scroll down to, took longer to render.

Souders also believes browser makers and mobile app developers could do a better job of notifying users that a request has been received and that pages are loading. He looked at a variety of page navigation techniques to see which ones triggered wait notifications across different browsers. In all cases, a notification is triggered when a user opens a new page. But other techniques, such as dynamically injected scripts and iframes, varied across browsers in whether they triggered these alerts.

In some cases, such as YouTube, developers have taken some steps to show progress of long processes, such as loading a video. However, it is important to make sure these indicators correspond to the amount of time the user experiences, rather than the number of objects loaded, which can be different. "You don't want a progress indicator just for the sake of a progress indicator," Souders said.
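Souders' point about time-based progress can be sketched in a few lines. This is a hypothetical illustration, not YouTube's actual logic, and the step names and durations are invented: progress is reported as the fraction of *expected time* completed rather than the fraction of objects loaded.

```python
# Hypothetical sketch: report progress by expected elapsed time per step,
# not by the raw count of objects loaded. Steps are (name, expected_ms) pairs.
def time_weighted_progress(done_steps, all_steps):
    """Fraction of expected time completed so far."""
    total = sum(ms for _, ms in all_steps)
    elapsed = sum(ms for _, ms in done_steps)
    return elapsed / total

steps = [("fetch metadata", 50), ("decode first frame", 50), ("buffer video", 900)]
# Two of three objects are done, so a count-based bar would read 67%,
# yet only 10% of the expected wait has actually elapsed:
print(time_weighted_progress(steps[:2], steps))  # 0.1
```

A bar driven this way matches the wait the user actually experiences, which is the property Souders argues matters.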

Souders also recommended displaying some content, even if it's only a wireframe, while the application collects the data to be rendered in the background. For example, Instagram creates a wireframe of images that are being loaded, using shaded rectangles based on the average color of each image. This occupies the mind while the data is being transferred: the user can start engaging with the page and knows what to expect even before it is fully loaded. On mobile devices, another technique Souders likes is sliding new tabs into view as they load, which keeps the user's mind occupied while indicating progress toward loading the page.

By default, most metrics on performance are based on the amount of time to load the window. "But this is a proxy for what we really care about doing, which is making sure the user experience is as fast and enjoyable as possible," Souders said.

Tips for mobile optimization

While perceived performance is important, there are still plenty of things developers can do to improve the technical performance for mobile applications, said Peter McLachlan, chief architect and co-founder at Mobify, a mobile development company. Some of his research looked at the various ways to optimize the way mobile apps are written.

He found a three-fold performance difference between mobile applications that reuse an established connection to the application server and those that must open a new one. From a performance perspective, he recommended taking steps to keep the TCP connection between the client and the server alive for as long as possible.
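A back-of-envelope model shows why connection reuse pays off: every new connection spends at least one extra round trip on the TCP handshake before the request itself can go out. The RTT and request counts below are illustrative assumptions, not McLachlan's measured figures:

```python
# Rough model: each new TCP connection costs one handshake round trip
# before the request/response round trip itself.
def total_time_ms(requests: int, rtt_ms: float, reuse: bool) -> float:
    handshakes = 1 if reuse else requests  # a kept-alive connection shakes hands once
    return handshakes * rtt_ms + requests * rtt_ms

print(total_time_ms(10, 100, reuse=True))   # 1100.0 ms over one persistent connection
print(total_time_ms(10, 100, reuse=False))  # 2000.0 ms opening a fresh one each time
```

On mobile networks with high RTTs (and with TLS adding further round trips per new connection), the real-world gap widens, which is consistent with the three-fold difference McLachlan observed.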

Web cookies can have a more significant impact on performance than might be expected. While cookie headers might not be particularly large, the page cannot begin rendering until the entire request carrying them has been received. Even cookies that are only a few kilobytes in size can reduce performance by about 10%. Keeping cookies under 400 bytes reduces page rendering performance by an average of only 1%.
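One hedged way to think about the 400-byte guidance: a request whose headers, cookies included, fit inside a single TCP segment avoids spilling into additional packets. The 1460-byte maximum segment size and 700-byte base-header figure below are typical illustrative values, not numbers from McLachlan's talk:

```python
# Sketch: does a request (base headers + cookies) fit in one TCP segment?
TYPICAL_MSS = 1460  # common Ethernet maximum segment size, in bytes

def fits_in_one_packet(base_headers: int, cookie_bytes: int,
                       mss: int = TYPICAL_MSS) -> bool:
    return base_headers + cookie_bytes <= mss

print(fits_in_one_packet(700, 400))   # True: a ~400-byte cookie stays in one segment
print(fits_in_one_packet(700, 4000))  # False: kilobytes of cookies spill into more packets
```

Once cookies push the request past a single segment, every request to that domain pays the extra transfer, which compounds across a page's many resources.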

McLachlan recommended avoiding domain sharding, as it won't help most applications and may actually reduce performance. The HTTP/1.1 specification advised browsers to open no more than two connections per hostname. To push more data across parallel connections, many developers used domain sharding to trick the browser into thinking it was connecting to different hosts. This worked well in the early days of the Web, but modern browsers ignore that constraint and support four or six connections per host.
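The arithmetic behind sharding is straightforward: effective parallelism is capped by the per-host connection limit times the number of hostnames. A minimal sketch with illustrative numbers:

```python
# Effective parallel downloads, capped by per-host limit x number of hostnames.
def parallel_connections(hosts: int, per_host_limit: int, resources: int) -> int:
    return min(resources, hosts * per_host_limit)

print(parallel_connections(1, 2, 20))  # 2: the old HTTP/1.1 guidance, one hostname
print(parallel_connections(4, 2, 20))  # 8: sharding across 4 hosts under the old limit
print(parallel_connections(1, 6, 20))  # 6: a modern browser, no sharding needed
```

Since modern browsers already open several connections per host, sharding adds extra DNS lookups and TCP handshakes without raising the effective cap by much, which is why it can hurt mobile performance.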

It's important to consider the relative performance gains from optimizing different parts of the browser code. Many developers have theorized that optimizing the CSS code used for rendering the page could improve performance. McLachlan said this is not the case for mobile browsers. He experimented with several variations of CSS code for producing the same page and found virtually no differences between them. "There is no real reason to invest the time to optimize the CSS," he said.

McLachlan concluded, "None of these tests were rocket science. What surprised me was how little real data was being published. I would like to see more sharing of actual performance data that people are seeing in the wild."
