What Latency Really Means for Business

Network latency’s cost to businesses is reflected in how users respond. By thinking about performance from a user-centric perspective, latency’s impact, and the way forward, become much clearer and more measurable.

Sadly, there’s no “five second rule” exception for performance lag. Because users see delays onscreen, fractions of a second carry real business implications. The TABB Group famously estimated that stock brokers whose electronic trading platforms run just 10 milliseconds behind the competition stand to lose 10 percent of their revenue.

Knowing what latency costs is a helpful way to understand the strategic implications of better synchronization for your business, and whether more can be done to provide a better customer experience across device and network differences. 

What Latency Does to Conversion and Bounce Rates

Lost user attention translates to real-world financial losses for many industries, particularly financial services and ecommerce, but also wherever any transactional delay means lost business or missed opportunities. 

After one second of delay, users become far more susceptible to distraction, and the longer the wait, the more aware they become of any lag in performance. From there:

  • An extra 100 milliseconds of load time can lower conversion rates by 7% (Akamai)
  • As page load time grows from 1 to 5 seconds, the probability of a bounce increases by 90% (Google)

When perceived load time climbs, bounce rates rise and conversion rates fall, and those shifts translate directly into financial losses. Poor user experience is the root of lagging business performance online.
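To make these figures concrete, here is a minimal sketch that linearly extrapolates the Akamai figure above (roughly 7% lower conversions per 100 milliseconds of delay). The linear extrapolation and the revenue baseline are illustrative assumptions, not claims from the study:

```typescript
// Estimate revenue at risk from added latency, using the Akamai figure
// cited above: ~7% relative conversion drop per 100 ms of delay.
// Linear scaling and the baseline revenue are hypothetical, for illustration.
function revenueAtRisk(
  baselineMonthlyRevenue: number, // e.g. dollars per month
  addedLatencyMs: number,
): number {
  const dropPer100Ms = 0.07; // 7% relative drop per 100 ms
  const relativeDrop = Math.min(1, (addedLatencyMs / 100) * dropPer100Ms);
  return baselineMonthlyRevenue * relativeDrop;
}

// 300 ms of extra latency on a hypothetical $1M/month store
// puts roughly $210,000/month at risk under this model.
const monthlyLoss = revenueAtRisk(1_000_000, 300);
```

Even as a rough model, this kind of back-of-the-envelope arithmetic is useful for turning milliseconds into numbers the business side can act on.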

The RAIL Model: Rethinking Latency in Web Applications  

User expectations should drive performance goals. Take web applications as an example. In a web application context, drilling down to the real impacts of latency means looking closely at how users experience performance delays.

Created by Google’s Chrome team in 2015, RAIL is based on user interaction with websites. If you were to map a typical user journey, it might look something like this: 

  • Waiting (for a page and application to load)
  • Watching an animation or reading text
  • Scrolling down
  • Tapping or clicking a link
  • Waiting for a page to load 

While visiting just one page, our user undertakes five different actions. With the RAIL (Response, Animation, Idle, and Load) model, network engineers have quantifiable, user-focused metrics to understand how users see performance delays in their applications. 

The human visual system notices when a frame takes longer than about 16 milliseconds, the per-frame budget at 60 frames per second, so when animations miss that budget, users see the stutter. Setting performance goals accordingly helps reframe the discussion around latency and make it more relevant to user expectations.

The RAIL guidelines set targets that make performance feel instant to users. Here’s what these guidelines look like and how they shape the conversation:

  • Response in less than 100 milliseconds: Ideally, user input is responded to within 50 milliseconds, which leaves headroom so other latency sources don’t push total event processing time beyond 100 milliseconds.
  • Animation within 10 milliseconds: Refreshing animation frames 60 times per second provides users with a seamless experience. That works out to roughly 16 milliseconds per frame, and since browsers need about 6 milliseconds to display a new frame, your application realistically has just 10 milliseconds of work per frame.
  • Idle time isn’t wasted: Although performing work during idle time makes sense, work shouldn’t jeopardize responsiveness. User input is always the highest priority. 
  • Load before you lose users: The first load should be fast, and subsequent loads should be faster. Slow loading looks like a broken application, so you’ll want to optimize load time based on the devices and networks your users actually have. For mobile users, you’ll have to account for slow 3G or 4G connections and possibly older devices. Users often have more patience on mobile connections; a desktop or laptop user might not be so forgiving.
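The budgets above can be encoded as a simple check that a measurement pipeline might run against observed timings. This is a minimal sketch, assuming RAIL's published thresholds (the 5-second load target reflects RAIL's guidance for slow mobile connections); the names here are our own, not part of any standard API:

```typescript
// RAIL budgets, per the guidelines above (values in milliseconds).
// The 5 s load target is an assumption based on RAIL's slow-connection guidance.
const RAIL_BUDGETS = {
  response: 100, // react to user input within 100 ms (aim for 50 ms)
  animation: 10, // per-frame app work at 60 fps, after ~6 ms of browser overhead
  load: 5000,    // become interactive within ~5 s on a slow connection
} as const;

type RailPhase = keyof typeof RAIL_BUDGETS;

// Returns true when a measured duration fits its RAIL budget.
function withinBudget(phase: RailPhase, measuredMs: number): boolean {
  return measuredMs <= RAIL_BUDGETS[phase];
}

withinBudget("animation", 8);  // true: the frame fits the 10 ms budget
withinBudget("response", 120); // false: users will perceive the delay
```

Framing measurements this way turns "the page feels slow" into a pass/fail question against user-centered thresholds.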

From what we know about user perception, we can build responsive applications and networks that feel immediate from the user’s point of view.

Framed within the RAIL model, latency becomes an impediment to the kinds of user experiences today’s brands want to create. Each additional millisecond can carry real costs in revenue, user engagement, and brand loyalty. Organizations are genuinely competing to deliver lower latency than rivals who are just as eager to reach users online.

Measuring and Reducing Latency with Better Clock Sync

Of course, reducing latency is only realistic if you can accurately measure the full latency users experience. As we’ve discussed in past blog posts, precise network timing is invaluable for measuring one-way latency and decomposing total request completion time, giving network teams the insight to determine whether latency issues lie in the network or the application. Contact us to schedule a demo and learn how Clockwork helps teams diagnose and resolve network latency to deliver a better, faster user experience.
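As a sketch of why synchronized clocks matter here: with accurately synced client and server clocks, four timestamps are enough to split total request time into its network and application components. The field and function names below are illustrative, not any particular product’s API:

```typescript
// Illustrative decomposition of a request's total completion time,
// assuming client and server clocks are accurately synchronized.
// All names here are hypothetical, for illustration only.
interface RequestTimestamps {
  clientSend: number;    // ms, client clock
  serverReceive: number; // ms, server clock (synchronized with client)
  serverSend: number;    // ms, server clock
  clientReceive: number; // ms, client clock
}

function decompose(t: RequestTimestamps) {
  const uplinkMs = t.serverReceive - t.clientSend;      // one-way latency, request path
  const applicationMs = t.serverSend - t.serverReceive; // server processing time
  const downlinkMs = t.clientReceive - t.serverSend;    // one-way latency, response path
  const totalMs = t.clientReceive - t.clientSend;
  return { uplinkMs, applicationMs, downlinkMs, totalMs };
}

// Without synchronized clocks, only totalMs (and applicationMs, measured
// locally on the server) is observable; the uplink and downlink legs
// cannot be separated.
```

This is exactly the visibility that lets a team say "the network added 25 ms on the return path" instead of guessing whether the app or the network is at fault.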

Interested in solving challenging engineering problems and building the platform that powers the next generation of time-sensitive applications? Join our world-class engineering team.

Contact Sales

Learn how Clockwork technology can power your mission-critical applications in cloud and on-prem environments. Please complete the form below.