As a consultant at Prowess Consulting, I spend much of my day working on functionality test reports, whether writing them or helping devise ways to test different end-user devices: tablets, thin clients, laptops, even all-in-ones. Recently, I worked on a piece about the performance of a cloud solution. It was not about a device at all; it was about network performance. I was reminded that there is another side to device performance: the network side.

I could have the best device in the world, but if the network is not performing, it does not matter. The best device on a bad network will still perform badly when interfacing with the network or the Internet; it might even perform worse than the worst device connected to a great network.

Contributors to Network Performance

The two primary contributors to how a network performs are its bandwidth and its latency.

Bandwidth or Bits per Second

Bandwidth is the network’s capacity to transfer data; it’s the size of the pipe. That capacity is generally measured in bits per second (bps), megabits per second (Mbps), or gigabits per second (Gbps), and it represents the maximum number of bits that can be transferred, with speeds typically described as “up to” a certain amount. Like any pipe, if something else is sharing it or creating a clog, the network will not deliver its full capacity or specified bits per second.
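
To make that “up to” ceiling concrete, here is a minimal sketch in Python; the file size and link speed are illustrative assumptions of mine, not figures from this article:

  # Best-case transfer time over a link, ignoring latency, protocol
  # overhead, and sharing -- this is the "up to" ceiling only.
  FILE_SIZE_MB = 500       # illustrative file size, in megabytes
  LINK_SPEED_MBPS = 100    # illustrative link speed, in megabits per second

  file_size_megabits = FILE_SIZE_MB * 8    # 1 byte = 8 bits
  best_case_seconds = file_size_megabits / LINK_SPEED_MBPS
  print(f"Best case: {best_case_seconds:.0f} seconds")    # prints 40

Anything sharing the pipe only adds to that best-case number; it never subtracts from it.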

Latency

Latency is the lag or delay that occurs while data is being transferred through the network pipe. Latency tends to be the culprit when bandwidth speeds do not deliver their full “up to” capacity. Excessive latency creates network bottlenecks (clogs), which reduce the amount of data that the network can transfer regardless of its bandwidth. When network data transfers more slowly than expected, an administrator typically looks at latency first.

Latency usually results from router hops and from the distance between the starting and ending points. A router is a device that transfers data packets from one network to another (the hop). At each hop, the packet is copied, which creates a few milliseconds of delay. That delay is insignificant in and of itself until a packet travels through 50 or more routers, or until heavy network traffic forces a router to wait several more milliseconds before it can send the packet on its way.

The distance that a packet has to travel also causes added delays. The longer the path the packet travels, the longer it takes to get there. Add several delays due to the router hops, and the latency grows.
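
To put rough numbers on those two sources of delay, here is a back-of-the-envelope sketch; the hop count, per-hop delay, and path length are illustrative assumptions, and the fiber figure is the commonly cited speed of light in fiber (about 200,000 km per second):

  # Rough one-way latency estimate: per-hop router delay plus
  # propagation delay over fiber. All inputs are illustrative.
  HOPS = 20                # number of router hops (assumed)
  PER_HOP_MS = 1.0         # delay per hop, in milliseconds (assumed)
  DISTANCE_KM = 4000       # path length, roughly coast to coast (assumed)
  FIBER_KM_PER_MS = 200    # light in fiber covers ~200 km per millisecond

  hop_delay_ms = HOPS * PER_HOP_MS
  propagation_ms = DISTANCE_KM / FIBER_KM_PER_MS
  print(f"Estimated one-way latency: {hop_delay_ms + propagation_ms:.0f} ms")  # ~40 ms

Double that for the roundtrip, and the delay is noticeable before congestion even enters the picture.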

Both bandwidth and latency impact overall network performance. They can likewise impact user experience—positively or negatively. Simply travelling on a multilane highway does not mean that you will get to your destination on time if there is something blocking the roadway and forcing everyone into a single lane.

The Cost of Latency

A study done by Amazon in 2008 put the effect of even a little latency into perspective for me.[1] That study looked at the cost of latency on Amazon’s sales. At the time, Amazon determined that 100 milliseconds of latency could cost it one percent in sales annually. Extrapolated to Amazon’s size and revenue in 2015, that 100 milliseconds of latency could potentially have cost Amazon $1.07 billion in that year alone.
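
The arithmetic behind that figure is simple to check; the $1.07 billion implies roughly $107 billion in 2015 revenue, which matches Amazon’s reported net sales for that year:

  # Back-of-the-envelope check on the cited figure: 1% of Amazon's
  # approximate 2015 net sales attributed to 100 ms of latency.
  REVENUE_2015 = 107e9    # ~$107 billion in net sales
  SALES_HIT = 0.01        # 1% of sales per 100 ms of latency

  cost = REVENUE_2015 * SALES_HIT
  print(f"Estimated annual cost: ${cost / 1e9:.2f} billion")    # $1.07 billion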

How Latency Is Tested

Latency is tested by measuring the time that it takes for a single network packet to travel from its starting point to its intended destination and back again—a roundtrip transfer. If the packet travels more slowly than expected, latency is high. If it travels as quickly or more quickly than expected, latency is low. Low latency is the end goal on any network.
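
One minimal way to see a roundtrip time from code is the sketch below; it is not a formal latency test, as it times a TCP connection handshake (which also includes a DNS lookup) as a stand-in for a single packet’s roundtrip, and the host and port are illustrative:

  import socket
  import time

  HOST = "example.com"    # illustrative target host
  PORT = 443              # illustrative port (HTTPS)

  # Time a TCP handshake; connect() completes in roughly one roundtrip.
  start = time.perf_counter()
  with socket.create_connection((HOST, PORT), timeout=5):
      elapsed_ms = (time.perf_counter() - start) * 1000

  print(f"Roundtrip to {HOST}: {elapsed_ms:.1f} ms")

Dedicated tools measure this more precisely with ICMP echo packets, which is where ping comes in.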

Ping and Pinging the Network

Ping is an Internet program that sends a request to determine whether a host is accepting requests (that is, whether it is “live”) and how long the host takes to respond if it is. Ping is also used to describe the action of sending the request: “pinging” the network (a short sketch of doing so follows the list below). A faster response means that the connection is more responsive and has lower latency, which enables:

  • More consistent speeds and throughput, which can result in more reliable performance for applications using the network
  • The capability to handle more requests, whether from one or many devices at a time
  • Better throughput performance for devices currently connected to that network
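
Here is a minimal sketch of pinging a host from Python by invoking the system ping utility; the host is illustrative, and the count flag is -n on Windows and -c elsewhere:

  import platform
  import subprocess

  HOST = "example.com"    # illustrative target host

  # Send four echo requests; ping exits with 0 if the host responded.
  count_flag = "-n" if platform.system() == "Windows" else "-c"
  result = subprocess.run(["ping", count_flag, "4", HOST])
  print("Host is live" if result.returncode == 0 else "Host did not respond")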

The Latency Takeaway

My takeaway, on being reminded that a device is not an island (most of the time), is that even the best devices depend on the network and its performance, not to mention the servers connected to that network.

[1] Digital Realty | Telx. “The Cost of Latency.” March 2015. http://www.telx.com/blog/the-cost-of-latency/.
