When we imagine the future we often imagine speed: faster transport, faster access to information, quicker service. What if we told you that your IT network is probably capable of more right now? Your business could be enjoying faster applications, speedier customer response and greater productivity, if only you could unlock it.
The key to doing that is to optimise your applications, and a key part of doing that is to reduce your network latency. But, if you’re going to conquer it, you’ve got to know your enemy…
What is Network Latency?
The concept of latency is much easier to understand if you picture the data that’s moving around the internet as a physical entity.
We rightly think of information as having no mass; it's not a parcel that's going to take up room in the back of a delivery van, after all. But the digital world does split information into packets, and it's the fact that those packets are individually addressed, sent and acknowledged that makes latency matter.
In simple terms, latency measures the hold-ups in the time it takes for data to pass from a sending point to a receiving point. Low latency means the transfer was quick; high latency means it was delayed. It's usually the there-and-back time, also known as the Round Trip Time (RTT) or Round Trip Delay, that's measured.
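If you want to see the idea in practice, here's a rough sketch (not a production tool) that estimates the round trip time to a server by timing how long a TCP connection takes to open. The hostname and port are placeholders; point it at something your own applications actually talk to.

```python
import socket
import time

def estimate_rtt(host: str, port: int = 443, samples: int = 5) -> float:
    """Estimate round trip time (ms) by timing TCP connection setups."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connect() completes once the server's reply has come back
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)  # the lowest sample is closest to the path's base latency

if __name__ == "__main__":
    # 'example.com' is a placeholder host for illustration.
    print(f"Approximate RTT: {estimate_rtt('example.com'):.1f} ms")
```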
The latency conveyor belt
Think of a network path like an airport pedestrian conveyor belt for a moment. Imagine that the packets of data are tourists being sent to the far end by a tour rep. Let's say that the rep is only allowed to send a certain number of tourists down the conveyor before waiting for confirmation that they've all got safely to the other end. After all, he doesn't want to lose his customers! Let's also say that to get an acknowledgement, someone has to come back the other way along the return conveyor.

Now if that conveyor belt is long, slow or both, the tour rep will spend a lot of time waiting, and that will slow down the number of tourists he can get to their destinations. This is how IP traffic is usually managed, and it makes a big difference to how much data can be carried, with implications for performance. You can have an extremely wide conveyor belt (high bandwidth), but if it is really long and slow then the need to wait for an acknowledgement means it still can't carry many people very quickly (low throughput).
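To put some rough numbers on the analogy: in TCP terms, the number of tourists the rep can send before waiting is the window size, and the best you can achieve is one window of data per round trip. The back-of-envelope sketch below assumes a 64 KB window purely for illustration.

```python
def window_limited_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """At best, one window of data gets delivered per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A 64 KB window, purely for illustration, over links with different round trip times:
for rtt_ms in (5, 25, 100, 250):
    mbps = window_limited_throughput_mbps(64 * 1024, rtt_ms)
    print(f"RTT {rtt_ms:>3} ms -> at most about {mbps:.1f} Mbps")
# Roughly 105, 21, 5.2 and 2.1 Mbps: same conveyor width, very different throughput.
```

The bandwidth hasn't changed in any of those cases; only the length of the conveyor has.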
What’s the holdup?
Just as a physical parcel can be delayed in the real world, digital information can be delayed over a network, and it tends to be a combination of the following things standing in the way:
- Propagation delays
A ‘propagation delay’ is the time the signal itself takes to travel, so it's the part of the delay that relates to distance.
Nothing can move faster than the speed of light – and because there’s always some physical movement involved with the transfer of data (even if that movement is waves of light or radio transmissions) this rule remains true for the digital world.
This puts a maximum speed on your data. You might think that isn't too much of an issue, but remember that on a satellite link, data can travel more than 22,000 miles up to a geostationary satellite before it even heads for its destination!
- Nodal delays
On its journey into space, across countries and under the sea, your data is going to be handled by a number of devices. These will generally be routers and switches, and while they're efficient and specialised pieces of equipment, each one takes a tiny fraction of a second to examine the data that's landed with it and work out what should be done with it next.
These are referred to as ‘nodal delays’, and they add up. Not to huge figures, but in latency terms even milliseconds count, as the rough sketch below shows.
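To give a feel for how these delays stack up, here's a back-of-envelope sketch. The fibre speed uses the usual rule of thumb of roughly two-thirds the speed of light in a vacuum, and the distance, hop count and per-hop delay are illustrative assumptions, not measurements.

```python
SPEED_OF_LIGHT_KM_PER_S = 300_000
FIBRE_KM_PER_S = SPEED_OF_LIGHT_KM_PER_S * 2 / 3  # light in glass travels at roughly 2/3 c

def one_way_delay_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Propagation delay over the distance plus an assumed nodal delay at every hop."""
    propagation_ms = distance_km / FIBRE_KM_PER_S * 1000
    return propagation_ms + hops * per_hop_ms

# Illustrative figures only: a ~5,600 km fibre route crossed in 15 hops.
one_way = one_way_delay_ms(5600, 15)
print(f"One way: {one_way:.1f} ms, round trip: {2 * one_way:.1f} ms")
```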
Milliseconds count
Reading this, you might think that milliseconds don't count when it comes to the apps you run or the customers you look after. I have to tell you that you'd be wrong. That ‘spinning’ icon you see when you're trying to watch on-demand TV is happening all the time behind the scenes in your applications, and it's costing you precious productivity and customer satisfaction. The impact of your systems going down altogether can be huge.
It's no exaggeration: customers have come to expect the data that's relevant to them to be available almost immediately. They'll happily click off if they can't get to it soon enough, or vote with their feet if you're slow at accessing it on their behalf.
It only takes a small delay on each of the many data conversations that go back and forth to support one of your applications before end-users (or worse, customers) end up waiting seconds for a response to a mouse click or for a price to come up.
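As a rough illustration (the numbers here are assumptions, not measurements), suppose a single screen refresh in an application triggers fifty request/response exchanges with a server:

```python
def user_wait_seconds(round_trips: int, rtt_ms: float) -> float:
    """A chatty transaction pays the round trip time once per exchange."""
    return round_trips * rtt_ms / 1000

# Suppose one screen refresh triggers 50 request/response exchanges (an assumption):
for rtt_ms in (2, 20, 80):
    print(f"RTT {rtt_ms:>2} ms -> the user waits about {user_wait_seconds(50, rtt_ms):.2f} s")
# 0.10 s feels instant, 1.00 s is noticeable, 4.00 s is that spinning icon.
```

The user never sees the round trip time directly; they see it multiplied by however chatty the application happens to be.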
How do I Manage Network Latency?
If you think one of your applications is performing poorly and impacting your staff or your customers then you need to take action.
The first thing to realise is that latency can be improved. You might, for example, swap a circuit for a different type and dramatically improve matters. You might move from connecting over the internet to connecting over a private wide area network. Or you might find that replacing old networking equipment or removing bottlenecks makes the difference.
Use this tool to get a feel for the impact of latency. It comes with an article that explains latency in more detail and gives examples of the typical latencies you may find with different network types. There is also a link explaining how to audit your network to trace the cause of performance problems.

That introduces the last point: latency is not the only cause of poor performance. It's often the application, database, or server and storage infrastructure that drives performance issues. This is where a managed service provider and a performance audit can often help get to the bottom of things.
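If you want to see where on the path the delay is creeping in, an audit will typically start with something like a traceroute, which lists each router (node) along the way together with the latency reported at that hop. The sketch below simply wraps the standard Unix traceroute utility (on Windows the equivalent command is tracert); the hostname is a placeholder.

```python
import subprocess

def trace_path(host: str) -> str:
    """List the routers on the path and the per-hop latency they report."""
    # Assumes a Unix-like system with the standard 'traceroute' utility installed;
    # on Windows the equivalent command is 'tracert'.
    result = subprocess.run(["traceroute", host], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # 'example.com' is a placeholder; trace to a host your applications actually use.
    print(trace_path("example.com"))
```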