Communications Insider: March 2009: Volume 4 Issue 1
Switched Ethernet Latency
Ethernet has displaced many alternative network technologies because of its high data throughput rates. Despite this advantage, however, Ethernet has a key shortcoming: the exact time any single packet needs to traverse the network cannot be predicted.
Unpredictable latency is unacceptable for applications where message delivery must occur within a specific time limit. So the question becomes: if we cannot know the actual time, can we calculate the maximum time a packet will need to travel from one end of the network to the other? Fortunately, the factors that influence latency in a switched Ethernet network are well known, so this analysis is possible.
Worst Case Latency Analysis
To construct the worst-case scenario, the one that results in the highest latency, let's assume the network topology shown in Figure 1. Four devices, A–D, communicate via a single switch:
- Device A is a main controller
- Devices B and C send large (1518 Byte) frames at high data rates to Device A (assume they are video cameras streaming data). Traffic from B and C to A has a low priority.
- Device D is a latency-critical controller that sends 1000 Byte packets every 100ms to Device A. Traffic from D to A has a high priority.
In this example, latency is measured from the moment the first bit of a packet is clocked out on Device D's Ethernet transmit link until the last bit of the same packet is clocked in on Device A's receive link. This analysis deals only with the Ethernet network and does not include time required by the operating systems on Device D and Device A.
Note: In this example, the switch is a Gigabit Ethernet switch, but the links are established at 100Mbit/sec. This will influence the internal switch capacity and the maximum data/packet rate each port can sustain. Abbreviations: B=Byte, b=bit, ms=millisecond, us=microsecond.
Using Figure 1, with a 24-port switch capable of line-rate performance (48Gb/s aggregate throughput), the latency can be calculated as follows:
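The calculation can be sketched in a few lines of Python. This is a minimal model, assuming store-and-forward switching, strict priority queuing (so at most one low-priority frame from B or C can already be in flight on the egress port to A), and a placeholder internal switch processing delay; it ignores preamble and inter-frame gap overhead. The 5 us processing figure is an assumption for illustration, not a vendor specification.

```python
# Worst-case D-to-A latency sketch for the topology in Figure 1.
# Assumptions (illustrative, not the article's own worked numbers):
#   - store-and-forward: the switch receives a whole frame before forwarding,
#   - strict priority: only one low-priority frame can block the egress port,
#   - SWITCH_PROC_US is a placeholder internal processing delay,
#   - preamble and inter-frame gap are ignored.

LINK_RATE_BPS = 100_000_000   # links negotiate at 100Mbit/sec (see Note)
SWITCH_PROC_US = 5.0          # assumed switch processing delay, us

def tx_time_us(frame_bytes: int) -> float:
    """Serialization time of one frame on a 100Mbit/sec link, in us."""
    return frame_bytes * 8 / LINK_RATE_BPS * 1e6

# 1. Device D clocks its 1000 Byte packet onto the wire,
#    and the switch must receive it completely (store-and-forward).
ingress_us = tx_time_us(1000)    # 80.0 us

# 2. Worst case: a 1518 Byte low-priority frame from B or C has just
#    started on the egress port toward A and cannot be preempted.
blocking_us = tx_time_us(1518)   # 121.44 us

# 3. The switch then serializes D's packet toward Device A.
egress_us = tx_time_us(1000)     # 80.0 us

worst_case_us = ingress_us + SWITCH_PROC_US + blocking_us + egress_us
print(f"worst-case latency = {worst_case_us:.2f} us")  # 286.44 us
```

Under these assumptions the bound is roughly 286 us, far below Device D's 100ms send interval; the dominant terms are the two 80 us serialization delays and the single 121.44 us blocking frame.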
The calculations shown here give an example of how worst-case latency in a switched Ethernet network can be estimated. A few other points, however, are important to note:
- Ethernet is a best-effort technology. Switches drop packets as soon as their memory and FIFOs fill up, so it is important to plan for lower-than-maximum network utilization.
- TCP and UDP traffic over Ethernet is notorious for bursty patterns. Even when a network is underutilized, if every network device starts sending a large number of packets at once, the switch packet memory can overflow. Network design should therefore consider how much packet memory is available and allocate it by packet priority, dropping low-priority packets first if overflow occurs.
- Another aspect of network design is the end nodes themselves. The operating system must support the latency expectations. If it is tied up with a task for an extended time without servicing Ethernet packets, FIFO congestion can build up on the device itself. In response, the Ethernet controller might invoke Ethernet flow control to throttle packet transmission from the switch, moving the congestion further upstream in the network. Network design should therefore ensure that devices can handle the expected amount of traffic, and that they can assign packet priorities and differentiate their handling.
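To make the burst-overflow concern above concrete, the sketch below estimates how much packet memory a synchronized burst toward a single egress port demands. The model is a rough worst-case bound under assumed conditions (every port at 100Mbit/sec, each sender bursting maximum-size 1518 Byte frames at the same instant); the device counts and burst sizes are hypothetical, not from the article.

```python
# Rough sketch of packet-memory demand during a synchronized burst.
# Assumptions (illustrative): all ports run at 100Mbit/sec, and several
# senders each burst a fixed number of maximum-size (1518 Byte) frames
# toward the same egress port at the same moment.

FRAME_BYTES = 1518

def buffer_demand_bytes(n_senders: int, frames_per_burst: int) -> int:
    """Worst-case bytes the switch must hold while one egress port drains.

    The egress port transmits only one frame at a time, so in the worst
    case every frame of the burst except the one currently on the wire
    sits in packet memory (ignoring that the egress port also drains
    frames while the burst is still arriving).
    """
    total_frames = n_senders * frames_per_burst
    return (total_frames - 1) * FRAME_BYTES

# Hypothetical example: 10 devices each burst 20 large frames at Device A.
demand = buffer_demand_bytes(10, 20)
print(f"peak buffering bound = {demand} bytes")
```

Even modest bursts reach hundreds of kilobytes, which is why the article recommends checking available packet memory and partitioning it by priority rather than relying on headroom alone.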