Actual throughput on Gigabit Ethernet

By Rickard Nobel | June 21, 2011

How to calculate the usable bandwidth on a Gigabit Ethernet network.

In this article we will look at how much actual data throughput we get on a Gigabit Ethernet based network, and whether this increases when using Jumbo Frames.

How much of this is user data and how much is overhead?

The bandwidth of a Gigabit Ethernet network is defined so that a node can send 1,000,000,000 bits each second, that is, one billion ones and zeros every second. Bits are most often grouped into bytes, and since eight bits make up one byte this gives the possibility to transfer 125,000,000 bytes per second (1,000,000,000 / 8).

Unfortunately, not all of these 125,000,000 bytes/second can be used to send data, as we have multiple layers of overhead. As you may be aware, the data transferred over an Ethernet based network must be divided into “frames”. The size of these frames limits the maximum number of bytes that can be sent together. The maximum frame size for Ethernet has been 1518 bytes for the last 25 years or more.

Each frame causes some overhead, both inside the frame and, less well known, also on the “outside”. Before each frame is sent, a certain combination of bits must be transmitted, called the Preamble, which basically signals to the receiver that a frame is coming right behind it. The preamble is 8 bytes long and is sent just before each and every frame.

When the main body of the frame (1518 bytes) has been transferred we might want to send another one. Since we are not using the old CSMA/CD access method (used only for half duplex) we do not have to “sense the cable” to see if it is free, which would cost time, but the Ethernet standard defines that for full duplex transmissions a certain amount of idle time must pass before the next frame is sent onto the wire.

This is called the Interframe Gap and is 12 bytes long. So between all frames we have to leave at least 12 bytes “empty” to give the receiver side the time needed to prepare for the next incoming frame.

This will mean that each frame actually uses:

12 bytes of Interframe Gap + 1518 bytes of frame data + 8 bytes of preamble = 1538 bytes

This means that each frame actually consumes 1538 bytes of bandwidth, and since we have “time slots” for sending 125,000,000 bytes each second, this allows for 81,274 frames per second (125,000,000 / 1538).
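As a quick sanity check, the arithmetic above can be sketched in a few lines of Python (the constants are the ones from the article):

```python
# Gigabit Ethernet line rate in bytes per second
LINE_RATE = 1_000_000_000 // 8       # 125,000,000 bytes/s

# Bandwidth consumed per full-size frame on the wire:
# interframe gap + maximum frame + preamble
BYTES_ON_WIRE = 12 + 1518 + 8        # 1538 bytes

frames_per_second = LINE_RATE // BYTES_ON_WIRE
print(frames_per_second)             # 81274
```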

So on default Gigabit Ethernet we can transmit over 81,000 full-size frames each second, a quite impressive number. Since we are running full duplex, we can receive 81,000 frames at the same time too!

Let us continue to study the overhead. For each frame we lose 12 + 8 bytes for the Interframe Gap and the Preamble, which can be considered to be “outside” of the frame, but can we use the rest to send our actual data? No, there is some more overhead going on.

The first 14 bytes of the frame are used for the Ethernet header and the last 4 bytes contain a checksum that tries to detect transfer errors. The checksum uses the CRC32 algorithm and is called the Frame Check Sequence (FCS).

This means that we lose a total of 18 bytes in overhead: the Ethernet header at the beginning and the checksum at the end, which together form a kind of “frame” around the data carried inside. The number of bytes left is called the Maximum Transmission Unit (MTU) and is 1500 bytes on default Ethernet. The MTU is the payload that can be carried inside an Ethernet frame. It is a common misunderstanding that the MTU is the frame size, but it really is only the data inside the frame.

Just behind the Ethernet header we will most likely find the IP header. With ordinary IPv4 this header is 20 bytes long. Behind the IP header we will, in turn, most likely find the TCP header, which has the same length of 20 bytes. The amount of data that can be transferred in each TCP segment is called the Maximum Segment Size (MSS) and is typically 1460 bytes.

So the Ethernet header and checksum plus the IP and TCP headers together add 58 bytes of overhead. The Interframe Gap and the Preamble add 20 more. So for each 1460 bytes of data sent, we have a minimum of 78 extra bytes handling the transfer at the different layers. All of these are very important, but they do cause overhead at the same time.

As noted in the beginning of this article, we can send 125,000,000 bytes/second on Gigabit Ethernet. With each frame consuming 1538 bytes of bandwidth, this gave us 81,274 frames/second (125,000,000 / 1538). Since each frame carries a maximum of 1460 bytes of user data, we can transfer 118,660,598 data bytes per second (1460 / 1538 of the total bandwidth), i.e. around 118 MB/s.

This means that with the default Ethernet frame size of 1518 bytes (MTU = 1500) we have an efficiency of about 94.9 % (118,660,598 / 125,000,000), meaning that the remaining 5 % or so is used by the protocols at the various layers, which we can call overhead.
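The efficiency calculation can be sketched in Python, using the layer sizes given above:

```python
LINE_RATE = 125_000_000        # bytes/s on Gigabit Ethernet
FRAME = 1518                   # default maximum Ethernet frame size
GAP_AND_PREAMBLE = 12 + 8      # interframe gap + preamble, "outside" the frame
HEADERS = 14 + 4 + 20 + 20     # Ethernet header + FCS + IPv4 + TCP
MSS = FRAME - HEADERS          # 1460 bytes of user data per frame

efficiency = MSS / (FRAME + GAP_AND_PREAMBLE)   # 1460 / 1538
throughput = LINE_RATE * efficiency             # user-data bytes per second
print(f"{efficiency:.1%}")                      # 94.9%
print(round(throughput))                        # 118660598
```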

If we enable so-called Jumbo Frames on all equipment, we can potentially increase the bandwidth actually used for our data. Let us look at that.

A commonly used MTU value for Jumbo Frames is 9000. To this we have to add the overhead for the Ethernet header and checksum (14 + 4 bytes), the Preamble (8 bytes) and the Interframe Gap (12 bytes). This makes the frame consume 9038 bytes of bandwidth, and out of the total of 125,000,000 bytes available each second we can send 13,830 jumbo frames (125,000,000 / 9038). That is a lot fewer frames than the 81,000 normal-sized frames, but each frame carries more data, which reduces the network overhead.

(There is also other overhead, such as CPU time in the hosts and the work done by network interface cards, switches and routers, but in this article we only look at the bandwidth usage.)

If we remove the overhead for the Interframe Gap, the Ethernet checksum, TCP, IP, the Ethernet header and the Preamble, we end up with 8960 bytes of data inside each TCP segment. This means that the Maximum Segment Size, the MSS, is 8960 bytes, a lot larger than the default 1460 bytes. An MSS of 8960 multiplied by 13,830 (the number of frames) gives 123,916,800 bytes of user data per second.

This gives a really great efficiency of about 99 % (123,916,800 / 125,000,000). So by increasing the frame size we have more than four percentage points more bandwidth available for data, compared to about 94.9 % for the default frame size.
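To compare the two frame sizes directly, here is a small sketch that counts whole frames per second, as in the jumbo calculation above (which is why the 1500-byte figure comes out marginally lower than the fractional value earlier in the article):

```python
LINE_RATE = 125_000_000     # bytes/s on Gigabit Ethernet
GAP_AND_PREAMBLE = 12 + 8   # interframe gap + preamble
ETH_OVERHEAD = 14 + 4       # Ethernet header + FCS
TCPIP = 20 + 20             # IPv4 + TCP headers

def goodput(mtu: int) -> int:
    """User-data bytes per second for full frames of the given MTU."""
    bytes_on_wire = mtu + ETH_OVERHEAD + GAP_AND_PREAMBLE
    frames = LINE_RATE // bytes_on_wire
    return frames * (mtu - TCPIP)

print(goodput(1500))   # 118660040  (81274 frames x 1460 bytes)
print(goodput(9000))   # 123916800  (13830 frames x 8960 bytes)
```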

Conclusion: Default Gigabit Ethernet allows an impressive number of frames (about 81,000 per second) and a high throughput of actual data (about 118 MB/s). By increasing the MTU to 9000 we can deliver even more data on the same bandwidth, up to almost 124 MB/s, thanks to the decreased overhead that follows from the lower number of frames. Jumbo Frames let us use a full 99 % of the Gigabit Ethernet bandwidth to carry our data.

7 thoughts on “Actual throughput on Gigabit Ethernet”

    1. Rickard Nobel Post author

      Hello Christian,

      and thank you for your reply. It is an interesting question, where most sources (and I believe the 802.3 standard) say that you must wait 96 bit times between transmitted frames, which for Gigabit Ethernet is the same as 96 nanoseconds, or 12 bytes of “space” on the wire.

      The information at the Wikipedia link seems to say that you could lower the Interframe Gap to 64 bit times (8 bytes), but it is quite vague on how this is done in practice and whether specific network cards and switches are needed for it to work. I shall see if I can find any more information on this.

      Regards, Rickard

  1. Pingback: What Is A Good Home Internet Speed? – Home Network Ninja

  2. Ellison Keller

    Can you run this calculation for the smallest Ethernet frame size of 64 bytes on a 10G network? My calculations using this formula seem way too low.

    1. Rickard Nobel Post author

      Hello Elison,

      if using the minimum frame size, and still assuming TCP, the throughput will be quite low, probably just as you have calculated.

      The full size would be:

      8 preamble
      14 ethernet header
      20 IP
      20 TCP
      6 payload
      4 frame check sequence
      12 interframe gap

      Which will give 84 bytes consumed for the whole frame but only 6 bytes being payload.

      Using the same formula as in the article, but with 10G, gives 1,250,000,000 / 84 = 14,880,952 frames per second (minimum sized). 14,880,952 frames with 6 bytes of payload each results in about 89,285,714 bytes, or around 89 MB per second, which is actually less than Gigabit Ethernet with full-size frames!

      89,285,714 / 1,250,000,000 = 0.07, so the efficiency ratio is about 7 percent, and the remaining 93 % is overhead…
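The reply’s minimum-frame arithmetic can be checked with the same approach, e.g. in Python (using the byte counts listed above):

```python
TEN_GIG = 10_000_000_000 // 8      # 1,250,000,000 bytes/s on 10G Ethernet
BYTES_ON_WIRE = 8 + 64 + 12        # preamble + minimum frame + interframe gap = 84
PAYLOAD = 64 - (14 + 20 + 20 + 4)  # 6 bytes of TCP payload in a minimum frame

frames = TEN_GIG // BYTES_ON_WIRE  # 14,880,952 minimum-sized frames/s
print(frames * PAYLOAD)            # 89285712, i.e. about 89 MB/s of user data
```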

      Regards, Rickard

  3. Peter

    Interesting and well written. But it got me thinking… If I had a memcached server and used jumbo frames of 9K, it would mean that it could not serve more than 13,830 replies per second (in theory). But a “normal” frame size would offer me 81K potential answers. Since memcached data is often very small, it would be better to have no or smaller jumbo frames, depending on the type of server/data you are running/serving. More bandwidth does not always translate to more replies to clients.
    For example, a Counter-Strike or WoW server would get loads of packets with a very tiny bit of information. So my conclusion is “bigger isn’t always better – or faster”.

