# Gigabit Ethernet: Dude, Where's My Bandwidth?

## First Test: How Fast Is Gigabit Supposed To Be, Anyway?

How fast is a gigabit? If you hear the prefix "giga" and assume 1,000 megabytes, you might also figure that a gigabit network should deliver 1,000 megabytes per second. If this sounds like a reasonable assumption to you, you’re not alone. But unfortunately, you’re going to be fairly disappointed.

So what is a gigabit? It is 1,000 megabits, not 1,000 megabytes. There are eight bits in a single byte, so let’s do the math: 1,000 megabits divided by 8 bits = 125 megabytes. Therefore, a gigabit network should be capable of delivering a theoretical maximum transfer of 125 MB/s.

While 125 MB/s might not sound as impressive as the word gigabit, think about it: a network running at this speed should be able to theoretically transfer a gigabyte of data in a mere eight seconds. A 10 GB archive could be transferred in only a minute and 20 seconds. This speed is incredible, and if you need a reference point, just recall how long it took the last time you moved a gigabyte of data back before USB keys were as fast as they are today.
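That back-of-the-envelope math can be sketched in a few lines of Python:

```python
# Theoretical ceiling of a gigabit link, as worked out above.
link_bits_per_sec = 1_000_000_000        # 1 gigabit = 1,000 megabits
bytes_per_sec = link_bits_per_sec / 8    # 8 bits per byte

print(bytes_per_sec / 1_000_000)         # 125.0 MB/s

# Time to move a 1 GB and a 10 GB file at that rate:
for size_gb in (1, 10):
    seconds = size_gb * 1_000_000_000 / bytes_per_sec
    print(f"{size_gb} GB -> {seconds:.0f} s")   # 8 s and 80 s
```

Eight seconds per gigabyte is the best case the wire allows; everything measured below falls short of it.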

Armed with this expectation, I’ll move a file over my gigabit network and check the speed to see how close it comes to 125 MB/s. We’re not using a network of wonder machines here, but we have a real-world home network with some older but decent technology.

Copying a 4.3 GB file from one of these PCs to another five different times resulted in a 35.8 MB/s average. That's less than 30% of a gigabit network's theoretical ceiling of 125 MB/s.

What’s the problem?

## Summary
• mi1ez
It would have been interesting to see the network speeds with a decent SSD on either end.
• _SirO_
The "limiting" factor at 111MB/s is the Ethernet overhead....... The actual data needs to be framed and packed and that requires additional bits to be transferred so an overhead of (125-111)/125 = 11% seems reasonable.
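The framing overhead _SirO_ describes can be sanity-checked with a short sketch. The per-frame constants below are the standard Ethernet figures (preamble, header, FCS, interframe gap), and the 40-byte header size assumes plain TCP over IPv4 with no options:

```python
# Rough per-frame accounting for gigabit Ethernet with a standard 1500-byte MTU.
preamble, header, fcs, ifg = 8, 14, 4, 12          # bytes on the wire around each frame
mtu = 1500
tcp_ip_headers = 40                                # IPv4 + TCP, no options (assumed)

wire_bytes = preamble + header + mtu + fcs + ifg   # 1538 bytes per frame on the wire
payload = mtu - tcp_ip_headers                     # 1460 bytes of file data per frame

efficiency = payload / wire_bytes
print(f"{efficiency:.1%} payload -> {125 * efficiency:.1f} MB/s of file data at best")
```

This puts framing and header overhead at roughly 5%, for a ceiling near 118.7 MB/s of actual file data, so framing alone explains part, but not all, of the gap down to 111 MB/s; ACK traffic and end-host processing take the rest.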
• DevilWAH
I agree with Sir0 that the overheads will bring down the speed.

Secondly, if you have more than a point-to-point connection (i.e., more than two devices) on the network, your speeds will suffer.

Even if they are on separate collision domains, you will still get broadcast traffic that will interrupt conversations.

And lastly, the quality of your network cards! There is a reason that one network card will set you back £15 and another costs £150. I have found, almost without exception, that a more expensive high-quality card will sustain a higher throughput than a cheap card. You don't need to spend £150, but think twice before choosing a £10 card over a £25 one.
• daglesj
Ethernet is a sloppy standard. Typical case of just make everything 'big' and hope for the best.

Now if you had the efficiency of Token Ring with the bandwidth of ethernet....wow!
• profundido
Funny, I just upgraded my home network last week and the results are phenomenal: I copied a 12GB file from the C: drive (OCZ SSD) of my HTPC with Vista RTM to the RAID array on my server with W2008 R2 RC Datacenter, and it started at 105MB/s and dropped slowly to a steady 98MB/s until the end!! Of course, copying the file back in the other direction delivered 37MB/s (high read but slow write speed on the SSD...). In fact, it's so grand to have this speed that I have now gathered all my data on the 6.5TB network drive and stream or use everything from there. I use a Sitecom 8-port switch and CAT6 cabling if you wanna copy this proven high-performance model.
• bobucles
I didn't like the cable tests at all. Many buildings do not use ideal 25-28ft connections. In fact, the last Digital Overload event used cables over 100ft long (they throw them out afterwards; a friend took them). The real test between cable types is using them BEYOND spec, because it is cheaper to buy a 1,000ft spool of cable than to get a repeater box every 20ft.

My own computer runs off a 50ft. cable to the central hub.
• amgsoft
I think the article is concluding something without considering how networks work. A network does not work like the local PC buses and should probably not be compared with them at all.

First of all, the theoretical maximum transmit rate in one direction is not the same as how fast a file is copied. You have to consider that a file copy will use the TCP/IP protocol on the network. The Transmission Control Protocol splits the whole file into small pieces, usually approx. 1,500 bytes long including the frame header, which need to be acknowledged by the receiver. The sender and receiver operate with a window size, which is the number of unacknowledged packets. The sender sends a number of packets and then waits for an acknowledgement signalling that the receiver has received them all. If not, they need to be retransmitted. Only then can the sender send the next portion of the file.

So the actual network card on both sides will be a very limiting factor. In theory you will be able to transfer somewhere between half and 90% of the maximum rate in one direction.

But that is not the only limiting factor. The packets need to be processed as well. The PCs will be interrupted for every packet they receive, and they need to get the data out of the network card at the same speed it arrives and put the data somewhere. At 125MB/s and 1,250-1,400 bytes per packet, the PC needs to handle approx. 100,000 requests/interrupts per second from the network card, and then probably the next 50,000 from the hard drive. That means 150,000 interrupts beyond what is required for normal operation. It requires a lot of processing power from the hardware and from the operating system as well. Let me say that Windows would not be my choice of operating system to handle this amount of data with very low latency.

The network speed measured in gigabits is often more a sales trick than actual information about the real transfer speed. It is true that when using higher speeds you will get the file over faster. However, you will be far from the maximum specification. If you want to utilize the full bandwidth, you need to invest in hardware that is able to handle it. The majority of systems, especially notebooks, are able to send a few packets at the right speed, but then they spend more time waiting than utilizing the full bandwidth.

Another thing is the size of the packet. Old Ethernet uses 1,500-byte packets. It is a very limiting factor, and I hope that future Ethernet specifications will be changed to support much larger packets. Today people are trying to use jumbo frames, which are packets typically up to 9KB. That is still too small for effective data transfer on the network.
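The interrupt-rate estimate in this comment is easy to check. The sketch below counts frames per second at the full 125 MB/s line rate, for both the standard 1,500-byte MTU and a 9,000-byte jumbo MTU:

```python
# How many frames per second arrive at the gigabit ceiling?
line_rate = 125_000_000          # bytes/sec, theoretical maximum

for mtu in (1500, 9000):
    frames_per_sec = line_rate / mtu
    print(f"MTU {mtu}: ~{frames_per_sec:,.0f} frames/s")
```

Standard frames work out to roughly 83,000 per second, the same order of magnitude as the ~100,000/s figure above; jumbo frames cut the per-packet processing load by a factor of six.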
• Anonymous
Would have been interesting to see if gigabit made any difference to latency/fps on a large LAN game of some kind. I imagine a 64 player BF2 server might be able to use more than 100 megabits/sec. Not an easy bench to set up though
• tranzz
Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation.
• _SirO_
tranzz: "Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation."

hum..... he did that...
• Devastator_uk
tranzz: "Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation."

I doubt it; most game servers need very little bandwidth, really, since not a great deal of information needs to be transferred.
• Anonymous

"Each 1000BASE-T network segment can be a maximum length of 100 meters (328 feet), and must utilize "Category 5" cabling at a minimum".

http://en.wikipedia.org/wiki/1000BASE-T#1000BASE-T
• spec_00
amgsoft's post pretty much sums it up... The only thing I'd like to add is that network latency (or 'ping', not the command) also makes a difference. The way TCP works is that the machines acknowledge packets received, and though the ACK is small and relatively simple to process, it still takes time to reach the sender, and how fast that happens depends on the ping.
Now add up the time spent transmitting each set of ACK packets back through the network: that accounts for the rest of the time and bytes making up the remainder of the 125MB/s; the rest is in amgsoft's post.
Well written indeed...
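The window/latency interaction described in these comments can be sketched as a bandwidth-delay calculation: the sender can have at most one window of unacknowledged data in flight per round trip. The 64 KB window below is the classic default without TCP window scaling (an assumption, not a measurement from the article's test machines):

```python
# Throughput ceiling imposed by the TCP window size: window / round-trip time.
window = 64 * 1024               # bytes in flight per round trip (assumed default)

for rtt_ms in (0.1, 0.5, 1.0, 5.0):
    throughput = window / (rtt_ms / 1000)        # bytes/sec
    print(f"RTT {rtt_ms} ms -> {throughput / 1_000_000:.1f} MB/s ceiling")
```

Even on a LAN, one millisecond of round-trip latency caps a 64 KB window at about 65 MB/s, well under the 125 MB/s wire speed, which is why latency matters at gigabit rates.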
• Anonymous
It would be good to see the test using a cross over cable from computer to computer to see if the switch is the limiting factor. Ug!
• Anonymous
Disappointed this didn't look at jumbo frames - something that is hard to set up for a typical home enthusiast.

Anyone know if a Gigabit switch connected to a Fast Ethernet router will slow down much? The router is the DHCP server for the network.
• Anonymous
The one thing no one seems to consider is that the purpose of a gigabit network is not to provide gigabit bandwidth to one PC; it is to provide much higher aggregate bandwidth for multiple PCs. The article is incomplete in that it doesn't compare throughput using multiple PCs.
• Anonymous
This article is well below the usual standard of THG. Framesize, payload, concurrent transmission, collision detection, cut through vs store and forward in switches, where shall one begin? The "cable test" is silly beyond words (and so is bobucle's comment on going BEYOND spec). Schoolboy stuff. Amgsoft started to explain but clearly ran out of space/patience.
• QuickN
I wish you had added different switch models; I have noticed some real-world gains with high-end switches, going from higher-end Netgears to the Cisco 3950 series on otherwise unchanged systems. That was a missing layer; good article, thanks!
• Anonymous
You forgot one important item to try. If your switch supports it, you can send jumbo frames. You would also have to change the MTU and RWIN on the corresponding workstations (if running Windows). This will also increase the network transfer speed in a single file copy from workstation to workstation. Typical results are about a 5% - 20% increase in network speed. Of course, the disks are still the bottleneck. This is something that is commonly used on iSCSI connections with the MS iSCSI initiator and would work for workstations as well, as long as the switch supports it.
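How much of that gain comes from framing alone can be estimated with a quick comparison of standard versus jumbo frames (assuming plain TCP/IPv4 with 40-byte headers and the usual 38 bytes of preamble, header, FCS, and interframe gap per frame):

```python
# Payload efficiency with a standard 1500-byte MTU vs a 9000-byte jumbo MTU.
overhead_per_frame = 8 + 14 + 4 + 12     # preamble, header, FCS, interframe gap

for mtu in (1500, 9000):
    payload = mtu - 40                   # subtract assumed TCP/IPv4 headers
    wire = mtu + overhead_per_frame      # bytes actually occupying the wire
    print(f"MTU {mtu}: {payload / wire:.1%} payload, "
          f"{125 * payload / wire:.1f} MB/s ceiling")
```

Framing alone buys only a few percent (roughly 118.7 MB/s vs 123.9 MB/s), so the larger real-world gains quoted above likely come mostly from the reduced per-packet CPU and interrupt load.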