
Gigabit Ethernet: Dude, Where's My Bandwidth?

I was in no hurry to upgrade my home network from 100 Mb/s to gigabit speed, which is odd when you consider how much time I spend waiting for file transfers. That's because when I spend money on PC upgrades, I think of the components that offer an immediate performance increase in the applications and games that I run. Putting cash towards things like video cards, CPUs, and even peripherals is almost like buying toys for myself. For some reason, network components don’t inspire the same amount of excitement. Indeed, it’s harder to let go of hard-earned money when it feels like an infrastructure investment rather than a self-gifted advance birthday present.

Inevitably, however, my high-bandwidth networking demands made it obvious that 100 Mb/s wasn't going to cut it anymore if I valued my time. All of the systems I was running at home already had gigabit network controllers built into their motherboards, so I drew up a shopping list of the remaining hardware I needed to step the network up to full gigabit speed.

When I was done collecting the pieces, I copied a large file over the old 100 megabit equipment, which took about a minute and a half, and then upgraded to the gigabit network. After the upgrade, the same file took about 40 seconds to copy. That's a nice performance boost, but nowhere near the 10x difference between 100 Mb/s and 1 Gb/s I was expecting.
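As a rough sanity check, here is the arithmetic behind that disappointment. The ~1 GB file size is an assumption inferred from the 100 Mb/s timing; the exact size isn't stated above.

    FILE_SIZE_MB = 1000  # assumed ~1 GB file, inferred from the 100 Mb/s timing

    for name, megabits_per_s in (("100 Mb/s", 100), ("1 Gb/s", 1000)):
        wire_mb_per_s = megabits_per_s / 8      # line rate in megabytes/second
        ideal_seconds = FILE_SIZE_MB / wire_mb_per_s
        print(f"{name}: ideal transfer time ~{ideal_seconds:.0f} s")

    # 100 Mb/s -> ~80 s, close to the ~90 s observed
    # 1 Gb/s   -> ~8 s, nowhere near the ~40 s observed

The old network was running close to its theoretical ceiling; the gigabit network clearly wasn't.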

What's with that, anyway?

If you’ve had a similar experience, or you plan on migrating to a gigabit network yourself, read on. We’ll be going over the basics of gigabit networking, the variables that will impact the network speed, and what you can do about them to get the most out of Gigabit Ethernet.

Comments
  • mi1ez, 22 June 2009 15:57
    It would have been interesting to see the network speeds with a decent SSD on either end.
  • _SirO_, 22 June 2009 17:35
    The limiting factor at 111 MB/s is Ethernet overhead. The actual data has to be framed and packetized, which requires extra bits on the wire, so an overhead of (125 - 111)/125 ≈ 11% seems reasonable.
  • DevilWAH, 22 June 2009 18:09
    I agree with _SirO_ that the overheads will bring down the speed.

    Secondly, if you have more than a point-to-point connection (i.e., more than two devices) on the network, your speeds will suffer.

    Even if they are on separate collision domains, you will still get broadcast traffic that interrupts conversations.

    And lastly, the quality of your network cards! There is a reason that one network card will set you back £15 and another costs £150. I have found, almost without exception, that a more expensive, high-quality card will sustain higher throughput than a cheap one. You don't need to spend £150, but think twice before choosing a £10 card over a £25 one.
  • daglesj, 22 June 2009 19:08
    Ethernet is a sloppy standard, a typical case of making everything 'big' and hoping for the best.

    Now, if you had the efficiency of Token Ring with the bandwidth of Ethernet... wow!
  • profundido, 22 June 2009 19:54
    Funny, I just upgraded my home network last week and the results are phenomenal: I copied a 12 GB file from the C: drive (OCZ SSD) of my HTPC running Vista RTM to a RAID array on my server running Windows Server 2008 R2 RC Datacenter, and it started at 105 MB/s and dropped slowly to a steady 98 MB/s until the end! Of course, copying the file back in the other direction delivered 37 MB/s (the SSD reads fast but writes slowly). In fact, it's so grand to have this speed that I have now gathered all my data on the 6.5 TB network drive and stream or use everything from there. I use a Sitecom eight-port switch and Cat 6 cabling, if you want to copy this proven high-performance setup.
  • bobucles, 22 June 2009 20:28
    I didn't like the cable tests at all. Many buildings do not use ideal 25-28 ft connections. In fact, the last Digital Overload event used cables over 100 ft long (they throw them out afterwards; a friend took them). The real test between cable types is using them BEYOND spec, because it is cheaper to buy a 1,000 ft spool of cable than to add a repeater box every 20 ft.

    My own computer runs off a 50 ft cable to the central hub.
  • amgsoft, 22 June 2009 20:28
    I think the article draws conclusions without considering how networks actually work. A network does not behave like the local PC buses and probably shouldn't be compared to them at all.

    First of all, the theoretical maximum transmission rate in one direction is not the same as how fast a file is copied. Remember that a file copy uses TCP/IP on the network. TCP splits the whole file into small pieces, usually about 1,500 bytes long including the frame header, which must be acknowledged by the receiver. The sender and receiver operate with a window size, which is the number of unacknowledged packets allowed in flight. The sender transmits a number of packets and then waits for an acknowledgement signalling that the receiver got them all; if not, they must be retransmitted. Only then can the sender send the next portion of the file.

    So the network cards on both sides are a very limiting factor. In practice you can expect to transfer somewhere between half and 90% of the theoretical maximum rate in one direction.

    But that is not the only limiting factor. The packets also need to be processed. The PC is interrupted for every packet it receives, and it has to pull the data out of the network card as fast as it arrives and put it somewhere. At 125 MB/s and 1,250-1,400 bytes per packet, the PC has to handle roughly 100,000 requests/interrupts per second from the network card, and then perhaps another 50,000 from the hard drive. That's 150,000 interrupts beyond what normal operation requires, which demands a lot from the hardware and the operating system alike. Let me just say that Windows would not be my first choice of operating system for handling this amount of data at very low latency.

    Network speed measured in gigabits is often more of a sales trick than useful information about actual transfer speed. It's true that a higher link speed gets the file across faster, but you will be far from the maximum in the specification. If you want to use the full bandwidth, you need to invest in hardware that can handle it. Most systems, especially notebooks, can send a few packets at full speed but then spend more time waiting than using the available bandwidth.

    Another issue is packet size. Classic Ethernet uses 1,500-byte frames, which is a real limitation, and I hope future Ethernet specifications support much larger ones. Today people are trying jumbo frames, typically up to 9 KB, and even that is still too small for efficient bulk transfer over the network.
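amgsoft's interrupt estimate is easy to reproduce. A minimal sketch, assuming the standard 38 bytes of framing overhead per Ethernet frame:

    LINE_RATE = 125_000_000   # bytes per second on the wire at 1 Gb/s
    OVERHEAD  = 38            # preamble, MAC header, FCS, inter-frame gap

    for mtu in (1500, 9000):  # standard vs. jumbo frames
        frames_per_s = LINE_RATE / (mtu + OVERHEAD)
        print(f"MTU {mtu}: ~{frames_per_s:,.0f} frames/s")

    # MTU 1500: ~81,274 frames/s, the same order as the ~100,000/s cited above
    # MTU 9000: ~13,831 frames/s, one reason jumbo frames reduce CPU load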
  • Anonymous, 22 June 2009 21:56
    Would have been interesting to see if gigabit made any difference to latency/fps in a large LAN game of some kind. I imagine a 64-player BF2 server might be able to use more than 100 megabits/sec. Not an easy bench to set up, though :)
  • tranzz, 23 June 2009 00:38
    Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation.
  • _SirO_, 23 June 2009 01:34
    tranzz: "Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation."

    Hmm... he did that...
  • Devastator_uk, 23 June 2009 06:05
    tranzz: "Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation."

    I doubt it; most game servers need very little bandwidth, really, since not a great deal of information needs to be transferred.
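A back-of-the-envelope check of the 64-player question supports this. The per-player rate below is an assumption (a plausible figure for shooters of that era), not a measured number:

    players           = 64
    kbytes_per_player = 8     # assumed ~8 KB/s of state updates per player

    total_mbit_per_s = players * kbytes_per_player * 8 / 1000
    print(f"~{total_mbit_per_s:.1f} Mb/s")  # ~4 Mb/s, well under even Fast Ethernet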
  • Anonymous, 23 June 2009 18:23
    You got your units wrong:

    "Each 1000BASE-T network segment can be a maximum length of 100 meters (328 feet), and must utilize Category 5 cabling at a minimum."

    http://en.wikipedia.org/wiki/1000BASE-T#1000BASE-T
  • spec_00, 24 June 2009 12:18
    amgsoft's post pretty much sums it up. The only thing I'd add is that network latency (or 'ping' - not the command) also makes a difference. TCP works by having the machines acknowledge received packets, and though an ACK is small and relatively cheap to process, it still takes time to reach the sender, and how long depends on the ping.
    Now add up the time spent transmitting each set of ACK packets back through the network; there's the rest of the time and bytes that make up the remainder of the 125 MB/s. The rest is in amgsoft's post.
    Well written indeed...
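spec_00's point can be made concrete: TCP can keep at most one receive window of unacknowledged data in flight, so throughput is capped at window / round-trip time. A sketch with illustrative values (the classic 64 KB window without window scaling, and assumed round-trip times):

    WINDOW_BYTES = 64 * 1024        # classic 64 KB TCP window, no scaling

    for rtt_ms in (0.2, 1.0, 10.0): # quiet LAN, busy LAN, WAN-like round trips
        cap = WINDOW_BYTES / (rtt_ms / 1000)
        print(f"RTT {rtt_ms} ms: max ~{cap / 1e6:.1f} MB/s")

    # 0.2 ms -> ~327.7 MB/s: the window is not the bottleneck on a quiet LAN
    # 1.0 ms -> ~65.5 MB/s: already below gigabit's 125 MB/s ceiling
    # 10 ms  -> ~6.6 MB/s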
  • Anonymous, 25 June 2009 20:34
    It would be good to see the test run over a crossover cable from computer to computer, to see whether the switch is the limiting factor. Ug!
  • Anonymous, 26 June 2009 12:20
    Disappointed this didn't look at jumbo frames - something that is hard to set up for a typical home enthusiast.

    Anyone know if a gigabit switch connected to a Fast Ethernet router will slow down much? The router is the DHCP server for the network.
  • Anonymous, 26 June 2009 12:53
    The one thing no one seems to consider is that the purpose of a gigabit network is not to provide gigabit bandwidth to one PC; it is to provide much higher aggregate bandwidth to multiple PCs. The article is incomplete in that it doesn't compare throughput with multiple PCs.
  • Anonymous, 26 June 2009 17:04
    This article is well below the usual standard of THG. Frame size, payload, concurrent transmission, collision detection, cut-through vs. store-and-forward switching - where shall one begin? The "cable test" is silly beyond words (and so is bobucles' comment about going BEYOND spec). Schoolboy stuff. amgsoft started to explain but clearly ran out of space/patience.
  • QuickN, 27 June 2009 03:20
    I wish you had added different switch models; I have noticed some real-world gains with high-end switches, going from higher-end Netgears to Cisco 3950-series units with no other changes to the systems. That was a missing layer. Good article, thanks!
  • Anonymous, 27 June 2009 11:31
    You forgot one important thing to try: if your switch supports it, you can send jumbo frames. You would also have to change the MTU and RWIN on the corresponding workstations (if running Windows). This will also increase the transfer speed of a single file copy from workstation to workstation; typical results are about a 5-20% increase in network speed. Of course, the disks are still the bottleneck. This is commonly used on iSCSI connections with the MS iSCSI initiator, and it would work for workstations as well, as long as the switch supports it.
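To put a number on the framing side of that claim, here is the wire efficiency at the standard MTU and a typical jumbo MTU (assuming IPv4 + TCP with no header options):

    OVERHEAD = 38    # preamble, MAC header, FCS, inter-frame gap per frame
    IP_TCP   = 40    # IPv4 + TCP headers

    for mtu in (1500, 9000):
        payload = mtu - IP_TCP
        on_wire = mtu + OVERHEAD
        print(f"MTU {mtu}: {100 * payload / on_wire:.1f}% of the line rate is data")

    # MTU 1500: 94.9%   MTU 9000: 99.1%
    # Framing alone buys ~4%; the larger real-world gains the comment mentions
    # come mostly from the reduced per-packet CPU work.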