
Network Tests: Are We Getting Gigabit Performance?

Gigabit Ethernet: Dude, Where's My Bandwidth?

Let’s proceed with our first test, where we send a file from the client’s C: drive to the server’s C: drive:

We’re seeing something that mirrors our expectations. The gigabit network, capable of a theoretical 125 MB/s, can accept data from the client’s C: drive as fast as that drive can read it, probably somewhere in the neighborhood of 65 MB/s. However, as we demonstrated previously, the server’s C: drive can only write at about 40 MB/s, so that is where this transfer tops out.
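Put another way, the achievable copy speed is simply the minimum of the read, network, and write rates in the chain. Here is a quick sanity check using the approximate figures from our earlier drive tests (illustrative numbers only):

# Expected transfer rate is capped by the slowest stage in the chain.
# The figures below are the approximate numbers quoted in this article.
read_speed  = 65    # MB/s - client C: drive sequential read
net_speed   = 125   # MB/s - theoretical gigabit Ethernet
write_speed = 40    # MB/s - server C: drive sequential write

expected = min(read_speed, net_speed, write_speed)
print(f"expected transfer rate: ~{expected} MB/s")   # ~40 MB/s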

Now let's copy a file from the speedy server RAID array to the client computer’s C: drive:

Once again, this is just like we called it. We know from our tests that the client computer’s C: drive can write this file at about 70 MB/s under ideal conditions, while the gigabit network is delivering performance very close to this speed.

Unfortunately, none of these numbers have come close to gigabit's theoretical maximum throughput of 125 MB/s. Is there a way we can test the network’s maximum speed? Of course there is, but not in a real-world situation. What we’re going to have to do is make a direct memory-to-memory transfer over the network so that we bypass any hard-drive bandwidth limitations.

To do this, we’re going to make a 1 GB RAM drive on both the client and server PCs, and then transfer a 1 GB file between these RAM drives over the network. Since even the slowest DDR2 RAM should be able to handle over 3,000 MB/s of data, the only limiting factor should be how fast our network can run:

Lovely! We’re seeing a 111.4 MB/s maximum speed over our gigabit network, which is very close to a gigabit network’s theoretical 125 MB/s. This is a great result and is nothing to complain about, as real-world bandwidth will likely never hit an ideal maximum speed.
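For anyone who would rather not set up RAM drives, the same memory-to-memory idea can be approximated with a small socket benchmark that keeps all of the data in RAM. The sketch below is only illustrative; the port number, chunk size, and command-line arguments are arbitrary choices of ours, and a dedicated tool such as iperf will give more rigorous numbers:

# Minimal memory-to-memory network throughput sketch (hypothetical port and hosts).
# Sends 1 GB of in-memory data over TCP so no hard drive is involved,
# similar in spirit to the RAM-drive test above.
import socket, sys, time

PORT = 50007          # arbitrary port chosen for this example
CHUNK = 64 * 1024     # 64 KB send/receive buffer
TOTAL = 1024**3       # 1 GB of payload

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        received = 0
        start = time.time()
        while received < TOTAL:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print(f"received {received / 1e6:.0f} MB in {secs:.1f} s "
              f"= {received / secs / 1e6:.1f} MB/s")

def client(host):
    payload = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    # run "python bench.py server" on one machine,
    # then "python bench.py client <server-ip>" on the other
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])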

So now we've proven it conclusively: hard drives are the lowest common denominator when it comes to file transfers over a gigabit network, limiting a network's data transfer rates to that of the slowest hard drive. With this big question answered, we wanted to do a few network-cabling tests, just to satisfy our curiosity. Is network cabling a factor that might keep us from network speeds closer to the theoretical limit?

Comments

  • mi1ez, 22 June 2009 15:57
    It would have been interesting to see the network speeds with a decent SSD on either end.
  • _SirO_, 22 June 2009 17:35
    The "limiting" factor at 111 MB/s is the Ethernet overhead. The actual data needs to be framed and packetized, and that requires additional bits to be transferred, so an overhead of (125-111)/125 = 11% seems reasonable.
  • DevilWAH, 22 June 2009 18:09
    I agree with _SirO_ that the overheads will bring down the speed.

    Secondly, if you have more than a point-to-point connection (i.e., more than two devices) on the network, your speeds will suffer.

    Even if they are on separate collision domains, you will still get broadcast traffic that will interrupt conversations.

    And lastly, the quality of your network cards! There is a reason that one network card will set you back £15 and another costs £150. I have found, almost without exception, that a more expensive, high-quality card will sustain a higher throughput than a cheap one. You don't need to spend £150, but think twice before choosing a £10 card over a £25 one.
  • daglesj, 22 June 2009 19:08
    Ethernet is a sloppy standard. Typical case of just making everything 'big' and hoping for the best.

    Now if you had the efficiency of Token Ring with the bandwidth of Ethernet... wow!
  • profundido, 22 June 2009 19:54
    Funny, I just upgraded my home network last week and the results are phenomenal: I copied a 12 GB file from the C: drive (OCZ SSD) of my HTPC running Vista RTM to the RAID array on my server running W2008 R2 RC Datacenter, and it started at 105 MB/s and dropped slowly to a steady 98 MB/s until the end!! Of course, copying the file back in the other direction delivered 37 MB/s (high read but slow write speed on the SSD...). In fact, it's so grand to have this speed that I have now gathered all my data on the 6.5 TB network drive and stream or use anything from there. I use a Sitecom 8-port switch and CAT6 cabling if you want to copy this proven high-performance model.
  • bobucles, 22 June 2009 20:28
    I didn't like the cable tests at all. Many buildings do not use ideal 25-28 ft connections. In fact, the last Digital Overload event used cables over 100 ft long (they throw them out afterwards; a friend took them). The real test between cable types is when using them BEYOND spec, because it is cheaper to buy a 1,000 ft spool of cable than to get a repeater box every 20 ft.

    My own computer runs off a 50ft. cable to the central hub.
  • amgsoft, 22 June 2009 20:28
    I think the article draws its conclusion without considering how the network actually works. The network does not work like the local PC buses and should probably not be compared with them at all.

    First of all, the theoretical maximum transmit rate in one direction is not the same as how fast a file is copied. You have to consider that a file copy will use the TCP/IP protocol on the network. The Transmission Control Protocol splits the whole file into small pieces, usually about 1,500 bytes long including the frame header, which need to be acknowledged by the receiver. The sender and receiver operate with a window size, which is the number of unacknowledged packets allowed in flight. The sender sends a number of packets and then waits for an acknowledgement signalling that the receiver has received them all; if not, they need to be retransmitted. Only then can the sender send the next portion of the file.

    So the actual network card on both sides will be a very limiting factor. In practice you will be able to transfer somewhere between half and 90% of the maximum rate in one direction.

    But that is not the only limiting factor. The packets need to be processed as well. The PCs will be interrupted for every packet they receive, and they need to get the data out of the network card at the same speed it arrives and put it somewhere. At 125 MB/s and 1,250-1,400 bytes per packet, a PC needs to handle roughly 100,000 requests/interrupts per second from the network card, and then probably another 50,000 from the hard drive. That is 150,000 interrupts more than are required for normal operation. It requires a lot of processing power from the hardware and from the operating system as well. Let me just say that Windows would not be my choice of operating system for handling this amount of data with very low latency.

    The network speed measured in gigabits is often more of a sales pitch than real information about the actual transfer speed. It is true that higher speeds get the file across faster, but you will be far from the maximum specification. If you want to utilize the full bandwidth, you need to invest in hardware that can handle it. The majority of systems, especially notebooks, can send a few packets at the right speed, but then they spend more time waiting than utilizing the full bandwidth.

    Another issue is the packet size. Classic Ethernet uses 1,500-byte packets, which is a very limiting factor, and I hope future Ethernet specifications will support much larger packets. Today people try to use jumbo frames, which are packets of typically up to 9 KB. That is still too small for really efficient data transfer on the network.
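To put rough numbers on the windowing and interrupt points above: the sketch below assumes full-size 1,500-byte frames and a 0.5 ms LAN round-trip time, both of which are illustrative guesses rather than measurements from this article.

# Rough figures behind the comment above: how many full-size frames arrive
# per second at gigabit line rate, and how large a TCP window is needed to
# keep the link busy at an assumed LAN round-trip time.
line_rate   = 125e6          # bytes/s (1 Gb/s)
frame_bytes = 1538           # 1500-byte MTU frame plus Ethernet overhead on the wire
payload     = 1460           # TCP payload carried by each such frame

frames_per_sec = line_rate / frame_bytes
print(f"~{frames_per_sec:,.0f} full-size frames per second")      # ~81,000

rtt = 0.0005                 # assumed 0.5 ms round trip on a small switched LAN
bdp = line_rate * rtt        # bandwidth-delay product in bytes
print(f"window needed to fill the pipe: ~{bdp / 1024:.0f} KB")    # ~61 KB

In practice, modern NICs coalesce interrupts and offload checksums, so the per-packet interrupt count is lower than a naive per-frame estimate, but the basic point stands: the hosts, not just the wire, have to keep up.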
  • Anonymous, 22 June 2009 21:56
    Would have been interesting to see if gigabit made any difference to latency/fps on a large LAN game of some kind. I imagine a 64-player BF2 server might be able to use more than 100 megabits/sec. Not an easy bench to set up though :)
  • tranzz, 23 June 2009 00:38
    Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation.
  • _SirO_, 23 June 2009 01:34
    tranzz: Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation.

    Hmm... he did that...
  • Devastator_uk, 23 June 2009 06:05
    tranzz: Why not set the drives up as RAM drives? This would remove almost all speed barriers from the equation.

    I doubt it; most game servers need very little bandwidth, really, since not a great deal of information needs to be transferred.
  • Anonymous, 23 June 2009 18:23
    You got your units wrong:

    "Each 1000BASE-T network segment can be a maximum length of 100 meters (328 feet), and must utilize "Category 5" cabling at a minimum".

    http://en.wikipedia.org/wiki/1000BASE-T#1000BASE-T
  • spec_00, 24 June 2009 12:18
    amgsoft's post pretty much sums it up... The only thing I'd like to add is that network latency (or 'ping', not the command) also makes a difference. The way TCP works, the machines acknowledge the packets they receive, and although each ack is small and relatively simple to process, it still takes time to reach the sender/receiver, and how fast that happens depends on the ping.
    Now add up the time spent transmitting each set of ack packets back through the network; that accounts for the rest of the time and bytes that make up the remainder of the 125 MB/s. The rest is in amgsoft's post.
    Well written indeed...
  • Anonymous, 25 June 2009 20:34
    It would be good to see the test repeated with a crossover cable from computer to computer, to see if the switch is the limiting factor. Ug!
  • Anonymous, 26 June 2009 12:20
    Disappointed this didn't look at jumbo frames - something that is hard to set up for a typical home enthusiast.

    Anyone know if a Gigabit switch connected to a Fast Ethernet router will slow down much? The router is the DHCP server for the network.
  • Anonymous, 26 June 2009 12:53
    The one thing no one seems to consider is that the purpose of a gigabit network is not to provide gigabit bandwidth to one PC; it is to provide much higher bandwidth for multiple PCs. The article is incomplete in that it doesn't compare the throughput using multiple PCs.
  • Anonymous, 26 June 2009 17:04
    This article is well below the usual standard of THG. Frame size, payload, concurrent transmission, collision detection, cut-through vs. store-and-forward in switches; where shall one begin? The "cable test" is silly beyond words (and so is bobucles' comment on going BEYOND spec). Schoolboy stuff. amgsoft started to explain but clearly ran out of space/patience.
  • QuickN, 27 June 2009 03:20
    I wish you had added different switch models; I have noticed some real-world gains with high-end switches, going from higher-end Netgears to the Cisco 3950 series with no other changes to the systems. That was a missing layer. Good article, thanks!
  • Anonymous, 27 June 2009 11:31
    You forgot one important item to try. If your switch supports it, you can send jumbo frames. You would also have to change the MTU and RWIN on the corresponding workstations (if running Windows). This will also increase the network transfer speed in a single file copy from workstation to workstation. Typical results are about a 5%-20% increase in network speed. Of course, the disks are still the bottleneck. This is something that is commonly used on iSCSI connections with the MS iSCSI initiator, and it would work for workstations as well, as long as the switch supports it.
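A back-of-the-envelope look at what jumbo frames change in framing efficiency alone, assuming the usual textbook header sizes and a typical 9,000-byte jumbo MTU; the larger real-world win is usually the reduced per-packet processing on the hosts, which is consistent with the 5%-20% figure quoted above:

# Framing efficiency for a standard 1500-byte MTU vs. a typical 9000-byte jumbo MTU.
def goodput_fraction(mtu):
    tcp_ip_headers = 20 + 20           # IPv4 + TCP headers inside each packet
    eth_overhead   = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
    return (mtu - tcp_ip_headers) / (mtu + eth_overhead)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput_fraction(mtu) * 100:.1f}% of line rate usable as payload")
# MTU 1500: ~94.9%   MTU 9000: ~99.1%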