
Roundup: Three 16-Port Enterprise SAS Controllers

Results: Sequential Throughput

RAID 0

Areca’s card is clearly the fastest product for simple sequential writes: it starts at 680 MB/s without pending commands and maxes out at 820 MB/s as soon as deeper command queues are involved. Adaptec’s maximum performance is very much the same. Promise starts at 390 MB/s and reaches 800 MB/s, but only at long command queue depths.

RAID 5

With all RAID 5 member drives available, RAID 5 throughput is very much like the numbers we saw in RAID 0. However, once the controllers have to reconstruct missing data on the fly in degraded mode, performance drops. Areca maintains its performance level best, while Adaptec and Promise are impacted more by the missing drive.

Areca and Adaptec manage to maintain the same write performance for sequential operation on degraded arrays, while the Promise card shows a noticeable performance drop once one RAID 5 drive is missing.
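The performance hit in degraded mode follows directly from how RAID 5 works: the block on the failed drive no longer exists anywhere and has to be recomputed from every surviving block in the stripe on each read. The sketch below shows that XOR arithmetic in isolation; it is a minimal illustration with made-up block sizes and helper names, not the actual firmware logic of any of these controllers.

# Minimal sketch of RAID 5 degraded-read reconstruction (XOR parity).
# Block size, stripe layout and helper names are illustrative only.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def read_block_degraded(surviving_blocks):
    """Recover the failed drive's block from the survivors.

    RAID 5 stores parity = D0 ^ D1 ^ ... ^ Dn-1, so any single missing
    block equals the XOR of all remaining data and parity blocks. The
    extra reads and XOR work per request are what cost throughput on a
    degraded array.
    """
    return xor_blocks(surviving_blocks)

# Example: three data blocks plus parity; the drive holding d1 has failed.
d0, d1, d2 = b"\x11" * 4, b"\x22" * 4, b"\x33" * 4
parity = xor_blocks([d0, d1, d2])
assert read_block_degraded([d0, d2, parity]) == d1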

RAID 6

RAID 6 with double redundancy is important for mission-critical systems. Again, read throughput is similar to the excellent results seen in RAID 0. Removing one drive (to simulate a failure) has only a small impact on Adaptec’s performance, but makes a larger difference for Areca and Promise. Once two drives fail, Areca manages to maintain the same performance level as with only one failed drive, while Adaptec and Promise lose still more performance.

For Adaptec and Areca, RAID 6 sequential write performance is the same whether the array is healthy, single-degraded, or double-degraded. Unfortunately, the Promise card’s performance drops by almost 50% once one or two drives of a RAID 6 array are missing.
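For readers wondering how an array keeps serving data with two drives gone: the usual RAID 6 scheme stores two parity blocks per stripe, a plain XOR parity (P) and a second syndrome (Q) computed over the Galois field GF(2^8), so any two missing blocks in a stripe can be solved for. The sketch below shows how P and Q are generated in that textbook scheme; the parity engines inside the Adaptec, Areca and Promise firmware are not documented here, so treat it purely as an illustration.

# Sketch of the common P+Q RAID 6 parity scheme: P is plain XOR parity,
# Q is a Reed-Solomon-style syndrome over GF(2^8) (polynomial 0x11d).
# Illustrative only; real controllers do this in hardware, per stripe.

def gf_mul2(x):
    """Multiply one byte by 2 in GF(2^8), reducing polynomial 0x11d."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xff

def raid6_parity(data_blocks):
    """Compute the P and Q parity blocks for one stripe.

    P = D0 ^ D1 ^ ... ^ Dn-1
    Q = D0 + 2*D1 + 4*D2 + ...   (all arithmetic in GF(2^8))
    Q is evaluated with Horner's rule, highest-index block first.
    """
    size = len(data_blocks[0])
    p, q = bytearray(size), bytearray(size)
    for block in reversed(data_blocks):
        for i, byte in enumerate(block):
            p[i] ^= byte
            q[i] = gf_mul2(q[i]) ^ byte
    return bytes(p), bytes(q)

# With P and Q on separate drives, any two missing blocks in a stripe can
# be rebuilt, which is why a double-degraded RAID 6 array still serves data.
p, q = raid6_parity([b"\x10" * 4, b"\x20" * 4, b"\x30" * 4])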

Comments
  • Anonymous, 24 April 2009 21:58
    What firmware did you test on the Areca card? I heard there were some performance issues with earlier firmware versions (the current one is 1.46).
  • jwoollis, 14 May 2009 17:26
    I am disappointed in this review; it does not go into enough detail to do justice to this topic.

    Firstly, most companies these days would build systems using more than 16 drives or a single 4U rackmounted enclosure, and would use iSCSI to distribute storage rather than mounting disks in each server. Many companies are likely to have several such systems with 32, 64, 128, 256 or more drives, and to use this storage in parallel to avoid the risk of controller or other hardware (non-disk) failure.

    You neglect to mention that each port on the controller card (supporting up to four drives with a fan-out cable, or more drives through edge and fan-out expanders) runs in full duplex, that is, 4 x 3 Gbit/s or 12 Gbit/s in each direction, and this will of course double with the next-generation 6 Gbit/s links. With conventional SATA/SAS drives peaking at between 100 and 200 MB/s, a single port can support far more than four drives: in practice it would take between 6 and 12 drives transferring data continuously in one direction at full speed to use up the bandwidth of one port. Allowing for the fact that drives are rarely used in this manner for sustained periods, and that drives may be split into groups rather than used as one huge RAID array, it would be possible to run between 16 and 64 drives off a single port. This might be seen as bad practice when a controller has enough ports to keep the groups smaller, but the point is that these controllers are far more versatile than this article lets on.

    You neglect to mention that edge and fan-out expanders are available either as 3.5" or 5.25" drive-bay devices or as bare circuit boards that can be installed in free-standing enclosures or any PC/server case, allowing more drives per port per controller card than would normally be possible with fan-out cables alone.

    Some would rather build custom/bespoke solutions than pay the extraordinarily large sums required for a 16-bay rackmounted storage solution, which is often prohibitively expensive.

    You also neglect to push these controller cards to their limits by testing performance with multiple edge and fan-out expanders attached.

    Please, when investigating such a topic, do us the service of covering all aspects of it properly and in detail so that we might make an informed decision. The controller card is only one part of the solution, and the cost of such add-ons may range from £8,000 to £24,000 per rackmount bay depending on the number of disks supported and the number/size of disks preinstalled. Perhaps you might offer us examples of these add-ons with specifications and a cost per GB! That would certainly put things into perspective!
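As a rough sanity check of the per-port bandwidth figures raised in the comment above (link speeds and drive rates as the commenter states them, 8b/10b encoding assumed), a quick back-of-the-envelope calculation:

# Back-of-envelope check of a 4-lane ("wide") SAS port at 3 Gbit/s per lane
# against drives that each stream roughly 100-200 MB/s.
LANES_PER_PORT = 4
LINE_RATE_GBIT = 3.0        # per lane, first-generation SAS
ENCODING_EFFICIENCY = 0.8   # 8b/10b: 8 data bits for every 10 line bits

port_mb_per_s = LANES_PER_PORT * LINE_RATE_GBIT * 1000 / 8 * ENCODING_EFFICIENCY
print(f"Usable bandwidth per port: ~{port_mb_per_s:.0f} MB/s")   # ~1200 MB/s

for drive_mb_per_s in (200, 100):
    print(f"Drives at {drive_mb_per_s} MB/s to saturate the port: "
          f"~{port_mb_per_s / drive_mb_per_s:.0f}")
# Roughly 6 fast or 12 slow drives streaming flat out fill one port, so a
# port fanned out to more drives than that only becomes a bottleneck under
# sustained parallel sequential transfers, which matches the commenter's point.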