
Breaking Records With SSDs: 16 Intel X25-Es Do 2.2 GB/s


Some months ago, we were informed by A&R Edelman, Samsung’s PR agency in the US, about a YouTube video showing 24 Samsung PB22-J flash SSDs configured in a RAID array using a high-end PC. The guys who produced the video did a great job and reached just beyond 2 GB/s using one Adaptec 5-series controller card and an Areca 1680ix board on an Intel dual-CPU "Skulltrail" system. We felt intrigued by the project and decided to see if we could beat their results.

Why So Much Storage Performance?

The project makes sense if you look at it in one of two ways. You can take it as a fun exercise where money is no object, or you can view it with a longer-term outlook to see what future storage products may have in store. The promotional Samsung video shows what the impact of a super-fast SSD array could be: it loads applications in a fraction of the time required today and effectively eliminates all storage-related performance bottlenecks. But it remains obvious that using 24 drives (or even 16, as we did) is an impractical scenario on the desktop.

Drive Selection

However, the situation is different on higher-end servers, where maximizing I/O operations per second (IOPS) may be imperative for mission-critical applications. We decided not only to use a large number of flash SSDs, but also to use the best flash SSDs available, aiming to trounce Samsung’s throughput numbers while also delivering sensationally high IOPS numbers.

Our choice was Intel’s X25-E flash SSD, which is based on more expensive single-level cell (SLC) flash memory. Compared to Samsung’s multi-level cell (MLC) flash, SLC can provide shorter latencies and higher throughput for both reads and writes. One drawback remains: while Samsung’s PB22-J provides a massive 256 GB capacity, Intel’s X25-E professional SSDs still max out at 64 GB. Fortunately, the capacity difference didn’t matter in our race for performance, as only 16 of Intel’s flash SSDs were enough to beat the 24 drives used in Samsung’s video.
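
To make the IOPS metric concrete, here is a minimal single-threaded Python sketch of a random-read test: it issues 4 KB reads at random block-aligned offsets in a pre-created test file. The file name and read count are placeholders of ours, the queue depth is only 1, and the operating system’s cache is not bypassed, so the numbers it prints will understate what a dedicated tool such as Iometer would measure against these drives.

    # Minimal random-read sketch (illustration only): 4 KB reads at
    # random block-aligned offsets. PATH and OPS are placeholders; a
    # real IOPS test needs direct I/O and deeper queue depths.
    import random
    import time

    PATH = "testfile.bin"   # assumed: a large pre-created file on the array
    BLOCK = 4096            # 4 KB, a common random-I/O transfer size
    OPS = 50_000            # number of reads to issue

    with open(PATH, "rb", buffering=0) as f:
        f.seek(0, 2)                        # seek to the end to find the file size
        blocks = f.tell() // BLOCK

        start = time.perf_counter()
        for _ in range(OPS):
            f.seek(random.randrange(blocks) * BLOCK)
            f.read(BLOCK)                   # one 4 KB random read
        elapsed = time.perf_counter() - start

    print(f"{OPS / elapsed:,.0f} reads/s ({OPS} x {BLOCK} B in {elapsed:.2f} s)")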

Let’s Get It On!

Intel was interested in taking on the challenge and provided sixteen 64 GB X25-E drives for this article. Meanwhile, we asked Adaptec to provide two 5805 PCI Express RAID cards. With these, we built a nested array: two hardware RAID 0 arrays of eight drives each, striped together into a Windows-powered software RAID 0 array. Our approach worked very well, as you’ll soon see.
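
Two details are worth spelling out. First, the bandwidth budget: a first-generation PCI Express lane moves roughly 250 MB/s per direction, so each x8 card tops out near 2 GB/s, and splitting the 16 drives across two cards lifts the interface ceiling to about 4 GB/s, comfortably above our 2.2 GB/s result. Second, for anyone who wants to rebuild the software layer, the Windows stripe can be scripted with diskpart. The sketch below is our own illustration, not the exact procedure we used; the disk numbers, volume label, and drive letter in it are assumptions to verify against "list disk" output on your system, and the same steps can also be done by hand in Disk Management.

    # Hedged sketch: script the Windows software RAID 0 across the two
    # hardware arrays with diskpart (run from an elevated prompt). Disk
    # numbers 1 and 2, the label, and drive letter S are assumptions.
    import os
    import subprocess
    import tempfile

    DISKPART_SCRIPT = """\
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    create volume stripe disk=1,2
    format fs=ntfs quick label=SSDRAID
    assign letter=S
    """

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script_path = f.name

    try:
        # diskpart /s executes the script non-interactively.
        subprocess.run(["diskpart", "/s", script_path], check=True)
    finally:
        os.unlink(script_path)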

Comments
  • tinnerdxp, 30 July 2009 18:40
    Sweet... A question, though: why not use a 16-port, 16-lane controller?
    I somehow doubt that Windows can do RAID 0 efficiently...
  • johannesd, 30 July 2009 19:31
    It could be interesting to try with eight drives and one controller, or 5-by-5 on two controllers. 5-by-5 because each disk delivers about 200 MB/s, so if you can get the same results with that number of disks, it would prove that there is a bottleneck somewhere...
  • ainarssems, 30 July 2009 21:59
    Quote:
    why not use a 16-port, 16-lane controller?

    Quote:
    However, we deliberately went after two eight-port cards instead of one card with a massive number of ports so we could distribute the bandwidth across two PCI Express slots. If you look at the typical SAS/SATA HBA interface, you’ll find that it’s a first-gen x8 PCI Express connection, which reaches a maximum of 2 GB/s. Since we wanted to reach higher throughput, we had to go for two cards and create a software RAID array using the operating system.
  • jwoollis, 6 August 2009 16:52
    I get that there are bandwidth restrictions on the PCI-E bus, and for the purposes of this test the choice of multiple controllers may be valid.

    I would ask, however, when there is going to be a test relevant to real-world commercial applications and their potential, where a single controller's full-duplex port supports speeds of 1,200 (4 x 300) MB/s each way, and a controller may support 128/256 drives (for Adaptec, depending on the model) over one or more of these physical ports. Connecting more than four drives per port requires the use of edge and fanout expanders.

    Few people or businesses will benefit from or ever use 16 expensive SSDs. Most businesses may incorporate some SSDs into a solution, but 10,000 and 15,000 RPM SAS drives will continue to be used for a significant period of time, and for commercial applications such drives will be used in multiples of hundreds or thousands to achieve capacity or performance requirements, using similar controllers, though most likely with Fibre Channel connections.

    You must also realise that such solutions will never use software RAID, certainly not RAID 0 alone, and would not always build a single RAID array even when the drives are attached to the same controller. So it would be practical to consider the ramifications of other layouts, such as 128 drives as two mirrored sets of 64-drive RAID 0 arrays, or 128 drives as one 64-drive RAID 5 set plus two mirrored sets of 32-drive RAID 0 arrays. Multiple arrays on a controller provide a means of adjusting the configuration to suit needs: combining RAID 0 and RAID 1 benefits database performance, for example, whereas RAID 5 or 6 might be used for file systems and backups.

    Most such solutions would never be integrated into a server, but would instead be installed in a standalone NAS. I personally would be keen to see a project to build a NAS without using preassembled rack-mounted solutions, instead using edge and fanout expanders to achieve substantial configurations featuring significant numbers of disks (32/64/128/256 or more), and testing the performance of such solutions, in various single- and multiple-array arrangements, up to the limits of the controller.