Promise FastTrack RAID Very Slow After Rebuild

I had to replace a failed 80GB IDE disk drive on an older system based on a Gigabyte GA-7DXR mainboard with a built-in Promise FastTrack RAID controller.

Normally this RAID 0 array did 60 MB/s read/write. However, after a drive failed and I replaced it and rebuilt the array (this time with a 128K stripe instead of the original 64K stripe--could that be the cause?), write speed is down to 3 MB/s and read to 16 MB/s. I formatted NTFS with a fairly large allocation unit size and, as mentioned in the parentheses, built the array in the BIOS with a larger stripe size this time, so I'm wondering whether this controller has a problem with a 128K stripe size. Perhaps I need to rebuild with the 64K stripe as before to get normal performance back?
  1. How large is the NTFS allocation unit size? Any reason why you didn't use the default values?
  2. I think it's 128K blocks. Same as the stripe size.

    The reason is to hopefully improve performance with writing multi-gigabyte uncompressed video files to the array.
  3. Your settings might be optimized for your application, but the reads are now larger and that isn't very good for benchmarks. Therefore you basically have to decide what's best: good write performance for very large files or optimal benchmarks results. Have you actually verified that writing very large files is quicker than before? I'm asking because everything else most likely is slower.
  4. The benchmark was to write a 512MB file to the disc (which took over a minute) and then read it back. I also should note that it took several hours to format the 149GB partition, which I thought was abnormally slow.
    What I noticed was that nBench would often pause for long stretches while writing to and reading from that partition. Usually this process runs non-stop, but on this RAID it was waiting for something most of the time, then it would write a little, wait, write a little more, and so on... definitely not the fault of nBench! I would not expect the results to be off by a factor of 20, at any rate. 128K blocks are not that big.
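    For anyone wanting to reproduce this kind of test without nBench, a minimal sequential write/read timer works as a sanity check (the function names and sizes below are my own, not nBench's; `os.fsync` is used so the write number reflects the disk rather than the OS cache). On a healthy array it should report something near the raw throughput; the long stalls described above would show up as a very low MB/s figure.

    ```python
    import os
    import time

    def measure_seq_write(path, size_mb=512, chunk_mb=1):
        """Write size_mb of data sequentially; return throughput in MB/s."""
        chunk = b"\0" * (chunk_mb * 1024 * 1024)
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(size_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force data to the platters, not just the cache
        return size_mb / (time.perf_counter() - start)

    def measure_seq_read(path, chunk_mb=1):
        """Read the file back sequentially; return throughput in MB/s.
        Note: a freshly written file may be served from the OS cache,
        so read numbers on small files can be optimistic."""
        size_mb = os.path.getsize(path) / (1024 * 1024)
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(chunk_mb * 1024 * 1024):
                pass
        return size_mb / (time.perf_counter() - start)
    ```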
  5. 128K blocks are not necessarily that big, but the standard size is 4K. That's a significant increase. Why not go back to the defaults (block size and stripe size) and see if speed is back to normal?
  6. I rebuilt the array with the original 64K stripe size (which was known to perform well), then repartitioned and reformatted the array. It took almost 6 hours to format 149GB!!

    The nBench test results are 0.71 MB/s write and 7.78 MB/s read, and it took several minutes to copy the 128MB test file because the progress was stopped most of the time, then it would jump ahead a little, then pause... I'd say it was paused 93% of the time and actually writing 7% of the time.

    I'm gonna dump the OS partition and reload a known working Ghost image. I think the RAID driver has somehow ended up the wrong version, or some misalignment occurred when I reinstalled the RAID driver earlier while troubleshooting.

    I just finished restoring a known good boot partition and am testing the RAID now. It is still running VERY slow and is paused more often than writing the test file. --just looked over my shoulder to see where the test was at, and it crashed. nBench: creatprocess cklog: failure 3. Windows and FastTrack dialog prompts: Windows was unable to save all the data... and FastTrack: Array 1 Offline.

    FastCheck says the drives are functional, but the array is offline. Here we go again... maybe the controller/motherboard is giving up the ghost. This does not look good.

    Oooookay... so perhaps I have two bad drives in a row, but what are the odds a brand-new, sealed drive is bad?

    Having given up on the idea of using the array in RAID 0 configuration, I decided to set the controller up in SPAN configuration, one drive each, giving two single-drive SPAN volumes.
    I started to format one partition. Two hours later it was 31% complete, and the system rebooted itself in the middle of formatting. Windows won't come back up now. The BSOD that keeps coming up says the BIOS does not support ACPI compliance. I'm pretty sure the 7DXR mainboard is new enough to be ACPI compliant, so this error suggests a possible hardware failure somewhere. Not sure... it may not be a drive problem at all.
  7. It certainly doesn't look too promising. You might have a defective motherboard, but could you temporarily connect the drives to a non-raid interface to determine if they're fine? This would at least allow you to determine if the issue is with the RAID controller.
  8. I did some Googling on the ACPI error and many discussions pointed to RAM. Since this machine had been powered off for months, I determined that corrosion could be affecting the DIMM sockets, so I shut 'er down, pulled the DIMMs and re-seated them. System booted right up.

    Went back to formatting the two disks, now operating as separate "span" disks on the RAID channels. Disk 1 took 4-1/2 hours to format a 74GB partition. Disk 2 took about 20 minutes to format a similar partition. I ran nBench on disk 1 and write was around 1 MB/s. Ran nBench on disk 2 and write is 56 MB/s and read is 55 MB/s.

    So now I'm about to swap the drives on the channels and see if the problem moves. If not, then I suppose I'm looking at a bad RAID controller channel, or maybe something went wrong with the cable.
  9. Here's the latest:

    It appears that I have had not one--but TWO drives fail at the same time.

    The one drive that dropped off the RAID while formatting has a Nov 2002 mfg date. I put in a new drive of the same model, just out of shrink wrap.

    The other drive in the array, a D740X series drive, is also bad.

    I set the controller up with a single drive and swapped cables, then channels. Slow formatting every way. Tried the other drive, the one not thought to be defective, and got slow formatting too. So then I was thinking bad controller, but both channels?

    So I then try my new drive and format it. It formats quickly.

    So... BOTH drives in the array format slowly, and one drops off the array while formatting, so one is worse than the other, but they are both BAD.

    The remaining new drive formats quickly on either controller channel. So that vindicated the controller and cables, as I've swapped both.

    Two bad drives, at the same time! The odds against that are what?
  10. Having them powered off for a long time is what caused the apparent simultaneous failure. If they had been online, one would most likely have failed before the other one. I've seen that before, but the system indicated which drives had failed.
  11. Oddly, they're not fully failed--just running at about 1/60th normal throughput. Well, one of them does disappear from the controller about 20 minutes into formatting, but comes back online after a power down/reboot.
    I've had a lot of gear fail this past summer, immediately after powering down overnight. Lost a Seasonic PSU and also had a Gigabyte mainboard that somehow got its CMOS information scrambled on an overnight shutdown to save electricity. Go figure... I never seem to have any failures as long as I leave the machines on 24/7, but turn something off and all hell breaks loose. :-(