Let's look at some actual RAID 0 data to see how all of this translates into real life. Table 1 summarizes the test bed system. It is the same system used in the Windows Home Server testing, but with only 1 GB of RAM and WD VelociRaptor drives. It was running Ubuntu Server and using mdadm for software RAID.
| Component | Specification |
| --- | --- |
| CPU | Intel Core 2 Duo E7200 |
| Motherboard | ASUS P5E-VM DO |
| RAM | 1 GB Corsair XMS2 DDR2 800 |
| Ethernet | Onboard 10/100/1000 Intel 82566DM |
| Hard Drives | Western Digital VelociRaptor WD3000HLFS 300 GB, 3 Gb/s, 16 MB Cache, 10,000 RPM (x2) |
| CPU Cooler | ASUS P5A2-8SB4W 80mm Sleeve CPU Cooler |

Table 1: RAID 0 Test Bed
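For context, a two-drive mdadm RAID 0 like the test bed's would be assembled along these lines. This is only a sketch: the device names are assumptions, and the command is printed rather than executed, since it requires the actual drives.

```shell
# Sketch of the test bed's array setup (device names /dev/sdb and
# /dev/sdc are assumptions; the article doesn't list the commands used).
# The command is echoed, not run, since it needs the real drives.
CHUNK_KB=64   # chunk size used in the baseline run
echo "mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=${CHUNK_KB} /dev/sdb /dev/sdc"
```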
Figure 1 contains iozone write test results with the following test configurations:
1. RAID 0 XP - Baseline RAID 0 run: 4K block size, 64K chunk, iozone machine running XP SP2
2. RAID 0 XP stride32 chnk64 - Same as #1, but with a stride of 32
3. RAID 0 XP stride64 chnk128 - Same as #2, but with a stride of 64 and a 128K chunk
4. RAID 0 Vista stride32 chnk64 - Same as #2, but with the iozone machine running Vista SP1
5. Single Drive XP - Test with a single ext3-formatted drive
6. RAID 0 XP tweaks - Same as #1, but with all tweaks suggested in this Forum post
Figure 1: RAID 0 Write comparison
As Don predicted, it's hard to see a difference among the RAID 0 write results. The only run that is drastically different is the one with the "tweaks" that were supposed to enhance performance!
The stride experimentation resulted from this Forum suggestion. According to this HowTo, stride ties the RAID chunk size to the filesystem block size and is supposed to direct the mkfs command to allocate block and inode bitmaps so that they don't all end up on the same physical drive. This, in turn, is supposed to improve performance. But it's hard to see any significant difference in write performance.
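As a concrete illustration of that relationship, here is how the stride value is commonly derived and passed to mkfs. The 4K filesystem block size and the device name are assumptions, since the article doesn't give the exact commands used.

```shell
# Stride, by the usual mke2fs convention, is the number of filesystem
# blocks per RAID chunk; passing it via -E stride lets mke2fs stagger
# block and inode bitmaps across the member drives.
chunk_kb=64       # mdadm chunk size from the baseline run
block_bytes=4096  # common ext3 block size (an assumption here)
stride=$(( chunk_kb * 1024 / block_bytes ))
echo "stride=$stride"   # 64 KB chunk / 4 KB blocks gives stride=16

# The formatting step would then look like (not run here; needs /dev/md0):
# mkfs.ext3 -b 4096 -E stride=$stride /dev/md0
```

Note that by this arithmetic, the stride-32 / 64K-chunk runs above would correspond to a 2 KB filesystem block size, or simply a different convention in the Forum suggestion.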
We do see a performance gain between the single drive and the RAID 0 runs. But using Vista SP1 instead of XP SP2 on the iozone machine doesn't seem to affect the results much. In all the runs I did, the best non-cached performance was always under 68 MB/s.
Figure 2 shows the read results for the same tests. This time Vista running on the iozone machine provides a noticeable improvement and the "tweaks" run is still the worst performer.
Figure 2: RAID 0 Read comparison
But you have to look pretty closely, and beyond the 1 GB file size (where OS and NAS caching is no longer in effect), to see a difference in the other tests. There is a definite difference between the single drive and all the RAID 0 runs. And while the best RAID 0 run with iozone running on XP is the one with the larger stride and chunk size, it's not a huge improvement.
It does appear, however, that some of the RAID 0 mechanisms that Don described are working to bump read performance up to 74 MB/s. But that's only when file sizes were well below the RAM sizes in both the NAS and iozone machine. Once both RAM sizes are exceeded, performance falls to around 53 MB/s.
If the "Fast NAS" series is teaching me anything, it's that the whole subject of networked file system performance is pretty complex. Hell, just the simple act of copying a file from one machine to another is more complicated than I ever imagined!
I have a new appreciation for the legions of techies who toil away anonymously, trying to improve the performance and reliability of this fundamental building block of computing.
At this point I need to regroup a bit to determine what the next step will be. I have a hardware RAID controller arriving any day now. So, maybe that's a logical next step. I also might see what a Vista-based "NAS" might do for performance with a Vista client. In the meantime, keep the suggestions and feedback coming in the Forums!