Discussion:
Test: HighPoint RocketRaid 3120 PCIex1 2xSATA controller under FreeBSD 7.1-PRERELEASE
Lukas Razik
2008-09-05 16:00:35 UTC
Hello Jeremy!

We wrote about Areca's and HighPoint's HW-RAID controllers some weeks ago:
http://lists.freebsd.org/pipermail/freebsd-hardware/2008-August/005339.html

Now I've tested the HighPoint RocketRAID 3120 controller with two Samsung
320 GB SATA (HD322HJ) hard disks under FreeBSD 7.1-PRERELEASE, using the bonnie++
disk benchmark in the following modes:

JBOD:
http://net.razik.de/HPT_RR_3120/bonnie_JBOD.html
RAID1:
http://net.razik.de/HPT_RR_3120/bonnie_RAID1.html
RAID0:
http://net.razik.de/HPT_RR_3120/bonnie_RAID0.html

I don't know Areca's controllers, but I think the values reached are OK. In any
case, the performance is better than with my old 3ware 8006-2LP PCI controller.
The tested filesystem was UFS2 with Soft Updates enabled.
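
If anyone wants to reproduce the runs, a bonnie++ invocation along these
lines should work (the mount point and size are only examples, not
necessarily the exact values used here):

bonnie++ -d /mnt/raid -s 16384 -u root

Here -d is the test directory, -s the test file size in MB (pick at least
twice the machine's RAM), and -u the user to run as when started as root.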

------

Under Vista 64 (Benchmark: HD Tune):
http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_JBOD.png
http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID1.png
http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID0.png

One Samsung HD322HJ @ ICH9R onboard controller:
[HD Tune screenshot]

------

Regards,
Lukas


PS:
bonnie++: http://www.coker.com.au/bonnie++
HD Tune: http://www.hdtune.com/
fluffles.net
2008-09-19 03:27:08 UTC
Post by Lukas Razik
Hello Jeremy!
http://lists.freebsd.org/pipermail/freebsd-hardware/2008-August/005339.html
Now I've tested the HighPoint RocketRAID 3120 controller with two Samsung
320 GB SATA (HD322HJ) hard disks under FreeBSD 7.1-PRERELEASE, using the bonnie++
http://net.razik.de/HPT_RR_3120/bonnie_JBOD.html
http://net.razik.de/HPT_RR_3120/bonnie_RAID1.html
http://net.razik.de/HPT_RR_3120/bonnie_RAID0.html
Hi Lukas,

Your scores are too low, especially for RAID0. Could you please try:

dd if=/dev/zero of=/path/to/raid/zerofile bs=1m count=20000
umount <raid mountpoint>
re-mount
dd if=/path/to/raid/zerofile of=/dev/null bs=1m

The unmount is necessary to clear the file cache; otherwise you will be
(partly) benchmarking your RAM, since part of the data will come from RAM
rather than from the disks, which is not what you want when testing disk
performance. As a rule of thumb, test with a size at least 8 times bigger
than the sum of all write-back caches, whether in hardware or software. The
20 GB zero file in the example above is a safe guess.
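
Concretely, assuming the array shows up as /dev/da0 and is mounted on
/mnt/raid (both names are just examples, adjust to your setup), the whole
sequence would be something like:

dd if=/dev/zero of=/mnt/raid/zerofile bs=1m count=20000   # write ~20 GB
umount /mnt/raid                                          # drop cached data
mount /dev/da0 /mnt/raid                                  # remount
dd if=/mnt/raid/zerofile of=/dev/null bs=1m               # read it back

The write speed is reported by the first dd, the read speed by the last one.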

Also make sure the filesystem is nearly empty when you benchmark; otherwise
you are benchmarking a slower (inner) portion of your hard drives and can
expect lower scores.

If you get about the same scores with dd, try using a higher read-ahead
(the vfs.read_max sysctl; set it to 32, for example). Sometimes a larger
block size is also required to reach full potential; try:

newfs -U -b 32768 /dev/<raid device>

Warning: using a 64 KiB block size you risk hanging the system under heavy
load (like two bonnie++ instances running simultaneously).
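
To try the read-ahead setting, something like this should do (32 is just the
example value from above; the second line only matters if you want the
setting to survive a reboot):

sysctl vfs.read_max=32
echo 'vfs.read_max=32' >> /etc/sysctl.conf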

Also, *DO NOT* create partitions on the RAID device unless you have manually
aligned them to avoid "stripe misalignment", where one read request may hit
two disks and performance drops. Ideally you want a single disk to handle one
I/O request, not several, since the only real bottleneck is seek time.

So if your RAID device is /dev/da0, just pass that to newfs, after making
sure your old partitions are gone with:
dd if=/dev/zero of=/dev/da0 bs=1m count=20
This will, of course, destroy all data on the RAID volume.
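
Putting that together, again with the hypothetical /dev/da0 and /mnt/raid,
the preparation could look like:

dd if=/dev/zero of=/dev/da0 bs=1m count=20   # wipe old partition data
newfs -U -b 32768 /dev/da0                   # UFS2, Soft Updates, 32 KiB blocks
mkdir -p /mnt/raid
mount /dev/da0 /mnt/raid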

The last thing I can think of is stripe size: you should not set it lower
than 64 KiB, to avoid two disks processing one I/O request (the ata driver
issues requests of at most 64 KiB, I think; MAXPHYS is 128 KiB). Another
reason is that UFS2 begins at a 64 KiB offset, to allow partitioning data to
be preserved, so where you think the filesystem starts, for example as
defined in a label, is not actually where it starts storing data.

All these factors can make RAID performance low, or mask it through improper
benchmarking. Many people using HDTune think their RAID does not perform
well, while it's just that HDTune was never meant to test RAID arrays: a RAID
can only be faster when parallelisation is possible, that is, when two or
more I/O requests are processed at once by different physical disks. HDTune
sends only one request at a time, so RAID arrays without internal read-ahead
optimizations will do poorly in HDTune. An Areca, however, will perform well
thanks to its own optimizations. Normally the filesystem takes care of
generating enough I/O, also on Windows.

Also note that virtually all Windows systems using RAID suffer from stripe
misalignment, since Windows requires partitions and has neglected the
misalignment issue by using an odd offset for the partitioning. It is
possible to create an aligned partition using third-party tools, however.
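
As a rough illustration of the arithmetic (the numbers are only an example):
with a 64 KiB stripe on two disks, a partition starting at the traditional
sector 63 puts the filesystem 63 x 512 B = 31.5 KiB into the array, so a
64 KiB read at the start of the partition spans array offsets 31.5 KiB to
95.5 KiB and has to touch both disks. If the partition instead starts on a
64 KiB boundary, the same read stays on a single disk.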
Post by Lukas Razik
I don't know Areca's controllers, but I think the values reached are OK. In any
case, the performance is better than with my old 3ware 8006-2LP PCI controller.
The tested filesystem was UFS2 with Soft Updates enabled.
------
http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_JBOD.png
http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID1.png
http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID0.png
You should not use HDTune to test RAID arrays; see also:
http://www.fluffles.net/blogs/2.Why-HDTune-is-unsuitable-for-RAID-arrays.html

Test with ATTO-256 and you should get higher scores. Just make sure the
filesystem starts at the beginning of the volume and that it's close to
empty.

Hope it's useful :)
Regards,

Veronica
Dieter
2008-09-19 06:09:33 UTC
Post by fluffles.net
If you get about the same scores with dd, try using a higher read-ahead
(the vfs.read_max sysctl; set it to 32, for example). Sometimes a larger
newfs -U -b 32768 /dev/<raid device>
Warning: using a 64 KiB block size you risk hanging the system under heavy
load (like two bonnie++ instances running simultaneously).
What's this about a 64 KiB block size hanging the system?
Hang a while then recover, or hang forever and need a reboot?
Is this a RAID thing or are normal disks at risk?
It isn't obvious why a 64 KiB block size would cause a
problem in this day of multi-GiB memory.
Jeremy Chadwick
2008-09-19 06:24:18 UTC
Post by Dieter
Post by fluffles.net
If you get about the same scores with dd, try using a higher read-ahead
(the vfs.read_max sysctl; set it to 32, for example). Sometimes a larger
newfs -U -b 32768 /dev/<raid device>
Warning: using a 64 KiB block size you risk hanging the system under heavy
load (like two bonnie++ instances running simultaneously).
What's this about a 64 KiB block size hanging the system?
Hang a while then recover, or hang forever and need a reboot?
Is this a RAID thing or are normal disks at risk?
It isn't obvious why a 64 KiB block size would cause a
problem in this day of multi-GiB memory.
Why do you think the amount of memory people have in their computers has
*anything* to do with a filesystem blocksize? If a large blocksize
would crash a filesystem, it's not going to be due to "not having enough
RAM".
--
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP: 4BD6C0CB |
fluffles.net
2008-09-19 15:31:43 UTC
Post by Dieter
Post by fluffles.net
If you get about the same scores with dd, try using a higher read-ahead
(the vfs.read_max sysctl; set it to 32, for example). Sometimes a larger
newfs -U -b 32768 /dev/<raid device>
Warning: using a 64 KiB block size you risk hanging the system under heavy
load (like two bonnie++ instances running simultaneously).
What's this about a 64 KiB block size hanging the system?
Hang a while then recover, or hang forever and need a reboot?
Is this a RAID thing or are normal disks at risk?
See this, posted 2 years ago to freebsd-fs:
http://freebsd.monkey.org/freebsd-fs/200610/msg00024.html

Scott Long says the bug in vfs_bio.c is known but difficult to fix. They
also argued that using a 64 KiB block size should not be necessary. I have
been getting slightly higher scores with a 64 KiB block size, though, also
with perfect alignment.

32 KiB should be safe to use; I have not managed to crash the system
under heavy I/O load.

I'm curious if my suggestions gave you any better results, though.

Regards,
Veronica
