[OpenIndiana-discuss] Sequential read/write performance on striped mirror
Ancoron Luciferis
ancoron.luciferis at googlemail.com
Tue Mar 27 19:14:59 UTC 2012
Hi *,
I've just set up a new system that I'm evaluating for use in heavy
I/O-intensive environments.
For that, I've configured the following zpool for a test:
  NAME                       STATE     READ WRITE CKSUM
  mpool                      ONLINE       0     0     0
    mirror-0                 ONLINE       0     0     0
      c2t5000C50034C0813Fd0  ONLINE       0     0     0
      c2t5000C50034C1C8AFd0  ONLINE       0     0     0
    mirror-1                 ONLINE       0     0     0
      c2t5000C50034B581EBd0  ONLINE       0     0     0
      c2t5000C50034C3A5DBd0  ONLINE       0     0     0
    mirror-2                 ONLINE       0     0     0
      c2t5000C50034BF20D3d0  ONLINE       0     0     0
      c2t5000C50034B57EA7d0  ONLINE       0     0     0
    mirror-3                 ONLINE       0     0     0
      c2t5000C50034B59093d0  ONLINE       0     0     0
      c2t5000C50034C32133d0  ONLINE       0     0     0
    mirror-4                 ONLINE       0     0     0
      c2t5000C50034B1CB9Bd0  ONLINE       0     0     0
      c2t5000C50034BB53DBd0  ONLINE       0     0     0
    mirror-5                 ONLINE       0     0     0
      c2t5000C50034C1F61Fd0  ONLINE       0     0     0
      c2t5000C50034B08517d0  ONLINE       0     0     0
    mirror-6                 ONLINE       0     0     0
      c2t5000C50034C027EBd0  ONLINE       0     0     0
      c2t5000C50034B5841Bd0  ONLINE       0     0     0
    mirror-7                 ONLINE       0     0     0
      c2t5000C50034C06E17d0  ONLINE       0     0     0
      c2t5000C50034C05C5Fd0  ONLINE       0     0     0
    mirror-8                 ONLINE       0     0     0
      c2t5000C50034AF3D6Fd0  ONLINE       0     0     0
      c2t5000C50034BAEEBFd0  ONLINE       0     0     0
    mirror-9                 ONLINE       0     0     0
      c2t5000C50034B18117d0  ONLINE       0     0     0
      c2t5000C500349D2AEFd0  ONLINE       0     0     0
    mirror-10                ONLINE       0     0     0
      c2t5000C50034C00F8Bd0  ONLINE       0     0     0
      c2t5000C50034C22E0Bd0  ONLINE       0     0     0
  logs
    c2t5001517959585A71d0    ONLINE       0     0     0
    c2t5001517959585DA4d0    ONLINE       0     0     0
  cache
    c2t5001517959639231d0    ONLINE       0     0     0
    c2t5001517959638A66d0    ONLINE       0     0     0
All disks in the striped mirrors are Seagate ST33000650SS 3 TB SAS
(6 Gb/s) drives, each of which should be good for ~150 MiB/s sequential
read/write.
The ZIL devices are two fairly cheap Intel SSD 311 20 GB drives, and the
cache consists of two Intel SSD 320 40 GB drives.
All disks are attached to an LSI SAS2008 controller.
In theory, the best numbers I should be able to get are:
sequential read: ~3.3 GByte/s
sequential write: ~1.65 GByte/s
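(Back-of-the-envelope, assuming reads can be served from both halves of
each mirror while writes have to hit both disks, so each mirror only
counts once for writes:

  read:  11 mirrors x 2 disks x ~150 MiB/s = ~3300 MiB/s
  write: 11 mirrors x ~150 MiB/s           = ~1650 MiB/s)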
Now for the test: I started with a simple local file copy test,
including a filesystem unmount/mount to really ensure everything has
been synced to disk.
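Roughly, each run boiled down to something like the following (dataset
names, file names and the exact fio options are placeholders here, not
the literal command lines):

  # generate the source file once (e.g. the 10 GiB case)
  fio --name=seqwrite --rw=write --bs=1M --size=10g \
      --filename=/mpool/src/testfile

  # copy it, then unmount/mount the target so everything really hits
  # disk, and take the elapsed time for the whole sequence
  ptime sh -c 'cp /mpool/src/testfile /mpool/dst/ &&
    zfs unmount mpool/dst && zfs mount mpool/dst'

  # meanwhile, in a second terminal:
  zpool iostat mpool 1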
The results (copying 1 GiB, 5 GiB and 10 GiB files generated by fio,
using 1, 2 and 5 threads; throughput is derived from timestamps, the
other statistics from the output of "zpool iostat mpool 1"):
1.) 1 thread:
1 GiB test (1 threads): 344.307 MiB/s
5 GiB test (1 threads): 401.758 MiB/s
10 GiB test (1 threads): 445.129 MiB/s
Peak read bandwidth: 932 MiB/s
Peak write bandwidth: 1034 MiB/s
Peak combined bandwidth: 1043 MiB/s
Peak read IOPS: 7K/s
Peak write IOPS: 9K/s
Peak combined IOPS: 9K/s
2.) 2 threads:
1 GiB test (2 threads): 297.752 MiB/s
5 GiB test (2 threads): 509.532 MiB/s
10 GiB test (2 threads): 562.484 MiB/s
Peak read bandwidth: 1024 MiB/s
Peak write bandwidth: 1075 MiB/s
Peak combined bandwidth: 1403 MiB/s
Peak read IOPS: 8K/s
Peak write IOPS: 9K/s
Peak combined IOPS: 11K/s
3.) 5 threads:
1 GiB test (5 threads): 466.370 MiB/s
5 GiB test (5 threads): 492.627 MiB/s
10 GiB test (5 threads): 569.436 MiB/s
Peak read bandwidth: 1075 MiB/s
Peak write bandwidth: 1044 MiB/s
Peak combined bandwidth: 1552 MiB/s
Peak read IOPS: 8K/s
Peak write IOPS: 9K/s
Peak combined IOPS: 13K/s
So this is much less than what I would have expected. I know that the
ZIL SSDs are not the fastest (that's why I arranged them as a stripe,
not as a mirror, for this test), but I would have expected at least an
adequate peak here.
During the initial sequential write test I watched iostat and found
that the disks are not being saturated: ~85% busy, ~4.3 actv, barely
reaching 100 MiB/s and mostly doing around 80 MiB/s.
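(The per-disk numbers come from watching something like the following
in parallel; the flags are illustrative:)

  iostat -xn 1              # per-device: r/s, w/s, kr/s, kw/s, actv, %b
  zpool iostat -v mpool 1   # per-vdev view from the ZFS side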
With the exact same hardware and Linux software RAID under Gentoo
(configured exactly the same way: 11 striped 2-disk mirrors) using
ext4, I easily get more than 1 GByte/s with multiple readers/writers.
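(For reference, that Linux layout corresponds to an md RAID-10 across
the same 22 disks, created roughly along these lines; device names are
placeholders:)

  mdadm --create /dev/md0 --level=10 --raid-devices=22 /dev/sd[b-w]
  mkfs.ext4 /dev/md0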
So something must be wrong with my config or setup in OpenIndiana. Can
someone shed some light on this?
Thanx,
Ancoron