[OpenIndiana-discuss] zfs send/receive performance
wim@vandenberge.us
Mon Dec 9 17:56:38 UTC 2013
Hello,
I was hoping someone could point me in the right direction. I have a server
(151A8) with two identical zpools. Each of the pools has a number of file
systems on it, and both are over 80% empty. When I copy a file system from one
pool to the other using something like:
zfs send -R pool1/fs01@epoch | zfs receive -F pool2/fs01
I consistently get between 260 and 280 MB/s; adding mbuffer makes no difference.
This seems low, since each pool is capable of more than 2 GB/s read and write.
NFS/iSCSI performance of the file systems also tops out around 2 GB/s aggregate
per pool.
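For reference, the mbuffer variant of the pipeline was along these lines (the
block and buffer sizes here are illustrative, not the exact values I used):

zfs send -R pool1/fs01@epoch | mbuffer -s 128k -m 1G | zfs receive -F pool2/fs01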
The strange part is that if I transfer a second file system between the same two
pools at the same time, the aggregate throughput goes up to 520-560 MB/s. If I
add a third, 780-840 MB/s, and so forth, until I reach close to 2 GB/s when
copying seven, at which point throughput plateaus. All of this was measured
while no other activity was ongoing. I have verified that I do not have a
transient hardware issue.
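The parallel runs looked roughly like this (the filesystem and snapshot names
are placeholders):

# launch several concurrent send/receive streams in the background;
# aggregate throughput scales with the stream count up to about seven
for fs in fs01 fs02 fs03; do
    zfs send -R pool1/$fs@epoch | zfs receive -F pool2/$fs &
done
wait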
The limit seems to be on the "zfs send" side: zfs send redirected to /dev/null
still tops out at the magical 260-280 MB/s, while a single zfs receive, when
reading from a file, will consume the full 2 GB/s on the target pool. The same
numbers hold when transferring from pool2 back to pool1; the limiting factor is
always the originating side.
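Roughly how the two sides were measured in isolation (pv serves only as a rate
meter here and may need to be installed separately; the stream file path is a
placeholder):

# send side alone: still plateaus at ~260-280 MB/s
zfs send -R pool1/fs01@epoch | pv > /dev/null

# receive side alone, fed from a saved stream: saturates the pool at ~2 GB/s
pv /tank/dumps/fs01.zstream | zfs receive -F pool2/fs01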
I'd like to be able to saturate the full pool bandwidth with a single copy. Any
ideas?
Thanks,
Wim