[OpenIndiana-discuss] Comparison of mbuffer/tamp to ssh (A novice attempt)

Harry Putnam reader at newsguy.com
Sat Apr 1 12:13:57 UTC 2017


Timothy Coalson <tsc5yc at mst.edu> writes:

Thanks for the excellent input.  I like the details.

[...]

>> root # time zfs send p0/tst.2/isos@170330 | tamp | mbuffer -s 128k -m
>> 1000m -O oi0:31337
>>
>
> tamp is compression, which takes cpu time.  Since your network is gigabit
> and you are running substantially below that, you would probably get better
> speed without tamp (especially since the data isn't very compressible).

I ran a test of that theory with the same data.  The results seem to
indicate that tamp plays a beneficial role, as the transfer finished
roughly 12 minutes faster with tamp in play.

But of course this test is not definitive; drawing any serious
conclusions from it would be a mistake.

Still, we'd do well to note that tamp's claim to fame is light,
fast compression.

With tamp in play:
49m55s  (reported on send end)
3094 sec (9.5MB/sec [reported on recv end])

Without tamp:
61m47s  (reported on send end)
4063 sec (7.23MB/sec [reported on recv end])

Roughly 12 minutes longer without tamp on only 29GB, a difference that
would be quite significant with the much larger transfers that are
common for many of the posters on this list.

(Full results at the end)

>
>> Using ssh
>>
>> [...]
>>
>> root # time zfs send -v p1/tst.2/isos@170330 | ssh oit zfs recv -vFd p0
>>
>
> You have no buffering on this, which is a large disadvantage, likely
> offsetting the removal of compression from the workload.  Add mbuffers of
> the same size as your other test (but without -l and -O, obviously) on each
> side of the ssh in order to do a fair comparison.

Haven't tested this part with the buffers yet.
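For reference, the buffered ssh run suggested above might look something
like this (untested sketch; hostname and dataset names are copied from
the earlier runs, and the quoted remote command is my assumption about
how to get mbuffer on the receiving side):

```shell
# Untested sketch: same 128k/1000m mbuffer on both sides of the ssh pipe.
# The local mbuffer smooths out zfs send before the stream hits ssh; the
# remote one smooths it out again before zfs recv, so neither end stalls
# the other while still paying ssh's encryption cost.
time zfs send -v p1/tst.2/isos@170330 \
  | mbuffer -s 128k -m 1000m \
  | ssh oit 'mbuffer -s 128k -m 1000m | zfs recv -vFd p0'
```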

=================================================

Test results mbuffer without tamp:

root # time zfs send p1/tst.2/isos@170330 | mbuffer -s 128k -m1000m -O oit:31337
in @  0.0 KiB/s, out @ 4093 KiB/s, 28.7 GiB total, buffer   0% full
summary: 28.7 GiByte in 1h 01min 45.7sec - average of 8121 KiB/s

real    61m47.297s
user    1m6.360s
sys     23m49.175s

==============================================================

root # time mbuffer -s 128k -m 1999m -I 31337 |zfs recv -vFd p0
receiving full stream of p1/tst.2/isos@170330 into p0/tst.2/isos@170330
in @  0.0 KiB/s, out @ 6642 KiB/s, 28.7 GiB total, buffer   0% full
summary: 28.7 GiByte in 1h 06min 29.4sec - average of 7544 KiB/s
received 28.7GB stream in 4063 seconds (7.23MB/sec)

real    67m48.226s
user    0m43.148s
sys     11m59.656s




