[OpenIndiana-discuss] Comparison of mbuffer/tamp to ssh (A novice attempt)

Jonathan Adams t12nslookup at gmail.com
Fri Mar 31 12:42:40 UTC 2017


Do you have any info on how much of the time difference is just down
to the SSH protocol/encryption overhead?
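One way to get at that (a sketch, untested; the host name `oit` and the dataset name are taken from Harry's test below, and `aes128-ctr` is just one of the standard OpenSSH ciphers) would be to rerun the ssh case with a cheap cipher and ssh-level compression explicitly off, so whatever gap remains versus mbuffer is down to the ssh transport itself:

```shell
# Rerun the ssh transfer with a cheap cipher and no ssh-level
# compression, to estimate how much of the gap is crypto/protocol cost.
time zfs send p0/tst.2/isos@170330 | \
    ssh -c aes128-ctr -o Compression=no oit 'zfs recv -vFd p1'
```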

Do you know how long this takes in comparison to starting an rsyncd server
and sending/receiving via rsync with compression on?
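For reference, that rsync comparison might look like the following (a sketch, untested; note rsync copies files from the mounted dataset rather than a zfs stream, so it is not an exact apples-to-apples test, and the module name `isos` is made up):

```shell
# Receiver: an /etc/rsyncd.conf module pointing at the target filesystem,
#   [isos]
#       path = /p1/tst.2/isos
#       read only = no
# then start the daemon with:  rsync --daemon
#
# Sender: whole-tree copy with in-transit compression and a summary
rsync -a --compress --stats /p0/tst.2/isos/ rsync://oi0/isos/
```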

Jon

On 31 March 2017 at 13:29, Harry Putnam <reader at newsguy.com> wrote:

> [Test conducted 170330]
>
> An attempt to measure the difference using mbuffer/tamp compared
> to hipster's latest version of ssh:
>  pkg list|grep ssh
>   service/network/ssh     7.2.0.2-2017.0.0.4
>
> =========================================================
> Send HOST: A vbox vm running hipster on a Windows 10 host
>            4608 MB RAM
>
> Hardware HOST: HP xw8600 2x Xeon 550 3.00 GHz, 32 GB RAM
>
> =========================================================
> =========================================================
>
> Recv HOST: A vbox vm running hipster on an OpenIndiana bld 151_9 host
>            4608 MB RAM
> Hardware HOST: HP xw8600 2x Xeon 570 3.33 GHz, 32 GB RAM
> =========================================================
>
> Network hardware: 1 Gigabit router and NICs on a home network with
> very little traffic. 11 hosts, hardware and vm mixed ... only the
> two Solaris x86 hosts above doing any serious network usage.
>
> Hosts using the newest release versions of mbuffer and tamp
> -------
> mbuffer:
> pkg://openindiana.org/shell/mbuffer@20160613-2017.0.0.0:20170306T183501Z
>
> Source URL:
> http://www.maier-komor.de/software/mbuffer/mbuffer-20160613.tgz
> -------
>
> tamp:  tamp-2.5.solaris10_x86.zip
>
> Source:
> https://blogs.oracle.com/timc/entry/tamp_a_lightweight_
> multi_threaded#Resources
> That is one source... there are others... I'm told the joyent repo has
> it, but I could not find it there.
> -------
>
> Sending end (29.7 GB):
>
> root # time zfs send p0/tst.2/isos@170330 | tamp | mbuffer -s 128k -m 1000m -O oi0:31337
> in @  0.0 KiB/s, out @ 14.2 MiB/s, 27.9 GiB total, buffer   0% full
> summary: 27.9 GiByte in 49min 52.2sec - average of 9765 KiB/s
>
> real    49m55.136s
> user    7m30.973s
> sys     26m35.363s
>
> ===================================================
>
> Receiving end (29.7 GB):
>
> root # time mbuffer -s 128k -m 1999m -I 31337 |tamp -d|zfs recv -vFd p1
> receiving full stream of p0/tst.2/isos@170330 into p1/tst.2/isos@170330
> in @  0.0 KiB/s, out @ 4860 KiB/s, 27.9 GiB total, buffer   1% full
> summary: 27.9 GiByte in 51min 34.7sec - average of 9442 KiB/s
> received 28.7GB stream in 3094 seconds (9.50MB/sec)
>
> real    51m40.875s
> user    4m11.984s
> sys     32m33.923s
>
> =================================================
>
> Using ssh
>
> [...]
>
> root # time zfs send -v p1/tst.2/isos@170330 | ssh oit zfs recv -vFd p0
>
> [...]
>
> 18:19:50   28.7G   p1/tst.2/isos@170330
> 18:19:51   28.7G   p1/tst.2/isos@170330
> received 28.7GB stream in 4158 seconds (7.07MB/sec)
>
> real    69m26.666s
> user    9m5.345s
> sys     44m30.999s
>
> Neither zfs fs is using compression, so I'm not sure why the reported
> data sizes differ: 29.7 GB on the send end, 28.7 GB on the recv end.
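One way to chase that size discrepancy down (a sketch, untested; dataset names are from the test above, and the `logicalused` property assumes a reasonably recent zfs) would be to compare the space accounting on both ends:

```shell
# Compare on-disk vs logical space accounting on sender and receiver;
# differences in metadata/recordsize accounting can explain the gap.
zfs get -o name,property,value \
    compression,compressratio,used,logicalused \
    p0/tst.2/isos p1/tst.2/isos
```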
>
> You can see that the mbuffer/tamp transfer was 18 minutes quicker.
> So, with a much bigger batch of data, say 500 GB, the difference
> would be substantial.
>
> As an idea, take it times 30.  That makes 861 GB to transfer ... it
> would mean a saving of roughly 9 hrs over using ssh.
>
> Or 25.8 hrs as against 34.725 hrs.
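That extrapolation can be checked from the two measured wall-clock times above (real 51m41s for mbuffer/tamp, 69m27s for ssh), scaled by 30:

```shell
# Scale the two measured real times by 30x for an ~861 GB transfer.
awk 'BEGIN {
    mb  = 51*60 + 41        # mbuffer/tamp, seconds
    ssh = 69*60 + 27        # ssh, seconds
    printf "mbuffer/tamp: %.1f hrs\n", mb*30/3600
    printf "ssh:          %.1f hrs\n", ssh*30/3600
    printf "saved:        %.1f hrs\n", (ssh-mb)*30/3600
}'
```

which gives 25.8 hrs versus 34.7 hrs, a saving of about 8.9 hrs.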
> Of course these are only very loose figures, because they do not take
> into account what other network traffic might be like.
> For the duration of this test, net traffic other than this transfer
> should have been very light.
>
>
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
