[OpenIndiana-discuss] ZFS remote receive

Richard Elling richard.elling at richardelling.com
Thu Nov 1 01:44:41 UTC 2012


On Oct 31, 2012, at 3:37 AM, Jim Klimov <jimklimov at cos.ru> wrote:

> 2012-10-31 13:58, Sebastian Gabler wrote:
>>> 2012-10-30 19:21, Sebastian Gabler wrote:
>>>> Whereas that's relative: performance is still at a quite miserable 62
>>>> MB/s through a gigabit link. Apparently, my environment has room for
>>>> improvement.
>>> Does your gigabit ethernet use jumbo frames (9000 bytes, or up to
>>> 16 KB depending on your NICs, switches and other networking gear)
>>> for unrouted (L2) storage links? The traditional MTU of 1500 is
>>> said to carry so much per-packet overhead (headers, preambles and
>>> inter-packet gaps) that it effectively limits a gigabit link to
>>> 700-800 Mbps...
> 
> 
>> The MTU is 1500 on both the source and target systems, and no
>> fragmentation is happening.
> 
> The point of jumbo frames (in unrouted L2 ethernet segments) is to
> remove much of the per-packet overhead - CSMA/CD delays being a large
> contributor - and send unfragmented frames of 9-16 KB, increasing
> local network efficiency.

There is no CSMA/CD on gigabit or faster gear from any vendor today;
everything is switched (full duplex), so there are no collision delays
to remove.
 -- richard
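
Either way, jumbo frames on OpenIndiana are set per datalink with
dladm. A minimal sketch, assuming a hypothetical link named e1000g0
and a NIC, switch and peer that all support 9000-byte frames:

  # show the current and permitted MTU for the link
  dladm show-linkprop -p mtu e1000g0

  # raise the MTU to 9000 (the interface may need to be unplumbed
  # first, depending on the driver)
  dladm set-linkprop -p mtu=9000 e1000g0

Every device in the L2 path, including the receiving host, has to be
configured for the same larger MTU, or oversized frames will typically
be dropped.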

> 
>> On the target system I am seeing writes up to
>> 160 MB/s with frequent zpool iostat probes. When iostat probes are up to
>> 5s+, there is a steady stream of 62 MB/s.
> 
> I believe this *may* mean that your networking buffer receives data
> into memory (ZFS cache) at 62 MB/s, then every 5s the dirty cache
> is sent to disks during the TXG commit at whatever speed it can burst
> (160 MB/s in your case).
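
The sampling interval makes that pattern visible. A minimal sketch,
assuming a hypothetical pool named tank:

  # 1-second samples expose the bursty TXG commits (peaks near 160 MB/s)
  zpool iostat tank 1

  # 5-second samples average the bursts into the steady ~62 MB/s figure
  zpool iostat tank 5

With short intervals you see near-idle periods punctuated by fast
bursts; longer intervals only show the average ingest rate.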

More likely: a straight zfs send | zfs receive pipe is a blocking
configuration - whenever the receiving side stalls, the sender stalls
with it. This is why most people who go for high-speed send | receive
put a buffer, such as mbuffer, in the pipeline to smooth out the
performance. Check the archives; this has been rehashed hundreds of
times on these aliases.
 -- richard
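
A minimal sketch of such a buffered pipeline, assuming mbuffer is
installed on both hosts and using hypothetical pool, snapshot and host
names (note that mbuffer's network mode is unauthenticated, so this
suits trusted storage links only):

  # on the receiving host: listen on a port, buffer up to 1 GB in RAM,
  # and feed the stream into zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup

  # on the sending host: pipe the snapshot through mbuffer, which
  # streams it to the receiver over TCP
  zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090

The buffers decouple the two ends of the pipe: zfs send keeps reading
while zfs receive is busy committing a transaction group, instead of
the whole pipeline blocking.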

--

Richard.Elling at RichardElling.com
+1-760-896-4422
