[OpenIndiana-discuss] ZFS read speed(iSCSI)

Roel_D openindiana at out-side.nl
Thu Jun 6 20:52:18 UTC 2013


80 to 100MB/s is very low, too low. How big are the files? With the iSCSI caching/compression mechanism, speeds of 200MB/s are reachable, even over 100Mb lines.
But I saw this week on our OpenNAS server that ZFS iSCSI dropped to 300KB/s when I tried to save 8 VMs of 500GB in total. For the first few minutes it reached 200MB/s; after that it declined to 300KB/s.


Kind regards, 

The out-side

On 6 Jun 2013, at 22:03, Heinrich van Riel <heinrich.vanriel at gmail.com> wrote:

> If only the network guys here had told me this. I do have VMware with two
> NICs, but that does not help when going to the same target, as pointed out.
> It does load balance, but the total throughput is only 80-100MB/s.
> 
> So I guess I will change two of the interfaces to LACP on one VLAN and the
> other two to LACP on another VLAN on the storage server side, and on
> VMware/Hyper-V bind to the two different targets with no LACP.
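On the OI side, the split described above might look something like this. This is only a sketch: the link names (e1000g0-3), VLAN IDs and addresses are placeholders, not taken from the thread.

```shell
# Two 2-port LACP aggregations instead of one 4-port one (hypothetical link names)
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr1
dladm create-aggr -L active -l e1000g2 -l e1000g3 aggr2

# Put each aggregation on its own VLAN / subnet (VLAN IDs 10 and 20 are examples)
dladm create-vlan -l aggr1 -v 10 san10
dladm create-vlan -l aggr2 -v 20 san20
ipadm create-addr -T static -a 10.0.10.1/24 san10/v4
ipadm create-addr -T static -a 10.0.20.1/24 san20/v4
```

Each iSCSI target then listens on one of the two subnets, so the initiators can open one connection per path.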
> 
> So from all the replies I will be doing the following:
> 
>   * net changes as above
>   * create block volumes with a 32k block size
>   * enable compression
>   * add the SSD cache disk (doing some limited testing; students will
> clone from the same templates and use the same install media, so it seems
> like putting those on the SSD would help; tested on a system where I could
> add a SATA SSD)
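For the volume, compression and cache items in the list above, a possible ZFS/COMSTAR sketch; the pool, volume and disk names are invented for illustration:

```shell
# Block volume with a 32k volume block size and compression enabled
zfs create -V 500G -o volblocksize=32k -o compression=lz4 tank/vmstore

# Export it over iSCSI via COMSTAR
stmfadm create-lu /dev/zvol/rdsk/tank/vmstore

# Add the SSD as an L2ARC cache device (device name is a placeholder)
zpool add tank cache c4t1d0
```

Note that volblocksize can only be set at creation time, so it has to be decided before the VMs are migrated onto the volume.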
> 
> I will post my findings, but it might take some time to fix the network,
> and in the meantime they will have to deal with 1Gbps for the storage. The
> request is to run ~90 VMs on 8 connected servers.
> 
> Thank you all for all the responses.
> 
> On Thu, Jun 6, 2013 at 9:24 AM, Saso Kiselkov <skiselkov.ml at gmail.com> wrote:
> 
>> On 05/06/2013 23:52, Heinrich van Riel wrote:
>>> Any pointers around iSCSI performance focused on read speed? Did not find
>>> much.
>>> 
>>> I have 2 x rz2 of 10x 3TB NL-SAS each in the pool. The OI server has 4
>>> interfaces configured to the switch in LACP, mtu=9000. The switch (jumbo
>>> enabled) shows all interfaces are active in the port channel. How can I
>>> verify it on the OI side? dladm shows that it is in active mode.
>>> 
>>> [..snip..]
>> 
>> Hi Heinrich,
>> 
>> Your limitation is LACP. Even in a link bundle, no single connection can
>> exceed the speed of a single physical link - this is necessary to
>> maintain correct packet ordering and queuing. There's no way around this
>> other than to put fatter pipes in or not use LACP at all.
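One way to confirm this on the OI side is to watch the per-port LACP state and traffic counters; the aggregation name below is a placeholder:

```shell
# Per-port LACP negotiation state for the aggregation
dladm show-aggr -L aggr1

# Extended view: port speeds and whether each port is attached
dladm show-aggr -x aggr1

# Live per-link traffic counters, refreshed every 5 seconds; with LACP,
# a single iSCSI connection's traffic should appear on only one port
dlstat -i 5
```

If one underlying port carries nearly all the traffic while the others sit idle, that is the single-flow LACP limit described above, not a fault.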
>> 
>> You should definitely have a look at iSCSI multipath. It's supported by
>> VMware, COMSTAR and a host of other products. All you need to do is
>> configure multiple separate subnets, put them on separate VLANs and tell
>> VMware to create multiple vmkernel interfaces in separate vSwitches.
>> Then you can scan your iSCSI targets over one interface, VMware will
>> auto-discover all paths to it and initiate multiple connections with
>> load-balancing across all available paths (with fail-over in case a path
>> dies). This approach also enables you to divide your storage
>> infrastructure into two fully independent SANs, so that even if one side
>> of the network experiences some horrible mess (looped cables, crappy
>> switch firmware, etc.), the other side will continue to function without
>> a hitch.
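A rough outline of that multipath setup, assuming COMSTAR on the OI side and ESXi software iSCSI on the initiator side; the addresses and the vmhba/vmk names are invented for illustration:

```shell
# OI/COMSTAR: one target portal group per SAN subnet
itadm create-tpg tpg-a 10.0.10.1:3260
itadm create-tpg tpg-b 10.0.20.1:3260
itadm create-target -t tpg-a,tpg-b

# ESXi: bind one vmkernel port per subnet to the software iSCSI HBA,
# then discover the target over one portal and rescan to pick up all paths
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.10.1:3260
esxcli storage core adapter rescan -A vmhba33
```

With round-robin path selection on the ESXi side, each path then runs as its own TCP connection on its own physical link, which is what LACP cannot provide for a single connection.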
>> 
>> Cheers,
>> --
>> Saso
>> 
>> _______________________________________________
>> OpenIndiana-discuss mailing list
>> OpenIndiana-discuss at openindiana.org
>> http://openindiana.org/mailman/listinfo/openindiana-discuss


