[OpenIndiana-discuss] ZFS read speed(iSCSI)

Heinrich van Riel heinrich.vanriel at gmail.com
Fri Jun 7 18:42:54 UTC 2013


Thank you for all the information. I ordered the SAS SSD.
I got somewhat tired of iSCSI and the networking around it and went back to
good ol' FC. Some hypervisors will still use iSCSI.

Speed is OK. Below is the pool activity, sampled one second apart, while
cloning a 150 GB VM from a datastore on the EMC array to OI.
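(This is plain zpool iostat at a one-second interval; "tank" below stands
in for the actual pool name:

  zpool iostat tank 1

The all-zero rows between the write bursts look like normal ZFS behavior,
batching writes into transaction groups and flushing them every few
seconds, rather than the link stalling.)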

       capacity     operations    bandwidth
alloc   free   read  write   read  write
-----  -----  -----  -----  -----  -----
309G 54.2T 81 48 452K 1.34M
309G 54.2T 0 8.17K 0 258M
310G 54.2T 0 16.3K 0 510M
310G 54.2T 0 0 0 0
310G 54.2T 0 0 0 0
310G 54.2T 0 0 0 0
310G 54.2T 0 10.1K 0 320M
311G 54.2T 0 26.1K 0 820M
311G 54.2T 0 0 0 0
311G 54.2T 0 0 0 0
311G 54.2T 0 0 0 0
311G 54.2T 0 10.6K 0 333M
313G 54.2T 0 27.4K 0 860M
313G 54.2T 0 0 0 0
313G 54.2T 0 0 0 0
313G 54.2T 0 0 0 0
313G 54.2T 0 9.69K 0 305M
314G 54.2T 0 10.8K 0 337M
314G 54.2T 0 0 0 0
314G 54.2T 0 0 0 0
314G 54.2T 0 0 0 0
314G 54.2T 0 8.32K 0 261M
314G 54.2T 0 175 0 1.06M
314G 54.2T 0 0 0 0
314G 54.2T 0 0 0 0
314G 54.2T 0 0 0 0
314G 54.2T 0 6.29K 0 196M
314G 54.2T 0 17 0 33.5K
314G 54.2T 0 0 0 0
314G 54.2T 0 0 0 0
314G 54.2T 0 0 0 0
314G 54.2T 0 9.27K 0 292M
315G 54.2T 0 11.1K 0 347M
315G 54.2T 0 0 0 0
315G 54.2T 0 0 0 0
315G 54.2T 0 0 0 0
315G 54.2T 0 9.41K 0 296M
317G 54.2T 0 29.2K 0 918M
317G 54.2T 0 0 0 0
317G 54.2T 0 0 0 0
317G 54.2T 0 0 0 0
317G 54.2T 0 11.6K 0 365M
318G 54.2T 0 25.0K 0 785M
snip... and so on.

I can't seem to catch a break. BTW, I am using VMware for this. The OI box
is connected to a Brocade 5100B along with many other nodes, and the EMC
array is connected to the same switch. No other system indicates connection
drops.
I will change the cable, but I highly doubt that is the problem. I will
also connect the other port on the HBA.

This happens only under load:
Jun  7 14:02:18 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730:
Link reset. (Disabling link...)
Jun  7 14:02:18 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710:
Link down.
Jun  7 14:04:41 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720:
Link up. (4Gb, fabric, target)
Jun  7 14:04:41 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid
22000, topology Fabric Pt-to-Pt,speed 4G
Jun  7 14:10:19 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730:
Link reset. (Disabling link...)
Jun  7 14:10:19 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710:
Link down.
Jun  7 14:12:40 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720:
Link up. (4Gb, fabric, target)
Jun  7 14:12:40 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid
22000, topology Fabric Pt-to-Pt,speed 4G
Jun  7 14:15:24 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730:
Link reset. (Disabling link...)
Jun  7 14:15:24 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710:
Link down.
Jun  7 14:17:44 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720:
Link up. (4Gb, fabric, target)
Jun  7 14:17:44 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid
22000, topology Fabric Pt-to-Pt,speed 4G
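
I will also check the FMA error log to see whether anything at the
transport level was recorded around these resets; something like this
should work (fmdump -e lists the raw ereports, -t limits the window to
events since the given date):

  fmdump -e -t 07Jun13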


HBA Port WWN: 10000000c
        Port Mode: Target
        Port ID: 22000
        OS Device Name: Not Applicable
        Manufacturer: Emulex
        Model: LPe11002-E
        Firmware Version: 2.80a4 (Z3F2.80A4)
        FCode/BIOS Version: none
        Serial Number: VM929238
        Driver Name: emlxs
        Driver Version: 2.60k (2011.03.24.16.45)
        Type: F-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 20000000c
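
(Port details above are from fcinfo hba-port, WWNs trimmed. The same tool
can also dump the per-port link error counters, which should show whether
the resets come with loss-of-sync or CRC errors:

  fcinfo hba-port -l <port WWN>

That prints the Link Error Statistics block: link failures, loss of
sync/signal, invalid Tx words, invalid CRCs.)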


It does recover and the copy continues, but it is not really usable in this
state.

Any ideas? Is this perhaps a known issue with the emlxs (Emulex) driver in
OI? Most people seem to use QLogic, but we are 100% Emulex, so this is all
I have.
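
One thing I may try in the meantime is pinning the link speed and topology
in /kernel/drv/emlxs.conf instead of leaving them on auto-negotiate, since
the fabric always comes back at 4Gb point-to-point anyway. Parameter names
and values below are from the comments in the stock emlxs.conf, so
double-check them against your driver version:

  # /kernel/drv/emlxs.conf (excerpt)
  # link-speed: 0=auto, 1=1Gb, 2=2Gb, 4=4Gb
  link-speed=4;
  # topology: 0=loop first, 2=pt-to-pt only, 4=loop only, 6=pt-to-pt first
  topology=2;

A reboot of the target is the safe way to make the change take effect.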

Thanks


On Fri, Jun 7, 2013 at 11:29 AM, Edward Ned Harvey (openindiana) <
openindiana at nedharvey.com> wrote:

> > From: Jim Klimov [mailto:jimklimov at cos.ru]
> >
> > > With 90 VM's on 8 servers, being served ZFS iSCSI storage by 4x 1Gb
> > > ethernet in LACP, you're really not going to care about any one VM
> > > being able to go above 1Gbit.  Because it's going to be so busy all
> > > the time that the 4 LACP bonded ports will actually be saturated.  I
> > > think your machines are going to be slow.  I normally plan for 1Gbit
> > > per VM, in order to be comparable with a simple laptop.
> > >
> > > You're going to have a lot of random IO.  I strongly suggest you
> > > switch to mirrors instead of raidz.
> >
> > I'll hold your practical knowledge in higher regard than my theoretical
> > hunches, but I believe typical PCs (including VDI desktops) don't do
> > much disk IO after they've loaded the OS or a requested application.
>
> Agreed, disk is mostly idle except when booting or launching apps.  Some
> apps do write to disk, such as browsers caching content, MS Office
> constantly hitting the PST or OST file, Word/Excel autosave, etc.
>
> But there are 90 of them.  So even "idle" time multiplied by 90 is no
> longer idle time.  And most likely, when they *do* get used, a whole bunch
> of them will get used at the same time.  (20 students all browsing the
> internet in between classes, or 20 students all doing homework between 5pm
> and 9pm, but they're all asleep from 4am to 6am, so all 90 instances are
> idle during that time...)
>
>
> > And from what I read, if his 8 VM servers contact the ZFS storage
> > box with requests to many more targets, then on average all NICs will
> > likely get their share of work, for one connection or another, even as
> > part of LACP trunks (which may be easier to manage than VLAN-based
> > MPxIO, though that has its own benefits). Right?..
>
> Yup, with 8 VM servers, each having 11 VM guests, even if each server has
> a single 1Gb link to the 4x LACP storage server, I expect the 4x LACP links
> will see pretty heavy and well distributed usage.
>
>
> > It might seem like a good idea
> > to use dedup as well,
>
> Not if you care at all about performance, or usability.
>
>
> > So here's my 2c, but they may be wrong ;)
>
> :-)
>
> I guess the one thing I can still think to add here is this:
>
> If the 90 VM's all originated as clones of a single system, and the
> deviation *from* that original system remains minimal, then the ARC &
> L2ARC cache will do wonders.  When the first VM requests the boot blocks,
> the OS blocks, and all the blocks to launch Internet Explorer (or
> whatever app), those things can be served from cache to satisfy the 89
> subsequent requests for the same actions.
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>

