[OpenIndiana-discuss] slicing a disk for ZIL

Jim Klimov jimklimov at cos.ru
Thu Nov 29 15:27:05 UTC 2012


On 2012-11-29 15:59, Edward Ned Harvey (openindiana) wrote:
>> From: Sebastian Gabler [mailto:sequoiamobil at gmx.net]
>>
>> I have bought and installed an Intel SSD 313 20 GB to use as a ZIL for
>> one or many pools. I am running OpenIndiana on an x86 platform, no SPARC.
>> As 4 GB should suffice, I am considering partitioning the drive in order
>> to assign each partition to one pool (at the moment there are 2 pools on
>> the server, but I could expand that in the future).
>> After some reading, I am still confused about slicing and partitioning.
>> What do I actually need to do to get up to 4 partitions on that SSD?
>
> Everybody seems to want to do this, and I can see why - If you look at the storage capacity of an SSD, you're like "Hey, this thing has 64G and I only need 1 or 2 or 4G, that leaves all the rest unused."  But you forget to think to yourself, "Hey, this thing has a 6Gbit bus, and I'm trying to use it all to boost the performance of some other pool."
>
> The only situation where I think it's a good idea to slice the SSD and use it for more than one slog device, is:  If you have two pools, and you know you're not planning to write them simultaneously.  I have a job that reads from pool A and writes to pool B, and then it will read from pool B and write to pool A, and so forth.  But this is highly contrived, and I seriously doubt it's what you're doing (until you say that's what you're doing.)
>
> The better thing is to swallow and accept 60G wasted on your slog device.  It's not there for storage capacity - you bought it for speed.  And there isn't excess speed going to waste.

Well, there is some truth to this :) But what is the alternative?
For example, smaller systems only have one PCI(e) bus, and that
would be the bottleneck even if you add several SSDs to the box,
so you lose little by using a few SSDs for many tasks right away.

Also, given their screaming IOPS, many storage vendors limit the
supported SSD setups to about 4 devices per system (or per JBOD
shelf, at most), and even then often require a dedicated HBA just
for the SSDs.

However, if you add a SLOG to just one pool, your other pools
would remain slow on sync writes - which you might not care about
with some workloads, but might mind with others. And, mind you,
a dedicated ZIL device only accelerates sync writes, i.e. those
acknowledged to clients as "yes, I've durably saved your data!";
the presence of such IOs can be measured with DTrace scripts to
justify the purchase of a SLOG (or show the lack of need) for a
particular storage box and its de-facto IO patterns.
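
For example, here is a minimal DTrace sketch (assuming an illumos
kernel where the fbt provider can trace the zil_commit() routine)
that counts ZIL commits per second; a consistently busy counter
hints that sync writes are frequent and a SLOG may pay off:

  # count ZIL commits (sync-write flushes) per second; run as root
  dtrace -n 'fbt::zil_commit:entry { @c = count(); }
    tick-1sec { printa("zil_commits/sec: %@d\n", @c); clear(@c); }'

Richard Elling's zilstat script does this kind of accounting in
more detail, IIRC, if you want byte counts rather than call rates.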

If your box uses several pools but is limited to one SLOG device
(or two, in a mirrored SLOG), you might still prefer to benefit
from having pieces of it added to the several pools. Even if your
systems have some sort of streaming sync IO (which for some reason
is not streamed right into the main pool anyway), ZIL writes for
each of two pools would still get about 3Gbit/s of the device's
6Gbit/s bus. And if the (networked) IO is bursty, as it often is,
then each pool would likely get the full bandwidth to its part of
the SLOG in those microseconds it needs to spool its blocks.
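
To illustrate, with the disclaimer that the device names below are
hypothetical (check yours with "format" or "zpool status"): carve
slices on the SSD with format(1M)'s partition menu, then hand one
slice to each pool as its log device:

  # after creating slices s0 and s1 in format(1M)'s partition menu:
  zpool add pool1 log c4t1d0s0
  zpool add pool2 log c4t1d0s1
  # with two sliced SSDs, a mirrored SLOG per pool would look like:
  zpool add pool1 log mirror c4t1d0s0 c4t2d0s0

A few GB per slice is plenty: the ZIL only ever holds the last few
seconds of sync writes before they are committed to the main pool.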

My cents,
//Jim


