[OpenIndiana-discuss] [developer] Re: Install OpenIndiana in an UFS root
wessels
wessels147 at gmail.com
Tue Sep 10 14:37:29 UTC 2013
@Jim
Shared storage is indeed another interesting point. I don't completely
follow what you do with your ESX datastores. VMFS is a clustered FS, but ESX
doesn't provide any redundancy, and the VMFS best practices recommend
one VMFS datastore per LUN. All blades running ESX and sharing a single
LUN with a VMFS datastore exposed by the controllers in the same chassis,
that I can follow. And with Storage vMotion you could move all VMs to
another chassis when you want to service the chassis. But I don't really
understand your setup.
There's no magic solution that covers all problems. ESXi is a wonderful
product and has certainly earned its place, but it's not the best solution
for every problem, such as massive scale-out apps. ESXi and VMFS have real
scalability limits, like a maximum of 32 hosts per VMFS datastore. I can
run far more app instances in zones than ESXi can run as VMs on an
identical machine.
But not everybody has such a use case. For most businesses, with some AD
controllers, Exchange, SharePoint, CRM and ERP VMs, a hypervisor like
vSphere is the solution.
The lack of a (redundant) distributed (ZFS-based) FS in Illumos is
unfortunate. Lustre is working on a ZFS-based version, but I can neither
build nor afford the development costs of such an FS, so no whining from
me. We simply work around that and distribute data in other ways. Don't
put too much trust in TCP/IP either; remember this one:
http://status.aws.amazon.com/s3-20080720.html . If memory serves me right,
the corrupted packet matched its weak checksum, causing the massive outage.
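To show what I mean by a weak checksum, here's a minimal sketch in Python
(my own illustration, not taken from the AWS report) of the 16-bit
ones'-complement Internet checksum from RFC 1071 that TCP relies on.
Corrupt two 16-bit words in opposite directions and the checksum comes out
identical:

def inet_checksum(data):
    # RFC 1071 ones'-complement checksum over 16-bit words
    if len(data) % 2:
        data += b"\x00"                            # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xffff) + (total >> 16)   # fold the carry back in
    return ~total & 0xffff

original  = bytes.fromhex("0102030405060708")
# same payload with one word incremented and another decremented
corrupted = bytes.fromhex("0103030405060707")
assert original != corrupted
assert inet_checksum(original) == inet_checksum(corrupted)

Two different payloads, one checksum, so that kind of corruption sails
straight through TCP, which is exactly why end-to-end checksums like ZFS's
matter.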
On Tue, Sep 10, 2013 at 2:48 PM, Jim Klimov <jimklimov at cos.ru> wrote:
> On 2013-09-10 12:50, wessels wrote:
>
>> That's quite some off-topic discussion my original question triggered.
>> Since the original question remains unanswered I'll add my 2cents to
>> "the zfs on a cloud instance" discussion as well.
>> ...
>>
>> So Illumos is certainly not the last word in operating systems the way
>> ZFS is in file systems.
>>
>
> As a counterargument in defence of hypervisor virtualization, in tune
> with your reference to live migration, I'd also mention file systems
> like VMFS which can be shared from the same storage by several hosts.
> Besides sharing free space across the VM farm, this also allows easier
> re-launching of tasks (VMs) on several different nodes other than
> their default one.
>
> For example, I happen to see many Intel MFSYS boxes deployed, which
> include 6 compute blades and 1 or 2 (redundant) storage controllers
> which control "pools" on up to 14 HDDs mounted into the chassis and
> LUNs from these pools are dedicated to, or shared by, compute nodes.
> While the networking for the nodes is from 1 to 4 1Gbit links, their
> internal storage links are 3 or 6(?) Gbit/s, and there is no PCI
> expansion, so we can't really get similar bandwidths via iSCSI/NFS
> served by one host (as would likely be the case with Hyper-V shared
> storage for example).
>
> On one hand, the chassis controller does not expose raw disks and we
> can't run ZFS directly on that layer; though we can (and sometimes do)
> run ZFS in the pools provided by chassis and use fixed configurations
> (one LUN - one ZFS pool - one *Solaris derivative and some zones in it).
>
> Or we can, on the other hand, set up ESXi on the nodes and share the common
> disk space as one big VMFS (or a few for local redundancy against at
> least something). This way any VM can run on any host, be it live
> migration or cold (via restart or sudden death of a host, but without
> data copying). And then we too face the problem of ZFS inside VMs
> (though usually we can make one VM with many zones together doing a
> particular job, and compared to appservers I wouldn't say that ZFS is
> extremely hungry), as well as uneasiness about storage of checksummed
> data on obscure storage devices. Well, at least we can mirror across
> two VMFS pools served by two different controllers (by default) :)
>
> Since these machines with all their perks are a successful building
> block in many of our solutions, despite some of their drawbacks, I
> thought it suitable to mention them in the context of VM vs. bare metal
> as a fact of life I cope with regularly ;)
>
> //Jim
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>