[OpenIndiana-discuss] Shopping for an all-in-one server

Jim Klimov jimklimov at cos.ru
Mon Jun 2 07:38:40 UTC 2014


Hello friends, and sorry for cross-posting to different audiences like this,

I am helping to spec out a new server for a software development department, and I am inclined to use an illumos-based system, primarily for the benefits of ZFS, file serving and zones. However, much of the target work involves Linux environments, and my tests with the latest revival of SUNWlx this year have not shown it to be a good fit: recent Debians either fail to boot or splash many errors, even if I massage the FS contents appropriately, the ultimate problem being some absent syscalls and the like. Because of this, the build and/or per-developer environments would likely live in VMs based on illumos KVM or VirtualBox, or on bare-metal VMware hosting the illumos system as well as the other VMs.

Thus the box we'd build should be good at both storage (including responsive read-write NFS) and VM hosting. I am not sure whether OI, OmniOS, or ESX(i?) with HBA passthrough into an illumos-based storage/infrastructure-services VM would be the better fit. Also, I have been away from shopping for new server gear for a while, and from tracking its compatibility with illumos in particular, so I'd kindly ask for suggestions for a server like that ;)

The company's preference is to deal with HP, so while that is not an impenetrable barrier, buying whatever is available under that brand is much simpler for the department. Cost seems to be a much lesser constraint ;)

The box should be a reliable rackable server with remote management; substantial ECC RAM for efficient ZFS and VM needs (128-256 GB likely, possibly more); CPUs with all those VT-* bits needed for illumos KVM and a massive number of cores (large-scale OS rebuilds from source are likely to be a frequent task); and enough disk bays for the rpool (HDD or SSD), SSD-based ZIL and L2ARC devices (that's already half a dozen bays), possibly an SSD-based scratch area (raid0 or raid1, depending), as well as several TB of HDD storage. Later expansion should be possible with JBODs.
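To make the bay math concrete, here is a minimal sketch of the sort of pool layout I have in mind (device names are hypothetical placeholders; the log and cache vdevs are the standard zpool spellings of ZIL and L2ARC):

    # data pool: two HDD mirror pairs, plus an SSD mirror for the
    # ZIL (log) and a single SSD for the L2ARC (cache)
    zpool create tank mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 \
        log mirror c0t6d0 c0t7d0 \
        cache c0t8d0

With a mirrored rpool taking two more bays, that's nine bays spoken for before any capacity expansion.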

I am less certain about HBAs (IT mode, without the HW-RAID crap), and about the practically recommended redundancy (raidzN? raid10? how many disks' worth of redundancy are recommended at modern drive sizes - 3?). I am also not sure about modern considerations for multiple PCI buses, especially with regard to separating the SSDs onto a dedicated HBA (or several) to avoid bottlenecks in performance and/or shared points of failure.
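For example, with eight data HDDs the two redundancy styles I am weighing would look roughly like this (hypothetical device names; raidz2 gives more usable capacity, mirrors give better random IOPS and faster resilvers):

    # option A: one raidz2 vdev -- six disks of data, any two may fail
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        c1t4d0 c1t5d0 c1t6d0 c1t7d0

    # option B: four 2-way mirrors ("raid10") -- four disks of data
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0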

Finally, are departmental all-in-one combines following the Thumper ideology (data quickly accessible to applications living on the same host, without the uncertainties and delays of remote networking) still at all 'fashionable'? ;)
Buying a single box initially may be easier to justify than multiple boxes with separate roles, but there are other considerations too. In particular, the corporate network is crappy and slow, so splitting into storage + server nodes would require either direct cabling for data or new switching gear (and I don't know yet whether that would be a problem); localhost data transfers are likely to be a lot faster. I am also not convinced of the higher reliability of split-head solutions, though for high loads I am eager to believe that separating the tasks can lead to higher performance. I am uncertain whether this setup and its tasks would qualify for that; but it might be expanded later, including role separation, if a practical need is found after all.

PS: How do you go about backing up such a thing? Would some N54Ls suffice to receive zfs-sends of select datasets? :)
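What I have in mind is something like incremental replication over SSH; a sketch, with made-up dataset and host names:

    # initial full replication of a dataset tree to the backup box
    zfs snapshot -r tank/builds@backup-20140601
    zfs send -R tank/builds@backup-20140601 | \
        ssh n54l zfs receive -Fu backup/builds

    # later runs only ship the delta between snapshots
    zfs snapshot -r tank/builds@backup-20140602
    zfs send -R -i @backup-20140601 tank/builds@backup-20140602 | \
        ssh n54l zfs receive -Fu backup/builds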

So... any hints and suggestions are most welcome! ;)
Thanks in advance,
//Jim Klimov 
--
Typos courtesy of K-9 Mail on my Samsung Android


