[OpenIndiana-discuss] hardware specs for illumos storage/virtualization server

Robin Axelsson gu99roax at student.chalmers.se
Sat Nov 17 12:06:46 UTC 2012


On 2012-11-17 05:36, Paul B. Henson wrote:
> I think I'm finally going to get around to putting together the 
> illumos home/hobby server I've been thinking about for the past few 
> years :), and would appreciate a little feedback on 
> parts/compatibility/design.
>
> The box is intended to be both a storage server (music/video/etc 
> media, documents, whatever) with content available via both NFS and 
> CIFS, as well as a virtualization server using kvm to run some number 
> of linux instances (the most heavyweight of which will probably be the 
> mythtv instance, but there will be a number of other miscellaneous 
> things going on). I'm thinking of using two SSD's with a partition 
> mirrored for rpool, a 2nd separate partition as L2ARC on each, and 
> possibly a third mirrored for slog (or potentially a separate SSD just 
> for slog), and a storage pool consisting of two 6-disk raidz2 vdevs.
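The layout described above can be sketched as zpool commands. This is a rough sketch with hypothetical device names; rpool is normally created by the installer, and the cache and log vdevs are added after pool creation:

```shell
# Hypothetical device names throughout; s1/s2 are partitions
# carved on each of the two SSDs.

# Storage pool: two 6-disk raidz2 vdevs
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

# L2ARC partitions from both SSDs (cache devices are not mirrored;
# they are striped, and losing one is harmless)
zpool add tank cache c0t0d0s1 c0t1d0s1

# Mirrored slog on the third partition of each SSD
zpool add tank log mirror c0t0d0s2 c0t1d0s2
```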
>
> For the case, I'm looking at the Supermicro 836BA-R920B rackmount 
> chassis, which has 16 3.5" hot-swap bays on the front, and 2 2.5" 
> hot-swap bays on the back, along with dual redundant 960w 80+ platinum 
> certified power supplies. This particular model has all 16 front bays 
> direct attached, with four SFF-8087 connectors. There are two other 
> models available with either one or two SAS expanders; however, from 
> what I understand hooking up SATA drives on the other side of a SAS 
> expander is a bad idea. If I went with near-line SAS, I could get the 
> model with the expanders, which would reduce my cost in terms of SAS 
> controllers, but the pricing on near-line SAS is ridiculous compared 
> to SATA, and the extra cost in SAS controller should be outweighed by 
> reduced cost in drives (I'm already looking at a way higher budget 
> than I'd like for a hobby project, but I have few vices, and 
> electronics are one of them ;) ).
>
> For the motherboard, I'm looking at the Supermicro X9DRD-7LN4F-JBOD, 
> which is a dual LGA 2011 socket board with 16 DIMM slots, 2 x SATA3, 4 
> x SATA2, and 8 x SAS (LSI 2308 controller onboard) along with 4 intel 
> i350 based gig nics. My understanding is that illumos is perfectly 
> happy with the LSI 2308 in IT mode. The -JBOD version of this 
> motherboard comes from the factory with IT firmware. It doesn't seem 
> readily available though, if I went with the regular version the LSI 
> controller comes with RAID firmware, it's possible to reflash with IT 
> but from what I've read it's a bit of a pain (you need to do it from 
> the EFI shell). It also looks like illumos works with the intel i350 
> gig nics, and I assume there should be no issue with the onboard Intel 
> AHCI SATA controller?
>
> CPU, 2 x Intel Xeon E5-2620. The hex core is a bit pricier than the 
> quads, but I've just got my heart set on 12 cores, and no one said a 
> hobby had to be cost effective ;). These are Sandy Bridge Xeons, I 
> know there were some Sandy Bridge issues in the past, but I think 
> there were workarounds, and it looks like Joyent recently fixed them 
> (https://github.com/joyent/illumos-joyent/commit/4d86fb7f59410be72e467483b74e2eebff6052b2), 
> so I'm hoping they will work well.
>
> I haven't really spec'd specific RAM, although I'm partial to crucial, 
> it takes 1333MHz registered ECC DDR3. I think I want at least 32GB for 
> the storage server side, and I'm not sure yet how much more I'll add 
> in on top of that for virtualization.
>
> 8 of the 16 3.5" bays will be covered by the onboard LSI controller, I 
> need to get an additional PCIe controller with 2 x SFF-8087 connectors 
> to cover the rest. Seems there are a fair number of options, although 
> I'm not sure if there's a clear winner among them. Any favorites?
>
> Hard drives are the parts I'm least confident in 8-/. I'd like to go 
> 2TB or 3TB, that's cost prohibitive for near-line SAS, and pretty darn 
> pricy for "enterprise" SATA. I don't really want to go with desktop 
> class drives though.
>
> Is there any opinion yet on the new WD Red "NAS" drives? They're only 
> $170 for a 3TB drive, which is pretty cheap. On the plus side, they're 
> engineered for 7x24 operation, have a three year warranty, and are 
> supposed to be low power/low heat (both would be good; I installed a 
> 4.5kW solar power system a few years ago when I remodeled our house 
> and have been net negative power-wise since, but I anticipate that to 
> change when this beast starts running. I also set up a dedicated 
> wiring closet with a separate 8000 BTU wall air conditioner, but 
> still, less heat = less cooling = less power utilization). They 
> come out-of-the-box with 7 second TLER, plus the ability to tune that 
> however you'd like. On the downside, while WD doesn't specify it, they 
> evidently run at 5400rpm (where I suppose the low power low heat comes 
> from), and aren't exactly screamers (streaming isn't too bad, but 
> random IO leaves a bit to be desired).
>
> My mythtv vm will potentially be recording 4 HD ATSC streams 
> (originating from network connected HD homeruns), reading all 4 back 
> from disk at the same time (for commercial flagging) and potentially 
> reading a different two streams for playback on the two front ends I 
> currently have connected to TVs. Arguably worst case for an ATSC 
> transport stream is about 18Mbps, so it's not really that much. But 
> then all of the vm's will be doing their thing, plus whatever NFS/CIFS 
> clients are up to. Sizing for IO is black magic to me <sigh>, on the 
> one hand I want to maximize my storage for the cost, but on the other 
> I don't want to have recordings that skip and stutter and vm's that 
> lag and are unresponsive...
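The stream arithmetic above works out as follows (a quick sanity check using the stream counts and the 18 Mbps worst case quoted above):

```shell
# 4 HD recordings + 4 commercial-flagging read-backs + 2 playback streams
streams=$((4 + 4 + 2))
mbps_per_stream=18                 # worst-case ATSC transport stream
total_mbps=$((streams * mbps_per_stream))
total_mbytes=$((total_mbps / 8))   # megabits to megabytes per second
echo "${total_mbps} Mbit/s (~${total_mbytes} MB/s) aggregate"
```

Roughly 180 Mbit/s, or about 22 MB/s, of largely sequential traffic; the random IO from the VMs and NFS/CIFS clients is the harder part to size.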
>
> I also don't really have a good handle on what SSD's to go with. As I 
> mentioned, I'm thinking of getting two for rpool/l2arc and hooking 
> them up to the onboard SATA3 controller. If I can find ones that are 
> appropriate, I'd carve out a third partition on them for a mirrored 
> slog; otherwise I'd get a separate third one and stick it in a 3.5" 
> bay to be dedicated slog. I don't think I'd bother to mirror the slog 
> if it is on a separate SSD, I believe there are no longer any critical 
> failure modes from slog failure, worst-case being it fails when the 
> pool is off-line and you need to manually import it. Any suggestions 
> on good rpool/l2arc/slog SSD's, or rpool/l2arc SSD's with a different 
> model slog SSD would be greatly appreciated.
>
> Thanks much for reading so far :), I realize I've gone on for quite a 
> bit... Any comments/feedback/suggestions on compatibility or design 
> issues with what I've laid out would be very welcome. Thanks again...
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
>

What you are going for seems like quite an ambitious undertaking! 
Regarding near-line SAS, I don't really understand what that is all 
about. Looking at hard drive prices, a 2TB true SAS drive is maybe 
$10-20 more expensive than an equivalent SATA drive, so I don't really 
agree that SAS is "ridiculously expensive" compared to SATA, at least 
not on the hard drive side. With that in mind, it makes even less sense 
to go with near-line SAS.

As for the WD Red, I would be careful with EcoGreen drives and the 
other shoestring-budget drives out there, but that's my two cents. At 
the very least, make sure you have a good warranty policy for them, 
which I don't believe you will. Their performance is not consistent, 
and I don't know how they would behave with SSD caching in front of 
them. Given the otherwise high requirements of your system, such 
drives will be prone to give you a headache from time to time.
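On the TLER point raised in the original mail, the 7-second error recovery timeout can be inspected and tuned with smartmontools. The device path below is hypothetical, and on some drives the setting does not survive a power cycle:

```shell
# Show the current SCT Error Recovery Control (TLER) values
smartctl -l scterc /dev/rdsk/c1t0d0

# Set read and write recovery timeouts to 7.0 seconds
# (the arguments are in tenths of a second)
smartctl -l scterc,70,70 /dev/rdsk/c1t0d0
```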

If you need more HBAs, have you looked at the IBM ServeRAID M1015? If 
you are happy with 3 Gb/s, then the Intel SASUC8I is an option.
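For what it's worth, the M1015 is commonly cross-flashed to LSI 9211-8i IT firmware before being used with ZFS. A rough sketch of the often-described EFI-shell step follows; the firmware file name varies by release, and flashing the wrong image can brick the card:

```shell
# From the EFI shell, using sas2flash and the IT firmware image
# from LSI's 9211-8i package (file names are illustrative)
sas2flash.efi -o -f 2118it.bin   # write IT-mode firmware
sas2flash.efi -listall           # verify what is now on the card
```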

By "slog" I take it you mean a dedicated ZIL device. If I understand 
correctly, if the ZIL gets corrupted, the pool it belongs to can fall 
apart, so I would take that into consideration when dedicating a drive 
or a group of drives to the ZIL. In other words, it seems sensible to 
have particularly good redundancy for the ZIL.
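A mirrored log vdev is easy to set up, and on current pool versions a log vdev can also be removed outright, which limits the fallout from a failed slog. Device names here are hypothetical, and the vdev name used for removal comes from zpool status:

```shell
# Attach a mirrored slog to an existing pool
zpool add tank log mirror c0t2d0 c0t3d0

# Log vdevs are removable; use the name zpool status reports
zpool remove tank mirror-1
```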




