[OpenIndiana-discuss] Recommendations for fast storage (OpenIndiana-discuss Digest, Vol 33, Issue 20)

Ong Yu-Phing ong.yu.phing at group.ong-ong.com
Mon Apr 15 04:32:37 UTC 2013


A heads up that 10-12TB of data means you'd need roughly 11.1-13.3TB 
usable, assuming you need to keep used storage < 90% of total usable 
storage (or is that rule of thumb old news now?).

So, using Saso's suggested config of Intel DC S3700s in 3-disk raidz1 
vdevs, you'd need 21x 800GB drives (21/3 vdevs x 2 data disks x 800GB x 
0.9 = 10.08TB) to get 10TB, or 27x to get ~12.96TB usable, excluding 
root/cache etc.  At ~$2K a drive that's $50K+ for SSDs alone, leaving 
you only about $10K for the server platform, which might not be enough 
to get 0.5TB of RAM etc. (unless you can get a bulk discount on the 
Intel DC S3700s!).
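For the record, the sizing arithmetic above can be sketched as follows 
(assumptions from this thread: 800GB drives at ~$2K each, 3-disk raidz1 
vdevs, and a 90% fill ceiling):

```python
# Back-of-envelope raidz1 sizing, using figures quoted in this thread.
DRIVE_TB = 0.8          # Intel DC S3700 800GB
PRICE_PER_DRIVE = 2000  # approximate street price mentioned above

def usable_tb(n_drives, disks_per_vdev=3, parity=1, fill=0.9):
    """Usable capacity: data disks per vdev times drive size times fill cap."""
    vdevs = n_drives // disks_per_vdev
    data_disks = vdevs * (disks_per_vdev - parity)
    return data_disks * DRIVE_TB * fill

for n in (21, 27):
    print(f"{n} drives: {usable_tb(n):.2f} TB usable, ${n * PRICE_PER_DRIVE:,} in SSDs")
```

which gives 10.08TB for 21 drives and 12.96TB for 27, before root, 
slog, or spares.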

A working set of ~50% is quite large; when you say data analysis I'd 
assume some sort of OLTP or real-time BI situation, but do you know the 
nature of your processing, i.e. is it latency-bound or bandwidth-bound?  
I ask because I think 10GbE delivers better overall bandwidth, but 
4GB/s (QDR) InfiniBand delivers better latency.
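To put rough numbers on the bandwidth side of that trade-off, here's a 
quick transfer-time calculation for the ~6TB working set (assumption: 
sustained wire-speed payload; real iSCSI or IPoIB throughput will be 
noticeably lower):

```python
# Time to stream a data set over a link, assuming ideal sustained rates.
def hours_to_stream(tb, gbit_per_s):
    bytes_total = tb * 1e12            # decimal TB
    bytes_per_s = gbit_per_s / 8 * 1e9
    return bytes_total / bytes_per_s / 3600

# 10GbE vs QDR IB (40Gb/s signalling, ~32Gb/s data rate)
for name, gbps in [("10GbE", 10.0), ("QDR IB", 32.0)]:
    print(f"{name}: {hours_to_stream(6, gbps):.2f} h to move 6 TB")
```

So even at wire speed a single 10GbE link needs over an hour to move 
the working set, which is worth knowing if the pool has to be re-warmed.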

Ten years ago I worked with 30+TB data sets which were preloaded into 
an Oracle database, with data structures highly optimized for the types 
of reads the applications required (a 2-3 day window for complex 
analysis of monthly data).  No SSDs or other fancy stuff in those days.  
But if your data is live/realtime and constantly streaming in, then the 
work profile can be dramatically different.

On 15/04/2013 07:17, Sašo Kiselkov wrote:
> On 04/14/2013 05:15 PM, Wim van den Berge wrote:
>> Hello,
>>
>> We have been running OpenIndiana (and its various predecessors) as storage
>> servers in production for the last couple of years. Over that time the
>> majority of our storage infrastructure has been moved to OpenIndiana to the
>> point where we currently serve (iSCSI, NFS and CIFS) about 1.2PB from 10+
>> servers in three datacenters. All of these systems are pretty much the
>> same, large pool of disks, SSD for root, ZIL and L2ARC, 64-128GB RAM,
>> multiple 10Gb uplinks. All of these work like a charm.
>>
>> However the next system is going to be a little different. It needs to be
>> the absolute fastest iSCSI target we can create/afford. We'll need about
>> 10-12TB of capacity and the working set will be 5-6TB and IO over time is
>> 90% reads and 10% writes using 32K blocks but this is a data analysis
>> scenario so all the writes are upfront. Contrary to previous installs, money
>> is a secondary (but not unimportant) issue for this one. I'd like to stick
>> with a SuperMicro platform and we've been thinking of trying the new Intel
>> S3700 800GB SSD's which seem to run about $2K. Ideally I'd like to keep
>> system cost below $60K.
>>
>> This is new ground for us. Before this one, the game has always been
>> primarily about capacity/data integrity and anything we designed based on
>> ZFS/Open Solaris has always more than delivered in the performance arena.
>> This time we're looking to fill up the dedicated 10Gbe connections to each
>> of the four to eight processing nodes as much as possible. The processing
>> nodes have been designed that they will consume whatever storage bandwidth
>> they can get.
>>
>> Any ideas/thoughts/recommendations/caveats would be much appreciated.
> Hi Wim,
>
> Interesting project. You should definitely look at all-SSD pools here.
> With the 800GB DC S3700 running in 3-drive raidz1's you're looking at
> approximately $34k CAPEX (for the 10TB capacity point) just for the
> SSDs. That leaves you ~$25k you can spend on the rest of the box, which
> is *a lot*. Be sure to put lots of RAM (512GB+) into the box.
>
> Also consider ditching 10GE and go straight to IB. A dual-port QDR card
> can be had nowadays for about $1k (SuperMicro even makes motherboards
> with QDR-IB on-board) and a 36-port Mellanox QDR switch can be had for
> about $8k (this integrates the IB subnet manager, so this is all you
> need to set up an IB network):
> http://www.colfaxdirect.com/store/pc/viewPrd.asp?idcategory=7&idproduct=158
>
> Cheers,
> --
> Saso