
May 06, 2015

Comments


Dwayne Lessner

Hi Chad

Congrats on the relaunch of your product. Data locality with Nutanix still allows you to rebuild from the remaining nodes, as the secondary copies are evenly distributed; the exception is block awareness. In a cluster of 16 nodes, 3 will not have data to rebuild; in a cluster of 64 nodes, 3 nodes couldn't be used to rebuild data. So I think that is fair in the overall rebuild times.

What is the use case for creating such a large failure domain, i.e. 1,000s of nodes? Large failure domains also mean you need more than 2 copies of data if you care about losing multiple nodes. How many copies of data were used in these performance tests? What CPU and RAM were used by the client? What was the working set? Was it in cache? There seems to be a lot of focus on performance for Tier 2 apps.

Thanks for the post.

DL

Tony Watson

I am a bit confused Chad.

April 17, 2015 - SalesAdvantage.

On April 13, 2015, EMC announced ScaleIO Splitter for RecoverPoint general availability. This announcement provides an optimized data protection solution for hyperconverged server SAN infrastructure by protecting ScaleIO systems with virtual RecoverPoint Appliances (vRPAs). ScaleIO customers can have the confidence that an integrated EMC solution will provide disaster recovery and business continuity for their software-defined ScaleIO deployment.

Chet Walters

I'd love to play with this, but: release date May 29, 2015.

Mark Burgess

Hi Chad,

Great stuff.

As the software is now "free" and you only pay for support, does this mean that a fully supported solution will be much cheaper than before?

Also, is the licence still based on raw capacity?

My only concern is that the product is all about scale (performance and capacity), but in pretty much every other dimension it cannot match a conventional array (e.g. double disk protection).

I did a comparison of VSAN and an array at http://blog.snsltd.co.uk/are-vmware-vsan-vvols-and-evorail-software-defined-storage-and-does-it-really-matter/ and I think it is fair to say ScaleIO has most of the same limitations as VSAN.

As always these things come down to use case and cost, but it is great to see that ScaleIO is now free to use.

Many thanks
Mark

BS

Is this free download also unlimited in use? So is it suitable for production use cases, or only for test purposes?

Most FEATURE RICH SDS product?? I must be missing something here...

ScaleIO doesn't do:

File,
Object,
Dedupe,
Compression,
Native Replication,

ScaleIO also does not offer:

Selectable copies for DP (2 only and 2 max),
Protection against multiple node failures,
Dynamic tiering,
Client side caching,
Hybrid disk pooling,
Distributed metadata,
Unlimited Heterogeneous OS support,

As I understand it, all ScaleIO does is pool a bunch of disks together and stripe the data over them; the additional features are snapshots, QoS, and limited RAM caching - and that's it, right?
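(For readers following along, here is a minimal sketch of what that pool-and-stripe model looks like. The 1MB chunk size matches what is discussed in this thread, but the placement logic and names are hypothetical, purely for illustration - not ScaleIO's actual algorithm.)

```python
# Illustrative only -- hypothetical placement logic, not ScaleIO code.
# Volumes are carved into 1 MB chunks; each chunk is mirrored onto two
# devices chosen from the whole pool, so I/O fans out everywhere.
import random

CHUNK_MB = 1

def place_chunks(volume_gb, devices, copies=2):
    """Map every 1 MB chunk of a volume to `copies` distinct devices.
    A real system would also force the copies onto different nodes."""
    n_chunks = volume_gb * 1024 // CHUNK_MB
    return {c: random.sample(devices, copies) for c in range(n_chunks)}

devices = [f"sds{n}/dev{d}" for n in range(8) for d in range(4)]
layout = place_chunks(volume_gb=1, devices=devices)
print(layout[0])  # e.g. ['sds3/dev1', 'sds6/dev0']
```

Because every volume's chunks land on every device, any single I/O stream - and any rebuild - fans out across the whole pool.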

So please elaborate on how this is as you say the Most FEATURE RICH SDS product.

Thank you

Nick

"Maybe we can “protect” the business by isolating it to specific workloads"

I'm really excited about everything ScaleIO can do, but right now EMC/VCE is "protecting" legacy business by repeatedly saying ScaleIO is for "tier 2" workloads under the guise of "feedback from customers". I just don't buy it. Which customers are saying "nah, instead of putting my tier 1 workloads on this system that is incredibly easy to manage and can perform and scale at ridiculous levels, I'll just put tier 1 on less performant and scalable solutions that are harder to manage"?

Remove the "tier 2" legacy-business-protecting lingo from all your documentation and then I will be VERY excited.

Leroy van Logchem

Just deployed three petabytes using ScaleIO. We are very pleased by the ease of use, especially since the availability of the loadable VIB in ESX. ScaleIO really can fill your 10Gb Ethernet interfaces with sub-1 ms latency IOPS. Freed from traditional silo complexity and scaling locks, the questions are now quite simple: "Are 2 x 10Gbit interfaces enough for my workload, or should we just add two more?" Our setup uses 'just 56 SDS servers' but can deliver way more than most of the client SDC servers can handle. We addressed the compression and DP wishes using zpools with mirror sets across two autonomous ScaleIO clusters; ZFS adds the well-known qualities of checksums, LZ4 compression and real-time snapshots. All data stored is "tier 1", and looking at the roadmap, more reasons to stay on this Type III technology keep being added.

To be fair, there are some limits and weaknesses - specifically, the MDM should be dynamically discovered when using the GUI, and it could use some historical monitoring graphs. Otherwise the Event notification is very clever, because you drill down from Protection Domain -> Storage Pools -> SDS into the raw block devices and finally the SDC health.
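To put rough numbers on the "are 2 x 10Gbit enough?" question, here is a back-of-the-envelope sketch. The ~80% efficiency factor and 8KB I/O size are assumptions for illustration, not Leroy's measurements:

```python
# Back-of-the-envelope check: how much can 2 x 10GbE carry?
# All numbers are illustrative assumptions, not benchmark results.
nics, gbit_per_nic, efficiency = 2, 10, 0.8   # ~80% usable after protocol overhead
io_size_kb = 8                                # typical transactional I/O size

usable_mb_s = nics * gbit_per_nic * 1000 / 8 * efficiency   # Gbit -> MB/s
max_iops = usable_mb_s * 1024 / io_size_kb
print(f"{usable_mb_s:.0f} MB/s usable, ~{max_iops:,.0f} IOPS at {io_size_kb} KB I/O")
# -> 2000 MB/s usable, ~256,000 IOPS -- if the workload needs more, add NICs.
```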

Re the post by BS: it does provide protection against multiple node failures by using Fault Sets. We actually use these because our servers are grouped four to a chassis, so every 2nd copy of each 1MB chunk gets directed to another chassis. Last weekend power was lost to 4 x 60TB SDSes and availability was not affected - the Rebuild and Rebalance algorithms handled the outage impressively.
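A minimal sketch of the Fault Set placement rule Leroy describes - four servers per chassis, with the second copy always forced into a different chassis (the names and structure are illustrative, not ScaleIO's implementation):

```python
# Fault-set-aware mirroring, sketched: the two copies of a chunk must
# live in different fault sets (chassis), so losing one whole chassis
# never takes out both copies. Illustrative only.
import random

# 4 chassis x 4 servers, matching Leroy's grouping
fault_sets = {f"chassis{c}": [f"sds{c}-{s}" for s in range(4)] for c in range(4)}

def mirror_pair():
    """Pick primary and secondary nodes from two different fault sets."""
    fs1, fs2 = random.sample(list(fault_sets), 2)
    return random.choice(fault_sets[fs1]), random.choice(fault_sets[fs2])

primary, secondary = mirror_pair()
print(primary, secondary)  # always in different chassis
```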

CMD

When will ScaleIO be able to do basic storage operations like migrating a LUN from one storage pool to another without having to do host-based migrations? This is a feature that other EMC storage arrays have had for a long time, and it seems to be lacking in ScaleIO.

BS

So as long as you are able to control which servers fail, and within which fault set, you are safe? Sounds like great protection against multiple node failures... An easier way would be to write 3 copies instead of 2, but that would increase the required capacity too much if you don't have compression and dedupe.

Sounds to me like it's time for a real feature-rich distributed storage platform...

Chad Sakac

@Dwayne - thank you!

Respectfully, it's much more than a relaunch: it's a ScaleIO appliance (VxRack), it's an update on the next release, and it's the software made freely and frictionlessly available for download and use (with community vs. EMC support, of course).

On your first comment - I want to be clear - the very, very simple data distribution (and lookup) model of ScaleIO makes rebuilds extremely, extremely fast. I would encourage you to download and try it yourself (use AWS EC2 if you don't have sufficient hardware), and compare to anything else - perhaps the Nutanix community edition. It's not simply a question of "where is the data", but "how much work needs to be done to map/discover/copy" and "how much parallelism is there in the rebuild process".
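To illustrate the parallelism point, here is a toy model: every surviving node re-mirrors a slice of the failed node's data, so aggregate rebuild bandwidth grows with cluster size. The per-node numbers below are assumptions for illustration, not measured results from either product:

```python
# Toy model of why many-to-many rebuild gets faster as clusters grow.
# tb_per_node and node_mb_s are illustrative assumptions only.
def rebuild_hours(nodes, tb_per_node=10, node_mb_s=500):
    """Every surviving node re-mirrors a slice of the failed node's data."""
    data_mb = tb_per_node * 1024 * 1024
    aggregate_mb_s = (nodes - 1) * node_mb_s   # many-to-many copy
    return data_mb / aggregate_mb_s / 3600

for n in (4, 16, 64):
    print(f"{n:>3} nodes: ~{rebuild_hours(n):.2f} h to re-protect 10 TB")
# ->  4 nodes: ~1.94 h, 16 nodes: ~0.39 h, 64 nodes: ~0.09 h
```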

On your second comment - I agree, a 1000 node cluster would be a crazy failure domain. In ScaleIO deployments, generally, the customers create protection domains within their clusters (failure domain limiting), as well as logical partitioning for tenants and their storage pools.

The beauty of making this stuff available (and I **believe** Nutanix has done something similar) is that you don't have to listen to me :-) Download, build a 64-node cluster, and try it for yourself!

@Chet - agreed. I pushed hard (and everyone did) to make the bits available at the moment of the announcement at EMC World. They will be there by the end of the month, with no barrier. The delay was an update (v1.32) that further improved the ease-of-install over the current version (v1.31). We've been putting v1.32 through its paces among the EMC SEs, with very good results. Please download at the end of the month, and please let me know what you find! (good/bad/ugly!)

@Mark - thanks for the feedback. The freely available version has no capacity, feature, or time limits. It comes with community-only support. If you need EMC Support, you license the software, and pay for maintenance and support.

I would encourage people to evaluate all their SDS options - there are material differences in behaviors, and economics (which vary customer to customer).

@BS - thanks for your comment!

Note that I am calling out transactional use cases. SDS stacks layered on top of object stacks perform very poorly for transactional use cases. Don't trust me? Fine - give it a shot.

Likewise, there are NAS SDS stacks, but none that have the performance, scale, and QoS behaviors that ScaleIO has. Most of the commercial NAS SDS stacks aren't scale-out NAS stacks either. Some that are (Gluster as an example) have extremely poor transactional behaviors. This doesn't mean they are BAD, but they are bad for VMs, bad for relational databases - and hence are less "disruptive" to the mass of the storage ecosystem.

It's interesting to note that right after I posted this, a customer (see Leroy, below your comment) who is deploying several PB of ScaleIO as a low-level hyper-transactional system commented that they are using ZFS on top where they needed NAS and some of the other things you point out. The voice of a customer is, shall we say, more powerful than BS (IMO). Thanks for the comment!

Re your subsequent comment re "controlling server failure" - I suspect you might (?) be coming from a point of view of a distributed object storage model. These invariably have much richer geo-distribution models, and multiple copy writes - and many, many other attributes. I AGREE :-) That's why ECS Software exists - and competes in the market for SDS object stacks. What I have consistently noted is that putting transactional storage ON TOP of object stacks means you get the worst of both worlds rather than the best of both worlds. IMO (and I'm sure you would disagree) - there's no need.


VL

Can you confirm if ScaleIO handles: Dedupe, Compression, Native Replication, Dynamic tiering?

Michael

Any idea if 2.0 will support auto-tiering, using Storage and Performance disks in the same storage pool but with the intelligence to place the hot/cold data as required?

Elias

"ScaleIO is available for frictionless and free unlimited download and use."

User guide states different: "Using ScaleIO in a production environment requires a license."

For me, when someone says unlimited use, it means unlimited - not limited to test clusters.


So what's the case? Since this blog doesn't represent EMC, I guess the user guide is correct.

Chris

ScaleIO does not do Dedupe, Compression, Replication or Tiering.

Chad Sakac

All - thanks for the comments!

@VL @Michael @Chris - ScaleIO doesn't do Dedupe/Compression/Dynamic tiering. It's not uncommon for people to implement filesystems on top of ScaleIO to provide additional data services. Think of ScaleIO as doing one thing very, very well - a very horizontally scalable transactional engine - squeezing every ounce of performance, at the lowest latency, out of a collection of industry-standard systems.

The whole architecture (for now :-) is what I think can be characterized as "performance optimized" vs. "capacity optimized" (or a balance of the two).

@Elias - it is available for frictionless and free unlimited use. The clause in the user guide is there to represent an important idea. The download is there with only community support - PERIOD. If you want EMC to take your support calls, you have to buy and license the product. I think that's a pretty common and reasonable position - don't you?
