It’s designed to operate at massive scale. It’s designed to be geo-distributed. It’s designed to be awesome :-)
We’ve been at the Object Storage game for longer than anyone else – Atmos was our first offering in the market, way back in November 2008, and Centera came even further back in 2002 (though calling that an object store is a stretch).
With Atmos we learned how important thinking of scale was in all things.
At scale, you simply can’t have the meta-data centralized – it has to be distributed. At scale, you have to tackle the question of geo-distribution. At scale, you can’t think in terms of “protocol gateways” – because the gateways become bottlenecks unless they are massively distributed themselves. At scale, you can’t “bolt on” multi-tenancy – it has to be built in at every level – otherwise you end up with weirdness where things like encryption policy or metering aren’t granular enough.
… And, the economics, the “Total Cost to Serve” (TCS), has to be front and center at all times.
ECS takes all the learning from Atmos, all of those experiences, and couples them with the experience the team brought to bear building another of the world’s hyper-scale public blob/object stores.
- ECS scales linearly – there are NO specialized node roles.
- ECS has rich fault tolerance – with complete self-healing across disk, node, rack, and site failures.
- ECS has flexible load balancing and object call distribution – in the hardware, in the software, or with client-side load balancing support.
- ECS uses a massively scaled-out and distributed key-value store for meta-data, and once again, it’s “embedded” not bolted on.
- ECS has a unique geo-protection model – better able to deal with temporary site partition, and with really cool active-active models, including local caching. There is a huge reduction in inter-site replication traffic versus other approaches.
- ECS can be used as an archive – one that is compliant with SEC 17a-4(f) and with Centera CE+ support (including platform lockdown and privileged deletion).
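The client-side load balancing option above can be as simple as rotating requests across node endpoints. A minimal sketch – the class name and node addresses are hypothetical illustrations, not part of any ECS SDK:

```python
import itertools

class RoundRobinEndpoints:
    """Rotate object requests across a pool of storage nodes."""

    def __init__(self, endpoints):
        # itertools.cycle yields the endpoints forever, wrapping around
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        return next(self._cycle)

# Hypothetical node addresses -- not real ECS defaults
nodes = ["10.0.0.1:9020", "10.0.0.2:9020", "10.0.0.3:9020"]
lb = RoundRobinEndpoints(nodes)
picked = [lb.next_endpoint() for _ in range(4)]
# the fourth request wraps back to the first node
```

Hardware and software load balancers do the same job one hop earlier; the point is that no single node is a required front door.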
And a neat little internal factoid: ECS was the first commercial product to use a container-based, massively scalable distributed storage stack – in fact, using Docker along with a cool container management stack internally to abstract and scale all these functions. Cool :-)
With this release, the team has taken ECS to a new level. We already have well north of 1.3 EB of ECS deployed. The use cases are fascinating – as archives, in support of NFV projects like this one at Verizon. Frankly, I think this space of object storage is just beginning. We were clearly early to the game (before the enterprise market understood object storage), but it’s getting exciting!
What’s new in ECS 2.2
- NFSv3 native support – in addition to the existing S3, Swift, Atmos, CAS, and HDFS protocol support (with no gateways!). Customers can already use ECS with things like Cloud Array and other object gateways to present block and NAS storage – but this is something new. This means customers can interface directly via NFS, with a global namespace and global locking. It’s not designed to be a substitute for mature enterprise NAS platforms (think Isilon and VNX as examples) with a whole set of enterprise data services. But it’s a simple way to accelerate your use of object storage – because you can directly and natively use NFS versus needing any client-side object code.
- Encryption at rest – simple, easy, awesome. Lots of customers have been asking for this, and the implementation highlights the upside of treating multi-tenancy as a core design tenet from the start: we can control encryption at both the namespace and the bucket level. Key management and keys are isolated to tenants. And – nicely – no application/client-side changes needed. Nice!
- Deeper OpenStack support. We’ve always supported Swift as a protocol, but now ECS is a drop-in alternative to the native Swift implementation or Ceph for people who want something better. ECS registers as a service with Keystone, service policies are enforced, and there is full support for the Keystone v3 API.
- Full integration with EMC SRM. Native metering in ECS 2.2 took a huge leap forward – but many customers want even more, and want that reporting and visibility to include their full enterprise “persistence domain”.
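The encryption-at-rest bullet above hinges on keys being isolated per tenant. A minimal sketch of the general idea, where each bucket’s data key is derived from its namespace’s master key – the derivation scheme, key values, and function name here are illustrative assumptions, not ECS internals:

```python
import hmac
import hashlib

def derive_bucket_key(namespace_master_key: bytes, bucket: str) -> bytes:
    # Derive a per-bucket data-encryption key from the tenant's master key,
    # so keys never cross namespace (tenant) boundaries.
    return hmac.new(namespace_master_key, bucket.encode(), hashlib.sha256).digest()

# Illustrative master keys; in practice these would come from per-tenant key management
tenant_a = b"master-key-namespace-A"
tenant_b = b"master-key-namespace-B"

key_a = derive_bucket_key(tenant_a, "invoices")
key_b = derive_bucket_key(tenant_b, "invoices")
# Same bucket name, different tenants -> different keys
```

Because the derivation happens server-side, clients keep writing plain S3/Swift/NFS calls – which is the “no application changes needed” part.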
Best part? Like FREAKIN’ EVERYTHING WE DO – ECS is available as a free, frictionless download for customers, with the bits right here (btw – the current bits as I post are 2.1; 2.2 will show up shortly). Our point of view is that SDS stacks (VSAN, ScaleIO, Isilon SD Edge, ECS) are consumed by customers in one of three ways:
- Software only. This is the most flexible approach – but the most do-it-yourself (DIY). The customer then needs to design, build, and support the stack.
- Software + validated hardware. This offers an incremental step – where we bundle an industry standard server that we have validated. Still DIY.
- Appliance – a fully turnkey stack, ready to go.
We don’t offer all products in all 3 forms (sometimes #2 doesn’t make sense from a customer PoV) – ECS 2.2 is available as software, and as an appliance.
So – here’s the bold claim, the challenge for the market, and for our competitors. We are willing to commit that ECS is the industry’s best, most compelling object store. It scales to infinity and beyond – but the core question with object storage is: “what is the Total Cost to Serve?” This figure must be all-inclusive – the sum of upfront capital, one-time operational expense (maintenance, deployment), ongoing operational costs (infrastructure like networking), environmentals (power, space, cooling), and administration. It manifests as “N ¢/GB/month”.
We think ECS has a Total Cost to Serve that can be 65% lower than AWS S3 Infrequent Access, in some cases even lower than AWS Glacier, and certainly lower than Ceph or Swiftstack.
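As a worked example of how that ¢/GB/month figure rolls up – every dollar amount below is made up for illustration, not EMC or AWS pricing:

```python
def tcs_cents_per_gb_month(capex_usd, one_time_opex_usd,
                           monthly_opex_usd, usable_gb,
                           amortization_months):
    # Spread capital + one-time costs over the amortization window,
    # add recurring monthly costs, then express per-GB in cents.
    monthly_usd = ((capex_usd + one_time_opex_usd) / amortization_months
                   + monthly_opex_usd)
    return monthly_usd * 100 / usable_gb

# Illustrative inputs: $1.5M capex, $100k deployment, $15k/month
# ongoing costs, 5 PB usable, amortized over 48 months
tcs = tcs_cents_per_gb_month(1_500_000, 100_000, 15_000, 5_000_000, 48)
# roughly 0.97 cents/GB/month with these numbers
```

Plug in your own quote, utilization, and amortization window – that’s the only honest way to compare against a public-cloud list price.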
For VCE readers wondering – how does ECS fit into our converged infrastructure and platform strategy, here’s the answer – SIMPLE:
VxRack comes in different system-level designs – because hyper-converged is an architecture, not a “thing”.
- Want a heterogeneous, hyper-converged, rack-scale system for workloads that need infrastructure resilience? Want a system that is built from “modular, disaggregated, composable” components – but designed and integrated as a system? Pick VxRack Flex.
- Love VMware and want to go hard down the “single abstraction for all workloads” path? Running workloads that need infrastructure resilience? Want a system that is built from “modular, disaggregated, composable” components – but designed and integrated as a system? Pick VxRack SDDC.
- Working in the domain of these strange new Cloud Native Apps that follow the “twelve factor app” principles, that have low to no infrastructure resilience dependency? Or perhaps your workload is a next-generation analytics/data fabric? Pick VxRack Neutrino.
In each case, common experience, common integration – but different SDS and system level design. For example (and this is obvious, but worth stating):
- VxRack Flex is heterogeneous. VxRack Flex focuses on transactional workloads, so it delivers a lot of IOPS at low latency. ScaleIO scales horizontally across tons of racks, and is heterogeneous, so the network fabric design needs to contemplate more east-west SDS traffic across racks.
- VxRack SDDC is homogeneous (super vSphere-integrated), but like Flex, it focuses on transactional workloads, so it delivers a lot of IOPS at low latency. VSAN scales at the vSphere cluster level, so there’s less inter-rack traffic – and VCE thinks about the network design a little differently as a result.
- VxRack Neutrino focuses on cloud native apps and analytics, so needs only a small amount of transactional SDS (ScaleIO), but needs a ton of Object/HDFS storage – which is where ECS comes in.
In each of these cases, VCE engineers the systems expressly for each customer, and supports the whole system.
Now, SDDC and Neutrino are not GA yet, but we’re furiously working on them.
Back to ECS… My challenge to YOU dear reader:
Are you using AWS S3? Interested in saving a truckload of money? Need something that complies with the S3 protocol – but can in fact do much, much more? Challenge your EMC team to beat AWS by 65% on price alone.
Are you an ECS customer? How’s it going? Feedback always welcome!