First, it's useful to understand where they are not related. Atmos is NOT a vCloud.
vCloud is an initiative. vCloud defines a set of standards for compute objects (VMs using the OVF standard), APIs (using REST/SOAP) to manage VMs running in a cloud compute model, APIs to communicate SLA requirements to the underlying layers (vApp - which applies similar logic to internal clouds), structures to get VMs in and out of the cloud, and the security requirements to make that model work. A lot of really, really cool stuff. Its primary benefit is that it lets the same objects that are your primary compute management model (virtual machines) run transparently and natively in a hosted model - which bypasses the barriers that stop certain workloads from being SaaS-able (or at least make them really hard to deliver that way). It also lets the distributed, dynamic datacenter expand past the walls of the datacenter itself (for example, for spot or seasonal workloads).
Atmos is a product. Atmos' design goal is to be the best design for storing information in the cloud use case - cloud-optimized storage (COS). That use case, by definition, carries several design implications. It implies:
- many-petabyte (even exabyte) scaling and a cost model to match (which in turn demanded a scale-out storage model using low-cost storage).
- object counts in the billions
- multi-tenancy
- global namespace
- a wide variety of access methods (REST/SOAP and more traditional CIFS/NFS/IFS - a minimal sketch of the REST flavor follows this list)
- most importantly - an object-based management and information distribution model.
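To make the access-method and object-management bullets concrete, here's a minimal, hypothetical sketch of what REST-style object access might look like. The endpoint, header names, and tenant token are illustrative assumptions for this post, not the actual Atmos API:

```python
# Hypothetical sketch of REST-style object storage access.
# The endpoint, header names, and token below are illustrative assumptions,
# NOT the actual Atmos API.
import requests

BASE = "https://cloud.example.com/rest/objects"  # hypothetical service endpoint
WRITE_HEADERS = {
    "x-tenant-token": "demo-tenant-credential",      # multi-tenancy: the caller acts within one tenant
    "x-object-meta-title": "fallout3-content-pack",  # user metadata travels with the object
    "x-object-meta-geo-policy": "replicate-us-eu",   # distribution policy expressed as metadata
}

# Create an object: the payload is opaque bytes; the service hands back an object ID.
with open("content.pak", "rb") as f:
    resp = requests.post(BASE, data=f, headers=WRITE_HEADERS)
resp.raise_for_status()
object_id = resp.headers.get("Location", "").rsplit("/", 1)[-1]

# Read it back from anywhere on the internet - by object ID, not by filesystem path.
obj = requests.get(f"{BASE}/{object_id}",
                   headers={"x-tenant-token": "demo-tenant-credential"})
obj.raise_for_status()
print(len(obj.content), "bytes retrieved for object", object_id)
```

The point of the sketch isn't the syntax - it's that the client addresses an object (with its metadata and policy) from anywhere, rather than mounting a filesystem or being zoned into a SAN.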
We've been working hard on this for a while - and in fact have been quietly shipping it since June. It's definitely something new, so we took our traditionally conservative model of product deployment (internally we run a "phased product release" process for all major products) and applied it to this innovative new idea - which is why you might have heard of "Maui" before.
Could Atmos be used for internal vClouds (i.e. customers deploying their own VMware Infrastructure with a vCloud-inspired access and management model)? Sure. But customers need to understand that its design goal is internet scale and internet-connected access to information.
So - where is the intersection? The intersection is a shared long-term view and shared goals. Both EMC and VMware view this cloud model - both for computing and for storing information - as a new, emerging force in IT. VMware is working to make a standard compute container model that makes clouds applicable to any workload - unchanged. EMC is trying to make storing information for cloud use cases simpler, and to lower the bar for new entrants into this market. We think Atmos is the ideal platform for storing the information accessed by cloud compute layers - including, we're confident, the ones partners plan to build on the vCloud model. But, of course, when you have two large vendors like EMC and VMware working together, you can expect to hear more from us on this topic soon enough.
Read on if you want to understand Atmos better....
This is what an Atmos physical deployment looks like (there are various configurations with varying processor/storage mixes). Connectivity comes in GbE or 10GbE flavors.
While the hardware is cool - I particularly like the elegant, dense drive configuration (which maintains easy accessibility and a stable rack layout) - the bigger deal is the software.
By scaling core elements independently in this modular way (for example, the storage server elements separately from the metadata server code), we built a very, very, VERY scalable model.
Now, it's predictable that some will claim we're trying to create a new acronym for our own sake to join SAN, NAS, and Content Addressable Storage (CAS).
But as you read those things, put on an engineering hat, and consider this:
- Imagine trying to build a SAN design that scales to multiple petabytes.
- Could it be done via a scale-out model like XIV and others (and, while it's not the design goal of EqualLogic and LeftHand Networks - which are designed for more traditional SAN use cases - they COULD be used for this model)?
- Could a SAN-based design hit the needed price point using cheap drives? Possibly.
- BUT if the unit of management is a LUN, how does that work? Do you replicate LUNs for geographic distribution, and how does that work when the thing you need to store is my Flickr photos? Do I get my own LUN? Yikes - so the management model makes SANs a non-starter.
- Ok, so a SAN model is out, except as a model where you provide cheap storage and people build their own object-based model on top of that. Hmm - that's clearly not the answer (and that's the conclusion we came to).
- Interesting question (at least to me): does this mean that scale-out is the right design for SANs? In general, in our view, not intrinsically. SAN use cases are designed around storing structured data with very predictable availability, and performance characterized by predictability and low latency.
- So - the customer has to build their own software model to manage the data objects on the cheap, but scalable SAN.
- Now - imagine doing the same thing with clustered NAS....
- It IS closer - in the sense that files are closer to the model of what you need.
- NAS is a more legitimate access method for this use case than SAN (which is totally not applicable) in the sense that a client can be anywhere, and doesn't need to be hard-provisioned. But... think about how you access YouTube videos - it's by the video (object) not by some filesystem. Hmm - so, you need "access from anywhere", with an object-access mechanism.
- Can you create a NAS device at that scale? Yes (but it's very hard - as our respected competitors are finding), and the price point is possible.
- but you're left with the whole challenge of multi-tenancy (you would have to couple a NetApp MultiStore or EMC Virtual Data Mover model into the clustered NAS use case)
- The larger challenge is one of object-based distribution and retention. An example helps here. Imagine a customer (not of Atmos directly, but a true end customer of a service provider using Atmos) like a game developer building the next Steam. Aside: I totally dig Steam - it's how I got my Fallout 3 the day it came out without leaving my house. The developer doesn't know how popular a game will be. The developer (or distributor) has a ton of titles - not just one. Periodically, some spike, lull, and then come back. Some titles become strangely popular in unexpected geos (look at South Korea and the MMORPGs they play relative to WoW - they are so different it's fascinating). So managing the distribution of the content gets really, really hard. And the content isn't a file per se but a bunch of related files - an object. Could you make the filesystem the object (it is, after all, a set of related files)? Sure, but MAN, the scaling and management model just tanks if you do that. So the unit of distribution and control is the object, not the file or the filesystem.
- So - the customer has to build their own software model to manage the data objects on the cheap, but scalable clustered NAS.
- What about CAS? Well CAS hits the mark in a way some of the others don't by starting at an object-level, but....
- Can you create a CAS device with the scale-out model and cost point? Yes - in fact, the Centera RAIN model is sorta like Atmos. But the processor/storage ratios get wacky fast. CAS is all about compliance, long-term retention, and making sure that incredibly stringent information policies are enforced. The processor/drive ratio for CAS is about 1:4; for COS, it varies between 1:15 and 1:60. This blows out the cost model.
- The CAS management model, being object-based, scales from a management standpoint the same way COS needs to.
- CAS replicates, and at the object level, but for protecting the data - not for distributing it - so this would require someone to build their own app on top (using something like XAM) to access and distribute the data.
- The metadata requirements are totally different. Metadata for COS needs to be super rich and expandable to whatever the customer needs (think of the Steam example). For CAS, the metadata is relatively primitive (though rich compared to SAN and NAS) - i.e. when was this created, how long do I retain it, and what's the deletion policy? (A tiny sketch of this difference follows this list.)
- So - the customer has to build their own software model to manage the data objects on the not-so-cheap CAS.
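Since the metadata contrast is the crux of the CAS-vs-COS point (and of the Steam example above), here's a tiny, purely hypothetical sketch of the difference - the field names are made up for illustration, not any product's actual schema:

```python
# Purely hypothetical illustration of the metadata gap - made-up field names,
# not any product's actual schema.

# COS-style metadata: rich and extensible to whatever the customer
# (here, a game distributor) needs; it can drive distribution decisions.
cos_metadata = {
    "title": "next-big-mmorpg",
    "publisher": "example-studios",
    "release-wave": "2008-q4",
    "popular-geos": ["kr", "us", "eu"],   # where demand is spiking right now
    "replica-count": 3,
    "serve-from-edge": True,
}

# CAS-style metadata: relatively primitive - creation, retention, deletion policy.
cas_metadata = {
    "created": "2008-11-10T09:00:00Z",
    "retention-days": 2555,               # roughly seven years
    "deletion-policy": "shred-after-retention",
}

def replica_targets(meta):
    """Toy policy: place one copy per popular geo, up to the requested replica count."""
    geos = meta.get("popular-geos", ["us"])
    return [f"{g}-datacenter" for g in geos][: meta.get("replica-count", 1)]

print(replica_targets(cos_metadata))  # ['kr-datacenter', 'us-datacenter', 'eu-datacenter']
```

The COS object carries enough context to answer "where should copies of this live right now?" on its own; the CAS object only carries enough to answer "how long must this be kept, and how does it die?"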
Look at the three bold bullets above, and you might see the same thing I see - hitting the scaling point and the price point is the EASY (aka lower-value) part from an engineering and technology standpoint. The big part of building an Amazon or Google storage model is in the software, not the storage hardware itself. The other mainstream "Web 2.0 storage" players in the market have created the equivalent of the "storage server" part of the Atmos architectural design. Atmos' primary competitor is the desire of new players who want to build the next Amazon or Google to build their own storage stack (not to be underestimated - coming from a startup, I know the "we can do anything" thinking that makes them fun :-)
That's the challenge we've decided to try tackling - lowering the barrier to entry for creating an Amazon or Google model. That in turn represents a market opportunity for EMC - potentially a LARGE ONE.
There are tons of other use cases - in fact, I have a pressing one in front of me right now. Internally, our systems engineering field folks use VM appliances for everything. We're a Microsoft partner, and under our MSPP agreement we have Exchange, SQL Server, SharePoint, etc. VMs for demo use everywhere. Likewise, every EMC product that is software (from ECC to Smarts, NetWorker to Avamar, Documentum to EmailXtender), and even some "hardware" products (Celerra and others), exists as a VM. I travel around with a USB HDD holding 200GB of such VMs, and I have a global team and global partners. I tried FURIOUSLY to find ways to solve this "VM distribution" problem. The VMs are all large, so a couple of central FTP sites won't cut it. NAS with automated replication is better (ala EMC Celerra, NetApp, or Windows DFS-R), but there were a couple of problems: 1) access for partners and VMware (i.e. I wanted it "kinda" open so we could share), and 2) the distribution model I needed wasn't "replicate everything, everywhere" (for example, I want EMC's secret next-gen products to replicate only to internal folks).
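To show the kind of selective, object-level distribution I'm after (as opposed to "replicate everything, everywhere"), here's a toy sketch - the audience tags, site names, and policy logic are all hypothetical, not an actual Atmos policy language:

```python
# Toy sketch of metadata-driven VM distribution - audience tags, site names,
# and policy logic are hypothetical, not an actual Atmos policy language.
vm_appliances = [
    {"name": "exchange-demo",        "audience": "partners", "size_gb": 12},
    {"name": "sharepoint-demo",      "audience": "partners", "size_gb": 18},
    {"name": "nextgen-secret-build", "audience": "internal", "size_gb": 25},
]

sites = {
    "internal": ["emc-site-us", "emc-site-eu", "emc-site-apj"],
    "partners": ["emc-site-us", "emc-site-eu", "emc-site-apj",
                 "partner-portal-us", "partner-portal-eu"],
}

# Each VM (object) carries its own distribution policy via its metadata,
# so "secret" builds never leave internal sites while demo VMs reach partners.
for vm in vm_appliances:
    targets = sites[vm["audience"]]
    print(f"{vm['name']}: replicate to {', '.join(targets)}")
```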
Could we be wrong? Sure. We know the technology works (an advantage of the last 5 months of production use). In the end, the market determines the winners and losers - but it's fun to see this newest baby hit the stage after being under wraps (not very effectively) for so long - and we'll see soon enough how people react. Atmos is shipping.
I'd like to link to this particular blog from the SNIA XAM Public Developers Portal
http://groups.google.com/group/xam-developers-group
Posted by: Christina Casten | November 11, 2008 at 11:26 AM
Atmos sounds like a CDN, so COS is not really new...?
http://www.elasticvapor.com/2008/11/defining-cloud-optimized-storage.html
Would you agree?
What is the use case for Atmos in the VMware context?
Obviously it's for content delivery of ISOs, templates, and VMs at a global scale in whatever format you choose (OVF, VMDK, VHD, AMI, etc.) in this new cloud-enabled world.
I do see a future where you are able to pull down a mobile VM from the cloud via Web APIs (REST/SOAP) on your mobile device, or access the content you have stored in the cloud from anywhere at any time. Mobile devices, VMs, and apps will change and become obsolete, but your content will not (home movies, pictures, music, personal docs, etc.), and this is what you want to keep secure and available. This is where I see the Atmos CDN. :P
Posted by: Terry | November 11, 2008 at 11:53 AM
Virtual Geek:
There are a number of things that I'd like to explore here re: Atmos but let me start with the COS use case that implies a definition of COS. You list some COS attributes, but could some others also be included? Such as:
Security (perhaps "multi-tenancy" is really "secure multi-tenancy"?)
Object versioning
Object persistence
Scale to billions of users (as opposed to objects)
Local data caching
John Webster
Illuminata
Posted by: John Webster | December 04, 2008 at 10:28 AM
hello
i want to know about VMware DOS device USB
help me
Hossein
Posted by: Hossein | May 07, 2009 at 12:46 AM
Small quibble: CAS and XAM both allow unlimited metadata. In fact, Centera allows more metadata than Atmos (which I believe has a limit of about 256 tags); Centera is limited to 100 MB of metadata. The metadata mentioned above is part and parcel of the object model but is a subset of the completely extensible metadata model.
BTW: XAM would make an exciting access method for Atmos. A prototype of this was developed in November, 2008 and DiskXtender (which has a XAM implementation) was demonstrated running on Atmos. I don't know what became of that effort.
Posted by: Former EMCDE | July 08, 2009 at 08:16 AM