UPDATED Dec 8th, 2:45pm EST – clarification on VCE support model.
Well – it’s been a busy month since the VMware, Cisco, EMC Virtual Compute Environment coalition launch. Customer reaction has been universally positive in my view, with the only critique being pressure for more details.
If you’re a VMware, Cisco, or EMC employee, or a VCE partner – the official Required/Recommended Bills of Materials (BoMs) for the Type 1 (mid-range unified design) and Type 2 (enterprise NAS + block scale-out design) have been published. You can get the BoM by using the VCE portal (www.vceportal.com) and registering the opportunity, or by directly reaching out to the VCE Solutions Support Team or SST (which part of my team belongs to) – though if you use the VCE portal, the SST gets engaged automatically.
Having worked with customers on this for some time now, and with our first set of Vblock sales under our belt and rapidly accelerating, I wanted to share a few thoughts.
- Question 1: Why are there two core types of Vblocks?
- Question 2: What if I’m a service provider or enterprise customer that needs multitenancy in my cloud? Isn’t the whole idea of a Vblock that you manage it as an integrated unit – and if so – how are ideas of multitenancy enforced end-to-end? That, and other reasons, is why I really want more details on EMC Ionix Unified Infrastructure Manager!
There are a couple of others I get often – those I’ll answer right here. The first two I’ll cover in the body of the post, as they need longer technical answers.
- Question 3: “Is a Vblock just a set of parts? Ergo – if I have the parts, do I have a Vblock? How do I “certify” what I already have as a Vblock?” Answer: You have a Vblock if you have the core required elements of a Vblock BoM, with co-terminous terms. Recommended elements are, of course, recommended. We are working on a formal “Vblock certification” process.
- Question 4: “Where are you on the formation of the SST?” Answer: Full steam ahead. The leader has been identified, I’m hiring like mad for the EMC-side contribution (many are already in place), and people are being identified from all three companies. When you use www.vceportal.com, the team that gets engaged to help you is the SST.
- Question 5: “What does it take to access the integrated support?” Answer: the top support contract level from all three companies. UPDATED – it also requires either registration via www.vceportal.com (for a net-new Vblock, where the SST is engaged) or the “certification” process we are working on for customers with existing infrastructure.
- Question 6: “I want to be a Vblock partner – what do I need to do?” Answer: you need to be services-certified by all three companies (Cisco UCS ATP + VMware VIP Enterprise + EMC Velocity Partner ASN).
I’m sure that we’ll see more and more “hey, I’ve got a Vblock for you right here” from all sorts of places soon. The first rapid shot across the bow came from HP – but the idea of integrated infrastructure is an obvious one with obvious benefit for the customer – so “fast follower” moves are expected. To me, the thing that makes this really interesting – beyond all the clear examples that we can give of CURRENT and FUTURE integration between our stacks – is the integrated management model, selling model, and support model. Otherwise it’s just a collection of parts.
There’s a lot of pressure on EMC from other non-Cisco partners to create “V_E” Vblocks, and I’m sure Cisco is getting the same on their side. Remember, VMware (most of all!!), Cisco and EMC will continue to partner openly – after all, we know there will be a need for the “a la carte” option as well as the “prix fixe” best-of-breed model we’re proposing.
If you’re interested in the detailed technical answers to questions 1 and 2 (including a UIM demo!), I’m certainly interested in your feedback – so please read on!
Ok – let’s get to the technical questions!
Question 1: Why are there two core types of Vblocks?
One thing that I think is important to understand is that while in one sense the Type 1 (and Type 0 – which is a “smaller Type 1”) and Type 2 Vblocks can be characterized as “medium” and “large” (with a Type 0 being a “small”), there’s a more important distinction. To understand it, here’s a logical diagram of each type of Vblock (and the nodes for aggregation):
If you’re the type that likes physical pictures, not logical ones – here are some from the VCE integration and solutions lab in Santa Clara.
That’s a Vblock Type 1, and below is a Vblock Type 2.
Consider the following two scaling models:
- You can scale out a Vblock by adding Vblocks. EMC Ionix Unified Infrastructure Manager integrates the management – so you don’t have “islands” from a management standpoint. This is the sweet spot of the Type 1/Type 0 Vblock, whose minimums/maximums are more tightly bound. If you look at the logical diagram, you could take three Vblock Type 1s and have the effective scale of a Type 2.
- You can scale out a Vblock by starting smaller and scaling a Vblock out horizontally – and only when you hit a maximum, build another Vblock. This is the sweet spot of the Type 2 Vblock. If you look at the diagram – the “minimum” configuration of a Type 2 is actually the same scale (physically and logically) as a single Type 1.
For people who can access the BoM (which has “maximum/minimum” configurations of Vblocks), you’ll see for yourselves that a smaller Type 2 is the same size/cost as a larger Type 1. But where a Type 1 would hit a maximum and then need to “step function” into two Type 1 Vblocks, a Type 2 could continue to scale horizontally. Each model also has a different economic sweet spot at different scales, and requires different functional capabilities.
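To make the “step function” point concrete, here’s a minimal sketch. The increment sizes are invented for illustration and do NOT come from the BoM – the only point is that when you can only grow in whole-Vblock increments, you overshoot the target by more than when you can grow a single Vblock horizontally in smaller increments.

```python
# Purely illustrative sketch of the two scaling models.
# The increment sizes are invented and do NOT come from the Vblock BoM.
import math

TYPE1_INCREMENT = 1000  # hypothetical: the Type 1 model grows by adding a whole Vblock
TYPE2_INCREMENT = 250   # hypothetical: the Type 2 model grows by adding engines/ports/capacity


def purchased_capacity(target_vms: int, increment: int) -> int:
    """Capacity you end up buying when you can only grow in fixed increments."""
    return math.ceil(target_vms / increment) * increment


for target in (900, 1100, 2600, 5200):
    t1 = purchased_capacity(target, TYPE1_INCREMENT)
    t2 = purchased_capacity(target, TYPE2_INCREMENT)
    print(f"need {target:>4} VM slots -> Type 1 model buys {t1}, Type 2 model buys {t2}")
```

Run it and you’ll see the Type 1 model “steps” past the target in bigger jumps – fine at some scales, wasteful at others – which is exactly why the economic sweet spots differ.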
The choice of Type 1 or Type 2 is really a use-case and customer-driven one (sometimes people have a strong ideological draw to one model or the other).
I’ve written a post on “Storage Architectures for Cloud” with my chicken-scratch diagrams here, but the analyst meeting we had a couple of weeks ago prompted me to make a formal diagram, and it’s been helpful with customers. You can see it animated below:
The key idea here is that compute clouds (Infrastructure as a Service – which often forms the basis of Platform as a Service, which in turn forms the basis of Software as a Service) generally follow the models shown above. The elements:
- In either case, there is a portal that handles self-provisioning, some placement logic (less placement logic here is better, since placement logic requires the portal to have knowledge of the infrastructure layer), and chargeback. In the vCloud Express examples that are up and running, everyone “home brewed” this layer. VMware is developing a standardized set of tools that can be customized to accelerate this function for enterprises or service providers.
- There’s a compute layer. This scales up by adding compute nodes. In most cases, this is virtualized (remember the general principle – big aggregated, fluid pools can be more efficient)
- There’s a network layer. In most cases, this is virtualized (remember the general principle – big aggregated, fluid pools can be more efficient)
- There’s a storage layer. I’ve only drawn shared storage models (all of which are fluid and virtualized), though it is possible to build these out of DAS. The downside of the DAS models is that you have the opposite of a big aggregated fluid pool – and whether you are using VMware or any other server virtualization technique, without shared storage you lose the “fluidness” created by live migration techniques. The storage layers generally fall into one of two design categories. Either you:
- use “non-scale-out” designs where you add capacity and IOps up to the limit of the “brains” (and ports), at which point you add another array. That’s the design point of the Type 1 Vblock, which uses EMC’s midrange, multiprotocol unified platform. The original documents are based on a CX4-oriented design, but are being updated to a Celerra-based design for integrated NAS and block use cases.
- use “scale-out” storage designs where you add capacity, IOps, brains, and ports as needed. That’s the design point of the Type 2 Vblock, which uses EMC’s scale-out platform (V-Max, optionally with up to 8 nodes in the Celerra cluster).
Both are valid, and have various tradeoffs (the biggest one is that the larger your scale and the larger your aggregate pool of just about any resource, the more efficient you can be). Note that if the compute layer is vSphere, the “scale by adding more arrays” model can be SAN (iSCSI/FC/FCoE) or NAS (NFS). Conversely, since there is no pNFS support, even with scale-out NAS models you can’t “scale out a datastore” using NFS (the client is NFSv3 – one session per datastore, terminating at one IP address). This isn’t a “better/worse” conclusion. There are scaling issues with block models as well (SCSI locking, queuing behavior).
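Since the “bigger aggregate pool = more efficient” point comes up at every one of these layers, here’s a tiny, hedged sketch of the principle with invented numbers: isolated islands each have to be sized for their own peak, while a shared pool only has to cover the peak of the combined demand (peaks rarely line up).

```python
# Invented demand samples (e.g., thousands of IOps) for three workloads
# over the same six time intervals. Purely illustrative numbers.
demand = {
    "workload_a": [10, 40, 15, 12, 18, 11],
    "workload_b": [35, 12, 14, 38, 16, 13],
    "workload_c": [14, 13, 36, 15, 12, 39],
}

# Siloed model: each island is sized to its own peak.
siloed_capacity = sum(max(samples) for samples in demand.values())

# Pooled model: one pool sized to the peak of the aggregate demand.
aggregate = [sum(samples) for samples in zip(*demand.values())]
pooled_capacity = max(aggregate)

print("siloed capacity needed:", siloed_capacity)  # 117 in this example
print("pooled capacity needed:", pooled_capacity)  # 65 in this example
```

The same argument applies to capacity headroom, spindles, and compute – it’s why the big, fluid, aggregated pool keeps showing up as a design principle.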
The key design goal is to have the pool of storage be so abstracted that placement needs no “knowledge of the infrastructure”. This means no “put this VM in this datastore, and that VM in that datastore because _____”. Today, whether you go block or NAS, there’s no getting away from the fact that some placement logic/awareness is needed (a sketch of that kind of logic follows). We’re working on it….
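To be concrete about what that placement logic looks like today, here’s a hedged sketch of the kind of “infrastructure-aware” decision a provisioning portal currently has to make. The datastore names, sizes, and thresholds are hypothetical – the point is simply that this logic exists at all, and the design goal is to make it unnecessary.

```python
# Hypothetical sketch of portal-side placement logic (the thing we want to eliminate).
from dataclasses import dataclass


@dataclass
class Datastore:
    name: str
    capacity_gb: int
    used_gb: int

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb


def place_vm(vm_size_gb, datastores, min_free_pct=0.20):
    """Pick the datastore with the most free space that still keeps a free-space
    buffer after the VM lands - i.e., the portal needs infrastructure knowledge."""
    candidates = [ds for ds in datastores
                  if ds.free_gb - vm_size_gb >= ds.capacity_gb * min_free_pct]
    if not candidates:
        raise RuntimeError("no datastore can take this VM without breaking the buffer")
    return max(candidates, key=lambda ds: ds.free_gb)


pool = [Datastore("ds01", capacity_gb=2048, used_gb=1500),
        Datastore("ds02", capacity_gb=2048, used_gb=700)]
print("place 100 GB VM on:", place_vm(100, pool).name)  # picks ds02 here
```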
BTW - I was at VMware for our Quarterly Technical Review (QTR) and Quarterly Business Review (QBR) this past week. These are multi-day, face-to-face deep dives on collaborative engineering projects. In the storage track (there were also security, management, and go-to-market tracks), the goals of “eliminate infrastructure scaling limits” (aka “one giant storage pool that auto-tunes”) and “eliminate all need at higher levels of abstraction to know anything about infrastructure” (aka “invisible infrastructure”) were central. FASTv2, Storage DRS, VAAI, and pNFS projects all figure strongly, as well as other projects which are not public.
So - Question 1: Why are there two core types of Vblocks? Answer: Because there are two fundamentally different scaling models for customers – and at different scales, the two scaling models have differing requirements and efficiencies.
Next…..
--------------------------------------------------------------
Question 2: What if I’m a service provider or enterprise customer that needs multitenancy in my cloud? Isn’t the whole idea of a Vblock that you manage it as an integrated unit – and if so – how are ideas of multitenancy enforced end-to-end? That, and other reasons, is why I really want more details on EMC Ionix Unified Infrastructure Manager!
The first thing – multitenancy in these cloud compute use cases is different from what most people think. Most customers deploying “vCloud” architectures put the multitenancy logic into the provisioning/chargeback portal (as noted here), and use the VM encapsulation as the isolation level (note that the DoD considers VMware-level isolation to be secure).
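For illustration, here’s a minimal, hedged sketch of what “multitenancy in the portal layer” means in practice: the portal owns the tenant-to-resources mapping and the chargeback math, while the infrastructure underneath just runs VMs. All names and rates are invented.

```python
# Purely illustrative: portal-level tenancy and chargeback, with VM
# encapsulation as the isolation boundary. Names and rates are made up.
from collections import defaultdict

tenant_vms = defaultdict(list)  # tenant -> list of (vm_name, vcpus, ram_gb, disk_gb)


def provision_vm(tenant, vm_name, vcpus, ram_gb, disk_gb):
    """The portal records ownership; the infrastructure never sees 'tenants'."""
    tenant_vms[tenant].append((vm_name, vcpus, ram_gb, disk_gb))


def chargeback(tenant, per_vcpu=5.0, per_gb_ram=2.0, per_gb_disk=0.10):
    """A simple per-tenant bill computed entirely in the portal layer."""
    return sum(vcpus * per_vcpu + ram * per_gb_ram + disk * per_gb_disk
               for _, vcpus, ram, disk in tenant_vms[tenant])


provision_vm("tenant_a", "web01", vcpus=2, ram_gb=8, disk_gb=100)
provision_vm("tenant_a", "db01", vcpus=4, ram_gb=32, disk_gb=500)
print("tenant_a charge:", chargeback("tenant_a"))
```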
But – there are cases where the SAME infrastructure needs to have differing administrative domains (so this isn’t only a service provider requirement, though it often is a service provider requirement) – and in those cases, enforcing multitenancy in the infrastructure itself can be handy.
Ok – so what does EMC Ionix UIM do? EMC Ionix UIM is critical “glue” in a Vblock – it means you stop looking at and managing the Vblock as a set of pieces to integrate, and start managing it as a “VM housing” infrastructure.
Rather than listing what it does, I think it’s useful to watch this video.
You can download the high-resolution version of this video here in MOV format.
And since it’s always good to have a list of what something does :-)
- It can subdivide the entire Vblock (compute, network, storage) and sets of Vblocks into multi-tenant management domains
- The idea in UCS Manager of a “service profile” is an awesome one – a simple, full blade configuration that can be templated and deployed. EMC Ionix UIM extends that idea – including in the service profile any associated MDS and other Nexus/Catalyst network configuration needed. In v2, this service profile idea extends literally to provisioning the underlying storage itself.
- It can manage many UCS systems and Vblocks from a single console. While the Vblock Type 0/1 can scale, and the Type 2 can REALLY scale, large enterprise customers need something that can extend the idea across multiple UCS systems and multiple Vblocks.
- It can enable simple configuration-state changes over time – by default, the UCS, network, and storage element managers aren’t focused on “compliance over time”
- It can take service profiles and copy/paste them with a single click across a multi-UCS environment.
- It can schedule application of profiles and multi-step Vblock provisioning tasks
- It can report out on jobs, and even provide audit reports and check off processes…
- It can check compliance with best practices – like “check that all service profiles are bound to templates, and aren’t homebrewed”
- It can check configurations for compliance errors – for example, automated error checking for duplicate MAC addresses (a simplified sketch of that kind of check follows this list)
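Since configuration checks are easier to picture with an example, here’s a simplified, hedged sketch of a duplicate-MAC check across service profiles. The data model is invented – it is not the UIM or UCS Manager object model – but it shows the class of problem being caught.

```python
# Simplified sketch of a duplicate-MAC compliance check across service
# profiles. The data model is invented for illustration only.
from collections import defaultdict

# service profile name -> vNIC MAC addresses (made-up values)
service_profiles = {
    "esx-blade-01": ["00:25:B5:00:00:1A", "00:25:B5:00:00:1B"],
    "esx-blade-02": ["00:25:B5:00:00:2A", "00:25:B5:00:00:2B"],
    "esx-blade-03": ["00:25:B5:00:00:1A", "00:25:B5:00:00:3B"],  # clashes with blade-01
}


def find_duplicate_macs(profiles):
    """Return MAC -> list of profiles claiming it, for any MAC seen more than once."""
    owners = defaultdict(list)
    for profile, macs in profiles.items():
        for mac in macs:
            owners[mac.upper()].append(profile)
    return {mac: profs for mac, profs in owners.items() if len(profs) > 1}


for mac, profs in find_duplicate_macs(service_profiles).items():
    print(f"compliance violation: MAC {mac} used by {', '.join(profs)}")
```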
So… What’s next for EMC Ionix UIM 2.0?
Well – for that, you need an NDA briefing – but I’ll lay out the principles we’re aiming for
- It will provide a “service catalog” for the entire Vblock – end-to-end deployment, right up to adding the vSphere host to the cluster (or creating a new cluster), with a single click
- It will provide a “pool of pools” management model for vSphere, compute, network, storage
- It will provide the same tools for compliance and remediation across baselines
So - Question 2: What if I’m a service provider or enterprise customer that needs multitenancy in my cloud? Isn’t the whole idea of a Vblock that you manage it as an integrated unit – and if so – how are ideas of multitenancy enforced end-to-end? That, and other reasons, is why I really want more details on EMC Ionix Unified Infrastructure Manager! Answer: You probably don’t need multitenancy embedded in the infrastructure in a vCloud service model (since multitenancy is delivered via the portal and VMware), but if you do – you’ve got it with UIM. More importantly, UIM enables you to manage the Vblock as what it is – not “pieces and parts” but rather an integrated, validated system – a “VM condo” :-)
Amazing and useful post -- thanks for putting this together!
-- Chuck
Posted by: Chuck Hollis | December 07, 2009 at 09:22 PM
Thanks, this is really useful to me as well.
I'm a TC for commercial accounts and I get a lot of partner questions related to Acadia and Vblock – in fact, there are many big integrators who want to become EMC partners today, because EMC's idea is to deliver this model through them.
Posted by: Mauro Ayala | December 08, 2009 at 09:36 AM
Interesting, but the partners (I'm part of Dimension Data, an official Vblock partner) are still looking for more details, especially technical details. I'm still waiting for and searching for the Vblock Architecture Reference Guide discussed in the VCE Partner FAQ :(
In addition, locally in my country, when you want to engage EMC or Cisco on a potential Vblock deal, nobody replies ... Cisco in particular prefers to talk to the customer about Nexus rather than UCS or Vblock ... it's a very strange attitude.
Just for your info, in this post you mention the website www.vceportal.com, but this site is private, only for EMC/Cisco/VMware employees ... no access for partners :(
Posted by: Vincent Peeters | December 09, 2009 at 02:13 PM
@ Vincent - we're working to open up www.vceportal.com to VCE partners - expect that soon. In the meantime, please email me directly, and I'll get you anything you need.
Posted by: Chad Sakac | January 16, 2010 at 03:47 PM