Of course I leave it up to Cisco to discuss their stuff in detail – but there's a bit of interesting dialog here at the always awesome Scott Lowe's blog.
By now many folks are excited, but wondering “what’s the ‘there’ there on ‘built for virtualization’?” Is it just a killer switch/blade combination with a strong focus on 10GbE and FCoE?
There's more to it than that, IMHO. The big piece here is that as Cisco started work on this, they went to the drawing board with the 100% virtualized x86 datacenter in mind.
Now some of this stuff is reflected in “fundamental design choices”, like:
- high memory density
- high scaling
- centralized management model
- use of IOV on the interfaces
- 10GbE/DCE/FCoE as the transport
Often in the storage industry, people position intrinsic design elements as being more "ideal" for VMware environments (i.e. "our array is virtualized, therefore better!" – not sure what that means, but I hear it often… and in a similar vein: "our product is available as a virtual appliance, therefore better"). The best ones are the ones that are almost universal – I was almost laughing my a$$ off at VMworld as 3PAR was handing out cards that said "thin built in!" Catchy slogan (and 3PAR makes a fine product) – but wow – while I give them kudos for being first in mid/high-end block-centric arrays with it, everyone's got that now :-)
The point? Everyone (Cisco and EMC included) can be correctly accused of positioning things that others also do (along with us), or design elements that are good as a general thing – but not VMware-specific – as "designed for virtualized environments". That doesn't mean they aren't good – but they aren't technical integration points per se (which doesn't make them any more/less important).
Let me give but one example of something in the UCS design that is clearly “built for VMware” and is a real technical integration point.
If interested – read on…
At VMworld last September, the VMware/Cisco/EMC session outlined one of the core attributes we're all designing to: the VM (or more precisely the vApp – the collection of VMs that together represent the business application) will own its own policy (ergo: "private cloud, I need this availability, this performance, this cost – please deliver" – this would be part of the OVF metadata), the virtual datacenter layer will orchestrate across the infrastructure, and the infrastructure will be responsible for compliance against that policy.
Don’t get what I’m driving at? These screenshots are always the thing that hammers it home for me.
Today, no one thinks twice about giving a VM a given number of vCPUs, virtual memory, and CPU and memory shares, and letting VMware hash it out via DRS (yes, there are cases where specific optimizations like CPU affinity, or disabling memory ballooning and oversubscription, still make sense – but that's a shrinking use case). See below – nice, simple, pick a point on the slider for RAM.
BUT – you had better have configured the pNIC/dvSwitch relationships when you configured that ESX host, and man – if there's no datastore… you're not going to go anywhere fast.
So the observation is that the network and storage provisioning is decoupled from the VM provisioning task (whereas CPU/Memory are a managed aggregate). And how do you know that the storage or network will deliver what that VM needs? Either:
1. You're the one person who does it all.
2. Your network/storage teams are overbuilding like mad (not efficient – but common).
3. You are operating in an incredibly coordinated fashion with your networking/storage teams (very rare).
4. You have management tools that span the domains with a VMware-centric view (these exist – I would say EMC does this better than most, but even that isn't the holy grail, because crossing organizational domains usually means "view only" – not because it can't be done, but because it breaks org rules).
5. You frankly have no idea – and will figure it out when things go sideways.

Number 5 is usually the case.
So – with all that said – take a look at this slide from the joint presentation (TP03) we showed (red circles are for emphasis). In the presentation, the "disk" and "network" icons rise up from the physical tier and join CPU and Memory as "pooled, aggregated resources managed at the Virtual Datacenter OS layer". Notice that FCoE is a part of it – and that what we were talking about, above and beyond general design principles for server/network/storage, was "DRS for Storage and Network".
What did we mean when we said “DRS for Network/Storage?” Follow me along here….
I don’t claim to be the first to “out” this – in fact, it’s been out there for a while – but I don’t think many people immediately “see it”.
Start here, and see here for an EXCELLENT doc on the architecture of the Nexus 1000v.
Now – let's look at VN-Link again. VN-Link can apply tags to Ethernet frames, and is something Cisco and VMware submitted together to the IEEE to be added to the Ethernet standards. If you were part of the Nexus 1000v Beta/RC, you could have played with this when you loaded the cross-mem module (the Virtual Ethernet Module – which, as Brad notes, acts as a line card) on vSphere. It allows Ethernet frames to be tagged with additional information (VN-tags), which eliminates the need for a vSwitch. Today, the vSwitch is required by definition: you have all these virtual adapters with virtual MAC addresses, and they have to leave the vSphere host on one (or at most a much smaller number) of physical ports/MACs. But if you could somehow stretch that out to a physical switch, the switch would now have "awareness" of the VM's attributes in network land – virtual adapters, ports and MAC addresses. The physical world is adapting to, and gaining awareness of, the virtual world.
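To make the idea concrete, here's a deliberately over-simplified sketch in Python. It's purely illustrative – the class names, fields and values below are mine, not the actual VN-Tag frame format from the Cisco/VMware submission – but it shows what "the fabric knows which vNIC a frame came from" buys you:

```python
# Illustrative only: models the *concept* of VN-Link/VN-tagging, not the real
# frame format. All names and fields here are made up for clarity.

from dataclasses import dataclass

@dataclass
class TaggedFrame:
    src_vif: int     # identifies the VM's virtual adapter, not just the host uplink
    dst_mac: str
    payload: bytes

class FabricSwitch:
    """An upstream physical switch that is 'VM aware' via the tag."""
    def __init__(self):
        self.vif_policy = {}  # vif_id -> per-VM network policy

    def attach_vnic(self, vif_id, vlan, qos_class):
        # Policy is defined per *virtual* interface, even though many virtual
        # adapters share one physical uplink into the switch.
        self.vif_policy[vif_id] = {"vlan": vlan, "qos": qos_class}

    def forward(self, frame: TaggedFrame):
        # The physical switch - not a software vSwitch in the hypervisor -
        # applies the per-VM policy, keyed off the tag in the frame.
        policy = self.vif_policy[frame.src_vif]
        print(f"frame from vif {frame.src_vif}: vlan {policy['vlan']}, qos {policy['qos']}")

switch = FabricSwitch()
switch.attach_vnic(vif_id=10, vlan=100, qos_class="gold")
switch.forward(TaggedFrame(src_vif=10, dst_mac="00:50:56:aa:bb:cc", payload=b"..."))
```

The point isn't the code – it's that the per-VM policy lives (and is enforced) in the physical switch, keyed off the tag, rather than in a software switch inside every host.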
As a side effect – you could actually have the switch be a p-to-v'ed Nexus switch, which with some integration could be managed just like the generic dvSwitch, but with all the extra management and feature options the full NX-OS brings – and so was born the Nexus 1000v (the Virtual Supervisor Module + the Virtual Ethernet Module). The Nexus 1000v doesn't require VN-tags, and neither does a customer using the UCS – but VN-Link and tagging Ethernet with VM-level identifiers does enable something you can't do otherwise.
And remember, when people go nuts about something being non-standard: all our standards start that way. The question is one of intent – does Cisco want it to be part of the standard? You betcha.
Now the Nexus 1000v has gotten a lot of attention (rightfully so, IMHO), but the bigger picture is that it was ENABLED by Cisco and VMware working together to make the physical world and the virtual world more integrated. Note that Brad even called out that it could be a “virtual appliance or a physical appliance”.
Read the notes Brad put in his original post…
“The network administrator does not configure the virtual machine interfaces directly. Rather, all configuration settings for virtual machines are made with Port Profiles (configured globally), and it’s the VMWare administrator who picks which virtual machines are attached to which Port Profile. Once this happens the virtual machine is dynamically assigned a unique Virtual Ethernet interface (e.g. ‘int vEth 10’) and inherits the configuration settings from the chosen Port Profile.”
“The VMWare administrator no longer needs to manage multiple vSwitch configurations, and no longer needs to associate physical NICs to a vSwitch.”
The key in the second line isn’t “no longer needs to manage multiple vSwitch configurations”, but “no longer needs to associate physical NICs to a vSwitch”.
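To make that division of labor concrete, here's a tiny sketch in Python – again purely illustrative, these aren't the real NX-OS or vCenter objects, just my own stand-ins – of the workflow Brad describes: the network team defines port profiles once, globally; the VMware admin just picks a profile for each VM; and nobody maps physical NICs to per-host vSwitches as part of VM provisioning anymore.

```python
# Illustrative model of the port-profile workflow described above.
# Not the real NX-OS/vCenter object model - names are made up for clarity.

class PortProfile:
    def __init__(self, name, vlan, qos, security_policy):
        self.name, self.vlan, self.qos, self.security_policy = name, vlan, qos, security_policy

class DistributedSwitch:
    """One logical switch spanning every host - no per-host vSwitch config."""
    def __init__(self):
        self.profiles = {}         # defined globally by the *network* admin
        self.veth_interfaces = []  # created dynamically as VMs are attached

    def define_profile(self, profile):            # the network admin's job
        self.profiles[profile.name] = profile

    def attach_vm(self, vm_name, profile_name):   # the VMware admin's job
        profile = self.profiles[profile_name]
        veth_id = len(self.veth_interfaces) + 1
        # The VM gets a unique 'vEth' interface that inherits the profile's settings
        self.veth_interfaces.append((f"vEth{veth_id}", vm_name, profile))
        return f"vEth{veth_id}"

dvs = DistributedSwitch()
dvs.define_profile(PortProfile("WebServers", vlan=100, qos="silver", security_policy="web-acl"))
print(dvs.attach_vm("web-vm-01", "WebServers"))   # -> vEth1, settings inherited from the profile
```

Notice what's missing: there's no step anywhere in the VM provisioning flow where someone associates a physical NIC with a vSwitch on a particular host.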
Apply this logic to a large, cloud- or enterprise-scale vSphere deployment, particularly with a bladed config, and I think the benefit is clear. So there you have it – a specific, Cisco UCS-unique VMware integration point – one of many, but there's an example.
As a side note – on the FC side of FCoE there are additional considerations. First, as far as I know, the last Nexus code I saw only supported N_Port proxy (i.e. you need to have an FC switch between it and your array target) – I'm sure that will be nailed fast. There's also the need for NPIV (not the easiest thing in the world to configure), VMDirectPath has specific (what I would call relatively severe) limits for VMotion – and other things. These are all being tackled.
Likewise, I used to like the vStorage APIs' old name better – VMAS, aka VMware-Aware Storage – because it expresses the same idea. The vStorage APIs consist of three groups right now: the vStorage APIs for Multipathing, the vStorage APIs for Site Recovery Manager, and the vStorage APIs proper (which were the original VMAS ideas). EMC has demonstrated support for all the existing vStorage APIs – but they won't all be arriving at the same time, and none will be in the initial vSphere release. What will be coming in the vSphere release is the vStorage APIs for Multipathing – and just as folks in the Nexus 1000v beta saw when loading the cross-mem module, people in the PowerPath for VMware beta saw a similar architecture.
One of the major design milestones will be when – regardless of whose Virtual Datacenter OS it is (I think it's inevitable that Microsoft will come to a similar conclusion) – provisioning a VM means specifying the vApp SLA level rather than pointing at physical, pre-provisioned storage and network resources.
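Here's a back-of-the-napkin sketch in Python of what that milestone could look like – completely hypothetical, the policy keys and placement logic are mine and not any shipping VMware or EMC API – where the vApp carries its SLA and the infrastructure picks compliant resources, rather than the admin naming a pre-provisioned datastore up front:

```python
# Hypothetical sketch of SLA-driven provisioning - not a real VMware/EMC API.

vapp_policy = {                 # would ride along as vApp/OVF metadata
    "availability": 99.99,
    "performance_tier": "gold",
    "max_cost_per_gb": 5.00,
}

datastores = [
    {"name": "ds_sata_bronze", "tier": "bronze", "cost_per_gb": 1.00, "availability": 99.9},
    {"name": "ds_fc_gold",     "tier": "gold",   "cost_per_gb": 4.50, "availability": 99.99},
]

def place(policy, candidates):
    """In the spirit of 'DRS for Storage': pick storage that complies with the
    vApp's policy instead of making the admin pre-select a datastore."""
    for ds in candidates:
        if (ds["tier"] == policy["performance_tier"]
                and ds["availability"] >= policy["availability"]
                and ds["cost_per_gb"] <= policy["max_cost_per_gb"]):
            return ds["name"]
    raise RuntimeError("no compliant storage - surface the policy violation, don't guess")

print(place(vapp_policy, datastores))   # -> ds_fc_gold
```

Swap "datastore" for "network path/port profile" and the same pattern applies on the network side.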
This is a journey. We don't have all the answers. There's a lot of work to do (I can't speak for the Cisco side, only the EMC side) – today, a lot of functions that are LUN-level or file-level have ZERO VMware awareness. These need to evolve to be sub-LUN level – and not file-level, but vApp-level – with the subsystem aware of the needs of the vApp.
But when Ed from Cisco and I said “DRS for Network/Storage” – that’s what we meant.
This is awesome stuff, Chad.
Being a security and compliance guy this looks good for us too, since it's going to force people to define better what interactions and dependencies there are - rather than #5 as happens today.
So when you need to secure a piece of information, or show that it's being dealt with properly, you've got a heck of a lot more information about the things that store/process/transmit that information.
That's half the battle here.
I blogged about it here
http://tokensecurityguy.typepad.com/token_security_guy/2009/03/that-cisco-ucs-stuff-looks-cool-it-might-even-help-with-security.html
Posted by: Paul Stamp | March 18, 2009 at 09:13 AM
Chad,
Great post. Thanks for the link and commentary of my Nexus 1000V article. I like how you have organized the content in your blog.
I think I'll steal your idea of an "Anti-FUD" category, I like that. :-)
Lots of great content you have here. Nice work.
Cheers,
Brad
Posted by: Brad Hedlund | March 18, 2009 at 03:52 PM
Great post, Chad! So, let me see if I get where you're going; combine VN Links/tags and DRS policy management with a policy-driven storage OS (such as ATMOS, but block level. I mean, how long before EMC moves ATMOS-type policy-driven storage OS into the block-level and FCoE space?), and you've REALLY got global cloud virtualization with Vapp spaces and their storage moving around the globe to follow the sun or demand. Sounds more game-changing than most analysts give them credit for, but I think you're spot on.
Posted by: Jerry Thornton | March 19, 2009 at 12:28 PM
Nice post Chad. Just think how much further we would be if FC originally had tagging, as opposed to being an afterthought. So its great to see it coming into vSphere now in the early stages. That really is huge!
I would assume that any relatively beefed up server system would be capable of supporting VN-Link and the 1000v, not just UCS. Of course, that would depend on whether Cisco sells the Nexus 1000v as a software product that runs on any qualified system or as a UCS-only virtual appliance. If it's the latter, it seems a wee bit artificial to spin it as a technology breakthrough of UCS - but its not difficult to understand Cisco wanting to maintain a key differentiation for UCS to get traction for its new business.
Regarding storage tie-ins, you said: "These need to evolve to be sub-LUN level – and not file-level, but vApp-level, and for the subsystem to be aware of the needs of the vApp." Yes! I totally agree that integrating at the vApp level would be optimal, which would radically change the way storage is subdivided, mapped and exported. (As a 3PAR employee) I have to mention that 3PAR's wide striping across chunklets is probably the right kind of architecture for getting this done.
You mentioned our marketing of Thin Built In for VMware. It's true that our thin provisioning is not integrated with VMware storage provisioning (like all other storage vendors products), but we do have other VMware integration points with VDI and SRM. FWIW, we think its great that everybody has some form of thin provisioning (including VMware soon) and we welcome comparisons. "Thin Built In" is not just marketecture, it is next-gen chip-level "capacity crunching" technology. But I don't want to "go pimpin" here and detract from your excellent post, so I'll shut up now. :)
Posted by: marc farley | March 19, 2009 at 12:50 PM
Thanks for the very insightful post Chad.
I have been following the Cisco Unified Computing announcement and I cannot help but smile...
Although it's difficult to foresee the exact impact it will have on the industry at large before the product actually ships, one thing is certain – Cisco is betting their strategic growth on virtualization. I think that the sheer size of Cisco (and the fact that they are not a 'traditional' server/virtualization vendor) and the weight they are throwing behind UC validates virtualization in a huge way – not just as a way to consolidate servers, but as "the way of the future data center". I am so curious to see what HP, IBM and Dell's response will be. Perhaps we are seeing their response already – IBM getting ready to acquire Sun?

There was something else I noticed in Cisco's announcement – in it, Cisco clearly stated that their aim is "to deliver virtualization technologies all the way to the end user". It is difficult to say exactly what they meant by that statement – but I think they mean virtualized desktop.

Cisco UC will ship with vSphere built in, and their hardware will be based on the Nehalem CPU design from Intel. Nehalem has some GREAT features – like the QuickPath Interconnect technology (WAY WAY faster memory access than FSB – a 1.25x-2x performance improvement based on early specs from Intel) and some virtualization-specific hardware instructions vSphere will be able to take advantage of. I think these features will make virtualized desktop ROI a reality.

I.e. right now 5-6 virtual desktops per CPU core is pretty much the standard. With Nehalem, this number will go up 1.5x (conservatively speaking), which will give us 7.5-9 desktops per CPU core. Coupled with the other areas where Nehalem will be better – i.e. memory access (QuickPath Interconnect) and the enhanced virtualization instruction set – I would argue it is safe to assume 10 desktops/CPU core to be a more realistic estimate.

Nehalem's server chips will initially ship with 6 cores, later (H2 2010) going up to 8 cores. So, for a typical 2-socket, 6-core system (a midline, affordable configuration) a customer will be able to comfortably run up to 120 desktops per server. That is amazing and I think will drastically alter desktop computing.

I think the stars will align for a huge push in virtualized desktop – namely, Cisco entering the market with their network-compute-virtualization stack. Such a combination is optimal for putting your data center through its paces if all your desktops run from it – you need a 100% optimized and intertwined virtual data center to be able to run ALL your infra from it (i.e. including all desktops).

All this, coupled with the release of Windows 7 (ready to ship September '09 based on the latest news from Redmond), makes VDI not a matter of if but when/how soon.... I seriously think that starting with Windows 7, 'traditional' desktops will be no more and that customers will all (in some capacity) explore and implement desktop virtualization.

I think this is the most exciting time to be in the virtualization/storage industry! With companies lining up to virtualize their desktops, their needs for virtualization and storage will be massive...

Chad, what is your take on Virtual Desktop? Do you see it as having huge potential?
Posted by: Paul Wegiel | March 24, 2009 at 11:31 AM