I want to expound on the headline here. It’s been almost 6 months since I stepped into my current role leading the great team at EMC’s Converged Platform Division: VCE. During that time, I’ve gotten this question frequently:
“Does VCE support VMware NSX on our Converged and Hyper-Converged Infrastructure systems?”
I want to make this clear and direct so there’s zero confusion… And hey, what’s the fun of being out there publicly if you can’t use the platform to cut through noise?
The short answer is simple: YES, VCE supports NSX. We also leverage ACI (it is at the core of Vscale).
The longer answer is important to understand – and comes down to two simple things:
- The first part is understanding what “support” means to us. “Support” does NOT mean “does it work?” or “will we turn down inbound calls?” (never!). Rather, “support” means “does it fit into our definition of the ‘system boundary’ – and the support implications that come with that?” VCE maintains the Release Certification Matrix (RCM) for the whole system, and we are passionate (maniacal!) about maintaining things in RCM compliance. The RCM defines the system, and we maintain it for the service life of the system. The RCM is by definition a subset of “what works”, and is all about furiously driving standardization. The RCM also defines the boundaries for our single-call support model and our single-source procurement and warranty model. This is the big difference between converged systems and reference architectures. We’ll always provide best-effort support for things outside the RCM – but inside that boundary, we hold ourselves completely accountable. When we are squishy on the RCM for Blocks and Racks, things go sideways.
- The second part is an important technical consideration. If you use the Nexus 1000v virtual switch with vSphere, you cannot use Cisco ACI or VMware NSX. This isn’t positioning/naming/politics, it’s a real technical constraint. If you want to use NSX, you need to use the VMware Virtual Distributed Switch (VDS).
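As a quick aside: if you’re not sure which flavor of distributed switch a given vCenter is actually running, the inventory will tell you. Here is a minimal pyVmomi sketch – the hostname and credentials are placeholders, and the VDS-vs-third-party check is just an illustrative heuristic, not an official VCE tool:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details -- replace with your own.
ctx = ssl._create_unverified_context()  # lab convenience only; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        # The VMware VDS is a specific subclass; a third-party switch such as
        # the Nexus 1000v shows up as the generic DistributedVirtualSwitch type.
        if isinstance(dvs, vim.dvs.VmwareDistributedVirtualSwitch):
            kind = "VMware VDS (the NSX-compatible option)"
        else:
            kind = "third-party DVS (e.g. Nexus 1000v)"
        info = dvs.config.productInfo
        print("%s: %s -- %s %s" % (dvs.name, kind, info.name, info.version))
    view.Destroy()
finally:
    Disconnect(si)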
So what does this mean? Simple.
- Vblocks are designed, engineered, built, supported and sustained with the Nexus 1000v (“N1Kv”) as the default distributed virtual switch. This is at the core of the RCM for Vblocks, so you cannot put NSX on it – it just won’t work. NSX is always outside the boundary of what is in the RCM of the Vblock. This means Vblock = no NSX. We cannot sell it, and it’s outside the scope of what VCE single-call support would cover.
- VxBlocks are designed, engineered, built, supported and sustained with the VMware Virtual Distributed Switch as the default distributed virtual switch. This is at the core of the RCM for VxBlocks, so you can easily put NSX on a VxBlock. Customers can do it. Partners can do it. In fact, VxBlocks can be shipped from VCE with NSX on them – in which case the RCM (designed, engineered, built, supported and sustained) extends all the way up to NSX itself. If you want NSX – this is frankly the way to go.
PERIOD. Simple – right?
Ok – what if you have a Vblock and decide later you want NSX? Are you stuck? Nope.
We can convert a Vblock in the field to a VxBlock if a customer so chooses – it’s a service engagement that shifts from N1Kv to VDS, and from that point onwards, the customer’s environment follows the VxBlock RCM path rather than the Vblock RCM path.
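To be completely clear: that conversion is a scoped service engagement, not a script you run. But to illustrate the core vSphere mechanic underneath it – repointing a VM’s vNIC from an N1Kv port group onto a VDS port group – here is a rough pyVmomi sketch. The function and object names are my own placeholders, and this glosses over uplink migration, VMkernel interfaces, and all the validation the actual engagement includes:

from pyVmomi import vim

def move_first_nic_to_dvportgroup(vm, target_pg):
    # vm:        a vim.VirtualMachine object
    # target_pg: a vim.dvs.DistributedVirtualPortgroup that lives on the VDS
    # Find the VM's first virtual NIC.
    nic = next(dev for dev in vm.config.hardware.device
               if isinstance(dev, vim.vm.device.VirtualEthernetCard))
    # Point its backing at the target distributed port group on the VDS.
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
        port=vim.dvs.PortConnection(
            portgroupKey=target_pg.key,
            switchUuid=target_pg.config.distributedVirtualSwitch.uuid))
    spec = vim.vm.ConfigSpec(deviceChange=[
        vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=nic)])
    # Returns a vCenter task; the caller should wait on it and check the result.
    return vm.ReconfigVM_Task(spec=spec)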
So why is there confusion on this topic if this is so simple?
That’s pretty simple too!
- There are a lot of Vblocks out there that predate NSX, or that predate the VDS reaching its current level of maturity. In the vSphere 4.x and 5.x era, there were lots of reasons that the N1Kv was used that frankly are less relevant now. For this reason – N1Kv was in the Vblock from the start.
- We haven’t been as clear on this as we could have been. In the vSphere 5.5 timeframe, before we formalized the VxBlock, there were exceptional cases where we would deviate from the RCM to support VDS on a Vblock, for example. Some of those customers then added NSX to their Vblocks (themselves or via a partner). This works, and while it falls outside the VCE support scope, VMware would of course support them. The mistake here was not immediately formalizing the VxBlock and its RCM – but rather making a squishy exception. I’m really working to become very firm on these sorts of things – because I’m a leader of the business who isn’t a sales guy - I’m an engineer :-)
- Otherwise smart technical people who don’t understand that converged infrastructure is different from the sum of its parts look at the hardware of a Vblock and say “of course NSX can run on it!” (true!)… without realizing that it is a system (designed, built, supported and sustained as a system). If the N1Kv is in there, it doesn’t matter that NSX can run on the Nexus hardware switches… you cannot run NSX on a Vblock without breaking the RCM… and this causes their heads to explode. If you want NSX, or ACI, you need to move the system to the VxBlock RCM path.
- There was an unfortunate period where there was a lot of drama on this topic between VMware and Cisco. The reality is that customers want NSX and Nexus (in ACI and non-ACI use cases) to work together. During that era, one additional bonus of the “Vblock = no NSX / VxBlock = NSX” split was that it stopped people from killing each other. It’s great to see this has become much more pragmatic in recent times, including Cisco describing the use cases as “and” vs. “or” during Cisco Live in Berlin. Don’t get me wrong – I’d still describe it as a “thaw” or a “rapprochement”. There are still a LOT of passionate fights on the topic :-) From my experience – customers are pragmatic. They view Cisco as a critical partner in their datacenter, and view VMware as a critical networking partner. They want practical guidance (microsegmentation automation with NSX, and more with vRealize Automation 7.x, on top of an ACI fabric is common). VxBlock customers love NSX on Cisco. Outside the “buy” world of Vblocks and VxBlocks – when customers build their own stuff (why?!) – NSX most often runs on a physical networking fabric (in ACI or non-ACI mode) that is Cisco.
- VCE is synonymous with “Vblock” (it’s the “Kleenex” of the converged infrastructure world), so sometimes people forget that we do VxBlock (and VxRack, and VxRail – as well as engineered solutions on them). In fact, at this point, we ship a materially higher number of VxBlocks than Vblocks (the same system, but with VDS rather than N1Kv).
If you hear something different from someone at EMC, Cisco or VMware – point them here – the buck stops with me :-)
In all seriousness – of course official support positions are not in blog form, but rather in living product support documentation. For example, EMCers can see these in the VCE Product Bulletins (and several have gone out on this); here is a snippet:
There you have it! Q: Does VCE Support NSX? A: HECK YEAH.
So, do you support Cisco ACI virtual switches on the Vblock?
Posted by: Peter Watt | June 08, 2016 at 05:58 AM
Great article Chad!
It answers all the standard NSX/ACI, Vblock/VxBlock questions.
IMHO, the future will be NSX atop ACI
IMHO, ACI is not quite ready for prime time. I have been battling with my Cisco team for 3 years on the whole ACI/NSX question, and they are just now seeing the light. Just like any best practice, no single vendor has ALL the solutions. VCE continues to drive toward the best integrated and converged solutions. For a better security posture we added HyTrust and F5 BIG-IP.
Posted by: walter baziuk | June 08, 2016 at 04:31 PM
So, do you support the Cisco AVS distributed switch? Why won't you answer?
Posted by: Peter Watt | June 09, 2016 at 05:35 AM
Your article is very helpful. I'm a VCI, and in my NSX classes I have many students who use Vblock and want to use NSX.
Posted by: Jorge Luis Hernandez | June 09, 2016 at 06:13 PM
@Peter - no, we do not support the AVS. Why won't I answer? A lot of balls in the air, amigo :-) No conspiracy - just busy... Q: Now, would we support it going forward? A: Completely open. Today, we're not seeing a lot of demand. In fact, we're seeing that most people find the VDS gets the job done, and it's solid as a rock.
Posted by: Chad Sakac | June 10, 2016 at 10:31 AM
We're looking at licensing a subset of our Vblock hosts for NSX. Are we able to continue using the 1000v on our existing hosts/clusters and then use VDS on a newly built host/cluster that would utilize NSX functionality? Or would we have to convert the entire environment to VDS/VxBlock? Curious where the "NSX and 1000v aren't compatible" line is drawn and what your recommendation would be?
Posted by: Johnny | June 27, 2016 at 12:04 PM
Hi Chad, Has the AVS support been revisited for VxBlock? Just looking to see if support has changed since the end of June...
Thanks
Mike
Posted by: Michael Fortuna | November 30, 2016 at 12:06 PM
Hi Chad, Any update on my AVS question?
Thanks
Mike
Posted by: Michael Fortuna | December 21, 2016 at 08:24 AM
vSphere 6.0+ drops support for the AVS and any 3rd-party virtual switches that require access to the kernel. They add complexity and risk the stability of the kernel.
https://www.vmware.com/support/vsphere6/doc/vsphere-esxi-vcenter-server-60-release-notes.html
Posted by: Djspry | March 07, 2017 at 06:02 PM