Dear reader – have you ever been misquoted? Has communication in your org ever been imperfect? I’m sure you have – and it can be frustrating.
A CRN article went live today that makes it sound like we have, or will have, VxBlocks with non-UCS components in them.
NOPE. The buck stops with me, and it’s a FIRM NO.
Dell EMC PowerEdge is an awesome rack mount platform, and FX2 is an awesome modular platform. We are making PowerEdge a universal ingredient in our HCI portfolio, like NOW.
When customers buy CI like Vblock and VxBlock – they are getting the cake, not the ingredients (flour, eggs, sugar). BUT – the ingredients set the parameters of the system, and UCS B-Series blades make for great CI (which, by definition, includes an external storage array). We are in NO rush to lower the competitiveness of our Vblock and VxBlock cake.
If you want to know the difference between a Vblock and a VxBlock – it’s simple, and it has NOTHING to do with the server. Vblock uses the Cisco Nexus 1000v (N1Kv) virtual switch, which means it cannot have ACI down to the VM (you can have ACI in the fabric and to the host)… and no NSX. VxBlock uses the VMware vSphere Distributed Switch (VDS), which in turn means it can have NSX and ACI from us. The Release Certification Matrix (RCM) for Vblock and VxBlock doesn’t include the Cisco AVS. If you want to know more, read here.
Via services, we can convert Vblocks to VxBlocks. The vast majority of what we ship in the “Block” family right now are actually VxBlocks.
Will we create bundles of Dell EMC Storage with Dell EMC PowerEdge? Yes. But that’s a bundle, a reference architecture – not an engineered system. No lifecycle, no sustained engineering. People who don’t understand this key difference don’t understand that CI > (is greater than) a bundle.
There is NO plan to have VxBlocks with anything other than UCS, and that’s a fact. Capiche?
@Chad - when you pick UCS, are you choosing to build for density, aka so many servers per rack? With UCS you can create profiles and as many virtual NICs/HBAs as you want. However, that is ONLY as good as the driver. Meaning you can't BUILD for availability, as there is no way to put other PCI cards into the blade – so if the driver crashes you are SOL. I've had this happen once already with the network driver. If you had a 2U box, aka non-blade, you could put in multiple vendors' adapters, and thus it would use different drivers.
Posted by: Davikes | October 21, 2016 at 11:56 AM
You did not say anything about HP. :)
Posted by: Stephen Fulmer | October 27, 2016 at 04:48 PM
Thanks Chad for clarifying!
However, I'd say that the real question is "how much focus will VBlock get compared to the 'Racks and Rails' in Dell EMC's portfolio?"
Right now, I see hardly any!
Posted by: Matthias Werner | November 28, 2016 at 05:29 AM
Chad - as of today, what is the current situation for Vblock and VxBlock with regard to the virtual switch?
If a customer buys a VxBlock today, what options do they have for the virtual switch (AVS, DVS, N1KV)?
Same question for Vblock: what options do they have for the virtual switch (AVS, DVS, N1KV)?
You can provide the answer based on the latest RCM as of now.
Thanks
Bhavin
Posted by: Bhavin Shah | December 01, 2016 at 12:05 AM
Chad,
Just a note: in your picture, the toast is not burnt – it is the cheese that is burnt – so seeing "burnt toast" might be a misnomer from the start. Pattern recognition and cognitive attention choices/filters are massively important in how we understand the world. They inform our opinion, not truth. :) Although truth is unique to our POV. :)
Posted by: Chappy | December 05, 2016 at 10:27 AM