Today we took the covers off VxRack – the EMC hyper-converged RACK system-level architecture delivered by VCE.
It’s now clearer what I meant in this post when we announced the first expansion of VCE’s Converged Infrastructure portfolio.
Also, I would recommend reading this post on the CI taxonomy of BLOCKS, RACKS, and APPLIANCES before diving into this one.
Here are the key things to understand about VxRack:
- VxRack is a RACK system architecture. It has a design center of FLEXIBILITY.
- VxRack is an engineered system – not an appliance. I’ll explain this in a little bit.
- VxRack will come in multiple personas (think “different model numbers”). The first persona, the VxRack 1000, will be an open persona (it can run any mix of vSphere, KVM, bare-metal, and ScaleIO storage-only). This is for customers who want vSphere as a choice, but perhaps other abstraction models as well – and any combination thereof. It will use ScaleIO as the SDS data service layer.
- VxRack will come in another persona later this year – a vSphere-only persona. This is based on joint collaboration around VMware’s EVO:RACK project, and will use VSAN as the SDS data service layer.
- The “Vx” in VxRack means that, like the “Vx” in VxBlock, it is not fixed to Cisco UCS and Nexus networking. The first VxRack model will use Cisco Nexus ToR switches with whitebox servers. The servers come from EMC via ODM partners – and carry the single support model and experience VCE is known for.
- VxRack will integrate with the whole CI portfolio from EMC – including integration with VCE Vision, which is used with Vblocks and VxBlocks (and in the future VSPEX Blue). Our view is that CI will come in different types (BLOCKS, RACKS, APPLIANCES), and customers will have varying amounts of each type – but they will want them all to integrate together.
- VxRack will leverage a cool underlying hardware management and orchestration layer. Because it uses a lot of low-cost industry standard hardware, we need a “hardware abstraction layer” – I’ll discuss this more on Day 3.
- VxRack will potentially have other future unannounced personas. It could very well be a vehicle for something we’re going to talk about on Day 3.
Read on past the break to learn more, and see the VxRack tech preview in action :-)
First of all – we’ve been working on this for a while. It’s why it was almost funny when some people jumped to conclusions about ScaleIO after we launched VSPEX Blue using EVO:RAIL and VSAN. ScaleIO is at the core of the open VxRack 1000.
The first persona (the VxRack 1000) will be an open persona. This means that it will be very flexible: pick vSphere, pick KVM, pick bare-metal, pick storage-only – and mix them how you see fit. It’s designed to start small (4 nodes, ToR switches, management network – roughly 1/4 of a rack) – but scale like crazy. Yes, there will be other VxRack personas – including one leveraging EVO:RACK for customers who want a pure and hyper-integrated VMware stack vs. one that is open (inclusive of vSphere, but also others).
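To make “mix them how you see fit” concrete, here’s a minimal sketch in plain Python. The persona labels and the 4-node starting point come from this post; the data structure and validation are my own illustration, not actual VxRack tooling:

```python
# Illustrative sketch of a mixed-persona VxRack 1000 layout.
# The persona labels and 4-node minimum come from the post;
# everything else here is an assumption for illustration.

MIN_NODES = 4  # the starting point: 4 nodes, roughly 1/4 of a rack

# Each node gets exactly one persona; ScaleIO is the common SDS
# data service layer underneath all of them.
PERSONAS = {"vsphere", "kvm", "bare_metal", "scaleio_storage_only"}

node_map = {
    "node-01": "vsphere",
    "node-02": "vsphere",
    "node-03": "kvm",
    "node-04": "scaleio_storage_only",
}

def validate(layout):
    if len(layout) < MIN_NODES:
        raise ValueError(f"need at least {MIN_NODES} nodes to start")
    unknown = {p for p in layout.values() if p not in PERSONAS}
    if unknown:
        raise ValueError(f"unknown personas: {unknown}")

validate(node_map)  # any mix is legal -- that's the design center
```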
The workload focus is mostly platform 2 workloads, but we’re applying “platform 3 tools” like industry standard servers and SDS storage stacks – so call it “ideal for ‘platform 2.5’”.
With that open, heterogeneous requirement – and the need to scale to infinity and beyond – of course we would use the best open SDS transactional storage stack that scales like a mofo – ScaleIO.
In fact, the codename for the VxRack 1000 (the VxRack with the open persona) was “Buzz Lightyear” - “scale to infinity and beyond!”
VxRack is designed to scale into the hundreds and thousands of nodes. That’s real rack-scale. What are we talking about at that scale? Well, obviously the aggregate compute/memory/network is freaking enormous. At the SDS layer – we’re talking stratospheric. Think 250M IOPS. Think 38PB of capacity. Wow. But remember, you can start at 1/4 of a rack.
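For a sense of how those headline numbers could decompose per node, here’s some back-of-envelope arithmetic. Only the 250M IOPS, 38PB, and node-count figures come from the announcement – the per-node assumptions are mine, purely for illustration:

```python
# Back-of-envelope decomposition of the headline scale numbers.
# The per-node figures are assumptions, not VxRack specifications.
nodes = 1000                    # "thousands of nodes" territory
iops_per_node = 250_000         # plausible for a 2015-era flash node
capacity_per_node_tb = 38       # raw TB per node (assumption)

aggregate_iops = nodes * iops_per_node
aggregate_pb = nodes * capacity_per_node_tb / 1000

print(f"{aggregate_iops / 1e6:.0f}M IOPS")  # -> 250M IOPS
print(f"{aggregate_pb:.0f}PB")              # -> 38PB
```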
People will immediately want to know what the hardware is. While this is a tech preview – and things could change – we’re pretty close. It will be a set of industry standard rack-mount servers with varying configurations. It will not be a UCS server – rather, a whitebox. Remember always that the model of converged infrastructure in all its forms is that it comes with a clean single support model – after all, you’re doing this so you can simplify.
Now – the VxRack 1000 is hyper-converged to be sure – but it’s still an engineered system.
What does that mean?
This cuts to the bone of the distinction between RACKS and APPLIANCES that I talked about here.
It’s the distinction between a “bespoke suit” and an “off the rack suit”.
Think about it:
- When you buy an “off the rack” suit – you go to the store and pick from a small set of styles and colors. You try them on. You can have alterations, but nothing too material. You leave in minutes with your choice – not tailored for you, but there’s nothing simpler. That’s the APPLIANCE experience.
- When you get a “bespoke” suit – the first step is determining your requirements (getting measured). You get measured in many ways. Then you pick the materials – a nearly infinite set of fabrics and colors. The tailor leaves, and in a few weeks returns with your suit. You try it on, and it gets tweaked. It’s a longer process, to be sure, than an “off the rack” suit – but you end up with something made for YOU. That’s the RACK Engineered System experience.
Unlike appliances, which tend to have a relatively fixed set of hardware configurations, VxRack has a broad range of server nodes, each with varying CPU/memory combinations. The “fitting” happens with the VCE vArchitect and our partners, who determine how to best fit you. It then goes to the VCE team for engineering design and planning. At small scales there’s not much to this – but at large scales, things like ToR switch design are very important (more on that below). After a short amount of time, your VxRack 1000 arrives – racked, cabled, with the full VCE experience. Adding nodes is relatively simple – and the VCE vArchitect and our partners will have planned as much as possible based on your expansion plans.
Conversely – and this isn’t bad or good, it just IS – with appliances like VSPEX Blue – you don’t need to do any planning. You order your appliance. It comes in fixed configurations. You can get it in minutes, and it arrives in days. You unpack, cable it up, and you’re up and running in minutes. What kind of ToR switch should you use? Whatever.
This is a perfect example of the difference between a RACK (“Flexible!”) and an APPLIANCE (“Simple!”).
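Back to that point about ToR switch design mattering at scale – here’s one concrete illustration. Every figure below is an assumption for the sake of the math, not a VxRack specification:

```python
# Why ToR design matters at rack scale: a simple oversubscription check.
# All figures are assumptions for illustration, not VxRack specs.
servers_per_rack = 36
nics_per_server = 2        # 2 x 10GbE per node (assumption)
nic_speed_gbps = 10
uplinks_per_tor = 4        # 4 x 40GbE uplinks to the spine (assumption)
uplink_speed_gbps = 40

downlink_gbps = servers_per_rack * nics_per_server * nic_speed_gbps  # 720
uplink_gbps = uplinks_per_tor * uplink_speed_gbps                    # 160

print(f"oversubscription: {downlink_gbps / uplink_gbps:.1f}:1")      # 4.5:1
```

At 4 nodes nobody notices a bad ratio; at a thousand nodes it throttles the entire SDS layer. That planning step is exactly what the engineered-system “fitting” buys you.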
Now – key to VxRack is the management and orchestration layers. Let’s talk about that.
Let’s be very clear first – converged infrastructure doesn’t make a cloud. Say that with me slowly and firmly :-) Converged infrastructure in all its forms does, however, make a cloud easier to deploy – because you’re not mucking around with infrastructure. What does make a cloud is the higher-level management and orchestration layers. For IaaS – an example of this is the Federation Enterprise Hybrid Cloud (Federation EHC) stack using the VMware vRealize suite and the ViPR suite. EHC is what makes all forms of converged infrastructure an IaaS cloud, and if you also use Cloud Foundry, you have a PaaS cloud. BTW – note that today, you cannot deploy the EHC solution stack on a VSPEX Blue appliance. We’re working on it – but for now, there isn’t support for the full vRealize suite and ViPR (used in the orchestration elements) on VSPEX Blue. There will be Federation EHC support on VxRack as well (the technical gap isn’t there).
So – what level of management and orchestration are we talking about?
- The very, very low level hardware management and orchestration. In the demo below you’ll see a quick “VCE VxRack powered by OnRack (tm) technology” note. This is a low-level hardware M&O layer – more on this on Day 3.
- The next level up – the infrastructure pool management and orchestration layer. This can be done in vSphere if you have a “vSphere only” persona – but if you want to have multiple personas vs. just one, and those need to include bare-metal, ScaleIO-only, KVM, etc… well, then there needs to be a layer in there – that’s the VxRack Manager. It can manage VxRacks, add/remove nodes, create logical resource pools, and be used to blam down personas (there’s a hypothetical sketch of that workflow after this list). There’s a view of the prototype in a demo below.
- Something like VCE Vision – which handles compliance against the VCE Release Certification Matrix (RCM) and can help easily move from one RCM to the next.
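None of this has a public API yet – it’s a tech preview – but as a purely hypothetical sketch of the VxRack Manager workflow described above (every class, method, and node name here is invented for illustration; this is not actual VxRack tooling):

```python
# Purely hypothetical sketch of the VxRack Manager workflow described
# above. Every name here is invented for illustration -- there is no
# public VxRack Manager API as of this tech preview.

class VxRackManager:
    def __init__(self):
        self.nodes = {}   # node name -> persona (None until one is applied)
        self.pools = {}   # pool name -> list of member node names

    def add_node(self, node):
        self.nodes[node] = None          # discovered, no persona yet

    def remove_node(self, node):
        self.nodes.pop(node, None)

    def create_pool(self, pool, members):
        self.pools[pool] = members       # carve out a logical resource pool

    def apply_persona(self, pool, persona):
        for node in self.pools[pool]:    # "blam down" a persona on the pool
            self.nodes[node] = persona

mgr = VxRackManager()
for n in ("node-01", "node-02", "node-03", "node-04"):
    mgr.add_node(n)
mgr.create_pool("compute", ["node-01", "node-02"])
mgr.apply_persona("compute", "kvm")
```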
A demonstration is worth a thousand words – take a look!
Well – there you have it. The converged infrastructure market leader (by a long shot – with north of $2B of annual revenues growing at gangbusters rates) has a CI portfolio. Coming soon to a theater near you will be the newest member of that VCE family – the VxRack 1000!
What do you think? Are we on the right/wrong track? While there’s surely some overlap between Vblock/VxBlock use cases and VxRack – do the use cases seem clear to you? As always – feedback welcome!
Chad, thanks for always supporting the EMC Proven Professional Program at EMC World. Day 1 has been awesome here in the Proven Center.
I challenge you to an EMC Proven Lip Sync Battle. Marco Polo 701. You name the time!!
https://youtu.be/tEk09UeEu6M
Posted by: Wendy | May 04, 2015 at 05:17 PM
Hi Chad - looks like one of your sentences is cut short.
"BTW – note that today, you cannot deploy the EHC Solution stack on a VSPEX Blue appliance. We’"
Posted by: Tony Watson | May 04, 2015 at 06:19 PM
@Wendy - you're on :-)
@Tony - thanks for the catch, fixed!
Posted by: Chad Sakac | May 13, 2015 at 04:12 PM