What’s the VNXe scoop in a nutshell?
- VNXe was code-named “Neo”. IMO, an apt name (read on into the details to see why)
- VNXe fits all the goodness of EMC’s Unified VNX family into a smaller, even simpler, even more affordable package.
- When we say affordable, we’re not kidding. The MSRP is $9499. For this class of storage (feature-rich, unified block and NAS, and expandable to 96 and 120 SAS spindles), that’s a GREAT deal – 20% less expensive than its competitors, some of which are block only, some unified (we compare the smaller VNXe 3100 to the HP P2000i, IBM DS3512i, Dell MD3200i, and NetApp FAS2020)
- VNXe uses a completely homegrown EMC innovation (C4LX and CSX) to virtualize and encapsulate whole kernels and multiple other high-performance storage services into a tight, integrated package. We call the process of preparing the storage services for integration “refactoring”. This is, hands down, one of the coolest things I’ve seen. Hint – there’s a lot more getting ready to be “refactored” and integrated into the VNXe and VNX family over time.
- It shares features/functions with the larger VNX family (it is running the same software, “refactored”), including Unisphere, which is drawing a lot of very positive feedback, including a “one click” online support experience.
- We made Unisphere on the VNXe even easier, and integrated it with a set of storage use cases very common for SMB customers – Microsoft Exchange, VMware, Hyper-V.
- As with the VNX, software bundles for things like snapshots, replication and others have been simplified into 3 software packs.
- You can even leverage EMC Replication Manager for application-integrated local and remote replicas.
What’s important for partners (on top of the list above, which customers care about, partners have other things they find important too):
- get trained and be able to sell and service in 3 days – for free.
- best of breed program and overall profitability.
- dramatically simplified “ease of doing business” with all the associated systems and process stuff that implies.
In my personal opinion, VNXe is the coolest of the cool things we’re launching today… sometimes it’s cool to be small :-)
Read on for more, including demos and details…
Ok – here’s more detail on the hardware…
There are two members of the VNXe family – the VNXe 3100 and the VNXe 3300.
The VNXe 3100:
The “skinned” look (the VNXe 3300 looks very similar, but is 3u):
The “naked” array:
A zoom in on the storage processor of the VNXe 3100:
The VNXe 3300:
The “naked” array:
A zoom in on the storage processor of the VNXe 3300:
The spec table:
A few noticeable takeaways:
- Each uses very current Intel architectures. The VNXe 3300 has 8 Westmere cores, and 24GB of memory – that’s a solid chunk of horsepower for something so small.
- They share the modular port expandability as their bigger siblings in the VNX family – the VNXe 3300 will have the full complement of Ultraflex SLICs that EMC customers are used to (10GbE today, FCoE and FC over time).
- They are remarkably scalable for something in this band – up to 120 spindles on the VNXe 3300 – with 2TB drives, that’s up to 240TB. Wow.
- While they use 3.5” enclosures out of the chute, you will eventually be able to use the same 2.5” 25-drive disk enclosures that are on the VNX.
- They are active-active.
- They vault to flash. No more vault drives (there’s a conceptual sketch of this right after this list).
- You can self-service everything.
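Since “vault to flash” may be unfamiliar, here’s a minimal conceptual sketch of the idea in C. To be clear, this is an illustration only, NOT EMC’s actual implementation – every name in it is hypothetical, and the 2GB write-cache figure comes from the engineering note quoted later in the comments. On power fail, the DRAM write cache is persisted to an SSD on the storage processor instead of being destaged to dedicated vault disks, and is replayed on the next boot:

    /* Conceptual sketch only -- not EMC's actual code; all names here are
     * hypothetical. Illustrates "vault to flash": on power fail, the DRAM
     * write cache is persisted to an SSD on the storage processor instead
     * of to dedicated vault disks, and replayed on reboot. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Persist the DRAM write cache image to the SP's internal SSD. */
    static int vault_to_flash(const void *cache, size_t len, const char *ssd_path)
    {
        FILE *ssd = fopen(ssd_path, "wb");
        if (!ssd)
            return -1;
        size_t written = fwrite(cache, 1, len, ssd);
        fclose(ssd);
        return written == len ? 0 : -1;
    }

    int main(void)
    {
        /* Stand-in for the SP's non-volatile write cache (up to 2GB per SP
         * per the engineering note quoted in the comments; tiny here). */
        size_t len = 4096;
        void *cache = malloc(len);
        memset(cache, 0xAB, len);

        /* On a power-fail event, dump the cache image to flash; on the
         * next boot it can be replayed, so in-flight writes are not lost. */
        if (vault_to_flash(cache, len, "vault.img") == 0)
            puts("write cache persisted to flash");

        free(cache);
        return 0;
    }

The practical upshot is what the bullet above says: no spindles need to be reserved as a vault – which, per the comment thread below, is also why only a small slice of the first disks remains reserved, for metadata and logs.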
Hey – how are we doing this in a simple 2-brain package? All the competitors say EMC doesn’t do this :-)
But the big deal is ABSOLUTELY the software.
I’ve hinted in the past about the C4 codebase and CSX projects, and Steve Todd has a great blog post on it here.
I agree with Steve – this could very well be the biggest internal innovation in EMC in years. There are 40 patents behind it.
Why? The answer is these two pictures:
For every superpower, there is a weakness. For every superman, there is a kryptonite.
Or, put in less “DC Comics” style – everyone’s greatest strength is simultaneously their greatest weakness. I truly believe this of me personally, of people in general, of technologies, of companies.
In EMC’s case, our strength (to me) is: breadth of portfolio. If it’s storage, and you need it, we’ve got it. If we don’t have it, but our customers need it – we will build it or buy it. We’ve built a culture that absorbs acquisitions well. Not perfect, but well.
And, in some situations (again, IMO), this is also our biggest weakness: breadth of portfolio. Where a solution could be done “good enough” in a simpler way, sometimes we’ve unnecessarily added more to a customer solution in terms of hardware, in terms of different “things”.
So – how do you turn a weakness into a strength? You don’t fight it – you leverage it.
Now – there are two ways to tackle this problem.
The first way is to try to integrate everything into one giant “new” monolithic kernel and software stack. The downside of this is that it doesn’t work. What happens down this path is:
- you lose an incredible asset – time-tested, proven and battle-hardened software stacks. In storage land, this is really important. It takes around 5 “non-compressible” years to make a solid block stack. It takes about 7 “non-compressible” years to make a solid filesystem/NAS stack. The time is for development, but also for beating the stack into an incredibly hardened state through real-world deployments. Remember, this “maturing” is really important – when you have a “bad day” in storage land, it’s a REALLY bad day. As an example, while Isilon is new to EMC, they have been doing it for 10 years.
- Even if hypothetically you pull off the one-time effort, you’re still hosed. Winning means losing. Why? At that point, you are completely gated by your ability to update and maintain the monolithic kernel and software stack. You have also limited your ability to innovate to ONLY organic innovation – because any development going on outside what you do is not in your monolithic stack. If you want to leverage that external innovation, you need a multi-year effort to merge it in. In other words, you’re making the unbelievably arrogant assumption that you can innovate faster than the entire academic and startup community. Inevitably your rate of innovation slows, and you get passed. It’s not the big that eat the small, it’s the fast that eat the slow.
Does that first way sound familiar?
The second way is the route EMC took. You build a platform that can accept “containerized” (abstracted, encapsulated) functional chunks from all sorts of sources. You make it fast, very lightweight, very scalable. You make it able to leverage the massively multicore x86 trend. That’s what the C4 codebase does.
Now, the question is – are we saying this is running a “virtual storage appliance”? Well – in many ways, yes. But there are problems with the “just run a full VM” approach (and several advantages – so ideally you want that option too!):
- The first is that many storage stacks don’t perform well in VMs for various reasons. There’s a reason most VSAs are limited to 1000-2000 IOps (about 10 15K RPM spindles, or a fraction of a single SSD). That’s not good enough if your goal is to be able to “refactor” just about anything – including what would now be considered “very high end” stuff.
- Hardware dependencies. In storage-stack land, hardware dependencies sometimes exist. In some cases a stack has little to no hardware dependency – think of Atmos as an example – it’ll run on anything. In other cases, stacks expect to see fans, power supplies, disk state and more. Note, I’m not talking about exotic, non-commodity hardware dependencies. EMC is trying to avoid those like the plague (we really don’t think you can out-innovate commodity x86 hardware).
- Sometimes you need/want the full OS stack, but in a lot of cases, you don’t.
So – you would build a thing that COULD run a series of types of containers. You would want to be able to scale out (many instances). Some containers would be lightweight system calls. Some would use an “EMC standard lightweight” wrapper (CSX). Some would use a full-blown encapsulated OS.
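To make that concrete, here’s a tiny hypothetical sketch in C. This is NOT the real C4/CSX API – every type and function name below is invented for illustration. The idea is that the platform exposes one module interface, and functional chunks of very different weights – lightweight system calls, CSX-wrapped stacks, fully encapsulated OSes – all register and get dispatched the same way:

    /* Hypothetical illustration of a container-hosting platform -- not the
     * actual C4/CSX interface. The platform dispatches to registered
     * modules without knowing what kernel or stack lives inside each one. */
    #include <stdio.h>

    typedef enum {
        CONTAINER_SYSCALL, /* lightweight system calls                 */
        CONTAINER_CSX,     /* "EMC standard lightweight" wrapper (CSX) */
        CONTAINER_FULL_OS  /* full-blown encapsulated OS               */
    } container_kind;

    typedef struct {
        const char    *name;
        container_kind kind;
        int  (*start)(void);       /* bring the service up   */
        void (*submit_io)(int op); /* hand the service work  */
    } container;

    /* Two illustrative "refactored" stacks, stubbed out. */
    static int  block_start(void) { puts("block stack up"); return 0; }
    static void block_io(int op)  { printf("block op %d\n", op); }
    static int  nas_start(void)   { puts("NAS stack up"); return 0; }
    static void nas_io(int op)    { printf("NAS op %d\n", op); }

    int main(void)
    {
        container modules[] = {
            { "refactored-block", CONTAINER_CSX, block_start, block_io },
            { "refactored-nas",   CONTAINER_CSX, nas_start,   nas_io   },
        };
        /* Scale-out would just mean more instances of these containers,
         * spread across the many x86 cores available to the platform. */
        for (size_t i = 0; i < sizeof modules / sizeof modules[0]; i++) {
            modules[i].start();
            modules[i].submit_io(42);
        }
        return 0;
    }

The design choice that matters is the single dispatch interface: because the platform never cares what lives inside a container, a time-tested stack can be dropped in without a multi-year kernel merge – which is exactly the argument above.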
In the end, this is a MASSIVE innovation which has let us accelerate internal EMC innovation, and more importantly share innovations across the company, and absorb innovations from all sources much faster.
This means that the VNXe – literally a single active-active system that can be as small as 2U – has the mature, refactored FLARE, DART, and other EMC bits that are in the bigger VNX family members. Common look, feel, function.
Don’t misunderstand – they aren’t encapsulated as full OSes in VMs, but as CSX containers. It’s extremely efficient, and very high performance. It can leverage all the power we can fit into a tiny form factor.
Does this sound like something scary and new? Well, here’s another amazing fact. C4 has been shipping in EMC products for longer than you would believe. The first PRODUCTION GA code shipped in Q4 2008. Ever since then, core EMC platforms have been using C4 to introduce more and more functionality. VNXe – aka “Neo” – is, however, the first time it’s being applied to something approaching its true capabilities… Not just adding new features in a modular, containerized way, but as the heart and soul of the platform itself.
Another interesting factoid – in the Nordics, we have been testing the core ideas with customers and partners. The NX3e was not VNXe, but a focused “beta”. We were doing new things, and it’s always a good idea to check your ideas – and you can’t “kinda” check :-) We wanted to test out not only core principles in the product software design, but most importantly get partner feedback on the business processes and ideas. As VNXe is a product where the partners, the channel, play a huge role – that thinking has been built into everything.
We wanted to make doing business with EMC easier, and to make the VNXe represent not only a one-time opportunity for the customer and the partner, but the basis for ongoing partner/customer direct interaction.
Now – in other markets, delivering unified storage capabilities using extra hardware bits is not a big deal. As you get bigger, the extra hardware in a VNX (more filesystem blades, more storage pool blades) is “A-OK”, because those customers ask for the following: simple, efficient, performant solutions at very high scale. The pressures on cost-of-goods-sold, physical footprint and other things mean that VNXe is the right place for the C4 code base to see the full light of day first.
Now, all of this C4 codebase technology is mind-blowingly cool in lots of ways… but of course – we would need to have a unified management model to deal with this modular code model…
…and that’s the bigger story behind Unisphere.
Unisphere makes it much simpler, much easier for EMC to innovate down a modular management model and integrate all sorts of technology, quickly and easily. Common look, feel, and function. We’ve done it with RecoverPoint, Atmos, and others… and we’re accelerating. More important than our own navel-gazing: customer feedback has been overwhelmingly positive.
But anyone thinking we’re stopping here is smoking something. And to anyone who says “EMC doesn’t innovate”, I say – “how do you like dem apples?” :-)
In all seriousness, at EMC, we innovate organically AND we innovate inorganically. We don’t view those as two things, but as a single thing – leveraging innovation. The C4 codebase and Unisphere help us do that. VNXe is a great example of what that means for customers.
A great quote from a respected (sincerely) competitor is an interesting way to close out:
“That said, over the years I think it's safe to say that the success of NetApp has been tied directly to ONTAP. Let's face it, if it hasn't been ONTAP-based our sales force has struggled to sell it. ONTAP innovation continued in parallel with convergence and that puts a lot of pressure on management interfaces. NetApp has made some big bets on improving systems management tools and must execute. EMC has strong management tools and an assortment of storage technologies. Convergence for EMC will take them down a different path (it has to if they want to compete in a timely fashion) but they must execute on integrating/converging what's behind the curtain. NetApp needs to work on the curtain.
Ultimately, I don't think the customer cares all that much - it can be hamster wheels turning SAS drives - as long as when they turn the storage apps up that it's all easy to manage and it doesn't slow them down. Execution - their business execution - is also the issue. The question will really be which is easier to do: pull together a management framework or pull together various storage apps behind a management framework.”
Source: http://blogs.netapp.com/efficiency/2011/01/more-questions-than-answers-storage-virtualization.html (Mike Riley).
I agree with Mike – customers don’t care too much how we do it.
So – I’ve just shown how we’re both innovating around an integrated management model, and converging our core capabilities and stacks WITHOUT being tied to a single monolithic kernel approach. It lets us converge today where needed, and converge future organic and inorganic innovations.
Oh – this ain’t just talk. We just launched it.
It also highlights why I keep saying – competitors shouldn’t focus on the other guy too much – we should each focus on the customer, and focus on ourselves, our own execution. It’s so easy to be wrong about the other guy.
But anyone – any competitor – who says EMC is just putting a “management wrapper” around stuff has NO IDEA WHAT THEY ARE TALKING ABOUT. (BTW, I don’t think Mike was saying that – of course, I don’t want to put words in his mouth – it is something I hear sometimes from people who don’t know better.) It’s inevitable when you’re the leader – which EMC is – that people will be downers, I guess.
I’m unbelievably pumped about VNXe – for technical reasons, for business reasons, for innovation reasons.
Congrats to Doug Wood and the full team behind it, my hat’s off to you!
Here’s a set of demonstrations so you can see the incredible coolness:
- simple
- solid
- flexible
- tiny
- expandable
- and really, really inexpensive (less than $10K for real-world configs)
A whole new world for us, and our customers.
Some demonstrations…
Introduction (4:31) – A brief overview of the VNXe product family and a preview of the use cases that will be covered in the rest of the videos in the series.
Download in high-rez iPad/iPhone MP4 or WMV
Installing the VNXe (9:41) – This video shows the steps required to take a VNXe platform from power-up to storage provisioning in 10 minutes, and showcases the VNXe’s ease-of-installation features.
Download in high-rez iPad/iPhone MP4 or WMV
VMware Integration and Provisioning (11:50) – This demonstration explores the integration between the VNXe and VMware and includes provisioning of VMware NFS and VMFS datastores as well as the migration of live virtual machines.
Download in high-rez iPad/iPhone MP4 or WMV
Windows File Sharing (9:43) – Consolidating your Windows file shares onto a highly available platform like the VNXe is easier than you think. This video demonstrates Windows shared folder provisioning and Active Directory integration with the VNXe.
Download in high-rez iPad/iPhone MP4 or WMV
Provisioning Microsoft Exchange (10:10) – The VNXe makes provisioning storage for your Exchange environment quick and easy. This video shows all the steps required to provision storage for Exchange and have it running in 10 minutes.
Download in high-rez iPad/iPhone MP4 or WMV
Data Protection Part 1: Snapshots and Local Replicas (9:57) – The VNXe includes a number of features to protect your critical data. In the first of a two-part video series, we’ll explore data protection using snapshots and local replicas.
Download in high-rez iPad/iPhone MP4 or WMV
Data Protection Part 2: Application Consistency and Remote Replicas (12:27) – To streamline data recovery and maintain business continuity, application-consistent snapshots and replicas are critical, as is the ability to replicate your data to a remote site. This video shows you how with the VNXe.
Download in high-rez iPad/iPhone MP4 or WMV
Unisphere Guided Tour (11:46) – The Unisphere management interface makes the management of EMC’s unified storage platform simple and intuitive for application and server administrators even if they have never used shared storage in the past. In this video we’ll take a guided tour of the interface and see how the VNXe is actively integrated with the EMC Online Ecosystem.
Download in high-rez iPad/iPhone MP4 or WMV
So – what do you think? Cool, or are we smoking something? :-) Courteous comments always welcome!
When trying to play "Installing the VNXe", YouTube complains it is a private video and that I need to be your friend to play it.
Can you fix it?
Posted by: Dejan | January 18, 2011 at 08:59 AM
Full-res vid download links not working for me.
Posted by: Slowrider5 | January 18, 2011 at 03:37 PM
Hey Chad - this just isn't fair. I've made a damn good living by analyzing IO requirements and configuring RAID Groups, MetaLUNs, zones and storage groups etc on CLARiiONs for over a decade. Now along comes the VNX-series and it's all GUI wizards with automated and optimized configs thrown in that any Joe-Blow can use. Hell, EMC's even removed the beloved Vault disk group that's caused our customers so many capacity and performance issues that it's been like a guaranteed pay check for me and my colleagues. To top it off, EMC's changed to SAS disk drives - how are we going to justify charging mega-bucks for disks now? Next EMC will start to throw in native de-duplication and VTL capabilities - where will it all stop? It's just not fair on storage architects and it's not fair on EMC's competitors :-)
Posted by: Eugene Sergejew | January 18, 2011 at 03:54 PM
Cannot download any of videos, since the ftp location does not seem to exist. Is there an alternate download link? Thanks!
Posted by: Shiv | January 19, 2011 at 01:24 AM
Chad
Awesome job - mega post for mega launch! BTW the Total Protection Pack Demos are live on the RecoverPoint YouTube Channel: Here: http://www.youtube.com/watch?v=BLoppoGJGCc and
Here: http://www.youtube.com/watch?v=eWJd5FqFg5o
Rick
Posted by: Rick Walsworth | January 19, 2011 at 01:50 AM
Hi Chad,
I agree the VNXe is really exciting and is clearly where all the innovation has gone.
What I am a little surprised about is that we have not seen this same level of innovation in the VNX series.
The VNX series does not appear to be any more integrated than the Celerra, in fact I understand that you can buy a VNX for block only without the Data Movers (i.e. exactly the same as a CLARiiON).
I was hoping that with the VNX we would have seen converged hardware with Data Movers and Control Stations not requiring separate physical servers and completely independent network connections required for NAS and block.
I was also hoping that RecoverPoint would now be a VM that would run on C4 so you would not need any external hardware.
Surely in this day and age we do not need a dedicated server for the control station, FC connected Data Movers and completely separate network connections for NAS and block.
Maybe this makes sense with the 5700 and 7500 models, but I was hoping that the 5100 and 5300 would have the new architecture.
One problem is that the VNXe series may offer all the scalability that a customer needs, but it does not offer all the features (i.e. support for Flash drives, FAST VP/Cache, RecoverPoint, MirrorView/S, and VAAI support - I assume iSCSI is delivered from DART rather than FLARE).
You are hinting above that this is all coming, but if you can provide any more insight it would be much appreciated.
Many thanks
Mark
EMC Partner
Posted by: Mark Burgess | January 19, 2011 at 08:46 AM
Great write up, thx for posting
Posted by: Troy | January 19, 2011 at 04:12 PM
Hi Chad, Dimitris from NetApp here.
I have to admit I'm more interested in the VNXe than I am in the VNX.
VNXe seems to be the one that has a somewhat different way of running DART and FLARE - however, every single EMC blog is (to me) vague about how all this is done (I read all of Steve Todd’s stuff, yours, Mark’s and Chuck’s). Something about CSX and abstraction - however, it’s not clear whether:
1. VNXe still runs a Windows kernel for the FLARE bits
2. VNXe still runs DART and its kernel
3. VNXe still runs the control station on top of another Linux kernel.
If you can confirm any of those it would be grand…
From the writing it appears (or at least that would be the cooler explanation) that EMC is trying to say they re-wrote FLARE for the VNXe so instead of it being a set of WDM drivers running on top of Win XP embedded, it’s now a CSX package running on top of Linux, kinda like Java or Flash can run on various architectures as long as the right hooks are there. Which makes for very easy portability - but a VM would be just as portable.
Is this at all right?
DART and the control station would be easier since they’re Linux-based anyway…
If there's still Windows lurking in there that's OK, it's been working with FLARE for the longest time.
My question would be, if this architecture is cleaner, better, faster, easier, more scalable and less wasteful - why is VNX proper not based on it?
Is VNXe another experiment like the NX3e, or did the code for stuff like FAST and FC not make it to the FLARE respin?
Truly curious... I do like the GUI BTW. And the vault to flash.
Thx
D
Posted by: Dimitris Krekoukias | January 19, 2011 at 06:15 PM
@Dimitris - thanks for the comment.
Obviously as a competitor, I'm not going to tell you EVERYTHING about how we do it :-)
But..
1) There is no Windows kernel in VNXe.
2) There are no DART and Control Station kernels in VNXe.
3) the only kernel is the C4LX kernel.
4) FLARE and DART have been fully encapsulated in CSX modules.
5) the decision to not support FC (at least initially) was purely because market feedback was that in that segment, it was a lower priority than NAS, iSCSI and SAS connectivity. Count on FC/FCoE modules to arrive soon.
6) C4 is actually ALSO used in VNX, for a variety of modular functions.
Part of your question is "why don't we use the same C4 deployment model for functional capabilities in VNX and VNXe?"
I'd respond "why do you need to pick 7-mode or cluster mode up front when you install a FAS platform?" The answer, of course, is that the merger is far from complete.
In the post (and the VNX post) I discuss that, and NetApp's own Mike Riley says it well – customers simply don't ask "how does it work" (his words were "Ultimately, I don't think the customer cares all that much - it can be hamster wheels turning SAS drives - as long as when they turn the storage apps up that it's all easy to manage and it doesn't slow them down.")
Customers ask for capabilities (from EMC, and from EMC competitors). The need to fit into small footprints with small COGS (cost of goods sold) is very pressing in the market segments that the VNXe (and its competitors) serve. Hence, the use of fully encapsulated functionality there first.
Each vendor (EMC, NetApp and others) is on a journey of its own (as our customers are). In the same way that ONTAP 8 cluster mode is used by very few customers, it's clearly the center of much of NetApp's R&D, engineering, and long-term strategy. Yet the VAST majority use ONTAP 8 in 7-mode. That's not bad, that's the mark of transitions.
It's analogous to what's going on with us. C4 is used across the VNX family, in slightly different ways. Just like in 7-mode and cluster mode, there are big parts that are shared.
C4 is a huge part of ONE aspect of EMC's storage business (the one that competes with NetApp, so I'm not surprised you're less interested in the others). VMAX, Atmos, Isilon - those all serve needs that cannot be met today by VNX/VNXe or NetApp.
As I mentioned in the post, NX3e was an intentional, small market, full test of the end-to-end experience (including the aspects for the partner channel - which is really critical). It was, in effect, a "beta the tech, and beta the business". Partner and customer feedback was good, and invaluable, and was a big part of the VNXe design (both technology and business).
Thanks for the feedback on the Unisphere UI and the other improvements. Will be an exciting 2011 for us, as well as our competitors.
Posted by: Chad Sakac | January 19, 2011 at 07:21 PM
Very educational post, thanks Chad. It was great to finally read some more information about the tightly-guarded CSX and C4 codebase. I actually speculated a bit about CSX in a blog Tuesday morning. Looking forward to reading more as it is released.
http://www.integrateddatastorage.com/blog/2011/01/18/emc-announces-vnx/
Posted by: Justin Mescher | January 19, 2011 at 10:26 PM
Hi Chad,
What is the timing for supporting VMware Site Recovery Manager and VAAI on the VNXe series?
I think these are important as I am guessing the competition has them on their low-end boxes.
Also is there a strategy regarding unifying the iSCSI stacks - I assume VNXe uses DART and VNX uses FLARE (which is why the VNXe does not support VAAI, but I guess this creates problems around replication in VNXe)?
I thought EMC had stated that they are no longer enhancing the DART iSCSI stack.
I have now had a chance to review all the videos - brilliant UI!!!
The only thing that really stands out is that Replication Manager is not integrated into Unisphere and the interface looks very complex/enterprise - what is the timing on Replication Manager getting the brilliant VNXe UI make-over?
Many thanks
Mark
EMC Partner
Posted by: Mark Burgess | January 22, 2011 at 12:37 PM
Hi Chad,
"I agree the VNXe is really exciting and is clearly where all the innovation has gone.
What I am a little surprised about is that we have not seen this same level of innovation in the VNX series.
The VNX series does not appear to be any more integrated than the Celerra, in fact I understand that you can buy a VNX for block only without the Data Movers (i.e. exactly the same as a CLARiiON).
I was hoping that with the VNX we would have seen converged hardware with Data Movers and Control Stations not requiring separate physical servers and completely independent network connections required for NAS and block.
I was also hoping that RecoverPoint would now be a VM that would run on C4 so you would not need any external hardware.
Surely in this day and age we do not need a dedicated server for the control station, FC connected Data Movers and completely separate network connections for NAS and block.
Maybe this makes sense with the 5700 and 7500 models, but I was hoping that the 5100 and 5300 would have the new architecture.
One problem is that the VNXe series may offer all the scalability that a customer needs, but it does not offer all the features (i.e. support for Flash drives, FAST VP/Cache, RecoverPoint, MirrorView/S, and VAAI support - I assume iSCSI is delivered from DART rather than FLARE).
You are hinting above that this is all coming, but if you can provide any more insight it would be much appreciated."
as an EMC customer currently using an NS-120, I fully agree and wonder about the overlap between the VNXe 3300 and the VNX 5300.
From an architectural standpoint, the 3300 seems to be the way to go forward, and the changes are very welcome. And if the 3300 gets FAST (at least FAST Cache) and block via FLARE, there seems to be no reason for the 5300, except for customers who plan to scale out, and those would likely start with the 5500 or 5700 from the beginning...
Very curious how all this will develop over time.
Posted by: timkos | January 22, 2011 at 03:00 PM
@Mark - thanks for the comments. Will you be at PEX? I will be discussing this in more detail there.
To answer your question: No, VNXe and VNX both have complete block and NAS stacks, one using CSX to "scale via cores", one using non-encapsulated stacks to "scale via blades", and both being managed via Unisphere. The code stacks are NOT in TOTAL sync yet (hence VAAI, FAST not on VNXe yet), but soon they will be in sync.
You can expect to see:
- VAAI on VNXe soon enough.
- SRM support soon (linked to an SRM release).
- VSI support VERY soon.
And YES - Replication Manager is getting a Unisphere makeover via a module.
Long and short – you're seeing the beginning of the payoffs from the last 2 years of engineering around simplification of the portfolio. We still have a long way to go, but the dividends are starting to pay off.
@TimKos - the most important thing that drives the engineering priorities is what our customers tell us. The most important drivers in the VNX band are simplicity, rich functionality, and deep VMware and other application integration. Availability and performance are right at the top of the list. In the VNX band, getting the physical form factor down to smaller sizes is very, very LOW on the priority list. So, using the full C4LX and CSX module approach isn't the top priority.
Beyond the flack thrown by competitors, most customers actually don't care whether NAS and block stacks are in separate boxes connected by cables, or in a single box, connected via CSX modules.
It does mean that EMC has a higher cost for components, but we eat that. VNX is priced competitively (often much lower) than its competition.
Incredible innovation has occurred in both the VNX and VNXe. Think of it this way:
1) VNX is where the R&D about further improving our core block and NAS functionality occurs first. FAST for NAS is just one of MANY examples. There are also massive improvements in performance across the board (thin, ALUA, dedupe, IOps and MBps) that aren't just a function of hardware, but of being able to leverage the new Westmere + dense memory + PCIe Gen2 + SAS designs. VNX uses the C4 codebase also to bring in new functionality fast. You will tend to see new functionality (VAAIv2 and many other things) there first (but not only).
2) VNXe is where R&D about further encapsulating our core stacks and merging them occurs – because in that segment, extreme simplicity and aggressive prices rule the day (and ultra-fancy features are important, but lower on the priority list). You'll see things like other EMC functionality merging in there first (but not only), though in general new functionality will appear first on the bigger VNX family. As I mentioned to Mark, VNXe has the encapsulated FLARE and DART stacks (but not kernels – it is one merged kernel).
Of course, we are working hard to bring the release trains for everything into lockstep. VNXe 3300 customers can expect, over time, to see most, if not all, of the VNX features become available to them.
Thanks - and glad for your interest!
Posted by: Chad Sakac | January 23, 2011 at 10:56 AM
Hi Chad,
thanks for your answer. Just a quick question regarding configuration of the controllers on the VNXe:
When running controllers active-active, can I assign block or file servers to ethernet ports on both controllers, to have high throughput and redundancy when one server fails? Or do I need to run an active-passive configuration in order to achieve redundancy (like on Celerra)?
Any other possibilities?
Thanks.
Posted by: timkos | January 23, 2011 at 03:37 PM
@timkos - you can assign block and file to either brain, and have aggregated paths as well as failover paths for when something fails.
Though obviously, after one storage processor has failed, a failure of the remaining storage processor would mean the storage objects (filesystems/LUNs) would not be available. Good news is, replacing a storage processor takes about 30 seconds :-)
Posted by: Chad Sakac | January 23, 2011 at 04:43 PM
Hi Chad, what about the CLARiiON CX series – any news about a new product?
Posted by: Yuri Semenikhin | January 24, 2011 at 04:30 AM
Hi Chad,
You said "most customers actually don't care whether NAS and block stacks are in one box connected by cables or in a single box, connected via CSX modules.
It does mean that EMC has a higher cost for components, but we eat that. VNX is priced competitively (often much lower) than its competition."
Actually (and of course I'm obviously biased)...
It depends. For example, if you partition most of your space and give it to the Celerra blades, use all that space up, then later on free a lot of it, and then try to give some space back to FC - you may find it won't work, since the Celerra owns all the disk and best practices dictate you don't thin out the Celerra LUNs on the CX side. To me, that's a big one.
So, that's a very real case (and not an edge case) where the separation between the products can become problematic.
Another case is the integration with Recoverpoint and the space needed for it, and the way block and NAS is replicated with it.
There are some cases where the separation is beneficial.
In general though, people need to be aware of the pros and cons with each approach. The underlying architecture does matter, extremely so in some cases.
Thx
D
Posted by: Dimitris Krekoukias | January 24, 2011 at 02:48 PM
Hi Chad,
You mentioned that the VNXe has active/active SPs, whereas the VNX has active/active SPs and N+1 active/passive Data Movers.
You suggest that with the VNXe, if an SP fails, the LUNs/file systems will not fail over to the other SP and you will need to replace the faulty SP - is this correct?
I also see that the vault space is now held on internal flash storage, but I still understand that capacity is reserved on the first 4/5 disks for de-staging the cache as per previous models.
Can you confirm this, and how much space is reserved (or do you get the full usable capacity of all drives)?
I have been looking for the VNX Capacity Calculator on Powerlink, but it does not appear to be available yet?
Many thanks
Mark
Posted by: Mark Burgess | January 27, 2011 at 10:29 AM
@Mark - on the VNXe, if an SP faults, everything supported by the failed SP is supported by the remaining SP. You replace the SP (which is super simple), and then the workload comes back. It's that simple.
I triple checked with engineering on the use of the SSD and disk. I won't paraphrase, I'll quote:
"VNXe does not use the backend drives for cache vaulting anymore. We have an SSD on the SP that uses up to 2GB of DRAM memory that gets on persisted to this device on a power fail. In a dual SP environment each SP has this memory persistent feature. The system configured non-volatile write cache is contained in this persisted DRAM area.
We use the 48GB reserved space on the back end disk drives for system metadata, log information, SW recovery images, and other uses dictated by the components we integrate. It is not used for the vault information any more. The SSD is used for vaulting, the OS root device, and metadata storage for the VNXe operating SW."
The updated VNX Capacity Calculator (which includes the whole family including the VNXe) will be up early in March.
Posted by: Chad Sakac | January 28, 2011 at 09:45 PM
What's the software? Does it still run on Windows 3.11 like Clariion? Do the drivers still suck more than DOS2.10? Does the management interface feel like a ca. 1997 Java Applet ... well is it still a damn Java Applet? Do they actually support Linux or just pretend to? If they do, do the init scripts respect the LSB? Do they still panic the Kernel during normal operation?
In other words, did EMC learn to do software in recent months or is it still something they outsource to semi-literate unpaid interns?
Posted by: nixar | February 02, 2011 at 08:38 AM
@Chad - Can you just clarify for me what the case is for VNX (not VNXe) regarding
1. Are the SPs on VNX running (real) Active/Active or is it still just ALUA?
2. Is it the same procedure for "vaulting" to flash as it is for the VNXe, or does VNX still use the vault drives like previous CX/NS versions?
I'm onto my local EMC guys all the time with detail questions (until it's published on PL), but I think there is still some confusion between functionality in VNX and VNXe.
Posted by: Johan | February 11, 2011 at 04:31 AM
@Nixar - I answered your specific questions earlier. There is no Windows kernel in VNXe. There is also no Windows 3.11. Customer feedback on Unisphere is very, very positive, and some of your other comments are, well – not really worth responding to.
@Johan - it is active/active, not ALUA. There is no vault destage like you have seen in the past. Cache destage is to Flash. All configuration is also on flash. There is a small amount of space used on some disks (noted earlier in the comment stream) that is used for metadata and other purposes. Thank you for your questions!
Posted by: Chad Sakac | February 11, 2011 at 02:05 PM
I disagree with the comment that customers do not care. Customers care about how it works and how many components are required. A singular OS is much different than multiple mashed together or integrated as you’d prefer. Interoperability is huge. As a storage customer of both EMC and NetApp, EMC still makes me nervous. I take away from this article that EMC may not go down a path of true unification within a singular OS, instead continue improving the molding of many together. Unisphere is the topic that consumes the majority of discussions I participate in with EMC. From a technical customer perspective, Unisphere is like a pretty paint job on a vehicle. It’s easy to repaint your vehicle. But what happens if you bend the frame? Bad news.. The framework, the core architecture is what’s most important. When you build a car, don’t you build it from the ground up?
Posted by: Wade | February 15, 2011 at 12:16 PM
Chad, if the vault drives are now only used for some logs and metadata, is it now OK to use these drives for heavy IO? Previously the best practice from EMC was to use the vault drives' available space for low-IO data stores, such as an ISO repository, which made this space quite useless.
Posted by: Nikolai | March 10, 2011 at 04:33 AM
Celerra, Centera, CX3, CX4, VNX7500, FMA, soon-to-be VNXe customer here. I like Unisphere NAS and SAN management in one console/array. Any idea on the future date for FMA integration (hardware and management)? Seems like a logical next step. Currently, I need FMA for NAS archive to Centera.
Posted by: Hman | April 18, 2011 at 01:21 PM
Chad, an answer to this question would be very helpful for us storage architects:
"Chad if the vault drives are now only used for some logs and meta data is it now OK to use these drives for heavy IO?"
Posted by: Nirvan | April 28, 2011 at 11:35 AM
Is SRM integration for the VNXe coming VERY soon??
Posted by: Shawn Cannon | May 18, 2011 at 08:15 AM
@shawn - the VNXe SRM support is taking longer than I would like. Partially due to focus on the upcoming SRM release, partially because the VNXe APIs are being revved. Current target is Nov 2011. Working to try to pull it in.
Posted by: Chad Sakac | June 13, 2011 at 02:39 PM
Hello Chad,
Good post on the new technologies. In the past I have seen customers with CX3/CX4 series nuke their vault packs either by accident or decommissioning.
This leaves the units unrecoverable, and there are a lot of non-working units out there which are left in the corner. It does not sound like much has changed with the VNX series; but with the VNXe (and the VNX for that matter), is there a way to recover from bare metal without having to replace the internal flash / vault drives completely (assuming your utility partition is gone)?
- Adrian
Posted by: Adrian Sender | July 29, 2011 at 12:25 PM
Nirvan, after speaking with EMC support, they have told me that using the vault drives for heavy I/O is still not recommended. You should put those disks in a pool by themselves and use them for low I/O.
Posted by: baj p. | August 09, 2011 at 05:15 PM
VNXe 3300 or IBM N3300? Let me know your thoughts and reasons why. Thanks!
Posted by: Lulu | August 15, 2011 at 01:06 PM
@Chad: Johan was asking about VNX, not VNXe, and despite what you/EMC want to sell, they are very different products inside:
- on VNX it is ALUA
- on VNX it is still Windows
- on VNX mgmt is Java-based, a look-alike of the Flash-based VNXe Unisphere
- on VNX the vault drives host the OS / the VNXe uses an internal SSD
- on VNXe a LUN is hosted by one SP only; like Celerra, in case of SP failure the whole data mover will (should) move to the other SP.
I know VNXe and VNX pretty well, being a customer for both: we got rid of the VNXe to upgrade to VNX after a few months. We were one of the first customers of this product. The change was due to:
- numerous SP failures due to memory-leak bugs and/or file system problems. And the data mover did not fail over like it should. This whole CSX environment just seems to not always be in sync.
- replication: think of your RTO/RPO. In a VMware environment, using iSCSI and RM, we could not get anything below a 3-hour replication frequency, for a few TB and very little change.
- globally poor performance: in terms of throughput, latency, IOps. We had disconnections of our VMs when replicating (during VM snapshots), saturating the array.
- FYI, about 40 users, 2 DB VMs, an Exchange and Windows environment, web servers, for a total of about 20 VMs, most of them with low IO.
After a long time troubleshooting, no one could explain clearly the bad experience we had. The OS upgrades, and also the recent memory upgrade, make it look to me like the VNXe was not a mature product when it came out.
It looks to me like the VNX is more a resurfaced CLARiiON with some new features, while the VNXe is a new unified architecture currently running something closer to Celerra technology, not in a stable way at the moment.
Posted by: martin | October 30, 2011 at 02:32 PM
Hi there!
Would it be OK for me to use the specs above in a report for my university?
I am researching the VNXe 3300!
Many Thanks!
Posted by: conor | November 28, 2011 at 06:03 PM
Hi Chad,
I have a question regarding shared folder and iSCSI access from a different subnet.
I tried to create a shared folder server and an iSCSI server, and used a different IP and subnet from the management IP. The trick is, when I try to ping the shared folder server IP and the iSCSI server IP, the requests time out.
But in your video, it's possible. Can you please teach me the trick? ^_^
- Kevin
EMC Reseller (Philippines)
Posted by: Kevin Raotraot | February 29, 2012 at 04:20 AM