
January 18, 2011

Comments


Dejan

When trying to play "Installing the VNXe", YouTube complains that it is a private video and that I need to be your friend to play it.

Can you fix it?

Slowrider5

Full-res vid download links not working for me.

Eugene Sergejew

Hey Chad - this just isn't fair. I've made a damn good living by analyzing IO requirements and configuring RAID Groups, MetaLUNs, zones and storage groups etc on CLARiiONs for over a decade. Now along comes the VNX-series and it's all GUI wizards with automated and optimized configs thrown in that any Joe-Blow can use. Hell, EMC's even removed the beloved Vault disk group that's caused our customers so many capacity and performance issues that it's been like a guaranteed pay check for me and my colleagues. To top it off, EMC's changed to SAS disk drives - how are we going to justify charging mega-bucks for disks now? Next EMC will start to throw in native de-duplication and VTL capabilities - where will it all stop? It's just not fair on storage architects and it's not fair on EMC's competitors :-)

Shiv

I cannot download any of the videos, since the FTP location does not seem to exist. Is there an alternate download link? Thanks!

Rick Walsworth

Chad
Awesome job - mega post for mega launch! BTW the Total Protection Pack Demos are live on the RecoverPoint YouTube Channel: Here: http://www.youtube.com/watch?v=BLoppoGJGCc and
Here: http://www.youtube.com/watch?v=eWJd5FqFg5o

Rick

Mark Burgess

Hi Chad,

I agree the VNXe is really exciting and is clearly where all the innovation has gone.

What I am a little surprised about is that we have not seen this same level of innovation in the VNX series.

The VNX series does not appear to be any more integrated than the Celerra; in fact, I understand that you can buy a VNX for block only, without the Data Movers (i.e. exactly the same as a CLARiiON).

I was hoping that with the VNX we would have seen converged hardware, with Data Movers and Control Stations no longer requiring separate physical servers, and NAS and block no longer requiring completely independent network connections.

I was also hoping that RecoverPoint would now be a VM that would run on C4 so you would not need any external hardware.

Surely in this day and age we do not need a dedicated server for the control station, FC connected Data Movers and completely separate network connections for NAS and block.

Maybe this makes sense with the 5700 and 7500 models, but I was hoping that the 5100 and 5300 would have the new architecture.

One problem is that the VNXe series may offer all the scalability that a customer needs, but it does not offer all the features (i.e. support for Flash drives, FAST VP/Cache, RecoverPoint, MirrorView/S, and VAAI support - I assume iSCSI is delivered from DART rather than FLARE).

You are hinting above that this is all coming, but if you can provide any more insight it would be much appreciated.

Many thanks
Mark
EMC Partner

Troy

Great write up, thx for posting

Dimitris Krekoukias

Hi Chad, Dimitris from NetApp here.

I have to admit I'm more interested in the VNXe than I am in the VNX.

VNXe seems to be the one that has a somewhat different way of running DART and FLARE - however, every single EMC blog is (to me) vague about how all this is done (I read all of Steve Todd's stuff, yours, Mark's and Chuck's). Something about CSX and abstraction - but it's not clear whether:

1. VNXe still runs a Windows kernel for the FLARE bits
2. VNXe still runs DART and its kernel
3. VNXe still runs the control station on top of another Linux kernel.

If you can confirm any of those it would be grand…

From the writing it appears (or at least that would be the cooler explanation) that EMC is trying to say they re-wrote FLARE for the VNXe so instead of it being a set of WDM drivers running on top of Win XP embedded, it’s now a CSX package running on top of Linux, kinda like Java or Flash can run on various architectures as long as the right hooks are there. Which makes for very easy portability - but a VM would be just as portable.

Is this at all right?

DART and the control station would be easier since they’re Linux-based anyway…

If there's still Windows lurking in there that's OK, it's been working with FLARE for the longest time.

My question would be, if this architecture is cleaner, better, faster, easier, more scalable and less wasteful - why is VNX proper not based on it?

Is VNXe another experiment like the NX3e, or did the code for stuff like FAST and FC not make it to the FLARE respin?

Truly curious... I do like the GUI BTW. And the vault to flash.

Thx

D

Chad Sakac

@Dimitris - thanks for the comment.

Obviously, as a competitor, I'm not going to tell you EVERYTHING about how we do it :-)

But..

1) There is no Windows kernel in VNXe.
2) There is no DART or Control Station kernel in VNXe.
3) The only kernel is the C4LX kernel.
4) FLARE and DART have been fully encapsulated in CSX modules (a rough illustration of the idea is sketched below).
5) The decision to not support FC (at least initially) was purely because market feedback was that in that segment, it was a lower priority than NAS, iSCSI and SAS connectivity. Count on FC/FCoE modules to arrive soon.
6) C4 is actually ALSO used in VNX, for a variety of modular functions.
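
(For readers unfamiliar with the idea of "encapsulated stacks on a single kernel", here is a purely illustrative C sketch. The module interface, names and stacks are invented for this example and are not EMC's actual CSX/C4LX APIs - it only shows the general pattern of multiple storage personalities packaged behind one interface and hosted by one environment.)

```c
/*
 * Illustrative sketch only: two storage "personalities" packaged behind one
 * module interface and hosted by a single environment, loosely analogous to
 * the encapsulated FLARE/DART-as-CSX-modules-on-one-C4LX-kernel description
 * above. None of these names are EMC's actual APIs.
 */
#include <stdio.h>

/* Common interface every encapsulated stack implements. */
struct stack_module {
    const char *name;
    void (*init)(void);
    void (*serve_io)(const char *request);
};

/* A "block" stack packaged as a module. */
static void block_init(void)              { printf("[block] stack initialized\n"); }
static void block_io(const char *request) { printf("[block] handling: %s\n", request); }

/* A "NAS" stack packaged as a module. */
static void nas_init(void)                { printf("[nas] stack initialized\n"); }
static void nas_io(const char *request)   { printf("[nas] handling: %s\n", request); }

int main(void)
{
    /* One host environment (one kernel) loading several encapsulated stacks. */
    struct stack_module modules[] = {
        { "block", block_init, block_io },
        { "nas",   nas_init,   nas_io   },
    };

    for (size_t i = 0; i < sizeof(modules) / sizeof(modules[0]); i++)
        modules[i].init();

    modules[0].serve_io("iSCSI write to LUN 0");
    modules[1].serve_io("CIFS read from share");
    return 0;
}
```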

Part of your question is "why don't we use the same C4 deployment model for functional capabilities in VNX and VNXe?"

I'd respond, "why do you need to pick 7-mode or cluster-mode up front when you install a FAS platform?" The answer, of course, is that the merger is far from complete.

In the post (and the VNX post) I discuss that, and NetApp's own Mike Riley says it well - customers simply don't ask "how does it work?" (his words were "Ultimately, I don't think the customer cares all that much - it can be hamster wheels turning SAS drives - as long as when they turn the storage apps up that it's all easy to manage and it doesn't slow them down.")

Customers ask for capabilities (from EMC, and from EMC competitors). The need to fit into small footprints and small COGS (cost of goods sold) is very pressing in the market segments that the VNXe (and its competitors) serve. Hence, the use of fully encapsulated functionality there first.

Each vendor (EMC, NetApp and others) is on a journey of its own (as our customers are). While ONTAP 8 cluster-mode is used by very few customers, it's clearly the center of much of NetApp's R&D, engineering, and long-term strategy. Yet the VAST majority use ONTAP 8 in 7-mode. That's not bad; that's the mark of transitions.

It's analogous to what's going on with us. C4 is used across the VNX family, in slightly different ways. Just like in 7-mode and cluster mode, there are big parts that are shared.

C4 is a huge part of ONE aspect of EMC's storage business (the one that competes with NetApp, so I'm not surprised you're less interested in the others). VMAX, Atmos, Isilon - those all serve needs that cannot be met today by VNX/VNXe or NetApp.

As I mentioned in the post, NX3e was an intentional, small market, full test of the end-to-end experience (including the aspects for the partner channel - which is really critical). It was, in effect, a "beta the tech, and beta the business". Partner and customer feedback was good, and invaluable, and was a big part of the VNXe design (both technology and business).

Thanks for the feedback on the Unisphere UI and the other improvements. Will be an exciting 2011 for us, as well as our competitors.

Justin Mescher

Very educational post, thanks Chad. It was great to finally read some more information about the tightly-guarded CSX and C4 codebase. I actually speculated a bit about CSX in a blog Tuesday morning. Looking forward to reading more as it is released.

http://www.integrateddatastorage.com/blog/2011/01/18/emc-announces-vnx/

Mark Burgess

Hi Chad,

What is the timing for supporting VMware Site Recovery Manager and VAAI on the VNXe series?

I think these are important as I am guessing the competition has them on their low-end boxes.

Also is there a strategy regarding unifying the iSCSI stacks - I assume VNXe uses DART and VNX uses FLARE (which is why the VNXe does not support VAAI, but I guess this creates problems around replication in VNXe)?

I thought EMC had stated that they are no longer enhancing the DART iSCSI stack.

I have now had a chance to review all the videos - brilliant UI!!!

The only thing that really stands out is that Replication Manager is not integrated into Unisphere and the interface looks very complex/enterprise - what is the timing on Replication Manager getting the brilliant VNXe UI make-over?

Many thanks
Mark
EMC Partner

timkos

Hi Chad,

"I agree the VNXe is really exciting and is clearly where all the innovation has gone.

What I am a little surprised about is that we have not seen this same level of innovation in the VNX series.

The VNX series does not appear to be any more integrated than the Celerra; in fact, I understand that you can buy a VNX for block only, without the Data Movers (i.e. exactly the same as a CLARiiON).

I was hoping that with the VNX we would have seen converged hardware, with Data Movers and Control Stations no longer requiring separate physical servers, and NAS and block no longer requiring completely independent network connections.

I was also hoping that RecoverPoint would now be a VM that would run on C4 so you would not need any external hardware.

Surely in this day and age we do not need a dedicated server for the control station, FC connected Data Movers and completely separate network connections for NAS and block.

Maybe this makes sense with the 5700 and 7500 models, but I was hoping that the 5100 and 5300 would have the new architecture.

One problem is that the VNXe series may offer all the scalability that a customer needs, but it does not offer all the features (i.e. support for Flash drives, FAST VP/Cache, RecoverPoint, MirrorView/S, and VAAI support - I assume iSCSI is delivered from DART rather than FLARE).

You are hinting above that this is all coming, but if you can provide any more insight it would be much appreciated."

As an EMC customer currently using an NS-120, I fully agree, and I wonder about the overlap between the VNXe 3300 and the VNX 5300.
From an architectural standpoint, the 3300 seems to be the way to go forward, and the changes are very welcome. And if the 3300 gets FAST (at least FAST Cache) and block via FLARE, there seems to be no reason for the 5300 except for customers who plan to scale out, and those would likely start with the 5500 or 5700 from the beginning...

Very curious how all this will develop over time.

Chad Sakac

@Mark - thanks for the comments. Will you be at PEX? I will be discussing this in more detail there.

To answer your question: No, VNXe and VNX both have complete block and NAS stacks - one using CSX to "scale via cores", the other using non-encapsulated stacks to "scale via blades" - and both are managed via Unisphere. The code stacks are not in TOTAL sync yet (hence VAAI and FAST not on VNXe yet), but soon they will be in sync.

You can expect to see:
- VAAI on VNXe soon enough.
- SRM support soon (linked to an SRM release).
- VSI support VERY soon.

And YES - Replication Manager is getting a Unisphere makeover via a module.

Long and short - you're seeing the beginning of the payoff from the last 2 years of engineering around simplification of the portfolio. We still have a long way to go, but the dividends are starting to pay off.

@TimKos - the most important thing that drives the engineering priorities is what our customers tell us. The most important drivers in the VNX band are simplicity, rich functionality, and deep VMware and other application integration. Availability and performance are right at the top of the list. In the VNX band, getting the physical form factor down into smaller sizes is very, very LOW on the priority list. So, using the full C4LX and CSX module approach isn't the top priority.

Beyond the flak thrown by competitors, most customers actually don't care whether the NAS and block stacks are in separate enclosures connected by cables, or in a single box connected via CSX modules.

It does mean that EMC has a higher cost for components, but we eat that. VNX is priced competitively with (often much lower than) its competition.

Incredible innovation has occurred in both the VNX and VNXe. Think of it this way:

1) VNX is where the R&D about further improving our core block and NAS functionality occurs first - FAST for NAS being just one of MANY examples. There are also massive improvements in performance across the board (thin, ALUA, dedupe, IOps and MBps) that aren't just a function of hardware, but of being able to leverage the new Westmere + dense memory + PCIe2 + SAS designs. VNX also uses the C4 codebase to bring in new functionality fast. You will tend to see new functionality (VAAIv2 and many other things) there first (but not only).

2) VNXe is where the R&D about further encapsulating our core stacks and merging them occurs - because in that segment, extreme simplicity and aggressive prices rule the day (and ultra-fancy features are important, but lower on the priority list). You'll see things like other EMC functionality merging in there first (but not only), but in general new functionality will appear first on the bigger VNX family. As I mentioned to Mark, VNXe has the encapsulated FLARE and DART stacks (but not their kernels - it is one merged kernel).

Of course, we are working hard to bring the release trains for everything into lockstep. VNXe 3300 customers can expect, over time, to see most, if not all, of the VNX features become available to them.

Thanks - and glad for your interest!

timkos

Hi Chad,

Thanks for your answer. Just a quick question regarding configuration of the controllers on the VNXe:
When running the controllers active-active, can I assign block or file servers to Ethernet ports on both controllers to have high throughput and redundancy when one server fails? Or do I need to run an active-passive configuration in order to achieve redundancy (like on Celerra)?
Any other possibilities?

Thanks.

Chad Sakac

@timkos - you can assign block and file to either brain, and have aggregated paths plus failover paths for when one fails.

Though obviously, after one storage processor has failed, a failure of the remaining storage processor would mean the storage devices (filesystems/LUNs) would not be available. Good news is, replacing a storage processor takes about 30 seconds :-)
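
(A minimal conceptual model of the dual-SP behaviour Chad describes - resources assigned to either SP, with the surviving SP taking over on a fault. The types and names here are invented for illustration and are not EMC code.)

```c
/*
 * Conceptual model only (not EMC code) of the dual-SP behaviour described
 * above: a LUN or file system is assigned to either storage processor, and
 * if that SP faults, the surviving SP serves it until the failed SP is
 * replaced.
 */
#include <stdio.h>

enum sp_id { SP_A, SP_B, SP_COUNT };

struct resource {
    const char *name;   /* LUN or file system */
    enum sp_id  owner;  /* SP it is assigned to */
};

static int sp_healthy[SP_COUNT] = { 1, 1 };

/* Which SP actually services a resource right now. */
static enum sp_id serving_sp(const struct resource *r)
{
    if (sp_healthy[r->owner])
        return r->owner;
    return r->owner == SP_A ? SP_B : SP_A;  /* owner is down: peer takes over */
}

int main(void)
{
    struct resource resources[] = {
        { "LUN_0 (iSCSI)",      SP_A },
        { "fs_home (CIFS/NFS)", SP_B },
    };

    sp_healthy[SP_A] = 0;  /* simulate an SP A fault */

    for (size_t i = 0; i < sizeof(resources) / sizeof(resources[0]); i++)
        printf("%-20s served by SP %c\n", resources[i].name,
               serving_sp(&resources[i]) == SP_A ? 'A' : 'B');
    return 0;
}
```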

Yuri Semenikhin

Hi Chad - what about the CLARiiON CX series? Any news about a new product?

Dimitris Krekoukias

Hi Chad,

You said "most customers actually don't care whether the NAS and block stacks are in separate enclosures connected by cables, or in a single box connected via CSX modules.

It does mean that EMC has a higher cost for components, but we eat that. VNX is priced competitively with (often much lower than) its competition."

Actually (and of course I'm obviously biased)...

It depends. For example, if you partition most of your space and give it to the Celerra blades, use all that space up, then later on free a lot of it, and then try to give some space back to FC - you may find it won't work, since the Celerra owns all the disk and best practices dictate you don't thin out the Celerra LUNs on the CX side. To me, that's a big one.
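
(An illustrative-only sketch of the one-way capacity flow described here, assuming space handed to the NAS side stays owned by it. All numbers and function names are invented for this example.)

```c
/*
 * Illustrative sketch only of the one-way capacity flow described above:
 * space handed to the NAS side stays owned by it, so freeing data inside
 * that side does not give usable capacity back to the block/FC side.
 */
#include <stdio.h>

static unsigned total_tb     = 100; /* whole array */
static unsigned nas_owned_tb = 0;   /* carved out and owned by the NAS side */
static unsigned nas_used_tb  = 0;   /* actually consumed inside the NAS side */

static void give_to_nas(unsigned tb)     { nas_owned_tb += tb; nas_used_tb += tb; }
static void free_inside_nas(unsigned tb) { nas_used_tb -= tb; } /* freed, but still NAS-owned */
static unsigned free_for_block(void)     { return total_tb - nas_owned_tb; }

int main(void)
{
    give_to_nas(80);
    printf("free for block after carving out NAS space: %u TB\n", free_for_block()); /* 20 */

    free_inside_nas(50); /* delete a lot of data on the NAS side */
    printf("free for block after freeing NAS data:      %u TB\n", free_for_block()); /* still 20 */
    return 0;
}
```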

So, that's a very real case (and not an edge case) where the separation between the products can become problematic.

Another case is the integration with RecoverPoint and the space needed for it, and the way block and NAS are replicated with it.

There are some cases where the separation is beneficial.

In general though, people need to be aware of the pros and cons with each approach. The underlying architecture does matter, extremely so in some cases.

Thx

D

Mark Burgess

Hi Chad,

You mentioned that the VNXe has active/active SPs, whereas the VNX has active/active SPs and N+1 active/passive Data Movers.

You suggest that with the VNXe, if an SP fails, the LUNs/file systems will not fail over to the other SP and you will need to replace the faulty SP - is this correct?

I also see that the vault space is now held on internal flash storage, but I understand that capacity is still reserved on the first 4/5 disks for de-staging the cache, as per previous models.

Can you confirm this, and how much space is reserved (or do you get the full usable capacity of all drives)?

I have been looking for the VNX Capacity Calculator on Powerlink, but it does not appear to be available yet?

Many thanks
Mark

Chad Sakac

@Mark - on the VNXe, if an SP faults, everything supported by the failed SP is supported by the remaining SP. You replace the SP (which is super simple), and then the workload comes back. It's that simple.

I triple-checked with engineering on the use of the SSD and disk. I won't paraphrase, I'll quote:

"VNXe does not use the backend drives for cache vaulting anymore. We have an SSD on the SP; up to 2GB of DRAM memory gets persisted to this device on a power fail. In a dual-SP environment each SP has this memory-persistence feature. The system-configured non-volatile write cache is contained in this persisted DRAM area.

We use the 48GB reserved space on the back end disk drives for system metadata, log information, SW recovery images, and other uses dictated by the components we integrate. It is not used for the vault information any more. The SSD is used for vaulting, the OS root device, and metadata storage for the VNXe operating SW."
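
(Purely as an illustration of the vault-to-flash idea in the quote above: on a power-fail event the DRAM write cache is persisted to an SP-local SSD rather than destaged to the first backend drives. The file name and sizes below are hypothetical, not the VNXe implementation.)

```c
/*
 * Hypothetical sketch of vault-to-flash: persist the volatile write cache
 * to an SP-local SSD on power fail instead of destaging to backend disks.
 */
#include <stdio.h>
#include <string.h>

#define WRITE_CACHE_BYTES 64   /* stand-in for the ~2GB protected DRAM cache */

static char write_cache[WRITE_CACHE_BYTES];

/* Persist the volatile write cache to the SP-local SSD. */
static int vault_to_ssd(const char *ssd_path)
{
    FILE *ssd = fopen(ssd_path, "wb");
    if (!ssd)
        return -1;
    size_t written = fwrite(write_cache, 1, sizeof(write_cache), ssd);
    fclose(ssd);
    return written == sizeof(write_cache) ? 0 : -1;
}

int main(void)
{
    memcpy(write_cache, "dirty pages pending destage", 28);

    /* Power-fail handler: dump cache to flash, restore it on the next boot. */
    if (vault_to_ssd("sp_local_ssd.vault") == 0)
        printf("write cache vaulted to SSD\n");
    return 0;
}
```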

The updated VNX Capacity Calculator (which includes the whole family including the VNXe) will be up early in March.

nixar

What's the software? Does it still run on Windows 3.11 like the CLARiiON? Do the drivers still suck more than DOS 2.10? Does the management interface feel like a ca. 1997 Java applet... well, is it still a damn Java applet? Do they actually support Linux or just pretend to? If they do, do the init scripts respect the LSB? Do they still panic the kernel during normal operation?

In other words, did EMC learn to do software in recent months or is it still something they outsource to semi-literate unpaid interns?

Johan

@Chad - Can you just clarify for me what the case is for VNX (not VNXe) regarding:
1. Are the SPs on VNX running (real) Active/Active, or is it still just ALUA?
2. Is it the same procedure for "vaulting" to flash as it is for VNXe, or does VNX still use the vault drives like previous CX/NS versions?
I'm asking my local EMC contacts detail questions all the time (until it's published on PL), but I think there is still some confusion between functionality in VNX and VNXe.

Chad Sakac

@Nixar - I answered your specific question earlier. There is no Windows kernel in VNXe. There is also no Windows 3.11; customer feedback on Unisphere is very, very positive; and some of your other comments are, well - not really worth responding to.

@Johan - it is active/active, not ALUA. There is no vault destage like you have seen in the past; cache destage is to flash. All configuration is also on flash. There is a small amount of space used on some disks (noted earlier in the comment stream) for metadata and other purposes. Thank you for your questions!

Wade

I disagree with the comment that customers do not care. Customers care about how it works and how many components are required. A single OS is much different from multiple OSes mashed together - or "integrated", as you'd prefer. Interoperability is huge. As a storage customer of both EMC and NetApp, EMC still makes me nervous. I take away from this article that EMC may not go down a path of true unification within a single OS, and will instead continue improving the molding of many together. Unisphere is the topic that consumes the majority of discussions I participate in with EMC. From a technical customer perspective, Unisphere is like a pretty paint job on a vehicle. It's easy to repaint your vehicle. But what happens if you bend the frame? Bad news. The framework, the core architecture, is what's most important. When you build a car, don't you build it from the ground up?

Nikolai

Chad, if the vault drives are now only used for some logs and metadata, is it now OK to use these drives for heavy IO? Previously the best practice from EMC was to use the vault drives' available space for low-IO datastores, such as an ISO repository, which made this space quite useless.

Hman

Celerra, Centera, CX3, CX4, VNX7500, FMA, and soon-to-be VNXe customer here. I like Unisphere NAS and SAN management in one console/array. Any idea on a future date for FMA integration (hardware and management)? Seems like a logical next step. Currently, I need FMA for NAS archive to Centera.

Nirvan

Chad, an answer to this question would be very helpful for us storage architects:

"Chad, if the vault drives are now only used for some logs and metadata, is it now OK to use these drives for heavy IO?"

Shawn Cannon

Is SRM integration for the VNXe coming VERY soon??

Chad Sakac

@shawn - the VNXe SRM support is taking longer than I would like. Partially due to focus on the upcoming SRM release, partially because the VNXe APIs are being revved. Current target is Nov 2011. Working to try to pull it in.

Adrian Sender

Hello Chad,

Good post on the new technologies. In the past I have seen customers with the CX3/CX4 series nuke their vault packs either by accident or during decommissioning.

This leaves the units unrecoverable, and there are a lot of non-working units out there left in the corner. It does not sound like much has changed with the VNX series; but with the VNXe (and the VNX, for that matter), is there a way to recover from bare metal without having to replace the internal flash / vault drives completely (assuming your utility partition is gone)?

- Adrian

baj p.

Nirvan, I spoke with EMC support, and they told me that using the vault drives for heavy I/O is still not recommended. You should put those disks in a pool by themselves and use them for low I/O.

Lulu

VNXe 3300 or IBM N3300? Let me know your thoughts and reasons why. Thanks!

martin

@Chad: Johan was asking about VNX, not VNXe, and despite what you/EMC want to sell, they are very different products inside:
- on VNX it is ALUA
- on VNX it is still Windows
- on VNX, management is Java-based, a look-alike of the Flash-based VNXe Unisphere
- on VNX the vault drives host the OS, whereas the VNXe uses an internal SSD
- on VNXe a LUN is hosted by one SP only; like Celerra, in case of SP failure the whole data mover will (should) move to the other SP

I know VNXe and VNX pretty well, being a customer for both: we got rid of the VNXe to upgrade to a VNX after a few months. We were one of the first customers of this product. The change was due to:
- numerous SP failures due to memory-leak bugs and/or file-system problems, and the data mover did not fail over like it should. This whole CSX environment just does not seem to always be in sync.
- replication: think of your RTO/RPO. In a VMware environment, using iSCSI and RM, we could not get anything below a 3-hour replication frequency, for a few TB and very little change.
- poor overall performance in terms of throughput, latency, and IOPS. We had disconnections of our VMs when replicating (during VM snapshots), saturating the array.
- FYI: about 40 users, 2 DB VMs, an Exchange and Windows environment, and web servers, for a total of about 20 VMs, most of them with low IO.

After a long time troubleshooting, no one could clearly explain the bad experience we had. The OS upgrades, and also the recent memory upgrade, make it look to me as if the VNXe was not a mature product when it came out.

It looks to me like the VNX is more of a resurfaced CLARiiON with some new features, while the VNXe is a new unified architecture currently running something closer to Celerra technology - and not in a stable way at the moment.

conor

Hi there!

Would it be OK for me to use the specs you used above in my university report?
I am researching the VNXe 3300!

Many Thanks!

Kevin Raotraot

Hi Chad,

I have a question regarding accessing the shared folder and iSCSI servers from a different subnet.

I tried to create a shared folder server and an iSCSI server, and used a different IP and subnet from the management IP. The problem is, when I tried to ping the shared folder server IP and the iSCSI server IP, the request timed out.

But in your video it's possible. Can you please teach me the trick? ^_^

- Kevin
EMC Reseller (Philippines)

The comments to this entry are closed.

