
April 05, 2011

Comments


Dan Barr

Nice overview. As a new VNX customer (boxes arrived last week!) this definitely whets my appetite. Can't wait to get it all up and running!

JonJon

Great video! Thanks for the post.
I am having a problem, however, with the VSI plugin. I recently upgraded the CX4 to FLARE 30 and I am moving to VP pools. The problem is that the VP pools do not display correctly in the plugin.
If I pick a host, go to the EMC VSI tab, then choose LUNs, I see all the expected info for the traditional RAID groups, but the VP pools show the Product as 'VRAID' and the Model, Array, Device ID, and Type as 'Unknown'. Some of the other info is missing entirely. Is this a known bug? Any workaround?

Arran

Does the vCenter plugin work with the VNXe arrays also?

Thanks

Chad Sakac

@Dan - THANK YOU for being an EMC customer - we're here to serve, let me know if there's anything I can do to help. Would also love to hear about your post-unboxing experience (feel free to comment here).

@JonJon - the VSI plugin is a 100% supported product. You can open a case, and EMC support will work with you (1-800-svc4emc, or support.emc.com). That's the best way. For informal best-effort support, if you post your question to "Everything VMware at EMC" (www.emc.com/vmwarecommunity) my team will respond and support you. BTW - doesn't sound like a bug - sounds like something is messed up.

@Arran - the current VSI plugin works with VNXe arrays, but in a more limited way. The Unified Storage Management plugin lets you provision datastores (block and NAS), and the path management features work, but the visibility features (seeing end-to-end relationships) do not. That will be coming for VNXe later in Q2.

Jason b

My VNX is getting racked as I type this. I'm researching storage pool design for use with ESX, and this plugin looks sweet. Just waiting for the Nexus switches so I can tie it all together.

Dan Barr

Well Chad, our VNX has been up & running for a little over a week now and I must say it is FAST (pun intended)! Anywhere from 6-10x faster than our old array depending on Iometer parameters (though it's not a fair fight, the old one was all SATA).

I must admit to being a bit disappointed that Unisphere does not yet support linking systems into a Domain, which prevented configuring MirrorView via the GUI. It had to be done with NaviCLI, and the process was apparently less than thoroughly documented (our partner had to muster some support resources to get it done). We were also under the impression (from numerous sources) that ALUA was going to be the default failover mode with the VNX; alas this is not the case.
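
For anyone else who finds ALUA is not the default: failover mode is set per registered host on the array side. A rough sketch of how that's typically done from the CLI, assuming naviseccli syntax along these lines; the SP address and host name are placeholders, and the exact options should be verified against the Navisphere/Unisphere CLI reference for your VNX OE release:

    # Set failover mode 4 (ALUA) and enable arraycommpath for a registered host
    # (SP address and host name are placeholders; -o suppresses the confirmation prompt)
    naviseccli -h 10.0.0.1 storagegroup -sethost -host esx-host-01 -failovermode 4 -arraycommpath 1 -o

The ESX hosts typically need their paths re-claimed (a rescan or reboot) before the change takes effect.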

But otherwise I am quite happy, and I haven't even installed the VSI plugins yet. We also have yet to configure Replication Manager to get app consistency for SnapView and MirrorView.

Some background for you: we got a VNX5300 for Block (two, actually; one for the DR site), using 10Gb iSCSI hooked up to Nexus 5Ks in the primary site. We also got the FAST Suite and the Total Protection Pack. The ESXi hosts are new HP BL490c blades with HP/Emulex CNAs in iSCSI mode. Multipathing is working really well now that the hosts are in ALUA mode.

Next up: getting upgraded to Avamar 6!
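
For readers wanting to confirm the same thing on the host side, a minimal sketch of checking which SATP/PSP is claiming a VNX device and making Round Robin the default for ALUA-claimed devices. This assumes ESXi 5.x esxcli syntax (the 4.x commands differ) and uses a placeholder device ID:

    # Show which SATP and PSP are claiming a given VNX device
    # (on a VNX in ALUA mode you'd expect VMW_SATP_ALUA_CX)
    esxcli storage nmp device list -d naa.6006016012345678

    # Make Round Robin the default PSP for devices claimed by that SATP
    # (applies to newly claimed devices; existing ones can be changed per device)
    esxcli storage nmp satp set --satp VMW_SATP_ALUA_CX --default-psp VMW_PSP_RR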

Chad Sakac

@Dan - thanks again for being a customer, and also for sharing your experiences! Working on fixing that MV issue for you. Let me know your experiences on the VSI plugins... we want to make them as easy to get going as possible, so any/all feedback is welcome.

Chad Sakac

@Jason B - thank you for being an EMC customer! Share your experiences - good/bad/ugly, let us know!

Niels Roetert

Chad,

The high-rez files don't seem to be available anymore?

//niels

NotBigButBlue

Hey guys... you need the vSphere Enterprise edition to use VAAI... so there is no way for us to use the single greatest benefit of the VSI... btw, I love dots... and the installation process, with all those CLIENT-side plugins and different versions (32-bit for 64-bit vCenter clients and so on), is awful. I'll sit and wait for the vCenter web plugin.

Kind regards
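
For reference, whether VAAI hardware acceleration is actually in effect can be checked per device from the host. A minimal sketch, assuming ESXi 5.x esxcli syntax and a placeholder device ID:

    # Show VAAI (hardware acceleration) primitive support for a given device
    esxcli storage core device vaai status get -d naa.6006016012345678

    # Or check the "VAAI Status" field across all devices
    esxcli storage core device list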

colin

Do you plan on updating this article for ESXi 5.1? Multipathing is much easier now. I want to know, on a VNX5300 block array with both iSCSI and Fibre Channel links, can they both be presented to vCenter? I have two ESX hosts direct-connected to the VNX in loop mode, and I also have iSCSI TOE cards. I don't see how the VNX separates these connection types when assigning them to a LUN. I should get two iSCSI initiators and two Fibre Channel initiators, but I get two initiators with both Fibre Channel and iSCSI bound to them. How do I separate the traffic?

At no time should the Fibre Channel LUNs be accessed over iSCSI.
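
For anyone untangling a similar FC-plus-iSCSI mix, a host-side sketch for seeing which initiators and which paths belong to each transport (the array side still controls LUN access per storage group and per registered initiator). ESXi 5.x esxcli syntax assumed, with a placeholder device ID:

    # List all storage adapters; the driver/description shows FC HBAs vs iSCSI initiators
    esxcli storage core adapter list

    # List just the iSCSI adapters (software or dependent hardware)
    esxcli iscsi adapter list

    # Show the paths to a given device, including which adapter each path uses
    esxcli storage core path list -d naa.6006016012345678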

Sudheer

Hi,

Can you please give the procedure to expand a LUN on a VNX5300? I was able to expand the LUN by selecting the LUN > right-click > Expand, and the new size showed up, but I could not see the expanded LUN size in vCenter, where a couple of VMs are hosted. Can you please guide me through the process?


--Thanks
Sudheer
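
On the LUN-expansion question above: after the LUN is grown on the array, vCenter won't show the new size until the hosts rescan, and the VMFS datastore then has to be grown as a separate step. A minimal sketch, assuming ESXi 5.x esxcli syntax and a placeholder device ID:

    # Rescan all adapters so ESXi picks up the new LUN size
    esxcli storage core adapter rescan --all

    # Confirm the device now reports the larger capacity
    esxcli storage core device list -d naa.6006016012345678

    # The datastore itself is then grown from the vSphere Client
    # (datastore Properties > Increase...), or with vmkfstools --growfs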

Valentine

Wants the password for the high-res videos.
