
August 26, 2009


Listed below are links to weblogs that reference Important note for all EMC CLARiiON Customers using iSCSI and vSphere:

Comments


David Barker

Aha - thank you :-)

For what it's worth, I'd agree this is a bug. Merging this in with other iSCSI boxes that rely on single-subnet iSCSI redirection (e.g. EqualLogic, LeftHand) will be fun. ;-)

Chad Sakac

David - agreed on all counts. This is, at best, a workaround - though one that will work sustainably for MANY customers. Most, but not all, don't have both CLARiiON and Dell/EqualLogic or LeftHand.

But - just a workaround. I'm applying tons of pressure to get it fixed.

I did, however, want to get the info out to the world ASAP.

David

Hi Chad,
Is this also required for hardware adapters? I'm using QLogic 4062C dual-port iSCSI adapters.

Thanks,
David

Bj

Hi Chad,

We are using a CLARiiON AX4-5i with ESX 3.5 (SW iSCSI, path policy: VMware Native MRU).

Please look at this vmtn thread:

http://communities.vmware.com/message/1320834

We are using exactly the same configuration as "oberon1973".

It seems that the workaround you described doesn't work in this case (we haven't verified this in our environment yet, because we don't have the time or money for an extra vSphere test environment just for iSCSI failover testing - we're an SMB).

I don't feel confident migrating to vSphere if not even basic MRU failover works properly.

I'm a bit disappointed. EMC very often tells everyone about its very good storage integration with VMware, so I wonder about the QA testing processes inside EMC for vSphere (vSphere has been released for three months now).

By the way, I can't find any information about this case on Powerlink. Is this not important to all CLARiiON customers? Not every customer reads your blog, Chad. *g*

Please consider this as constructive criticism.

Bj
Greetings from Germany

Chad Sakac

BJ - thank you very much, it is indeed constructive criticism.

I've commented on the VMware communities post, thanks for pointing it out - and I can confirm that that configuration should indeed work, and you can go to vSphere with your AX4-5i and use NMP with confidence.

This issue is well known - and was known prior to vSphere GA - and the core issue is a basic CLARiiON one, not a vSphere one. It affects any case where an iSCSI initiator logs in more than once (for example, the MS iSCSI initiator in a guest or on a physical host also does this). The resolution is underway in the FLARE release train, and I'm tracking it.

The EMC Primus Knowledgebase article number is emc156408. If you call into customer support, they should know EXACTLY all about it, and find the case and workaround immediately. I might in fact blind-test this tomorrow :-)

If you wanted to test it, you could use the evaluation version of vSphere ESX/ESXi (free) on almost any server (including home-brew hardware) at almost no cost.

But - I agree that we could do more to make it well known (it belongs in the ESX/vSphere guide for CLARiiON, for example - and I am working to get it clearly documented there as well).

Thank you for being an EMC and VMware customer!

cemal dur

Hi chad,
We are using VMware ESX 3.5 on an EMC Celerra NS40. We are planning to upgrade from 3.5 to vSphere. Do we need to reconfigure the VMkernel NICs on separate subnets?

Thanks ,

Chad Sakac

@Cemal: On the Celerra, you don't need to have the vmknics on separate subnets.

The iSCSI stack (including the target) on the Celerra sits above the Celerra filesystem, and is different from the iSCSI stack on the CLARiiON (getting the best of both worlds).

Over time, these will merge, but move forward with confidence. You can put them on the same or different subnets on a Celerra and there is not the issue noted in this article.

On the Celerra, you configure an iSCSI target with multiple logical Ethernet interfaces in a multiple-network-portal configuration. Unlike on a CLARiiON, a LUN sits behind only a SINGLE iSCSI target, so by configuring multiple network interfaces/portals as part of that target, you can multipath. Ignore the "non-redundant" message you will see in the vCenter datastores and storage views panes - this is a bug (it uses multiple logged-in targets for a single LUN as the cue for multipathing).

Thanks again for being an EMC customer!!!

Enrique

Hi Chad,
Thanks for the article. What I'm not clear on is: does the AX4 support only one initiator login (IQN) per SP, or per array?
Should VMkernel1 be in the same VLAN as SP0A and SP1A, and VMkernel2 with SP0B and SP1B?
In the iSCSI configuration, should I configure all four SP IPs?
Thanks again

Dan Lah

Chad,

Any update on where this fix is in the FLARE code update cycle?

Thanks,
Dan

Mike Bruss

Chad,

Thanks a lot for the article. Do you know if there has been any further progress on this? I was also curious whether this affects hardware iSCSI initiators as well...

Thanks,
Mike

Ryan

Chad, you're my hero.

Was tearing my hair out trying to figure out why this was happening. This is apparently NOT resolved in the latest FLARE code. EMC seems to hint at it in their CLARiiON/vSphere integration manual (page 20, I think), but doesn't outright say why.

Thanks again, saved me some grey hairs.

Bart Perrier

The blog was written in August of 2009. Is this still an issue with the CLARiiON?

We are/will be using an NS120, which I understand is based on the CLARiiON line.

Bart

Chad Sakac

@Bart - the behavior will change shortly (very early Q3). The FLARE update that will change this (very much for the better) is now in Beta.

Bart Perrier

Thanks for the reply, Chad. Our initial environment will only have one datamover (we have an additional DM planned) with two iSCSI ports for each ESX host (pre-production). Should we expect to see the degradation in iSCSI traffic when we add the second iSCSI port?

Chad Sakac

@Bart - are you using iSCSI to the Celerra (connecting to the datamover), or to the CLARiiON backend behind the Celerra (connecting to the storage processor)?

The Celerra doesn't have this same issue (requiring the subnet workaround), and scaling is linear as you add ports.

The Celerra and CLARiiON iSCSI target stacks are merging, and the fact that it works on Celerra but not on CLARiiON will be resolved in a CLARiiON update VERY soon (EMC World starts tomorrow :-)

Bart Perrier

@Chad -- we are connecting to the datamover. Glad to hear it doesn't exist on the Celerra. Thanks again, Chad.

Chad Sakac

For anyone following this thread, note:

UPDATE (May 22nd, 2010): At EMC World 2010, FLARE 30 was announced, which, amongst many (MANY!) new features, also has some fixes - one of which fixes this underlying behavior. You can read about it at this post here:

http://virtualgeek.typepad.com/virtual_geek/2010/05/iscsi-clariion-and-vsphere-nice-fix.html


Mike

What about a fix for the AX4-5i? I spoke with a support person who said this hasn't been applied to the relevant software for it.

Kun Huang

Per:

"Hi Chad,
Is this also required for hardware adapters? I'm using QLogic 4062C dual-port iSCSI adapters.

Thanks,
David
"

This bug does not apply to the QLogic dual-port card; each QLogic port has a different IQN name, so you are good.

For software iSCSI on ESX, each vmknic will log in using the same IQN, with different IPs.

Regards,

- Kun
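
Kun's point - that the ESX software initiator logs in with one IQN from multiple vmknics - is also the mechanism behind iSCSI port binding in vSphere 4. A minimal service-console sketch (the adapter name vmhba33 and the vmk interface names are placeholders - list yours first and substitute):

```shell
# List VMkernel NICs and their IP/subnet assignments
esxcfg-vmknic -l

# Bind two VMkernel NICs to the software iSCSI adapter
# (vmhba33 is a placeholder - confirm your adapter name first)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings: both vmknics log in with the same
# initiator IQN, each from its own IP address
esxcli swiscsi nic list -d vmhba33
```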

Owen

Chad, we are on FLARE 29 and having some disk latency issues using iSCSI. You mentioned this does not log an error - how do we know if it is affecting us?

Hussain

I'm interested in this being applied to my AX4-5i - any news? We are at the end of May 2011... :)

Chad Sakac

@Hussain - I'm sorry, but the AX4-5i isn't going to get any more major software updates. That means the way initiator records are stored isn't changing, which means that on an AX4-5i you need to follow the workaround (separate subnets).

The workaround doesn't lower your performance or availability, but it is a little more complex.

Sorry!
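
For readers setting up that separate-subnet workaround, a minimal sketch from the ESX service console (the vSwitch name, port group names, and all IP addresses below are illustrative placeholders - substitute your own):

```shell
# Two port groups on separate subnets, one per SP iSCSI port pair
esxcfg-vswitch -A iSCSI-SubnetA vSwitch1
esxcfg-vswitch -A iSCSI-SubnetB vSwitch1

# One VMkernel NIC per subnet
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 iSCSI-SubnetA
esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 iSCSI-SubnetB

# Sanity check: each vmknic should reach only the SP iSCSI ports
# on its own subnet
vmkping 10.0.1.50   # example SP iSCSI port on subnet A
vmkping 10.0.2.50   # example SP iSCSI port on subnet B
```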

The comments to this entry are closed.


Disclaimer

  • The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC. This is my blog, it is not an EMC blog.
