Literally last night, I was working furiously on a chapter for the book in between my day job, and noticed some really weird behavior in vSphere 4 with 2TB LUNs that I didn't expect. I poked at it and filed it away for further investigation.
Then today, a customer in France reported a case where a large MetaLUN was having an issue being partitioned – and stranger still, when they presented the LUN to an ESX 3.5 host, it worked fine, and they could then bring it back to the vSphere host. I thought the two might be related. Sure enough, they are.
I pinged my fellow EMCers and VMware folks – and there's also a thread on this on VMTN here: http://communities.vmware.com/message/1269655#1269655
Clearly – this is something people are going to run into – if we all ran into it in the last couple of days.
So what's the scoop with vSphere behavior at, slightly under, and over the 2TB LUN boundary case?
It's no longer accurate to say that the largest single partition in vSphere is 2TB (that's a function of the LVM, which uses a CHS partitioning scheme – not of VMFS itself, which can be up to ~64TB using the maximum of 32 extents) – but rather almost 2TB. Specifically, it's 2TB minus 512 bytes: 2199023255552 - 512 = 2199023255040 bytes. Huh – that seems like a nit?
This would be totally silly EXCEPT that I think a fair number of people will bump into it, because most arrays enable you to provision LUNs in GB or TB, and people will in some cases say "2TB please" – not "1.99TB".
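If you want to sanity-check a LUN size before you carve it out, the math is simple enough to script. Here's a minimal, purely illustrative Python sketch of the byte arithmetic (not a VMware or Navisphere tool, just a back-of-the-envelope check):

```python
# Purely illustrative byte arithmetic, not a VMware or Navisphere tool.
TB = 1024 ** 4       # 1 TB in bytes
SECTOR = 512         # one 512-byte sector

max_partition_bytes = 2 * TB - SECTOR
print(max_partition_bytes)                     # 2199023255040

# A LUN carved as exactly "2TB" is 512 bytes over the line...
print(2 * TB > max_partition_bytes)            # True

# ...while "1.99TB" (or anything at least one sector under 2TB) fits.
print(int(1.99 * TB) <= max_partition_bytes)   # True
```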
Watch this video if you want to see this error in action:
So – here’s the scoop:
1) 2TB is a hard limit (still) in vSphere for a single partition, just like it was in VI3.5 – but VMware is now actually preventing an out-of-space condition that can lead to a bad outcome. If the last 512 bytes are written to, bad things can happen; this change eliminates that possibility.
2) Note the behavior with LUNs that are LARGER than 2TB – it used to be (in the 3.0.x days) that you could get borked by simply saying "use all the available space" and bam, your VMFS datastore was gone – poof. See this KB article. You can't bork yourself here – but you CAN waste space (in the example I did in the video with a 2.5TB LUN, only 500GB could be used – rough math in the sketch below), and who wants to be inefficient?
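To make the "waste space" point concrete, here's a rough, illustrative Python sketch. It assumes the usable capacity of a larger-than-2TB LUN simply wraps modulo 2TB, which is consistent with the 2.5TB LUN in the video showing only ~500GB usable; treat it as back-of-the-envelope math, not a statement of official behavior in every case:

```python
# Rough sketch only, assuming the usable capacity of a >2TB LUN simply wraps
# modulo 2TB (consistent with the 2.5TB LUN in the video showing ~500GB usable).
TB = 1024 ** 4
GB = 1024 ** 3
LIMIT = 2 * TB

def usable_bytes(lun_bytes):
    """Approximate usable space when the LUN is presented to vSphere 4."""
    return lun_bytes if lun_bytes < LIMIT else lun_bytes % LIMIT

for size_tb in (1.99, 2.5, 3.0):
    lun = int(size_tb * TB)
    usable = usable_bytes(lun)
    print(f"{size_tb} TB LUN -> ~{usable / GB:.0f} GB usable, "
          f"~{(lun - usable) / GB:.0f} GB wasted")
```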
Remember – you can absolutely create large VMFS datastores, and extents can be your friend – read up on this here.
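For what it's worth, the ~64TB figure mentioned earlier is just that extent math (32 extents, each a hair under 2TB); again, purely illustrative:

```python
# Illustrative only: a VMFS datastore built from the maximum of 32 extents,
# each just under 2TB, lands at roughly 64TB.
TB = 1024 ** 4
max_extent = 2 * TB - 512
print(32 * max_extent)                    # 70368744161280 bytes
print(round(32 * max_extent / TB, 2))     # ~64.0 (TB)
```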
Back to the lab – it’s a lot more fun than watching all this DDUP stuff :-)
congrats on popping the "bork" cherry.... progress
and good article too :-)
Posted by: Keith Norbie | June 04, 2009 at 10:45 PM
Thanks Keith - I told you I would stick a "bork" in an upcoming post just for you :-)
Of course - that's also how I really speak in real life :-)
Posted by: Chad Sakac | June 05, 2009 at 08:54 PM
Good article and great video. I especially like it that you show the provisioning steps in the Navisphere management GUI. I would like to request more information and demos in Navisphere. I would really like to see more information geared towards the SAN admins (complete with Navisphere walkthroughs) like in your video. This way I could show them the videos (or your book?) and help them understand what ESX needs.
Posted by: aenagy | June 06, 2009 at 03:17 PM
@aenagy - the funny thing is that ALL this stuff is simple, as long as people are willing to learn about things "cross domain", and not treat the silo lines as sacred territory.
Storage folks would be well served by learning (at least a little) about VMware, and VMware folks would be well served by learning (at least a little) about storage.
Thanks for the feedback - I'll try to produce more videos. I've got one in the hopper that is KILLER based on features to come relatively soon, but I need to keep it sealed up until we get closer to GA.
Posted by: Chad Sakac | June 06, 2009 at 11:42 PM
Hi Chad, great post.
I have a LUN with user capacity showing in Navisphere as 1.999 TB.
Would an ESX4 host, if introduced to the SAN, see this existing LUN, and would it have any issues accessing it?
Thanks
DC
Posted by: Dougie | July 09, 2009 at 11:25 AM
@Dougie - that will work fine - so long as it's below 2TB by 512 bytes, it will be great...
Posted by: Chad Sakac | July 09, 2009 at 11:26 AM
I was able to attach a 10 TB RDM to an ESX4u1 VM guest with no problems... we actually created it in a separate datacenter directly zoned to a physical box, loaded it up with data, moved the box to datacenter 2 across town, and reattached it to the VM as an RDM, and everything was there, no problems!
But when I tried to attach a 20 TB RDM, it throws an error. I'm still trying to work around it, but any advice is well received!
The actual error is as follows:
#######
Reconfigure virtual machine
VMNAME
File [DATASTORENAME] VMNAME/datastorefile.vmdk is larger than the maximum size supported by datastore: DATASTORENAME
DOMAIN\username
VC-1
Matt
Posted by: Matt | January 31, 2010 at 11:37 PM