So this week (it’s 9/13 as I write this), EMC has made two important software updates generally available, along with an important VAAI hotfix.
The VNX has gotten major updates to both block (iSCSI/FC/FCoE) and NAS functionality, including critical VAAI and VASA support, the VNXe has gotten some material fixes and new features, and there is an important hotfix VMware customers using VMAX should know about. Read on for more!
You can get both EMC updates from the EMC support site. Personally, I’m switching to the beta of the new support site (http://supportbeta.emc.com), which is nice and clear, and offers everything you want.
I’ve circled two important top-level clickdowns – “support by product” and “downloads”.
VNX: Operating Environment (O/E) Update
This update includes:
- VNX OE for Block v05.31.000.5.502
- VNX OE for File v18.104.22.168
- Unisphere v1.1 (22.214.171.124.0129)
- VNX Installation Toolbox (VIT) 1.1 (126.96.36.199.10)
- EMC ESRS IP Client for VNX Version 1.1 (188.8.131.52.0124-1)
Get it from http://supportbeta.emc.com (also on powerlink)
What’s in this VNX update? A LOT (this is a shortened list with key ones from my perspective):
- VAAI Support for NFS: This feature brings VAAI to NFS. It’s delivered via a vStorage plug-in for the VMware ESX server that creates and manages virtual disks as a set of files on a NAS device. Use cases include virtual disk cloning, instant VM provisioning, NAS hardware-assisted VM snapshots, and Storage vMotion.
- Fast Clone
- Full Clone (in vSphere 5)
- Reserve Space (in vSphere 5)
- Extended Stats
- Offload Status/Abort (in vSphere 5)
- Expanded VAAI Support for Block: This feature provides support for all the VAAI specification requirements for vSphere 5:
- vSphere 5 integrates block vStorage APIs for Array Integration (VAAI) support into the core storage stack (avoiding the need for vendor plugins): “VAAI Full Copy” using XCOPY was always standard; “Hardware Accelerated Locking” moved from CAS (compare and set) to the more standard CAW (compare and write) command; and “VAAI Block Zero” uses the standard SCSI WRITE SAME command.
- Support for Thin Provisioning Stun, and Thin Provision Reclaim (some work still underway for EMC and other vendors too on the HCL cert for this one).
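Since the block primitives above are now in the core ESXi storage stack, you can check them per-device right from the ESXi 5 shell. A hedged sketch (requires an ESXi 5 host; the NAA device ID is a placeholder for your VNX LUN):

```shell
# List per-device VAAI primitive status (ATS, Clone/XCOPY, Zero, Delete/UNMAP)
esxcli storage core device vaai status get

# Confirm hardware acceleration is reported for a specific device
# (the naa.* identifier below is a placeholder - use your own device ID)
esxcli storage core device list -d naa.6006016012345678 | grep -i vaai
```

If a primitive shows as unsupported, check the array software revision and the VMware HCL before assuming a host-side problem.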
- VAAI Enhancement: This is important. This is a fix for the VNX issue associated with XCOPY that was discussed in this webcast and blog post here. Most customers are loving VAAI on VNX right now – in all cases it offloads the ESX host and the storage network, but in some cases it can take longer for Storage vMotion and cloning operations to complete. Please check out the link for the “why”. The engineering team has been diligently working this, and we now have a VAAI XCOPY enabler which changes the array internals to improve performance with VMware. This enabler is free, but is only given out by escalation engineering – any customer request for it should be escalated directly to escalation engineering. The reason is that anything which changes core operations needs insane testing before going into the “main release”. This didn’t get that in time, but we thought it was important to make it available on an as-needed basis until it’s rolled into future mainstream VNX OE releases.
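If you hit the slow-XCOPY behavior before you can get the enabler, the interim host-side workaround is to fall back to host-based copy via a standard vSphere advanced setting. A hedged sketch (requires an ESXi 5 host; the option names are the stock vSphere ones, and this affects the whole host, not a single datastore):

```shell
# Check whether hardware-accelerated copy (XCOPY offload) is enabled (Int Value: 1 = on)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Temporarily disable XCOPY offload so clones and Storage vMotion use
# host-based copy; re-enable with -i 1 once the array-side enabler is applied
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
```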
- VASA Support via Solutions Enabler: Discussed at length here.
- VMAXe support
- VNX Domain Support: This feature allows multiple VNX systems to be added into a VNX domain and allows Unisphere 1.1 to support Celerra and CLARiiON systems.
- FCoE Support for Gateways: This can simplify config for customers who are all FCoE
- Retain Domain Security Settings on exit: This feature allows that when a VNX system is removed from a Domain it will, by default, retain its security settings including all user credentials and LDAP. Optionally, users can erase Domain Security Settings with a CLI command.
- True Local Roles: This feature provides local roles that allow a user to see only a single system in the System List.
- Asynchronous Login: This feature allows users to log in to a VNX system even if the Control Station is not available. Likewise, if the SPs are not available, users can still log in to the File side and use file services. With this feature, however, you will not be able to see or perform certain File operations unless the Control Station is available – providing partial access is considered desirable during faults or connection issues. Similarly, certain Block operations are not allowed if an SP is not available.
- Default Read/Write Cache Settings for Flash (EFD) LUNs: This feature changes the default cache settings for LUNs created on EFD/Flash drives, turning both read and write cache ON by default. In previous software versions, read/write cache was disabled by default for these LUNs.
- VNX5500 High Bandwidth Offering: This version of VNX for Block supports a new 6 Gb/s SAS back-end SLIC for the VNX5500 platform, which increases the back-end port count by 4 to a total of 6 ports per SP – and with it, system bandwidth.
- Unified SW Upgrade Enhancement: This feature provides file and block code revision checking to better allow software upgrades for File and Block Operating Environment components together. It provides an improved compatibility check for mismatched Block and File revisions.
- 3rd Gen VE P/S Support: This feature provides support for 3rd generation high efficiency power supply for Viper (and D15LFE) DAE.
- 3 TB 7.2K NL-SAS / Manta Ray 3.5" Disk: This feature adds and qualifies code changes to support the 3 TB 7,200 rpm NL-SAS drive. This drive will be used as a vault drive in the VNX5100, VNX5300, and VNX5500. The drive will also be used as an optional drive in the 15-drive DAE and the Voyager (60-drive) DAE. This drive should be treated with the same tier attributes as the current 2 TB NL-SAS drives.
- Support for new Dense Rack DAE (3.5", 4U): This version of VNX OE supports a new SAS disk array enclosure. This enclosure holds 60 3.5" SAS drives per enclosure in a 4U form factor.
- Mapped Pool (AVM) Performance Enhancement: This feature allows VNX file systems set up using AVM on mapped pools to deliver performance comparable (within 10-15%) to file systems set up using AVM on traditional FLARE LUNs. This is important as we continue to simplify the VNX architectural model while improving “out of the box” transactional NAS performance.
- Loads of Unisphere improvements, including Multi-domain environment support: Like Unisphere 1.0, Unisphere 1.1.25 supports VNX domains with multiple VNX systems, domains with multiple legacy systems (CLARiiON and Celerra), and multi-domain configurations. A multi-domain environment lets you manage and monitor a group of domains (potentially all the systems in your storage enterprise) using the same instance of Unisphere. Beyond this – there are loads of Unisphere fixes. Feedback (that I hear at least) on Unisphere as a whole isn’t perfect, but is overwhelmingly positive.
VNXe: VNXe operating environment (O/E) version 2.1
As a reminder, VNXe takes the goodness of VNX and makes it available in smaller, simpler packaging (as discussed here). Here’s the http://supportbeta.emc.com page for VNXe – and everything is right there in a simple little package.
So – what’s in it?
BEFORE I GET ASKED (yes, these are taking longer than I would like):
- VNXe VMware VAAI Support for File is targeted for mid Q4 2011.
- VMware SRM Adapter (block/file) is very close (and will support SRM 5), also officially Q4 2011.
On to what’s in this update release
- 100 GB Flash drives for VNXe3300. These deliver the highest random I/O performance per drive. They can be effectively used for demanding random I/O performance loads where the dataset size can reside within a Flash Storage Pool. Note: VNXe does not support FAST VP (yet), so the data must be explicitly provisioned on the flash drives.
- VNXe3100 dual controller models with 8 GB of system memory per controller: Optional memory upgrades on VNXe 3100 deliver additional scalability and performance. Memory upgrades will be available for existing 4 GB memory dual controller VNXe3100 systems. Dual processor configurations of the VNXe3100 will automatically be converted to 8 GB memory at no extra cost for open orders and quotes as of August 22.
- Up to 15 percent in performance gain for random I/O intensive applications such as Exchange and databases.
- Better overall performance when a dual SP system is running in degraded mode (after failover of one controller).
- VNXe3100 single controller to dual controller upgrade kits can be ordered as of August 22. This upgrade requires a mandatory service pack update (SP1) applied on top of VNXe3100 OE v2.1. Both the SP1 update and the upgrade kit will be available in September.
- Not Supported: VNXe3100 single controller upgrades from 4 GB to 8 GB and VNXe3100 4 GB single controller upgrades to dual 4 GB controllers. VNXe3100 4 GB single controller models may only be upgraded to VNXe3100 8 GB dual controller models.
- 1 TB near-line SAS drive for VNXe3300: a cost-effective option for customers who want balanced performance and capacity.
- DC power and NEBS support for VNXe3300: designed for telecommunications, federal, and green environments where DC power is required. NEBS-certified 15,000 rpm SAS drives and 100 GB Flash drives are also available for these configurations.
- STIG compliance for public sector/federal support.
- Common Criteria certification: VNXe has been submitted for Common Criteria certification and conformance with IT security standards sanctioned by the International Organization for Standardization (ISO).
- EMC Storage Integrator (ESI) support: EMC Storage Integrator for Windows provides application-aware storage provisioning for Windows and SharePoint environments. Available as a set of free plug-ins to the Microsoft Management Console (MMC), EMC Storage Integrator for SharePoint and EMC Storage Integrator for Windows will support VNX, VNXe, VMAX, CX4, and NS Series with both block and file functionality. Trust me, this thing is awesome, and like VSI (see below), is free. Rather than explain – check this out.
- EMC Virtual Storage Integrator (VSI) support: As pointed out here – this is a key (and free!) tool for VMware admins using all EMC storage platforms – the smallest to the largest.
An important ESXi 5 patch:
An important “closed loop”: I wrote about the VAAI VMAX issue and workaround (both on the VMAX side and the VMware side). Until now, you had to open a case and get the fix from VMware support. Good news – it’s now built into the current vSphere 5 public hotfix here:
Thanks again to all EMC/VMware joint customers out there! We do our best to serve you!