EMC recently released its upgraded storage platform, the VNX2 or Next Generation VNX. There are quite a few changes to the hardware and capabilities of this platform that are really impressive and worth a look. There is also a lot of marketing spiel and numbers that will need to be proven in real-world environments before being believed.
Let’s start where it began, to get a good understanding of where it’s going and to have something to measure the improvements against.
The EMC VNX Series SAN has been a reliable and solid platform that my customers have been very happy with over the last several years. EMC has some impressive numbers around how many they’ve sold:
- More than 3,600 PBs shipped
- Over 500 customers purchasing more than a PB of storage
- Over 71,000 Units shipped
- SSDs have gotten traction in the enterprise, with over 200,000 SSDs shipped; this boils down to over 60% of VNX Series arrays shipping with at least some flash
All very impressive and substantial numbers. I have been involved in the architecture, design, and deployment of about 30 VNX Series SANs, from the VNXe 3300 to the higher end of the scale. In all cases I’ve found that building an understanding of the customer’s requirements, and then ensuring they are met by the recommended solution, is key. Storage isn’t just about TBs and drives anymore; virtualisation changed so many aspects of how the data center is designed and operated that, in essence, the storage array can be thought of as the foundation of the data center. If that foundation doesn’t meet requirements, everything that sits on top of it is impacted: the hypervisors, operating systems, applications, business services, and so on. Once those are impacted, blame gets thrown around at the network, wireless, internet, and other IT services.
Today’s storage arrays have to support a large number of physical hosts, an even larger number of virtual machines, and possibly thousands of applications, with workloads and demands that can fluctuate and change significantly without warning. It will be interesting to see how EMC’s Next Generation VNX series adapts to and handles this workload.
EMC, like a lot of vendors, has adopted flash as the key to the future of storage design. This keeps the focus on optimizing for both performance and cost. EMC is leveraging its investment in the FAST suite of technologies from the original VNX, and this is a great place to start, as it’s proven that a very small amount of flash can serve a very high percentage of the overall IOPS for typical workloads. Hybrid arrays are a great balance.
As with all things new, shiny, and ‘Next Generation’, almost everything is faster: faster CPUs with more cores, more RAM, better I/O. It all works together to get things done quicker. The new VNXs scale up to over 1 million IOPS and up to 3PB, and can do 200,000 IOPS in a 3U package. All in all pretty damn impressive!
I’ll detail the biggest changes in the VNX2 below:
- Vault drives now use 300GB per disk for the operating system; e.g. if you put four 300GB drives in as vault drives, you will not be able to provision anything on them.
- System cache is now one pool; there is no assigning of read cache, write cache, or watermarks, and the page size is set to 8K. Cache allocated for writes is still mirrored to the other storage processor.
- FAST Cache disks can be used for FAST VP, but FAST VP disks cannot be used as FAST Cache
- FAST Cache will now promote a page on a single read request until it hits 80% capacity; beyond that, it reverts to promoting only after three reads.
- Hot spares are now managed by the system. You don’t manually assign disks as hot spares; instead you apply a policy to a type of disk, and the system automatically selects hot spares based on that policy.
- Drive mobility – you can move a disk to any DAE or slot in the system and it will still be a member of the same RAID group/pool; this lets you move DAEs between buses if you are doing an array expansion. However, if a disk reports a failure, you can’t remove and re-insert it – the system won’t allow the disk to be re-used.
- Permanent sparing – when the array rebuilds to a hot spare, that disk becomes the permanent replacement.
- Rebuilding RAID groups now uses write logging in LUN metadata. When a disk fails or goes offline, writes destined for it are journaled against the RAID group, so if the disk returns only the logged writes need to be replayed, rather than recalculating parity for the whole disk, which reduces rebuild time.
- Symmetrical LUN access (replacing ALUA) is only available on classic/traditional LUNs in the initial VNX2 release; it is not available on storage pool LUNs, and the host must have updated native multipathing or PowerPath software to support it.
- By default, LUNs in storage pools are now provisioned thin.
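The FAST Cache promotion change above can be illustrated with a small sketch. This is purely my own toy model of the rule as described (promote on the first read while the cache is under 80% full, otherwise wait for three reads); the class and method names are hypothetical, not EMC’s implementation.

```python
from collections import defaultdict

class FastCacheSketch:
    """Toy model of the VNX2 FAST Cache promotion rule (illustrative only)."""

    def __init__(self, capacity_pages, watermark=0.8, deferred_hits=3):
        self.capacity = capacity_pages
        self.watermark = watermark        # fill level where promotion slows down
        self.deferred_hits = deferred_hits
        self.cached = set()               # pages currently promoted to flash
        self.read_counts = defaultdict(int)

    def on_read(self, page):
        """Return True if the read is served from FAST Cache."""
        if page in self.cached:
            return True
        self.read_counts[page] += 1
        fill = len(self.cached) / self.capacity
        # Single-read promotion below the watermark, three-read promotion above it.
        threshold = 1 if fill < self.watermark else self.deferred_hits
        if self.read_counts[page] >= threshold and len(self.cached) < self.capacity:
            self.cached.add(page)
        return False

cache = FastCacheSketch(capacity_pages=10)
cache.on_read("page-A")                  # cache empty: promoted on the first read
print("page-A" in cache.cached)          # True
```

The practical takeaway is that a mostly-empty FAST Cache warms up much faster than the old always-three-reads behaviour, while a nearly full cache stays selective.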
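The rebuild write-logging point is easiest to see in a sketch. The following is my own illustration of the general journaling idea (track which stripes were written while a disk was out, then rebuild only those); the names and structure are hypothetical and say nothing about how EMC actually stores the log.

```python
class RaidGroupSketch:
    """Toy model of rebuild write-journaling (illustrative only)."""

    def __init__(self, total_stripes):
        self.total_stripes = total_stripes
        self.offline = False
        self.journal = set()          # stripes dirtied while the disk was out

    def disk_offline(self):
        self.offline = True
        self.journal.clear()

    def write(self, stripe):
        if self.offline:
            self.journal.add(stripe)  # log the write instead of marking the whole disk

    def disk_returns(self):
        """Return only the stripes that actually need rebuilding."""
        to_rebuild = sorted(self.journal)
        self.offline = False
        self.journal.clear()
        return to_rebuild

rg = RaidGroupSketch(total_stripes=1_000_000)
rg.disk_offline()
rg.write(42)
rg.write(99)
print(len(rg.disk_returns()))         # 2 stripes to replay, not 1,000,000
```

A brief outage that touched only a handful of stripes therefore costs a near-instant replay instead of a full parity rebuild.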
The array now supports block-level deduplication:
- Deduplication runs on a schedule (post-process) rather than in-line
- Chunk sizes are 8K, so host filesystem cluster sizes should be aligned accordingly
- A storage pool can have only one deduplication container, but the pool can contain a mix of thin LUNs, thick LUNs, and deduplicated volumes
- A dedupe container is tied to a single storage processor, so when creating multiple dedupe containers they should be balanced between the two SPs
- The dedupe container within a pool is basically a private LUN containing every chunk from every deduped LUN in it. FAST Cache and FAST VP tiering policies are applied to the whole dedupe container, not individual deduped LUNs, so this needs design consideration when mixing fast and slow LUNs.
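To see why the 8K chunk size and filesystem alignment matter, here is a toy sketch of fixed-block deduplication (my own illustration, not EMC’s engine): identical data only dedupes when the two copies land on the same 8K boundaries, so a guest filesystem whose clusters straddle those boundaries can miss matches entirely.

```python
import hashlib
import random

CHUNK = 8 * 1024  # 8K dedupe chunk size, as on the VNX2

def chunk_hashes(data):
    """Hash fixed 8K chunks, as a fixed-block dedupe engine would."""
    return {hashlib.sha1(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

random.seed(0)
file = bytes(random.randrange(256) for _ in range(4 * CHUNK))  # one 32K "file"

original     = chunk_hashes(file)
copy_aligned = chunk_hashes(file)                    # second copy on 8K boundaries
copy_shifted = chunk_hashes(b"\x00" * 4096 + file)   # same copy, shifted 4K

print(len(original & copy_aligned))   # 4 -> every chunk of the copy dedupes
print(len(original & copy_shifted))   # 0 -> misaligned copy shares no chunks
```

The same data shifted half a chunk out of alignment produces entirely different 8K chunk hashes, which is why matching the host filesystem cluster size to 8K is worth the effort.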
VNX2 Data Movers now support SMB 3.0. This may not seem like a big thing, except that Windows Server 2012 Hyper-V allows you to use SMB 3.0 to provision shared storage for VMs over CIFS rather than needing shared block storage. For a Hyper-V setup, you can therefore look at hosting your VMs on CIFS shares via VDMs rather than via FC or iSCSI.