Part 1 – EMC VPLEX Experiences

 

Welcome everyone to Part One of my EMC VPLEX Metro Experiences

 

I recently designed and deployed a VPLEX Metro for a client to meet a set of business requirements around disaster recovery, and in some cases to avoid the recovery aspect of a disaster altogether by automating and replicating their data and services across multiple data centres.

The first thing I want to say about the EMC VPLEX in relation to the design and architecture phases is: do not be fooled by the pretty web interface. Yes, there is a pretty web interface, and yes, I believe the client will use it as the first touch point moving forward. However, the EMC VPLEX is very CLI intensive and this needs to be taken into account; you need a skilled resource involved and engaged in the design.

 

Overview

Simply put, the EMC VPLEX federates data located on heterogeneous (i.e. different vendors and types of) storage arrays to create dynamic, distributed, highly available data centers. You can use this to achieve a number of tasks and objectives. It is a very powerful capability, but it needs to be used correctly for that power to be realised. The primary and most valuable uses for VPLEX are centred around Mobility, Availability and Collaboration.

VPLEX comes in three flavours: VPLEX Local, VPLEX Metro and VPLEX Geo.

  • VPLEX Local is for use within a data center or across a campus and can be used to federate data on SANs from multiple vendors
  • VPLEX Metro is for regional or metropolitan areas, up to approximately 100 km apart and within 5 ms RTT of latency
  • VPLEX Geo is for when you start to look at going across far greater distances (up to 50 ms of latency) and uses asynchronous replication.

In my experience the most valuable feature of the EMC VPLEX is its ability to protect data in the event of a disaster striking your business facilities or data center; it also protects you from the failure of components within your data centers.

Using the EMC VPLEX you can move data, without interruption or downtime to hosts, between EMC storage arrays or between EMC and non-EMC storage arrays. Because the storage is presented through virtual volumes, it retains the same identities and access points for the hosts.

Collaboration is critical to many of today’s businesses, driven by the highly competitive nature of so many industries. Collaboration over distance is achieved with AccessAnywhere, which provides cache-consistent, active-active access to your critical data across VPLEX clusters.

 

EMC has a nice infographic that shows how this looks.

VPLEX Active-Active

There is also a VPLEX management server with Ethernet connectivity, which provides cluster management services when connected to the client’s network. This Ethernet port also provides the point of access for communications with the VPLEX Witness.

Witness Server

To help control where things land during a disaster or failure, a Witness server is used. This is a VMware virtual machine located in a separate site, network or location (a separate failure domain) that acts as a witness between the VPLEX clusters in a distributed solution. This additional site needs only IP connectivity to the VPLEX sites, and a three-way VPN is established between the VPLEX management servers and the VPLEX Witness. I’ve utilised the client’s head office or secondary sites, with their existing network and infrastructure, to facilitate this. While it is not something I’ve implemented, some customers require a third site with an FC LUN acting as the quorum disk; this must be accessible from the solution’s node in each site, resulting in additional storage and link costs.

 

So what physically is it?

 

Below is an image of the front and back of a VPLEX VS2 engine

One thing that should be kept in mind from the beginning is that the VPLEX hardware is designed and locked with a standard preconfigured port arrangement; this is not reconfigurable. The VS2 hardware must be ordered as a Local, Metro or Geo, and it comes pre-configured with FC or 10 Gigabit Ethernet WAN connectivity from the factory. You cannot currently purchase a VPLEX with both IP and FC connectivity. I hope that EMC changes this in the future, as being able to have redundant or multiple paths to different arrays could be very valuable.

The VPLEX cluster sits in your racks and is connected between your storage arrays and your compute. It consists of:

  • 1, 2 or 4 VPLEX engines
  • 2 directors within each engine
  • A management server
  • In dual or quad engine designs, a pair of FC switches for communication between the directors, plus 2 UPSs providing battery backup to the FC switches and the management server.

 

As a solution architect I’ve been frustrated by customers with 2 or 3 types of storage array in their environment. Because they don’t have the budget to swap out multiple SANs at the same time, it has limited the solutions that can be presented, or it has required yet another storage array to specifically address a requirement. VPLEX can really slot in, fill certain needs, and utilise the existing storage at the same time.

The VPLEX’s connectivity is split between front and back end connectivity (FE and BE). The FE ports will log in to the fabrics and present themselves as targets for zoning to the hosts. The BE ports will log in to the fabrics as initiators to be used for zoning to the storage arrays.
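You can check how each director’s ports are carved up by role from the CLI. The listing below is a trimmed, illustrative sketch; the port names, addresses and statuses will vary with your hardware:

ll /engines/engine-1-1/directors/director-1-1-A/hardware/ports

Name     Address             Role       Port Status
A0-FC00  0x500014426001be00  front-end  up
A0-FC01  0x500014426001be01  front-end  up
A1-FC00  0x500014426001be08  back-end   up
A1-FC01  0x500014426001be09  back-end   up
...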

Each director will connect to both SAN fabrics with both FE and BE ports. It should be noted that direct attachment can be done and is supported; however, it is limiting and might not meet the customer’s requirements.

The WAN connectivity ports are configured as either 4-port FC modules or dual-port 10GigE modules.

The FC WAN COM ports should be connected to dual separate backbone fabrics or networks that span the two sites. If the VPLEX is an IP version, the 10GigE connections will need to be connected to dual networks with the same QoS. The networking / site connectivity can be very complex, and I would strongly recommend having a service provider who is experienced in successful VPLEX deployments involved, or engaging EMC to work with you.

 

The CLI

The VPLEX CLI is divided into command contexts. Some commands are accessible from all contexts, and are referred to as ‘global commands’. The remaining commands are arranged in a hierarchical context tree. These commands can only be executed from the appropriate location in the context tree. Understanding the command context tree is critical to using the VPLEX command line interface effectively.
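For example, you move around the tree with cd, and the prompt tracks your current context. An illustrative session (the exact sub-contexts listed will vary):

VPlexcli:/> cd /clusters/cluster-1

VPlexcli:/clusters/cluster-1> ls

Contexts: consistency-groups devices exports storage-elements system-volumes virtual-volumes ...

VPlexcli:/clusters/cluster-1> cd /

VPlexcli:/>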

The root context contains ten sub-contexts:

  • clusters – Create and manage links between clusters, devices, extents, system volumes and virtual volumes. Register initiator ports, export target ports, and storage views.
  • data-migrations – Create, verify, start, pause, cancel, and resume data migrations of extents or devices.
  • distributed-storage – Create and manage distributed devices and rule sets.
  • engines – Configure and manage directors, fans, management modules, and power.
  • management-server – Manage the Ethernet ports.
  • monitoring – Create and manage performance monitors.
  • notifications – Create and manage call-home events.
  • recoverpoint – Manage RecoverPoint options.
  • security – Configure and view authentication and password-policy settings. Create, delete, import and export security certificates. Set and remove login banners. This context also contains an authentication sub-context.
  • system-defaults – Display systems default settings.

Except for the system-defaults context, each of these sub-contexts contains one or more sub-contexts of its own to configure, manage, and display sub-components.

The topmost context in the tree is the root context, or “/”.

The commands that make up the CLI fall into two groups:

  • Global commands that can be used in any context. For example: cd, date, ls, exit, user, and security.
  • Context-specific commands that can be used only in specific contexts. For example, to use the copy command, the context must be /distributed-storage/rule-sets.

Use the help command to display a list of all commands (including the global commands) available from the current context.

Use the help -G command to display a list of available commands in the current context, excluding the global commands.

As with most half-decent CLIs these days, you can use the Tab key to complete commands, display command arguments, and display valid contexts and commands.
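For instance (illustrative; the completions offered depend on your current context):

VPlexcli:/> cd /eng<Tab>

VPlexcli:/> cd /engines/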

 

The VPLEX command line interface includes four wildcards:

* – matches any number of characters.

** – matches all contexts and entities between two specified objects.

? – matches any single character.

[a|b|c] – matches any of the single characters a or b or c.

 

* wildcard

Use the * wildcard to apply a single command to multiple objects of the same type (directors or ports). For example, to display the status of ports on each director in a cluster, without using wildcards:

ll engines/engine-1-1/directors/director-1-1-A/hardware/ports

ll engines/engine-1-1/directors/director-1-1-B/hardware/ports

ll engines/engine-1-2/directors/director-1-2-A/hardware/ports

ll engines/engine-1-2/directors/director-1-2-B/hardware/ports

...

Alternatively, use one * wildcard to specify all engines, and a second * wildcard to specify all directors:

ll engines/engine-1-*/directors/*/hardware/ports

 

** wildcard

Use the ** wildcard to match all contexts and entities between two specified objects. For example, to display all director ports associated with all engines without using wildcards:

ll /engines/engine-1-1/directors/director-1-1-A/hardware/ports

ll /engines/engine-1-1/directors/director-1-1-B/hardware/ports

...

Alternatively, use a ** wildcard to specify all contexts and entities between /engines and ports:

ll /engines/**/ports

 

? wildcard

Use the ? wildcard to match a single character (number or letter).

ls /storage-elements/extents/0x1?[8|9]

Returns information on multiple extents.

 

[a|b|c] wildcard

Use the [a|b|c] wildcard to match any one of the characters in the brackets. For example, the following displays only the ports with names starting with A and a second character of 0 or 1:

ll engines/engine-1-1/directors/director-1-1-A/hardware/ports/A[0-1]  

 

Clusters – VPLEX Local™ configurations have a single cluster, with a cluster ID of 1. VPLEX Metro™ and VPLEX Geo™ configurations have two clusters, with cluster IDs of 1 and 2.

VPlexcli:/clusters/cluster-1/

 

Engines are named <engine-n-n> where the first value is the cluster ID (1 or 2) and the second value is the engine ID (1-4).

VPlexcli:/engines/engine-1-2/

 

Directors are named <director-n-n-n> where the first value is the cluster ID (1 or 2), the second value is the engine ID (1-4), and the third is A or B.

VPlexcli:/engines/engine-1-1/directors/director-1-1-A

 

For objects that can have user-defined names, those names must comply with the following rules:

  • Can contain uppercase and lowercase letters, numbers, and underscores
  • No spaces
  • Cannot start with a number
  • No more than 63 characters
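For example, a few illustrative names against those rules:

  • dr_site1_vol_01 – valid
  • DR_Site1_Vol_01 – valid
  • 1_dr_vol – invalid (starts with a number)
  • dr vol 01 – invalid (contains spaces)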

Some common and handy commands from my experience are:
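For example (a representative sample of documented VPLEX CLI commands, not an exhaustive list):

  • version – display the software versions running on the system
  • cluster status – show the operational status and health state of each cluster
  • cluster summary – display a summary of the clusters and their connectivity
  • health-check – run a high-level health scan of the overall system
  • connectivity validate-be – validate back-end connectivity between the directors and the storage arrays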

 

 

 

 

