Solutions Architect at a tech services company with 51-200 employees
Consultant
I welcome the software-only version of the brilliant VPLEX hardware, but I find its use may be somewhat limited currently.

It seems so long ago now when I reviewed EMC World 2014. One of the things I wanted to learn more about while I was there was VPLEX/VE.
So far, everything I have found out makes me wonder, “Which type of customer is it designed to fit?” To explain what I am talking about, you need to know the architecture.

VPLEX/VE Architectural Features:

  • Uses vApps and runs on ESXi.
  • Requires 4 vDirectors per site, each statically bound to an ESXi host (a prerequisite check is sketched after this list).
  • Has a virtual management server per site that can reside on any of the ESXi hosts.

  • Has an optional Cluster Witness feature, which for ideal circumstances needs a third site.
  • VPLEX/VE will only operate synchronously between 2 sites (VPLEX Metro).
  • 2x WAN links are preferred, with round-trip latency of up to 10 ms between the replicated sites and up to 1,000 ms between each replicated site and the witness site.
  • Is iSCSI only (FC is available only in its bigger brother, the full VPLEX).
  • Supports VNXe arrays only.
  • Is limited to 80K IOPS.
  • Is managed via the vSphere Web Client.
  • Needs distributed switches for operation. [Edit: Correction, spotted by Leonard McCluskey and supported in EMC documentation – both vDS and vSS are supported. Thanks!]
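
To make the host and switch requirements above concrete, here is a minimal sketch of how you might sanity-check a site from vCenter before even considering VPLEX/VE. It assumes the open-source pyVmomi library, and the vCenter address and credentials are placeholders; this is purely illustrative and not part of any EMC tooling.

```python
# Minimal prerequisite sanity check for a would-be VPLEX/VE site.
# Assumes the open-source pyVmomi library; the vCenter address and
# credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"          # hypothetical
USER = "administrator@vsphere.local"       # hypothetical
PASSWORD = "changeme"                      # hypothetical

context = ssl._create_unverified_context()  # lab use only: skips cert verification
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
content = si.RetrieveContent()

# Collect all ESXi hosts visible to this vCenter.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
hosts = list(view.view)
view.Destroy()

print(f"ESXi hosts found: {len(hosts)} (VPLEX/VE wants 4 per site)")

# Report standard (vSS) and distributed (vDS) switches on each host.
for host in hosts:
    if host.config is None:     # skip disconnected hosts
        continue
    net = host.config.network
    std_switches = [s.name for s in (net.vswitch or [])]
    dv_switches = [p.dvsName for p in (net.proxySwitch or [])]
    print(f"{host.name}: vSS={std_switches or 'none'}, vDS={dv_switches or 'none'}")

Disconnect(si)
```

This only illustrates the kind of environment (at least 4 hosts per site, with vSS or vDS networking) that VPLEX/VE expects to find.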

The Argument:
Having covered the above points, let's extract what this really means. VPLEX/VE is an amazing feat of engineering. I welcome the software-only version of the brilliant VPLEX hardware, but I find its use may be somewhat limited currently. Perhaps I am not thinking openly enough?
My argument is based on the fact that VPLEX/VE supports VNXe and iSCSI only, so it can only appeal to companies that would use this combination of storage array and protocol for production storage, i.e. small businesses.
I find the following areas conflict with the typical profile of small businesses:

  • 4 ESXi hosts per site are required as a minimum. Due to needing distributed switches, these hosts will require Enterprise Plus licensing [Edit: based on the vDS/vSS correction above]. Many small businesses aren't likely to have as many as 8 hosts and usually license vSphere at a lower tier due to cost.
  • The witness should reside at a third site. Many small businesses are lucky to have somewhere suitable to run their server hardware at one site, let alone three.
  • Having 2 WAN links between Site 1 and Site 2 with less than 10 ms round-trip time is a big ask for a small business. Even 2 WAN links between Sites 1 & 3 and Sites 2 & 3 with 1,000 ms round-trip time could be challenging for some small businesses (a quick latency check is sketched after this list). I appreciate, however, that it will work with 1 WAN link between each site.
  • Implementing a stretched vSphere cluster doesn't stop once compute resources and active/active multi-site storage have been provided. It also requires network configuration providing a stretched layer 2 subnet, which is again something a small business is less likely to have.
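
For a rough feel of whether a pair of links is anywhere near those budgets, here is a small, purely illustrative Python sketch that approximates round-trip time with TCP connects (ICMP ping usually needs elevated privileges). The hostnames are placeholders; a real assessment would use proper network tooling.

```python
# Rough RTT check against the VPLEX/VE latency guidance:
# <= 10 ms between replicated sites, <= 1,000 ms to the witness site.
# Hostnames below are placeholders for illustration only.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 10) -> float:
    """Median round-trip time (ms) of a TCP connect to host:port."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

checks = [
    ("site2.example.local", 10.0),      # replicated peer site: 10 ms budget
    ("witness.example.local", 1000.0),  # witness site: 1,000 ms budget
]

for host, budget_ms in checks:
    rtt = tcp_rtt_ms(host)
    verdict = "OK" if rtt <= budget_ms else "TOO SLOW"
    print(f"{host}: ~{rtt:.1f} ms RTT (budget {budget_ms:.0f} ms) -> {verdict}")
```

Results consistently above the 10 ms inter-site budget would make the synchronous VPLEX Metro configuration described above a non-starter.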

Many of these requirements are easily met in larger companies: multiple sites with facilities to run hardware, 4 hosts per site across 2 sites with a third to run the witness, and low-latency WAN links.

These are all pretty trivial for larger customers, but a VNXe serving as the main production storage array, running a workload important enough to warrant a multi-site stretched vSphere cluster, is something I think is unlikely to be found in those customers.

I appreciate that VNXe is frequently used in larger companies (e.g. for branch offices, departmental use or backup targets), but those same companies are much more likely to run the full-blown VPLEX with a high-end VNX or VMAX, especially for very important workloads.

A VNXe used as the production storage array is, in my experience, primarily found in small businesses, whereas the environment required to support VPLEX/VE is rarely found in companies of that size. There are always exceptions, but to put it bluntly, if a company can afford the environment required to run VPLEX/VE, it is likely to use a higher-caliber storage array (I'm not putting VNXe down; it is a great product).

Disagree? Let me know in the comments.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
it_user250299 – MTS, QA / Test Lead at a tech company with 10,001+ employees
Real User

Hi,

Thanks for your candid feedback on VPLEX/VE.

I am not allowed to talk specifics yet, but some of the pain points around the cumbersome requirements will go away in the upcoming release of VE.

Also, with VE 2.1 SP1 (our last VE release) we support VNX as well.

The witness can now also reside in the cloud – certified to work with vCloud Air.

Thanks
Anurag

Solutions Architect with 51-200 employees
Vendor
NetApp MetroCluster vs. EMC VPLEX

MetroCluster was the last major feature of 7-Mode to be ported over to Clustered Data ONTAP (it is included in the recently announced 8.3 version). Metro cluster solutions enable zero RPO and near-zero RTO, and they are typically a requirement for building a VMware Stretched Cluster (to enable vMotion, HA, DRS and FT over distance). Let’s take a look at how MetroCluster compares with EMC’s flagship continuous availability solution – VPLEX Metro:

MetroCluster is a standard feature of ONTAP, rather than a separate product, and requires:

  • A 2-node cluster at each of the two sites – all nodes in a MetroCluster need to be identical (FAS2500 series not supported)
  • 2 x FC switches per site and 4 x FC ports per controller – dedicated to MetroCluster (for FlexArray the same switches are used for both MetroCluster and storage attachment)
  • 1 x dual port FCVI card per controller – provides remote NVRAM mirroring
  • 1-4 dedicated ISLs per switch – up to a maximum of 200 km (dark fibre or xWDM)
  • 2 x FibreBridges for each disk stack – connects the SAS disk stack to the FC switches (not required for FlexArray)
  • 2 x Ethernet ports per controller – used to replicate the cluster configuration between the sites
  • Tiebreaker software (optional) – automatically triggers a switchover in the event of a disaster by monitoring the environment from a 3rd location

VPLEX Metro is a storage virtualisation appliance that can simultaneously read/write to the same data across two data centres, consisting of:

  • GeoSynchrony software – enables N+1 clustering, non-disruptive hardware and software upgrades, and the ability to virtualise storage
  • 1, 2 or 4 Engines per site – each consisting of two high-availability directors
  • 2 x FC switches per site – for connectivity of hosts and storage
  • Inter-cluster connectivity – FC (dark fibre or DWDM) or IP with up to a maximum 10 ms RTT
  • Host and storage connectivity – FC only
  • Witness software – automatically makes storage available on the surviving site in the event of a disaster by monitoring the environment from a 3rd location

The core capability of both solutions is to provide continuous availability (zero downtime) – hosts are not impacted by the loss of local storage as the remote copy seamlessly takes over data-serving operations. Let’s see how they compare in other areas:

Ease of Use

Easy win for NetApp, as MetroCluster is a standard feature; essentially it is a “set it and forget it” solution – any changes to the primary storage are automatically mirrored to the secondary.

Licensing

Easy win for NetApp, as MetroCluster is a standard feature and therefore there is no additional charge for the software (additional connectivity hardware is required), whereas VPLEX requires a licence for all of the storage managed, as well as additional hardware appliances.

Advanced Storage Features

Easy win for NetApp, as MetroCluster supports nearly all of the features of Clustered Data ONTAP (e.g. de-duplication, compression, snapshots, integrated data protection and NAS), whereas VPLEX only provides storage virtualisation and non-disruptive operations (e.g. LUN/array migration).

The only features not supported by a MetroCluster are Infinite Volumes, NSE drive encryption, disk partitioning on the root aggregate and SSD partitioning for Flash Pool.

Connectivity and Scalability

  • Inter-site connectivity – easy win for EMC, as VPLEX can replicate over both FC and IP, whereas MetroCluster is limited to FC
  • Host connectivity – easy win for NetApp, as MetroCluster supports FC, FCoE, iSCSI and NFS, whereas VPLEX is limited to FC
  • Scalability – easy win for EMC, as VPLEX supports eight directors (controllers) per site, whereas MetroCluster is limited to two

Planned and Unplanned Site Failure

With MetroCluster a volume or LUN is online in only one cluster at a time, client/host access is not possible on the remote cluster unless a switchover is performed. Switchover operates at the site level – all aggregates, volumes, LUNs, and SVMs will switchover to the other site. The configuration is active-active, so that each cluster can serve its own separate workloads while providing DR protection for the other.

VPLEX is far more flexible as it is able to “stretch” a LUN across sites and allow hosts at each site to have read/write access to the local version of the LUN. With VPLEX there is no concept of a switchover as LUNs are simultaneously active on both sites.

Both solutions support disaster avoidance (i.e. planned site failure) by manually taking down the cluster at one site so that all storage is active on the remaining site – in the event of an unplanned site failure, the disaster recovery process can be automated by the Tiebreaker/Witness software.
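
To illustrate why that third-site observer matters, here is a deliberately simplified toy model of the tiebreaker/witness decision. It is not NetApp's or EMC's actual implementation – just a sketch of the split-brain-avoidance logic that both products automate.

```python
# Toy model of the third-site witness/tiebreaker idea described above.
# This is NOT either vendor's implementation -- just an illustration of
# why a third observer prevents split-brain when the inter-site link fails.

def site_decision(peer_reachable: bool, witness_reachable: bool) -> str:
    """What a site should do with the mirrored/stretched storage."""
    if peer_reachable:
        return "keep serving locally (normal operation)"
    if witness_reachable:
        # The peer is gone but the witness still sees us: we win the
        # tiebreak and take over (MetroCluster switchover / VPLEX winner).
        return "take over the peer's storage"
    # Neither the peer nor the witness is reachable: we may be the isolated
    # site, so stop serving rather than risk split-brain.
    return "suspend I/O and wait for an administrator"

# Example: peer site unreachable but the witness is still visible; in a real
# product the witness ensures only one site is granted the right to proceed.
print(site_decision(peer_reachable=False, witness_reachable=True))
print(site_decision(peer_reachable=False, witness_reachable=False))
```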

External Array Virtualisation

Win for VPLEX as it supports more external array platforms and can utilise data on an existing LUN – NetApp FlexArray can virtualise external arrays, but the data on the LUNs must be destroyed before they can be used.

It is important to note that MetroCluster can be deployed with internal disks only, with external disks only or a combination of the two – VPLEX does not support internal disks.

VMware Metro Storage Cluster (vMSC) support

Both MetroCluster* and VPLEX have full support for vMSC and therefore will enable vMotion, HA, FT and DRS between data centres.

One significant advantage of VPLEX is that it does not require the hosts to be configured to access the storage at both sites (cross-cluster connect); VPLEX does support cross-cluster connect, and there are some availability advantages to doing so, but it is not mandated. This makes it possible to move Virtual Machines from one site to another and have both the compute and storage resources delivered locally – with MetroCluster the storage is only ever active on one site.

MetroCluster does have more flexible protocol support – FC, FCoE, iSCSI and NFS – whereas VPLEX is limited to FC.

* 8.3 certification pending

So which is the best?

MetroCluster wins with its simplicity, advanced storage features and lower cost, while VPLEX wins with its ability to replicate over IP and to access the storage simultaneously at both sites. Therefore there is no clear winner – it comes down to which best matches your requirements and budget – but there could be one if:

  • NetApp were to add in some key missing features (i.e. replication over IP, volume move across sites, multi-node scalability and simultaneous LUN access)
  • EMC were to integrate VPLEX into their storage platforms and support block and NAS protocols

So there you have it, hopefully a balanced view of the two solutions, and as always comments would be appreciated.

Disclosure: My company has a business relationship with this vendor other than being a customer. We are Partners with NetApp and EMC.
PeerSpot user