The newer version 6, with XtremIO Data Protection (XDP), increases performance with built-in data protection.
Improved density, with the ability to scale out to eight X-Bricks if necessary, gives us more capacity.
In-memory, space-efficient copies.
We moved and zoned our production, client-facing applications, which gave us quicker response to the business and more efficiency.
It also assisted with large datasets, dramatically reducing our batch runs.
The newer HTML5 interface means Java is no longer required.
Three years.
In the very beginning, with the old version 4, we had issues with snapshots and clones.
Yes, I do recall we had an issue with a version of code on XtremIO before we could scale.
Seven out of 10.
We are using HDS and EMC.
I was not involved with setup.
I was not involved.
HDS, IBM, and Pure Storage.
A proof of concept will provide the best results for determining what solution you should go for.
None of the features are valuable for us. The product is poorly designed and not reliable. It performed well when it did work.
We have had no benefits from this product. It produced an outage and didn't deliver on promises made by the sales and engineering teams.
It needs a lot of improvement to be more like Pure Storage or Nimble AFAs.
We have been using this product for a year.
We had stability issues. We ran out of space pretty quickly and the cost to scale did not match our needs.
We had scalability issues. It does not scale in a granular manner.
The technical support is horrible. It took a week for them to get back to us on what caused the outage. Also, there are too many different teams that support the product internally.
We used EMC VMAX. We switched because EMC made it too costly to stay on VMAX and gave us discounts on XtremIO.
The initial setup was complex. We hired someone to do it. It took them two days. Our new array was done by my team in less than two hours.
EMC has not delivered on promises with their array. We have been with them for seven years and had five different arrays. Same story every time. Worst support in the industry.
We evaluated Pure Storage, Nimble, and SolidFire.
Don't do it.
It provides reliable and predictable performance with very little administration required beyond the initial setup.
Native data replication: To replicate data between XtremIO devices, you need to use EMC’s RecoverPoint appliances to move the data. More and more arrays are providing the ability to replicate the data natively without the need for a secondary device to do it for them.
The EMC VNX platform is the same way; it also relies on RecoverPoint for replication. EMC’s flagship VMAX and their new Unity platform replicate natively. Even EMC's Isilon does data replication natively.
XtremIO needs to catch up. That’s about the only Achilles heel of the product.
We have not had any issues with stability.
We have not had any issues with scalability.
Technical support is excellent.
We did not have a previous solution.
The setup was straightforward. Follow the installation guide and it’s a slam dunk.
We evaluated HPE 3PAR, Pure Storage, and EMC VNX all-flash arrays.
We had an established EMC footprint in our data center and a good relationship. Exploring their AFA made sense. To keep things honest, we evaluated other products and conducted a PoC with other vendors.
The XtremIO product wasn’t always the fastest, but it was absolutely linear in performance and we encountered no issues. The PoC kept pricing honest as well.
Several processes that used to take hours to complete now take minutes.
There are advanced features we are not currently utilizing (AppSync, snaps of production, etc.), but we plan to deploy them to bring additional efficiencies.
Speed and reliability: This system hosts several mission-critical, latency-sensitive workloads and XtremIO has delivered on those promises.
I would like hardware capacity additions to be a little more flexible. The upgrade path for existing XtremIO units requires you to purchase two X-Bricks at a time, and they need to be the same capacity as the existing X-Bricks:
- You could not mix drive sizes.
- You could not add just a single X-Brick.
- You had to fully populate both X-Bricks.
All of this adds up to a very expensive, large upgrade path.
However, after saying all of this, EMC announced a new generation of XtremIO (X2) which allows more granular growth, mixed drive sizes, etc.
(You need to purchase new hardware; I don’t believe they are adding these features to existing XtremIO clusters.)
Knock on wood, we have not had any stability issues.
We have not specifically had scalability problems. Upgrade paths are fixed.
Paths, capacity, and performance scale as X-Bricks are added.
Technical support is excellent. This product line receives high-level support.
The only negative was a lack of field support during early deployments (longer lead times than average). That has since been resolved with additional training and staffing levels.
We were using several other technologies prior to introducing XtremIO.
We switched because PoC testing proved the all-flash option to offer superior performance as compared to existing in-house technologies.
The install and setup were very easy and straightforward. With the proper pre-planning and facility work (power, cooling, network, and FC connections), we were up and operational within a few hours.
At first glance, this solution is pretty expensive. However, when you factor in inline deduplication, inline compression, zero-overhead snaps, thin provisioning, etc., you find the overall cost to be in line with, or better than, traditional tier 1 storage subsystems.
With some workloads that benefit from compression and deduplication, costs are actually better than some tier 2 subsystems (while latency remains <1ms).
This makes some happy dev testers!
We did research several other vendors (Pure Storage, Hitachi, IBM, etc.), but we only conducted a PoC on EMC’s XtremIO.
Download and utilize a free deduplication/compression assessment tool to identify effective reduction rates and determine your effective capacity cost.
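To make that advice concrete, here is a minimal sketch (Python, with made-up numbers) of the effective-capacity cost math; the prices and ratios below are illustrative assumptions, not vendor pricing.

# Minimal sketch of the effective-capacity cost math described above.
# The prices and reduction ratios are illustrative assumptions only.

def effective_cost_per_tb(price_per_raw_tb: float, reduction_ratio: float) -> float:
    """Cost per effective (post-deduplication/compression) terabyte."""
    return price_per_raw_tb / reduction_ratio

# Hypothetical all-flash array at $2,000 per raw TB with 3:1 data reduction
# versus a hypothetical tier 2 array at $900 per usable TB with no reduction.
afa_cost = effective_cost_per_tb(2000, 3.0)    # ~$667 per effective TB
tier2_cost = effective_cost_per_tb(900, 1.0)   # $900 per effective TB

print(f"AFA:    ${afa_cost:,.0f} per effective TB")
print(f"Tier 2: ${tier2_cost:,.0f} per effective TB")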
VDI is one of the top mission-critical things we offer our users. This storage runs our whole VDI environment and barely shows a blip on I/O. Previously, we ran the VDI on non-flash storage, and when Windows updates came out, we had to install them in scheduled segments so as not to overload the storage. With this storage, we do them all at the same time and there is no impact on performance whether 1 or 100 VMs reboot at the same time.
Dedupe, compression, and high I/O are the most valuable features. It is great for applications like Microsoft Exchange, ERP, SQL, and VDI. It basically saved VDI buy-in from users, as performance is now seamless in comparison to a physical PC.
Get rid of the Java aspect of the GUI console. The GUI used to administer the array runs on Java, which at best is buggy and prone to loading issues, so moving away from this platform would be nice.
We had no issues with stability.
We had no issues with scalability.
I would give technical support 9 out of 10. Nothing is perfect but they sure are close to it.
We previously deployed EMC VNX storage (and still use it for our lower-performance applications), and before that, we had Dell EqualLogic. We switched to an all-flash array because we wanted high-performance storage for our three most critical applications (Exchange, ERP, and VDI). We wanted to do a full VDI platform for all our users and locations, and we wanted the best experience for them, as any hiccup would mean a lower buy-in rate. This storage made that task much easier.
We bought it through VCE, so they included setup with it. Things went smoothly. When we did receive the storage, within a day or two, we had a controller failure but since it had two controllers, there was no impact to users. Support was fantastic and got it replaced over the weekend, and we didn’t even have to do anything other than authorize them into our data center to replace the failed part.
It is costly but worth it. If the network or infrastructure you have is always a sticking point to users or management, spending the bucks on an all-flash array can help win them over.
We looked at more EMC VNX storage, but at that time, we were not aware of this offering. When we started talks with EMC, our rep pointed at this product line, and once we saw a demo, we were sold. After more research, it didn’t take us long to get the paperwork in place. We also didn’t look at other vendors, as we use VCE as our main infrastructure at our data center, so regardless of what model or product line of EMC we bought, VCE would handle the support. This was one of the main reasons for going with VCE, so we wanted to carry it on with the new storage.
I wish we bought double the capacity but we only had so much to spend, as I would put every application/server on this array.
We have had major stability issues. Over the course of two years, three of the four storage controllers failed. It took us over two weeks to get a replacement for the first.
Technical support was the worst possible. Regardless of the fact that we paid for a four-hour turnaround, we were waiting two weeks for support calls. When we did get support calls, the engineers were up to four hours late to the datacenter. After the engineers would finally show up, we would still wait weeks for parts.
We previously used a large NetApp array, but the issue was storage density. We were at the point where we couldn’t add any further disk shelves to the controllers.
Don’t buy this array. You’re paying for loads of magic beans, since it’s mediocre at best for a platform in a rapidly growing field. Look instead at Pure Storage or something with variable block deduplication. You’ll end up spending less and getting a better product with actual support.
We were not given the opportunity to evaluate alternatives. Upper management made the decision without the input of the engineers.
Don’t consider it. Look at a platform that has actual support. EMC is a big name, but their support model is terrible with an even worse model for implementation. For a platform you literally can’t touch without them, you’re stranded on a desert island with no help in sight.
We use it together with VPLEX, which virtualizes the storage array with all its benefits.
This virtualization layer adds to the latency. Even so, with XtremIO behind the VPLEX, the response times are far below those of our other storage arrays, even the ones with SSDs onboard.
The data reduction (deduplication and compression) is the most valuable feature in our business case.
We calculated that we needed a reduction ratio of 3:1 for a positive business case, and we actually reached a little bit higher (3.1:1). This makes our business case even better.
Even with this feature, the response time is far below what we received with our other storage arrays.
Another valuable feature is the guaranteed sub-millisecond response time for a 4K block.
It has no native storage replication; the replication is done through the VPLEX. In some cases where we don’t need the flexibility of the virtualization layer, we could free up resources on the VPLEX by using native storage replication instead.
Until now, we have not encountered any issues with stability.
We have not had any scaling issues so far. Scaling up is, in fact, very easy: just “buy” a 40TB X-Brick and plug it in. The system does the rebalancing automatically. Since we use a VPLEX, the scaling limitation lies with the VPLEX.
Technical support is good. The installation went smoothly from Dell EMC’s side. We have not encountered real technical issues yet, but the questions we had were all answered within an acceptable time frame.
Part replacements are done transparently, without any intervention from our side.
We used HPE and EMC storage arrays, but the main reason we switched was the positive business case. We have a lot more flexibility (VPLEX) and a reduction in cost and floor space (XtremIO) due to deduplication and compression.
The initial setup of the XtremIO was very straightforward in combination with VPLEX. The setup of the VPLEX was a little bit more complex, but the XtremIO just needed to be connected to the VPLEX.
XtremIO is pretty straightforward about pricing. However, you need to look at your data so you can estimate, with the advice of Dell EMC, what data reduction ratio you will reach. In our case, a 3:1 reduction ratio gave us a positive case compared to other storage arrays.
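For illustration only, here is a small Python sketch of the kind of break-even calculation behind that 3:1 figure; the dollar amounts are hypothetical assumptions, not our actual prices.

# Illustrative break-even check: how much data reduction does the AFA need
# before its cost per usable TB matches an alternative array?
# The prices below are hypothetical; only the 3:1 target is from our case.

def break_even_ratio(afa_price_per_raw_tb: float, alt_price_per_usable_tb: float) -> float:
    """Minimum reduction ratio for the AFA to match the alternative's cost per usable TB."""
    return afa_price_per_raw_tb / alt_price_per_usable_tb

# Hypothetical: $2,400 per raw TB on the AFA vs. $800 per usable TB elsewhere
# -> a 3:1 reduction ratio is the break-even point, so reaching 3.1:1 in
# practice tips the business case in the AFA's favor.
print(break_even_ratio(2400, 800))  # 3.0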
The XtremIO by itself, without a virtualization layer, has some drawbacks, such as the lack of storage replication. I would really recommend installing it in combination with a storage virtualization layer.
It has offloaded high IOPS processes and cleared the main arrays for bulk work.
Even with the fast SSD drives and processing on the controller, there was still a lag on the FC ports.
The initial node came with only two FC ports per controller. We were used to having multiple ports on the VMAX to spread traffic over several VSANs.
For more detail:
I had four DH2i PowerPath servers hitting it, along with four VMware clusters of eight hosts each. On the X1 brick, we only had two controllers, each with two FC ports,
so a total of four FC ports.
Compare that to the VMAX 20K, where I had 8 ports on VSAN 2, 6 ports on VSAN 100, and 8 ports on VSAN 50, so I was able to spread the traffic around between processes.
I had two directors on one VMAX, whereas I had three directors on the other VMAX.
With only four ports on the XtremIO, the most I could do was send traffic on two ports to two different VSANs, one on each controller.
So my comment was: get additional ports, so the DH2i servers don’t hog all the IOPS.
I recommend getting the second brick (X2) and the matrix switch; then, with eight FC connections, you can start spreading the traffic.
The company had me routing the data through a separate MDS 9500 fabric switch, away from the main traffic, as this was a test.
Most of production was on four other MDS 9500 switches.
Monitoring of the switch did not show a bottleneck going to the servers, only on the four 8Gb FC links going to the XtremIO.
Connect to different blades on the 9500.
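As a rough, back-of-the-envelope illustration of that port imbalance, here is a small Python sketch of the initiator-to-target-port fan-in for the two setups; the assumption of two HBA ports per host (and the same host set on both arrays) is mine, purely for illustration.

# Rough fan-in (initiator ports per storage-facing FC port) for the setups above.
# Host counts come from the description; two HBA ports per host and a shared
# host set across both arrays are assumptions made purely for illustration.

HBA_PORTS_PER_HOST = 2  # assumed

def fan_in(host_count: int, storage_ports: int) -> float:
    """Initiator ports contending for each storage-facing FC port."""
    return host_count * HBA_PORTS_PER_HOST / storage_ports

hosts = 4 + 4 * 8            # 4 DH2i servers + 4 VMware clusters of 8 hosts = 36
xtremio_x1_ports = 4         # 2 controllers x 2 FC ports each
vmax_20k_ports = 8 + 6 + 8   # ports spread across three VSANs

print(f"XtremIO X1 fan-in: {fan_in(hosts, xtremio_x1_ports):.1f}:1")  # 18.0:1
print(f"VMAX 20K fan-in:   {fan_in(hosts, vmax_20k_ports):.1f}:1")    # ~3.3:1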
I don’t think they have touched it since I left, nor the other eight SAN units.
We have been using the solution for two years.
We had some stability issues. Initially, one of the ports failed. The unit also could not use a LUN larger than 2TB. After testing all our variables, it was determined that XtremIO was the cause, and a patch was created.
The servers were attached with both PowerPath and VMware 5.1 datastores, via an MDS 9500 fabric switch network.
It was never expanded to the second node (X2), although that was a stated option.
The technical support was poor, even during the port failure and the 2TB LUN limit issues. It was rare to hear back from the technical analyst looking at the unit through ESRS.
Over my thirty years in the IT field, I have tried many solutions and worked with a variety of platforms.
Compared to others, the setup and operation are easy. I worked at the company for almost three years, learning XtremIO with little assistance from co-workers or the vendor.
Even before Dell bought EMC, the pricing was steep.
We evaluated Pure Storage and NetApp.
Our company didn’t send anyone to operations training until we had the unit for two years. I would advise you to send your technical expert to take the training early on.
Did you try IBM FlashSystem 900? If you are looking for performance, resilience, and simplicity, you really should ask for a PoC; you will be positively impressed.