Snapshots, deduplication, compression: We use XtremIO for multi-version databases. Being able to snapshot a database consistency group to create DEV/UAT copies, while relying on deduplication, thin provisioning, and compression, lets us maintain the numerous copies of each database we need. On top of that, being able to refresh any snapshot set with the contents of any other snapshot set in the same lineage reduced our refresh times from hours to minutes.
Principal Storage Engineer at a tech consulting company with 1,001-5,000 employees
We use it for multi-version databases. We've seen reduced DEV/UAT refresh periods.
What is most valuable?
How has it helped my organization?
DEV/UAT refresh periods reduced from hours or longer to under five minutes.
What needs improvement?
Volume count. There is a hard limit of 8,192 volumes per cluster. This becomes an issue with DR replication via RecoverPoint when trying to maintain the best RPO possible.
The 8,192-object limit covers all volumes/LUNs presented to hosts plus all snapshots, so it is very easy to bump up against it in certain circumstances. We use RecoverPoint to replicate between XtremIO arrays, and since RecoverPoint creates a snapshot of each volume to allow point-in-time recovery, that results in a lot of snapshots that have to be accounted for.
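As a rough illustration of how quickly snapshots eat into that limit, here is a back-of-the-envelope budget check; all of the counts below are hypothetical:

```python
# Back-of-the-envelope check against the 8,192-object cluster limit.
# All counts below are hypothetical; the limit covers host volumes,
# DEV/UAT snapshot copies, and RecoverPoint point-in-time snapshots alike.
VOLUME_LIMIT = 8192

host_volumes = 400          # LUNs presented to hosts
dev_uat_copies = 4          # snapshot sets kept per volume for DEV/UAT
rp_snaps_per_volume = 8     # RecoverPoint snapshots retained per volume

total = host_volumes * (1 + dev_uat_copies + rp_snaps_per_volume)
print(f"objects consumed: {total} of {VOLUME_LIMIT}")
print(f"headroom: {VOLUME_LIMIT - total}")
```

Even a modest retention policy multiplies each host volume many times over, which is why the limit surfaces mainly in RecoverPoint deployments.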
For how long have I used the solution?
One year.
Buyer's Guide
Dell XtremIO
July 2025

Learn what your peers think about Dell XtremIO. Get advice and tips from experienced pros sharing their opinions. Updated: July 2025.
861,524 professionals have used our research since 2012.
What do I think about the stability of the solution?
None; the product has been working as expected, without issue.
What do I think about the scalability of the solution?
None that weren’t expected. This is a scale-out product, not scale-up (or scale-up-and-out): you can go from one X-Brick to two, two to four, and then four to eight. If you know this up front, it is very easy to plan around.
How are customer service and support?
Extremely good. EMC has been outstanding with support, especially when using their call-home utility ESRS.
Which solution did I use previously and why did I switch?
Yes, we had a previous all-flash array vendor, but we encountered many issues with support and scalability, and a general lack of data-efficiency services that ultimately mattered more than all-flash performance.
How was the initial setup?
Initial setup is very straightforward. There is a configuration workbook you complete to provide the basic information (IP addresses, domain names, mail, SNMP, etc.), and you work with a Dell EMC project manager to get it installed. The array comes preconfigured from a storage standpoint, so once it is up and running you can start allocating storage immediately.
What's my experience with pricing, setup cost, and licensing?
There is not much to license on a basic XtremIO installation; everything is licensed up front. There are no built-in replication or other business-continuity features; if you need those, you will need to look at products such as VPLEX to sit in front of the XtremIO.
Which other solutions did I evaluate?
What other advice do I have?
Understand your workloads and use-cases. This is not a perfect solution for all flash workloads. If you cannot take advantage of deduplication and compression there may be better/cheaper solutions. If you want simplified replication, this is not the product for you. For us, performance wasn’t the prime driver. We wanted a scalable solution and our workloads could take advantage of deduplication extremely well so this was an obvious choice.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Practice Manager - Cloud, Automation & DevOps at a tech services company with 501-1,000 employees
This is the first time I have witnessed 400,000 IOPS in any kind of enterprise lab.
Originally posted at vcdx133.com.
Today I completed the initial performance testing of my EMC XtremIO PoC system. I wanted to take a shot at it myself before the EMC SMEs come in to tune and optimise the configuration. In a single word, “Wow!” This is the first time I have witnessed 400,000 IOPS in any kind of enterprise lab. I look forward to seeing what additional tricks the experts can make my “X-bricks” perform.
Business Requirement for XtremIO
I can imagine people reading this and asking, “Why? It is so expensive!”. Well, the organisation I work for uses monolithic storage (EMC Symmetrix VMAX) which has been sized for capacity, and after 2 years of use we are feeling the impact of performance degradation as we consume the total capacity of the solution. My business requirement is to create a small but powerful “High Performance” cluster of compute, network and storage that will provide low latency, high I/O resources for my business critical applications that are currently suffering. This XtremIO PoC is an attempt to meet that business requirement; I am also seriously considering hyper-converged infrastructure and server-side flash-cache acceleration.
Iometer Test Configuration
- 3 x HS22 blades with 2 x 4C Intel Xeon X5570 2.9GHz CPU, 96GB RAM and QLogic HBAs per blade running ESXi 5.5 Update 2 (Boot from DAS)
- IBM BladeCenter Chassis with Brocade Switch modules connected to XtremIO chassis with 6 x 8Gb FC
- IBM BladeCenter Cisco 1GE Switch Modules connected to Core switch network
- EMC XtremIO X-bricks version 2.4.1 with EMC XtremIO Storage Management Application version 2.4.1
- 8 x 1TB Volumes (Encryption enabled) mounted as VMFS-5 Datastores with VMware NMP set to “Round Robin”
- 8 x Iometer Dynamos running on Windows Server 2008 R2 with 3 x 40GB vDisks connected to Paravirtual vSCSI Adapters (1:0, 2:0, 3:0)
- 1 x Iometer Manager running on Windows Server 2008 R2
- Test 512 B, 4K, and All-in-one access specifications (two variants: 100% Read / 0% Random, 0% Read / 100% Random)
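For reference, the “Round Robin” NMP setting above is applied per device; on ESXi 5.x this can be done from the CLI roughly as follows. The device identifier below is a placeholder, and the IOPS-per-path tweak is an optional tuning step rather than something this test configuration necessarily used:

```shell
# Sketch for ESXi 5.x; the naa identifier is hypothetical, and the commands
# should be verified against your build before use.
# Set the Round Robin path-selection policy on an XtremIO LUN:
esxcli storage nmp device set --device naa.514f0c5000000001 --psp VMW_PSP_RR
# Optionally lower the IOPS-per-path switch threshold from the default 1000:
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.514f0c5000000001 --type iops --iops 1
```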
Iometer Test Results
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Practice Manager - Cloud, Automation & DevOps at a tech services company with 501-1,000 employees
The XtremIO cost would be slightly cheaper than the VMAX, and 1/3rd the size.
Originally posted at vcdx133.com.
I am currently testing my EMC XtremIO PoC system with EMC. One of the great benefits of XtremIO is its deduplication feature, which, the experts tell me, will be at least 10:1 and even better in version 3.0. My current Symmetrix VMAX configurations are 250TB and 350TB of tiered SSD, 15K FC, and SATA storage across two sites. So, assuming a 10:1 dedupe ratio, could I replace my two Symmetrix VMAX solutions with two XtremIO systems of 2 X-bricks each (the 20TB model)? It almost seems too good to be true! From a price perspective, the XtremIO would be slightly cheaper than the VMAX (after the highly combative process of vendor bashing, sorry – negotiation, in my region), and from a space perspective, one third the size (with SAN fabric). No need to state the obvious about performance.
UPDATE: 10:1 is too good to be true, EMC experts tell me 1.x-2:1 is more realistic for business critical databases. V3.0 will add compression, which will increase space efficiency by a small percentage also. So hold your plans to drop spinning disks from your data center.
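The update changes the sizing math completely. A quick sanity check, using the figures from this post (2 X-Bricks of 20TB usable against the 250TB VMAX), shows why:

```python
# Effective capacity of 2 x 20 TB X-Bricks at the promised vs. realistic
# dedupe ratios (figures from this post; dedupe ratios are workload-dependent).
vmax_tb = 250
bricks, xbrick_usable_tb = 2, 20

for dedupe in (10, 2):
    effective_tb = bricks * xbrick_usable_tb * dedupe
    verdict = "fits" if effective_tb >= vmax_tb else "does not fit"
    print(f"{dedupe}:1 dedupe -> {effective_tb} TB effective ({verdict})")
```

At 10:1 the two bricks would comfortably cover the VMAX; at 2:1 they fall far short, which is exactly why the spinning disks get to stay.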
The picture below shows my VNX VG8 NAS Gateway with a 3 bay, 2 Engine Symmetrix VMAX 20K on the left (yes, I run entirely with NFSv3 and am 99% virtualised with vSphere 5.5 on Cisco UCS – I built my own vBlock!) and my XtremIO PoC system on the right (with two X-bricks, but can handle four 20TB X-bricks in the same rack). Could this be my new motto? “Spinning disks are a waste of space, flash is packed!”
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Solutions Architect with 51-200 employees
NetApp vs. XtremIO vs. HDS
Flash and Hybrid block arrays
Flash has changed storage forever and almost every new array purchase needs to have some degree of flash included, so the market now offers three distinct types of array:
- Hybrid - Exploits the performance of flash and the lower cost of HDDs
- All-Flash hybrid - Packaged to deliver a low cost per GB of flash
- Ground-up design - Purely designed for flash with no support for HDDs
As always, there is a huge range of price points for these arrays, driven by architecture, features, and performance scaling. Efficiency features are critical in some use cases (i.e., VDI) but less so in many others, and performance scaling for the majority of solutions is substantially higher than legacy arrays built just for HDDs.
Historically, array performance scaling was limited by the number of HDDs the array could support (i.e., the drives were the bottleneck); with flash, the drives are so fast that the bottleneck moves to the controllers. The result is that entry-level arrays will not scale performance beyond 20-30 SSDs, so it is very important to have an idea of your ultimate performance-scaling requirements.
For most use cases today a hybrid array that has been optimised for flash is the best fit, but there are certainly workloads that need the capabilities of a ground-up all-flash design. As always your requirements and budget will dictate the best fit so let’s take a look at what EMC, HDS and NetApp have to offer:
| | EMC VNX | EMC XtremIO | HDS HUS 100/VM | NetApp FAS | NetApp E/EF-Series |
|---|---|---|---|---|---|
| Type | Hybrid/All-Flash Hybrid | Ground-up Design | Hybrid | Hybrid/All-Flash Hybrid | Hybrid/All-Flash Hybrid |
| Largest Flash Drive | 800 GB eMLC | 800 GB eMLC | 400 GB eMLC; 1.6 TB FMD (150); 3.2 TB FMD (VM) | 1.6 TB eMLC | 1.6 TB eMLC |
| Replacement of drives under maintenance when write limit reached | No | Yes | No (SSD); Yes (FMD) | Yes | Yes |
| FC, FCoE & iSCSI | Yes | FC and iSCSI | FC and iSCSI (100); FC (VM) | Yes | FC and iSCSI (E2700); FC or iSCSI (E5500/EF550) |
| Writeable Snapshots | Yes | Yes | Yes | Yes | Yes |
| Integrated Remote Replication | Yes | No | Yes | Yes | Yes |
| De-duplication | Optional (Post) | Always On (Inline) | No | Optional (Post) | No |
| Compression | Optional (Post) | Always On (Inline) | No (Inline for FMDs) | Optional (Inline or Post) | No |
| Thin Provisioning | Optional | Always On | Optional | Optional | Optional |
| Flash Caching of HDDs | Yes | N/A | No | Yes | Reads (E-Series); N/A (EF-Series) |
| Auto-Tiering (Up to 3 tiers) | Yes | N/A | Yes | No | No (E-Series); N/A (EF-Series) |
Read the rest of this post here.
Disclosure: My company has a business relationship with this vendor other than being a customer. We are Partners with NetApp and EMC.
Independent IT Analyst with 51-200 employees
It clearly needs some improvements here and there but this product is maturing very quickly.
EMC XtremIO most interesting characteristic? Predictability.
Last week, thanks to Tech Field Day Extra, I attended a presentation from the EMC’s XtremIO team. Some of my concerns about this array are still there but there is no doubt that this product is maturing very quickly and enhancements are released almost on a monthly basis… and it’s clear that it has something to say.
A rant about All Flash
These days, contrary to the general (and Gartner?) thinking, I'm developing the idea that considering All-Flash Arrays a separate category is complete nonsense (you can also find an interesting post from Chris Evans on this topic). Flash memory is only a medium, and storage should always be categorized by its characteristics, features, and functionality. For example, I could build a USB-key-based array at home; it's an AFA, after all... but would you dare save your primary data on it? Would it be fast? (You don't have to answer, of course!)
The fact that a vendor uses Flash, Disks, RAM or a combination of them to deliver its promises is only a consequence of designing choices and we have to look at the architecture (both hardware/software) as a whole to understand its real world positioning. Resiliency, availability, data services, performance, scalability, power consumption and so on, are the characteristics you still have to consider to evaluate if an array is good for a job or another.
Back to XtremIO
In this particular case, if we look deeply into the XtremIO design, we find that the system is equipped with plenty of RAM, which is heavily leveraged to deliver consistently better performance and the highest predictability. In fact, looking at the charts shown during the presentation (around minute 14 of the video below), you'll find that the system, no matter the workload, delivers constant latency well under the 1 ms barrier.
The product, which has finally received updates enabling all the common data services expected of a modern storage array (replication is still missing, though), doesn't shine for power consumption, rack space, or other kinds of efficiency (at this time it's also impossible to mix different types of disks, for example). But again, granting first-class performance and predictability is always the result of a give-and-take.
XtremIO is based on a scale-out architecture with a redundant infiniband backend. Different configurations are available starting from a single brick (a dual controller system and its tray populated with 12 eMLC drives, out of the 25 available) up to a six-brick configuration for a total of 90TB (usable capacity before deduplication/compression). No one gave me prices… but you know, if you ask the price you can’t afford it (and, of course, they are very careful to that because $/GB really depends on the size of the array and deduplication ratio you can obtain from your data).
Why it is important
XtremIO is strongly focused on performance and on how it’s delivered. From this point of view it clearly targets traditional enterprise tier 1 applications and it can be considered a good competitor in that space. It clearly needs some improvements here and there but EMC is showing all its power with the impressive quantity of enhancements that are continuously added.
You know what? From my point of view, the worst part of EMC XtremIO story is that there isn’t a simple and transparent migration path from the VMAX/VNX, which would be of great help for the end user (and EMC salesforce)…
First published here.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Federal Civ/Intel Engineering Lead at a tech vendor with 1,001-5,000 employees
XtremIO Gen2 delivers. There's potential for improvement, efficiencies, and even hybrid considerations.
Several months ago I walked through some of the issues we faced when XtremIO hit the floor and found it not to be exactly what the marketing collateral might present. While the product was very much a 1.0 (in spite of its Gen2 name), EMC Support gave a full-court-press response to the issues, and our account team delivered on additional product. Now it’s 100% production and we live/die by its field performance. So how’s it doing?
For an organized rundown, I’ll hit the high points of Justin Warren’s Storage Field Day 5 (SFD5) review and append a few of my own notes.
- Scale-Out vs. Scale-Up: The Impact of Sharing
- Compression: Needed & Coming
- Snapshots & Replication
- XtremIO > Alternatives? It Depends
Scale-Out vs. Scale-Up: The Impact of Sharing
True to Justin’s review, XtremIO practically scales up. Anything else is disruptive. EMC Support does their best to make up for this situation by readily offering swing hardware, but it’s still an impact. Storage vMotion works for us, but I’m sure spare hardware isn’t the panacea for everyone, especially those with physical servers.
The impact of sharing is key as well. XtremIO sharing everything can mean more than just the good stuff. In April, ours “shared” a panic over the InfiniBand connection when EMC replaced a storage controller to address one bad FC port. I believe they’ve fixed that issue (or widely publicized to their staff how not to swap an SC in a way that leads to a panic, until code can protect against it), but it was production-down for us. Thankfully we were only one foot in, so our key systems kept going on other storage. We seem to have found the InfiniBand edge cases, so I do not think this is a cause for widespread worry; just stating the facts.
I could elaborate further, but choosing XtremIO means being prepared to swing your data for disruptive activities. If you expect the need to expand, plan for it: rack space, power, connections, etc., for the swing hardware, or whatever other method you choose.
Compression: Needed & Coming
This was the deficit that led to us needing four times the XtremIO capacity to match our Pure POC’s abilities. At the time, we thought Pure achieved a “deduplication” ratio of 4.5 to 1 and were sorely disappointed when XtremIO didn’t. Then we realized it was data “reduction”, which incorporated compression and deduplication. Pure’s dedupe is likely still more efficient since it uses variable block sizes (like EMC Avamar), but variable-block dedupe takes time and post-processing.
When compression arrives in the XIOS 3.0 release later this year, I hope to see our data reduction ratio converge with what we saw on Pure. As it stands, we fluctuate around 1.4 to 1 deduplication (which feels like the wrong word, since dedupe seems to imply a minimum of 2:1). I choose to ignore the “Overall Efficiency” ratio at the top, as it is a combination of dedupe and thin-provisioning savings, the latter of which nearly everyone has. We had thin provisioned for nearly six years with our outgoing 3PAR, so that wasn’t a selling point; it was an assumption. As a last note on this, Pure Storage asks the pertinent question: “The new release will come with an upgrade to compression for current customers. Can I enable it non-disruptively, or do I have to migrate all my data off and start over?”
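To make the distinction concrete, here is a sketch of how an “Overall Efficiency” number can dwarf the underlying dedupe ratio; the thin-provisioning multiplier below is hypothetical:

```python
# "Overall Efficiency" folds thin-provisioning savings into the ratio.
# The dedupe figure is the ~1.4:1 observed in this review; the
# thin-provisioning multiplier is a hypothetical illustration.
dedupe = 1.4            # actual data reduction from deduplication
thin_savings = 2.5      # allocated-but-unwritten space counted as "saved"

overall = dedupe * thin_savings
print(f"dedupe alone: {dedupe}:1")
print(f"overall efficiency: {overall}:1")  # looks far better than it is
```

The headline ratio is dominated by savings any thin-provisioning array would report, which is why it is worth reading past it.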
Snapshots & Replication
I won’t say much on these items, because we haven’t historically used the first, and other factors have hindered the second. Given that our first EMC CX300 array even had snapshots, the feature arrival in 2.4 was more of an announcement that XtremIO had fully shown up to the starting line of the SAN race (it was competing extremely well in other areas, but was hard to understand the lag here). We may actually use this feature with Veeam’s Backup & Replication product as it offers the ability to do array-level snapshots and transfer them to a backup proxy for offloaded processing.
As for replication, my colleagues and I see it as feature with huge differentiating potential, at least where deduplication ratios are high. VDI or more clone-based deployments with 5:1, 7:1, or even higher ratios could benefit greatly if only unique data blocks were shipped to partnering array(s). For now, VPLEX is that answer (sans the dedupe).
XtremIO > Alternatives? It Depends
As I mentioned in the past, we started this flash journey with a Pure Storage POC. It wasn’t without challenges, or I probably wouldn’t be writing about XtremIO now, but those issues weren’t necessarily as objectively bad or unique to them as I felt at the time. Everyone has caveats and weaknesses. In our case, Pure’s issues with handling large block I/O gave us pause and cause to listen to EMC’s XtremIO claims.
Those claims bore out in some ways, but not in others (at least not without more hardware). Both products can make the I/O meters scream with numbers unlikely to be seen in daily production, though it’s nice to see the potential. The rubber meets the road when your data is on their box and you see what it does as a result. No assessment tool can tell you that; only field experience can.
If unwavering low-latency metrics are the goal, XtremIO wins the prize. It doesn’t compromise or slow down for anything: the data flies in and out regardless of block size or volume. Is no-compromise ideal? It depends.
Deduplication is the magic sauce that turned us on to Pure, and XtremIO marketing said, “we can do that, too!” Without compromising speed, though, and without post-processing, the result isn’t the same. That’s the point of the compression mentioned earlier.
Then there are the availability arguments. Pure doesn’t have any backup batteries (but it stores in-flight writes to NVRAM, so that’s not a deal-breaker), which EMC can point out. EMC uses 23+2 RAID/parity, which Pure is quick to highlight as a weakness. Everyone wants to be able to fail four drives and keep flying, right?
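The parity-overhead side of that argument is simple arithmetic; the 6+2 alternative below is just an illustrative comparison, not any particular vendor's layout:

```python
# Fraction of raw capacity spent on parity for a wide 23+2 stripe versus a
# narrower (hypothetical) 6+2 dual-parity stripe.
for data, parity in ((23, 2), (6, 2)):
    overhead = parity / (data + parity)
    print(f"{data}+{parity}: {overhead:.1%} of raw capacity goes to parity")
```

A wide stripe buys capacity efficiency at the cost of rebuild exposure, which is the trade-off each vendor spins in its favor.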
From what I’ve heard, Hitachi will take an entirely different angle and argue that magic is unnecessary: just use their 1.6TB and 3.2TB flash drives and swim in an ocean of space. Personally, I think that’s short-sighted, but they’re welcome to that opinion.
Last Thoughts
In production, day to day, notwithstanding our noted glitches, XtremIO delivers. Furthermore, it has the heft of EMC behind it, and the vibe I get is that they don’t seem to be content with second place. Philosophies on sub-components may disagree between vendors, but nothing trips XtremIO’s performance. Is there potential for improvement, efficiencies (esp. data reduction), and even hybrid considerations (why not a little optional post-processing?)? Absolutely. And I’ve met the XtremIO engineers from Israel who aim to do just that. Time will tell.
This article originally appeared here.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Nice real use case, thank you!

Rene,
Great review. Did you alter any of the host settings, i.e., round robin and queue depth? That will help bring down the latency times dramatically.