Service Architecture at All for One Group AG
High availability enables us to run two instances so there is no downtime when we do maintenance
Pros and Cons
- "NetApp's Cloud Manager automation capabilities are very good because it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well."
- "Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations."
- "Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair."
- "One difficulty is that it has no SAP HANA certification. The asset performance restrictions create challenges with the infrastructure underneath: The disks and stuff like that often have lower latencies than SAP HANA itself has to have."
What is our primary use case?
The primary use case is for SAP production environments. We are running the shared file systems for our SAP systems on it.
How has it helped my organization?
It's helped us to dive into the cloud very fast. We didn't have to change any automations which we already had. We didn't have to change any processes we already had. We were able to adopt it very fast. It was a huge benefit for us to use the same concepts in the cloud as we do on-premise. We're running our environment very efficiently, and it was very helpful that our staff, our operators, didn't have to learn new systems. They have the same processes, all the same knowledge they had before. It was very easy and fast.
We did a comparison, of course, and it was cheaper to have Cloud Volumes ONTAP running with the deduplication and compression, compared to storing everything, for example, on HA disks and having a server running all the time as well. And that was not even for the biggest environment.
The data tiering saves us money because it offloads all the cold data to Blob Storage. However, we use the HA version, and data tiering only came to HA with version 9.6, which we are not running in our production environment yet; it's still an RC, the pre-release, not a GA release. In our testing we have seen that it saves a lot of money, but our production systems are not there yet.
What is most valuable?
The high availability of the service is a valuable feature. We use the HA version to run two instances. That way there is no downtime for our services when we do any maintenance on the system itself.
For normal upgrades or updates of the system - updates for security fixes, for example - it helps that the systems and the service itself stay online. For one of our customers, we have 20 systems attached, and if we had to go to that customer all the time and say, "Oh, sorry, we have to take your 20 systems down just because we have to do maintenance on your shared file systems," he would not be amused. So that's really a huge benefit.
And there are the usual NetApp benefits we have had over the last ten years or so, like snapshotting, cloning, and deduplication and compression which make it space-efficient on the cloud as well. We've been taking advantage of the data protection provided by the snapshot feature for many years in our on-prem storage systems. We find it very good. And we offload those snapshots as well to other instances, or to other storage systems.
The provisioning capability was challenging the first time we used it. You have to find the right way to deploy but, after the first and second try, it was very easy for us to automate. We are highly automated in our environment, so we use the REST API for deployment. We completely deploy the Cloud Volumes ONTAP instance itself automatically when we have a new customer. Similarly, provisioning volumes on the Cloud Volumes ONTAP instance and access to it are automated as well.
But for that, we still use our on-premise automations with WFA (Workflow Automation). NetApp has a tool which simplifies the automation of NetApp storage systems. We use the same automation for the Cloud Volumes ONTAP instances as we do for our on-premise storage systems. There's no difference, at the end of the day, from the operating system standpoint.
In addition, NetApp's Cloud Manager automation capabilities are very good because, again, it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well. It's pretty good.
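Roughly, a deployment call looks like the sketch below. The endpoint path, payload fields, and response field are illustrative assumptions rather than the exact Cloud Manager API, so consult the API documentation before relying on anything like this.

```python
# Minimal sketch of driving Cloud Manager over REST to deploy an HA
# Cloud Volumes ONTAP working environment in Azure. The endpoint path,
# payload fields, and token handling are illustrative assumptions, not a
# verbatim copy of the Cloud Manager API.
import requests

CLOUD_MANAGER = "https://cloudmanager.cloud.netapp.com"   # assumed base URL
API_TOKEN = "<bearer-token-from-your-identity-provider>"  # placeholder

def deploy_ha_cvo(name: str, resource_group: str, vnet: str, subnet: str) -> str:
    """Request a new Azure HA Cloud Volumes ONTAP instance and return its id."""
    payload = {
        "name": name,                       # working environment name
        "region": "westeurope",             # assumed region
        "resourceGroup": resource_group,
        "vnetId": vnet,
        "subnetId": subnet,
        "dataEncryptionType": "AZURE",
        "ontapVersion": "latest",
    }
    resp = requests.post(
        f"{CLOUD_MANAGER}/occm/api/azure/ha/working-environments",  # assumed path
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["publicId"]          # assumed response field

if __name__ == "__main__":
    we_id = deploy_ha_cvo("customer-sap-shared", "rg-sap", "vnet-prod", "subnet-storage")
    print(f"Deployment requested, working environment id: {we_id}")
```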
Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations. We're just using it, deploying volumes and using them. We see that, in some way, as being the future of storage services, for us at least: completely managed.
What needs improvement?
Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair. My guess is that those will be the next challenges they have to face.
One difficulty is that it has no SAP HANA certification. The performance restrictions create challenges with the infrastructure underneath: the disks often have higher latencies than SAP HANA requires. That was something of a challenge for us: figuring out where to use HA disks and where to use Cloud Volumes ONTAP in that environment, instead of just using Cloud Volumes ONTAP.
For how long have I used the solution?
We've been using Cloud Volumes for over a year now.
What do I think about the stability of the solution?
The stability is very good. We haven't had any outages.
What do I think about the scalability of the solution?
Right now, the scalability is sufficient in what it provides for us, but we can see that our customer environments are growing. We can see that it will reach its performance limit in around a year or so. They will have to evolve, create some performance improvements, or build some scale-up/scale-out capabilities into it.
In terms of increasing our usage, the tiering will definitely be used in production as soon as it's GA for Azure. They're already playing with the Ultra SSDs, for performance improvements on the storage system itself. As soon as those become generally available from Microsoft, that will probably be a feature we'll go to.
As for end-users, for us they are our customers. But the customers have several hundred or 1,000 users on the system. I don't really know how many end-users are ultimately using it, but we have about ten customers.
How are customer service and support?
Technical support has been very good. The technical people who are responsible for us at NetApp are very good. If we contact them we get direct feedback. We often have direct contact, in our case at least, to the engineers as well. We have direct contacts with NetApp in Tel Aviv.
It's worth mentioning that when we started with Cloud Volumes ONTAP in the past, we did an architecture workshop with them in Tel Aviv, to tell them what our deployments look like in our on-premise environment, and to figure out what possibilities Cloud Volumes ONTAP could provide to us as a service provider. What else could we do on it, other than just running several services? For example: disaster recovery or doing our backups. We did that at a very early stage in the process.
Which solution did I use previously and why did I switch?
We only used native Azure services. We went with Cloud Volumes ONTAP because it was a natural extension of our NetApp products. We have a huge on-premise storage environment from NetApp and we have been familiar with all the benefits from these storage systems for several years. We wanted to have all the benefits in the cloud, the same as we have on-premise. That's why we evaluated it, and we're in a very early stage with it.
How was the initial setup?
To say the initial setup was complex is too strong. We had to look into it and find the right way to do it. It wasn't that complex, it was just a matter of understanding what was supported and what was not from the SAP side. But as soon as we figured that out, it was very straightforward to figure out how to build our environment.
We had an implementation strategy: Determining what SAP systems and what services we would like to deploy in the cloud. Our strategy was that if Cloud Volumes ONTAP made sense in any use case, we would want to use it because it's, again, highly automated and we could use it with our scripting already. Then we had to look at what is supported by SAP itself. We mixed that together in the end and that gave us our concept.
Our initial deployment took one to two weeks, maximum. It required two people, in total, but it was a mixture of SAP and storage colleagues. In terms of maintenance, it doesn't take any additional people than we already have for our on-premise environment. There was no additional headcount for the cloud environment. It's the same operating team and the same people managing Cloud Volumes ONTAP as well as our on-premise storage systems. It requires almost no maintenance. It just runs and we don't have to take care of updating it every two months or so for security reasons.
What about the implementation team?
We didn't use a third-party.
What was our ROI?
We have seen return on investment but I don't have the numbers.
What's my experience with pricing, setup cost, and licensing?
The standard pricing is online. Pricing depends. If you're using the PayGo model, then it's just the normal costs on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you get together with your sales contact at NetApp and start figuring out what price is best, in the end, for your company. We have an Enterprise Agreement or something similar to that, so we get a different price for it.
In terms of additional costs beyond the standard licensing fees, you have to run instances in Azure: virtual machines and disks. You still have to pay for the Azure disks, and Blob Storage if you're using tiering. What's also important to know is the network bandwidth. That was the most complicated part in our project: figuring out how much data would be streamed out of our data center into the cloud and how much data would have to be sent back into our data center. It's more challenging than if you have a customer who is running only in Azure. It can be expensive if you don't have an eye on it.
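As a back-of-the-envelope illustration of that bandwidth question, the sketch below estimates a monthly transfer bill; every number in it is a placeholder, not our actual traffic or Azure's actual rates.

```python
# Rough sketch for estimating the monthly cost of data flowing between the
# data center and Azure. All prices and volumes are made-up placeholders --
# substitute your measured traffic and your actual egress rates.
def monthly_transfer_cost(gb_out_of_azure: float, price_per_gb_egress: float,
                          gb_into_azure: float = 0.0,
                          price_per_gb_ingress: float = 0.0) -> float:
    """Return the estimated monthly transfer cost in your billing currency."""
    return (gb_out_of_azure * price_per_gb_egress
            + gb_into_azure * price_per_gb_ingress)

# Example: 5 TB leaving Azure toward the on-prem data center each month at a
# hypothetical 0.08 per GB; ingress assumed free.
print(monthly_transfer_cost(gb_out_of_azure=5 * 1024, price_per_gb_egress=0.08))
```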
Which other solutions did I evaluate?
We have a single-vendor strategy.
What other advice do I have?
Don't be afraid of granting permissions because that's one of the most complex parts, but that's Azure. As soon as you've done that, it's easy and straightforward. When you do it the first time you'll think, "Oh, why is it so complicated?" That's native Azure.
The biggest lesson I've learned from using Cloud Volumes ONTAP is that, from an optimization standpoint, our on-premise instance was a lot more complex than it had to be. That was a big lesson because Cloud Volumes ONTAP is a very easy, lightweight service. You just use it and it doesn't require that much configuring. You can just use the standards which come from NetApp, and that was something we didn't do with our on-premise environment.
In terms of disaster recovery, we have not used Cloud Volumes ONTAP in production yet. We've tested it to see if we could adopt Cloud Volumes ONTAP for that scenario, to migrate all our offloads or all our storage footprint we have on-premise to Cloud Volumes ONTAP. We're still evaluating it. We've done a lot of cost-comparison, which looks pretty good. But we are still facing a little technical problem because we're a CSP (cloud service provider). We're on the way to having Microsoft fix that. It's a Microsoft issue, not a NetApp Cloud Volumes ONTAP issue.
I would rate the solution at eight out of ten. There are improvements they need to make for scale-up and scale-out.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.

Senior Systems Engineer at Cedars-Sinai Medical Center
You don't need to spend time and resources planning and setting up physical storage equipment in your data center
Pros and Cons
- "The main benefit we get from this product is the ability to deploy it anywhere we want, whether that's on-prem, a remote physical location, or in the cloud. It doesn't matter from an operational perspective where it is. The command line and operating system are the same."
- "The encryption and deduplication features still have a lot of room for improvement."
What is our primary use case?
Our organization utilizes a hybrid cloud in which Cloud Volumes ONTAP is a single node. We have multiple instances of Cloud Volumes on a single node in AWS, and we primarily use it to take snapshots for disaster recovery.
We save many snapshots at that location so we can redirect users if something happens on our primary site.
The other use case is backup. We enabled SnapLock, which acts as the WORM, making those snapshots immutable. In other words, they can't be deleted.
Those are the two use cases. One is disaster recovery, and the other is to preserve a third copy of the snapshot. This is typically for Tier 1 applications. We have a third copy, and no one can delete the volume's snapshot. The end-users don't work with Cloud Volumes directly, but if our operational team needs to restore some files that aren't on-prem, they sometimes go to those instances in Cloud Volumes. That's only when they have to restore something beyond the date range of the on-prem snapshot.
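As a rough illustration of the SnapLock piece, the sketch below shows how a WORM destination volume could be created through the ONTAP REST API. The endpoint, the snaplock field, and the SnapLock prerequisites (license, compliance clock) are assumptions drawn from NetApp documentation, not a copy of our actual scripts; verify them against your ONTAP release.

```python
# Minimal sketch of creating a WORM (SnapLock) volume on the CVO instance
# through the ONTAP REST API, so the replicated snapshots cannot be deleted.
# Endpoint and fields are assumptions; check your ONTAP release's API docs.
import requests

ONTAP_HOST = "https://cvo-cluster-mgmt.example.com"   # placeholder address
AUTH = ("admin", "<password>")                         # placeholder credentials

def create_snaplock_volume(svm: str, name: str, aggregate: str, size_gb: int) -> None:
    """Ask ONTAP to create a SnapLock Enterprise volume of the given size."""
    payload = {
        "svm": {"name": svm},
        "name": name,
        "aggregates": [{"name": aggregate}],
        "size": size_gb * 1024**3,                 # size in bytes
        "snaplock": {"type": "enterprise"},        # WORM behaviour; assumed field
    }
    resp = requests.post(f"{ONTAP_HOST}/api/storage/volumes",
                         json=payload, auth=AUTH,
                         verify=False,              # lab sketch only; use TLS verification
                         timeout=60)
    resp.raise_for_status()

create_snaplock_volume("svm_dr", "tier1_snapvault_dest", "aggr1", size_gb=500)
```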
How has it helped my organization?
The main benefit we get from this product is the ability to deploy it anywhere we want, whether that's on-prem, a remote physical location, or in the cloud. It doesn't matter from an operational perspective where it is. The command line and operating system are the same.
If I give it to someone to manage, they don't know if the product is running in the cloud or on the physical location. That's great because you don't have to worry about knowledge transfer. The product runs the same regardless of how it's deployed. Cloud Volumes has also significantly improved performance and storage efficiency because it has capacity tiering, which is helpful if you're cost-conscious.
It provides unified storage, so you can use it for NAS or block. However, we segregate a separate cluster for files and another for block storage. Fortunately, it's the same ONTAP operating system, so a user doesn't need to understand a different set of command lines or another method if dealing with block storage or files. It's all the same for them.
It helps us manage our native cloud storage. Cloud Volumes allows us to choose which storage types are applicable for us. In our case, it lets us choose a cheaper EBS storage, and then we can perform capacity tiering in S3. It gives us the flexibility to determine which type of native AWS storage to use, which is cool.
What is most valuable?
We mainly use Cloud Volumes for two features: SnapMirror and SnapVault. Those are the two that our use case requires. Data deduplication and capacity tiering are the primary reasons we adopted the solution. The data is deduped and encrypted, and we use capacity tiering to cut down on our S3 storage costs.
What needs improvement?
The encryption and deduplication features still have a lot of room for improvement.
For how long have I used the solution?
We first deployed Cloud Volumes ONTAP four years ago.
What do I think about the stability of the solution?
Cloud Volumes has been stable so far. We haven't had many issues. If there are any issues, it's typically during an upgrade. Some tools are upgraded automatically through the cloud manager, but it's nothing major, and the upgrade has been smooth as well.
What do I think about the scalability of the solution?
Cloud Volumes added an option to stack licenses to increase capacity. Before, you were only allowed one license per instance, which gave you 360 terabytes. Now, you can stack the licenses to add a second license of the same instance to get another 360 terabytes, totaling 720.
That's vertical scalability, but we haven't scaled horizontally. We just use it for a single node per instance. We started with one instance, and now we are on the seventh. As we add new on-prem projects, they always require a copy of their data somewhere. That's when we deploy additional instances.
How are customer service and support?
My experience with technical support has been positive overall. I would rate NetApp support eight out of 10. I would deduct two points because they don't have complete control of the solution. It's more of a hybrid setup. They provide the software level, but the underlying infrastructure is AWS. If there's an issue, it's hard to distinguish if Cloud Volumes is to blame or AWS. That's why I would say eight because there is that question. When you have multiple layers, it takes more time to troubleshoot.
How was the initial setup?
Installing Cloud Volumes is quick and straightforward. I can deploy an instance in half an hour. Compare that to an on-prem storage deployment, which requires a lot of planning and work with other teams to lay cables and plot out space in a data center. That takes three to six months versus 30 minutes. It's a big difference. We only need one staff member to maintain it.
What about the implementation team?
We used our in-house engineers to deploy Cloud Volumes.
What was our ROI?
As we store more data, we save more money using Cloud Volumes. The deduplication engine can find more commonalities as you accumulate more data, which has helped. Of course, it depends on the data type. It doesn't help if you have compressed data, but it's suitable for unstructured data.
Deduplication is one of the most significant improvements I've seen in the product. In the past, Cloud Volumes could only dedupe on the volume level, but now it can dedupe on the aggregate level, which means you can look at more volumes and commonalities. You have a greater chance to dedupe more data in that scenario.
We save on storage in general. One of the biggest selling points of Cloud Volumes is that you can deploy it quickly. You don't need to spend time and resources planning and setting up physical storage equipment in your data center. Real estate in a data center is precious, so cost savings makes Cloud Volumes enticing. In our case, we don't need a physical disaster recovery location. Anything that isn't Tier 1 goes to the cloud.
What other advice do I have?
I rate NetApp Cloud Volumes ONTAP nine out of 10.
Which deployment model are you using for this solution?
Hybrid Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior Analyst at a comms service provider with 5,001-10,000 employees
All our data shares and volumes are on one platform making adjustment of share permissions easier than with Azure native
Pros and Cons
- "We're able to use the SnapMirror function and SnapMirror data from our on-prem environment into Azure. That is super-helpful. SnapMirror allows you to take data that exists on one NetApp, on a physical NetApp storage platform, and copy it over to another NetApp storage platform. It's a solid, proven technology, so we don't worry about whether data is getting lost or corrupted during the SnapMirror."
- "When Azure does their maintenance, they do maintenance on one node at a time. With the two nodes of the CVO, it can automatically fail over from one node to the node that is staying up. And when the first node comes back online, it will fail back to the first node. We have had issues with everything failing back 100 percent correctly."
What is our primary use case?
It is managing services in our production environment that are in Azure. It provides file shares, both NFS and CIFS, that are used by other applications that are also in Azure.
NetApp Cloud Volumes ONTAP is part of the production environment of our company so the entire company, over 5,000 employees globally, is touching it somehow. It's a part of an application that has data that resides on it and they may consume that application.
How has it helped my organization?
Cloud Volumes ONTAP is great because of the storage efficiencies that it provides. When you look at the cost of running Azure native storage versus the cost of Cloud Volumes ONTAP, you end up saving money with Cloud Volumes ONTAP. That's a big win because cost is a huge factor when putting workloads in the cloud. We had a cost estimate survey done, a comparison between the two, and I believe that Cloud Volumes ONTAP saves us close to 30 percent compared to the Azure native costs.
Azure pricing is done in a type of a tier. Once you exceed a certain amount of storage, your cost goes down. So the more data you store, the more you're going to end up saving.
The storage efficiencies from the NetApp platform allow you to do inline deduplication and compaction of data. All of this adds up to using less of the disk in Azure, which adds up to savings.
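To illustrate the kind of arithmetic behind a comparison like that, here is a small sketch of effective cost per stored terabyte once efficiencies are factored in. All of the prices and the efficiency ratio are hypothetical placeholders, not our real numbers.

```python
# Rough sketch of the comparison behind the ~30 percent figure: effective cost
# per logical TB once deduplication and compaction reduce the provisioned
# Azure disk capacity. Every number here is a placeholder.
def effective_cost_per_tb(raw_price_per_tb: float, license_per_tb: float,
                          efficiency_ratio: float) -> float:
    """Cost per logical TB stored, given a storage-efficiency ratio (e.g. 1.9:1)."""
    return (raw_price_per_tb + license_per_tb) / efficiency_ratio

azure_native = effective_cost_per_tb(raw_price_per_tb=120.0, license_per_tb=0.0,
                                     efficiency_ratio=1.0)   # no dedup/compaction
cvo = effective_cost_per_tb(raw_price_per_tb=120.0, license_per_tb=40.0,
                            efficiency_ratio=1.9)            # hypothetical 1.9:1
print(f"Native: {azure_native:.2f}  CVO: {cvo:.2f}  saving: {1 - cvo / azure_native:.0%}")
```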
We have two nodes of the NetApp in Azure, which means we have some fault tolerance. That is helpful because Azure just updates stuff when they want to and you're not always able to stop them or schedule it at a later time. Having two CVO nodes is helpful to keep the business up when Azure is doing their maintenance.
The solution provides unified storage no matter what kind of data you have. We were already using the NetApp platform on our on-premise environments, so it's something we're already familiar with in terms of how to manage permissions on different types of volumes, whether it's an NFS export or a CIFS share. We're able to utilize iSCSI data stores if we need to attach a volume directly to a VM. It allows us to continue to do what we're already familiar with in the NetApp environment. Now we can do them in Azure as well.
It enables us to manage our native cloud storage better than if we used the management options provided by the native cloud service. With CVO, all of your data shares and volumes are on the one NetApp platform. Whether you are adjusting share permissions on an NFS export or a CIFS share, you can do it all from within the NetApp management interface. That's much easier than the Azure native, where you may have to go to two or three different screens to do the same stuff.
What is most valuable?
The storage efficiencies are something that you don't get on native.
Also, because of the NetApp product, we're able to use the SnapMirror function and SnapMirror data from our on-prem environment into Azure. That is super-helpful. SnapMirror allows you to take data that exists on one NetApp, on a physical NetApp storage platform, and copy it over to another NetApp storage platform. It's a solid, proven technology, so we don't worry about whether data is getting lost or corrupted during the SnapMirror. We are also able to throttle back the speed of the SnapMirror to help our network team that is paying for a data circuit. We're still able to copy data into Azure, but we can manage the transfer cost because we can throttle back the SnapMirror. It's just very solid and reliable. It works.
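As an illustration of the throttling, the sketch below caps a SnapMirror relationship's transfer rate through the ONTAP REST API. The endpoint and the throttle field (in KB/s) are assumptions based on NetApp documentation; on older releases the same setting is applied with the snapmirror modify -throttle CLI command, so treat this as a sketch rather than our exact procedure.

```python
# Sketch of throttling a SnapMirror relationship from a script, to keep the
# replication traffic within what the data circuit can absorb. The REST
# endpoint and the 'throttle' field are assumptions; verify against your
# ONTAP release before using.
import requests

ONTAP_HOST = "https://cvo-cluster-mgmt.example.com"   # placeholder address
AUTH = ("admin", "<password>")                         # placeholder credentials

def set_snapmirror_throttle(relationship_uuid: str, kilobytes_per_sec: int) -> None:
    """Cap the transfer rate of one SnapMirror relationship."""
    resp = requests.patch(
        f"{ONTAP_HOST}/api/snapmirror/relationships/{relationship_uuid}",
        json={"throttle": kilobytes_per_sec},
        auth=AUTH,
        verify=False,       # lab sketch only; use TLS verification in practice
        timeout=60,
    )
    resp.raise_for_status()

# Example: limit replication into Azure to roughly 50 MB/s during business hours.
set_snapmirror_throttle("<relationship-uuid>", kilobytes_per_sec=51200)
```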
And all of us IT nerds are already familiar with the NetApp platform so there was not a major learning curve to start using it in Azure.
NetApp also has something called Active IQ Unified Manager, and it gives us performance monitoring of the CVO from an external source. There are several people on my team that utilize the CVO and we each have a personal preference for how we look at data. The Active IQ Unified Manager is a product you can get from NetApp because, once you license your CVO, you are entitled to other tools. CVO does have resource performance monitoring built in, but we primarily utilize the Active IQ Unified Manager.
Beyond that, it provides all the great stuff that the NetApp platform can do, but it's just in the cloud.
What needs improvement?
I think this is more of a limitation of how it operates in Azure, but the solution is affected by this limitation. There's something about how the different availability zones, the different regions, operate in Azure. It's very difficult to set up complete fault tolerance using multiple CVO nodes and have one node in one region and one node in another region. This is not something that I have dug into myself. I am hearing about this from other IT nerds.
For how long have I used the solution?
We've been using NetApp Cloud Volumes ONTAP for two years.
What do I think about the stability of the solution?
We had issues with Azure when they did maintenance on the nodes. They just do their maintenance and it's up to us, the customer, to make sure that our applications are up and data is flowing. When Azure does their maintenance, they do maintenance on one node at a time. With the two nodes of the CVO, it can automatically fail over from one node to the node that is staying up. And when the first node comes back online, it will fail back to the first node. We have had issues with everything failing back 100 percent correctly.
We have had tickets open with NetApp to have them look into it and try and resolve it. They've made improvements in some ways, but it's still not 100 percent automated for everything to return back. That's an ongoing thing we have to keep an eye on.
What do I think about the scalability of the solution?
It is definitely scalable. You can add more disk to grow your capacity and you have the ability to add more nodes. There's a limit to how many nodes you can add, but you can definitely scale up.
How are customer service and technical support?
Tech support is good. A lot of it depends on the technician that you get, but if you're not happy with one technician, you can request that it be escalated or you can request that it just be handled by another technician. They're very eager to help and resolve issues.
How was the initial setup?
We had some issues with permissions and with getting the networking correct. But we had a lot of support from NetApp as well as from Azure. As a result, I would not say the setup was straightforward, but we got the help and the support we needed and you can't ask for more than that.
I've always found NetApp support to be accurate and good with their communications. Rolling out this product in Azure, and working with the IT nerds in our company and with Azure nerds, occasionally it does add another layer of who has to be communicated with and who has to do stuff. But my experience with NetApp is that they are responsive and very determined to get situations resolved.
It took us about a week to get everything ironed out and get both nodes functional.
We had done a PoC with a smaller instance of the CVO and the PoC was pretty straightforward. Once we rolled out the production CVO that has two nodes, that's when it was more complicated. We had a plan for getting it deployed and to decide at what point we would say, "Okay, now it's ready for prime time. Now it's ready to be put into production."
For admin of the solution we have less than 10 people, and they're all storage administrator analysts like me.
What's my experience with pricing, setup cost, and licensing?
Our licensing is based on a yearly subscription. That is an additional cost, but because of the storage efficiencies that the NetApp gives, even with the additional cost of the NetApp license, you still end up saving money versus straight Azure native for storage. It's definitely worth it.
What other advice do I have?
Make sure that you can stay operational when Azure is doing their maintenance. Make sure you fully understand how the failover and the give-back process works, so that you can deal with your maintenance.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sr Service Engineer at Evolent
Allows customers to manage SAN and NAS data within a single storage solution
Pros and Cons
- "The tool's most valuable features are the SnapLock and SnapMirror features. If something goes wrong with the data, we can restore it. This isn't a mirror; we store data in different locations. If there's an issue on the primary site, we can retrieve data from the secondary site."
- "NetApp Cloud Volumes ONTAP should improve its support."
What is our primary use case?
The solution helps to keep production data.
What is most valuable?
The tool's most valuable features are the SnapLock and SnapMirror features. If something goes wrong with the data, we can restore it. This isn't a mirror; we store data in different locations. If there's an issue on the primary site, we can retrieve data from the secondary site.
Multiprotocol support in NetApp Cloud Volumes ONTAP is beneficial because it allows customers to manage SAN and NAS data within a single storage solution. This feature eliminates the need to purchase different types of storage.
What needs improvement?
NetApp Cloud Volumes ONTAP should improve its support.
For how long have I used the solution?
I have been working with the product for five years.
What do I think about the stability of the solution?
The solution's performance is good. It depends on your chosen model and configuration, but even the lower-end models perform well. I rate its stability a nine out of ten.
What do I think about the scalability of the solution?
I rate NetApp Cloud Volumes ONTAP's scalability as ten out of ten.
How was the initial setup?
NetApp Cloud Volumes ONTAP's deployment is easy, and I rate it a ten out of ten. It can be completed in half an hour and depends on customer configurations.
What's my experience with pricing, setup cost, and licensing?
The solution's pricing is reasonable.
What other advice do I have?
I've worked in the IT industry for over ten years, dealing with various storage solutions from vendors like HPE and OEMs. The tool stands out due to its unique features and functions that protect and manage customer data.
I rate the overall product a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer.
Solutions Architect at a tech services company with 201-500 employees
Provides deduplication, compression, and compaction that should result in cost savings
Pros and Cons
- "It gives a solution for storage one place to go across everything. So, the customer is very familiar with NetApp on-prem. It allows them to gain access to the file piece. It helps them with the training aspect of it, so they don't have to relearn something new. They already know this product. They just have to learn some widgets or what it's like in the cloud to operate and deploy it in different ways."
- "I would like some more performance matrices to know what it is doing. It has some matrices inherent to the Cloud Volumes ONTAP. But inside Cloud Manager, it would also be nice to see. You can have a little Snapshot, then drill down if you go a little deeper."
What is our primary use case?
Desktop-as-a-service is a PoC that I'm doing for our customers to allow them to use NetApp for their personal, departmental, and profile shares. This connects their desktop-as-a-service that we're building for them.
This is for training. The customer has classrooms that they have set up, with about 150,000 users coming through. They want a secure, efficient solution that can be repeated after they finish one class, before the next class comes in, using NetApp CVO as well as some desktop services on AWS.
It is hosted in AWS, with CVO providing the filers, and we use Cloud Volumes Manager as well. We were also looking at it with Azure, because it doesn't matter which cloud; we want to do multicloud with it.
How has it helped my organization?
We haven't put it into production yet. However, in the proof of concept, we showed how to use it and how to take daily Snapshot coverage, because we're doing it for a training area. This allows them to return to where they were. The bigger thing is that if they need to set up again for a class, we can have a clean copy or flip back to where they need to be.
It gives them one place to go for storage across everything. The customer is very familiar with NetApp on-prem. It allows them to gain access to the file piece. It helps them with the training aspect of it, so they don't have to relearn something new. They already know this product. They just have to learn some widgets, or what it's like in the cloud, to operate and deploy it in different ways.
The customer knows the product. They don't have to train their administrators on how to do things. They are very familiar with that piece of it. Then, the deduplication, compression, and compaction are all things that you would get from moving to a CVO and the cloud itself. That is something that they really enjoy because now they're getting a lot of cost savings off of it. We anticipate cloud cost savings, but it is not in production yet. It should be about a 30 percent savings. If it is a 30 percent or better savings, then it is a big win for the customer and for us.
What is most valuable?
- Dedupe
- Compression
- Compaction
- Taking 30 gig of data and reducing it down to five to 10 gig on the AWS blocks.
What needs improvement?
I would like some wizards or best practices on how to secure CVO, inherent to Cloud Manager. I thought that would be a good place to be able to put something like that.
I would like some more performance metrics to know what it is doing. It has some metrics inherent to Cloud Volumes ONTAP, but it would also be nice to see them inside Cloud Manager: a little snapshot view you can drill down into if you want to go a little deeper.
This is where I would like to see changes, primarily around security and performance matrices.
For how long have I used the solution?
We are still in the proof of concept stage.
What do I think about the stability of the solution?
It is a good system. It is very stable as far as what I've been using with it. I find that support from it is really good as well. It is something that I would offer to all of my customers.
What do I think about the scalability of the solution?
It is easy to scale. It is inherent to the actual product. It will move to another cloud solution or it can be managed from another cloud solution. So, it's taken down barriers which are sometimes put out by vendors in different ways.
How was the initial setup?
We use NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. Its configuration wizards and ability to automate the process are easy, simple, and straightforward. If you have any knowledge of storage, even to a very small amount, the wizards will click through and help to guide you through the right things. They make sure you put the right things in. They give some good examples to make sure you follow those examples, which makes it a bit more manageable in the long run.
Which other solutions did I evaluate?
They use some native things that are inherent to the AWS. They have looked at those things.
NetApp has been one of the first ones that they looked at, and it is the one that they are very happy with today.
What other advice do I have?
Work with your resources in different ways, as far as in NetApp in the partner community. But bigger than that, just ask questions. Everybody seems willing to help move the solution forward. The biggest advice is just ask when you don't know, because there is so much to know.
I would rate the solution as a nine (out of 10).
We're not using inline encryption right now.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.
Staff System Administrator at a comms service provider with 10,001+ employees
Good visibility, useful migration capabilities, and helpful support
Pros and Cons
- "The ability to see things going back and forth has been quite useful."
- "The solution could be better when we're connecting to our S3 side of the house. Right now, it doesn't see it, and I'm not sure why."
What is our primary use case?
We use it to monitor our on-prem systems and the SnapMirror relationships between one and the other.
How has it helped my organization?
It's a single pane of glass where we can see our applications running.
What is most valuable?
The ability to see things going back and forth has been quite useful.
Its migration capabilities are very good.
What needs improvement?
The solution could be better when we're connecting to our S3 side of the house. Right now, it doesn't see it, and I'm not sure why.
For how long have I used the solution?
I've used the solution for a little over a year. Before, it was called Cloud Manager.
What do I think about the stability of the solution?
The stability is pretty good. There was an instance, though, where we were trying to delete a CVO instance off of it, and it took me a while to get it to release. It took a while to delete one of the instances since we had already taken it out, and it was still asking to delete it; it couldn't connect to it to delete it. We ended up creating a new workspace and then deleting the old workspace.
What do I think about the scalability of the solution?
We only have a few systems in it. We have two on-prem clusters, one CVO instance, and an S3 instance, which I don't have a connection to yet. There's something wrong with the configuration or the firewall rules for going into S3. Then, we'll have our FSx one built into it when we get to that point.
How are customer service and support?
Technical support is good. I get all the answers I need. I've never had any trouble with it. A lot of the time, we don't use it, though. Typically, we just Google something.
Of course, it would be ideal if they improved response time.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We did not use another solution previously. We always used the Cloud Manager, and then Cloud Manager became BlueXP. We've actually used the solution for about two years now, under two different names.
How was the initial setup?
I was involved in the initial setup. When we deployed it, we had to add all the systems to it. It was really easy to set up.
Once we had the workspaces in there, it was really easy just to add systems.
What about the implementation team?
There's another admin that helped me with it who was the primary at the time.
What was our ROI?
I have not seen any ROI.
What's my experience with pricing, setup cost, and licensing?
The pricing doesn't matter as it comes with the license that we have. It's free of charge with the bundle.
Which other solutions did I evaluate?
We did not evaluate any other solution.
What other advice do I have?
We haven't done an actual migration from on-prem to the cloud. We're using it to drag and drop Volumes.
I'd rate the solution nine out of ten since I had issues with deleting it, and I had to recreate a workspace.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Vice President at DWS Group
Helps us to save on the costs of backup products
Pros and Cons
- "Its features help us to have a backup of our volumes using the native technology of NetApp ONTAP. That way, we don't have to invest in other solutions for our backup requirement. Also, it helps us to replicate the data to another geographic location so that helps us to save on the costs of backup products."
- "They have very good support team who is very helpful. They will help you with every aspect of getting the deployment done."
- "The automated deployment was a bit complex using the public APIs. When we had to deploy Cloud Volumes ONTAP on a regular basis using automation, It could be a bit of a challenge."
- "We want to be able to add more than six disks in aggregate, but there is a limit of the number of disks in aggregate. In GCP, they provide less by limiting the sixth disk in aggregate. In Azure, the same solution provides 12 disks in an aggregate versus GCP where it is just half that amount. They should bump up the disk in aggregate requirement so we don't have to migrate the aggregate from one to another when the capacities are full."
What is our primary use case?
Our use case is to have multitenant deployment of shared storage, specifically network-attached storage (NAS). This file share is used by applications that are very heavy with a very high throughput. Also, an application needs to be able to sustain the read/write throughput and persistent volume. Cloud Volumes ONTAP helps us to get the required performance from our applications.
We just got done with our PoC. We are now engaging with NetApp CVO to get this solution rolled out (deployment) and do hosting for our customers on top of that.
How has it helped my organization?
Using this solution, the more data that we store, the more money we can save.
What is most valuable?
- CIFS volume.
- The overall performance that we are getting from CVO.
- The features around things like Snapshots.
- The performance and capacity monitoring of the storage.
These features help us to have a backup of our volumes using the native technology of NetApp ONTAP. That way, we don't have to invest in other solutions for our backup requirement. Also, it helps us to replicate the data to another geographic location so that helps us to save on the costs of backup products.
Cloud Volumes ONTAP gives us flexible storage.
What needs improvement?
There are a few bugs in the system that they need to fix on the UI side, specifically in the integration of NetApp Cloud Manager with CVO, which is something they are already working on. They will probably provide a SaaS offering for Cloud Manager.
We want to be able to add more than six disks in an aggregate, but there is a limit on the number of disks in an aggregate. In GCP, they provide less by limiting an aggregate to six disks, whereas in Azure the same solution allows 12 disks in an aggregate, twice that amount. They should raise the disks-per-aggregate limit so we don't have to migrate from one aggregate to another when the capacities are full.
For how long have I used the solution?
Six months.
What do I think about the stability of the solution?
I cannot comment on stability right now because we have not been using it in production as of now.
What do I think about the scalability of the solution?
We still have CVO running on a single VM instance. As an improvement area, if CVO could come up with a scale-out option, that would help, so we would not be limited by the number of VMs in GCP. Behind one instance, we are adding a number of GCP disks. In some cases, we would like to have the option to scale out by adding more nodes in a cluster environment, like Dell EMC Isilon.
How are customer service and technical support?
Get NetApp involved from day one if you are thinking of deploying Cloud Volumes ONTAP. They have a very good support team who is very helpful. They will help you with every aspect of getting the deployment done.
Which solution did I use previously and why did I switch?
We previously used OpenZFS Cloud Storage. We switched because we were not getting the performance from it, and the performance tuning is a headache. There were a lot of issues, such as the stability and updates of OpenZFS. We had it because it was a free, open-source solution.
We switched to NetApp because I trust their performance tool and file system.
How was the initial setup?
We did the PoC. Now, we are going to set up a production environment.
The initial setup was a bit challenging for someone who has no idea about NetApp. Since I have some background with it, I found the setup straightforward. For a few folks, it was challenging. It is best to get NetApp support involved for novices, as they can give the best option for setting to select during deployment.
The automated deployment was a bit complex using the public APIs. When we had to deploy Cloud Volumes ONTAP on a regular basis using automation, it could be a bit of a challenge.
What about the implementation team?
My team of engineers works on deploying this solution. There are five people on my team.
What was our ROI?
We have not realized any money or savings yet because we are still in our deployment process.
What's my experience with pricing, setup cost, and licensing?
They give us a good price for CVO licenses. It is one of the reasons that we went with the product.
Which other solutions did I evaluate?
We did consider several options.
In GCP, we also considered NetApp's Cloud Volumes Service, but it did not have good performance.
Another solution that we tried was Qumulo, which was a good solution, but not as good. From a scale-out perspective, it can scale out a file system, whereas NetApp is not like that; NetApp still works with a single VM. That is the difference.
We also evaluated the native GCP file offering. However, it did not give us the performance for the application that we wanted.
We do use the cloud performance monitoring, but not with a NetApp product. We use Stackdriver. NetApp provides a separate thing for the monitoring of NetApp CVO, which is NetApp Cloud Manager.
What other advice do I have?
I would rate this solution as an eight (out of 10).
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Google
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Consultant at I.T. Blueprint Solutions Consulting Inc.
Easy to manage with good storage optimization but the cloud deployment needs to be improved
Pros and Cons
- "The fast recovery time objective with the ability to bring the environment back to production in case something happens."
- "The integration wizard requires a bit of streamlining. There are small things that misconfigure or repeat the deployment that will create errors, specifically in Azure."
What is our primary use case?
The primary use case is for files, VMware storage, and the DR volume on the cloud. They also use this solution to move data between on-premises and the cloud volume ONTAP.
How has it helped my organization?
It's difficult to say if it has helped to reduce the company's data footprint in the cloud right now, without running it for a while. It's the same for the cloud costs.
We are going through testing right now, and can't tell if it will affect their operations until we validate it.
What is most valuable?
The most valuable features are the ease of management, the deduplication, storage optimization, SnapMirror, the flexibility in testing different scenarios, rapid deployment of test environments, and rapid recovery.
The fast recovery time objective with the ability to bring the environment back to production in case something happens.
The ability to go back in time. It's easy to restore the data that we need, and it has good stability with CIFS. When a client is using CIFS to access their files, it is pretty stable, without any known Microsoft issues.
The simplicity and ease of usage for VMware provisioning are also helpful.
What needs improvement?
Some of the areas that need improvement are:
- Cloud sync
- Cloud Volume ONTAP
- Deployment for the cloud manager
These areas need to be streamlined. There are basic configuration error states that only surface late, during provisioning.
I would like to see the ability to present CIFS files that have been SnapMirrored to the Cloud Volume ONTAP and the ability to serve them similarly to OneDrive or web interfaces.
We are talking about DR cases, customers who are trying to streamline their environments. In the case of DR, users can easily access that data. Today, without running it as file services fully and presenting it through some third party solution, there is no easy way for an end-user to access the appropriate data. This means that we have to build the whole infrastructure for the end-user to be able to open their work files.
The integration wizard requires a bit of streamlining. There are small things that, if misconfigured, or if you repeat the deployment, will create errors, specifically in Azure.
As an example, you cannot reuse the administrator name, because that object is created in Azure, and it will not let you create it again. So, when the first deployment fails and we deploy a second time, we have to use a new administrator name. Additionally, it requires connectivity to NetApp to register the products, and the customer is notified that network access is not allowed, which creates a problem.
This issue occurs at the time of deployment, but it isn't clear why your environment is not deploying successfully. For this reason, more documentation is needed, explaining and clarifying the steps for how it needs to be done.
What do I think about the stability of the solution?
We are just validating the cloud for a couple of our clients, so we haven't had it affect our client storage operations.
What do I think about the scalability of the solution?
Scalability remains to be seen. At this time, the NetApp limits on the premium, standard, and basic levels are unreasonable.
It is hard to go from ten terabytes to three hundred and sixty-eight terabytes and leave everyone in between hanging. Nobody is interested in going with the limit of ten terabytes to test this solution.
I am talking specifically about Azure, Cloud Volume ONTAP and the differentiator between three levels of provisioning storage.
How are customer service and technical support?
I have used technical support and it's mediocre.
They gave their best effort; however, at the point where they couldn't figure out the problem, they simply said that we would have to deal with Professional Services. I was not impressed, but I understand that it is a new product.
How was the initial setup?
It can be straightforward if everything is perfect, but if there are any glitches on the customer's side then potentially it could require long-term troubleshooting without knowing where to look for the problem.
We have deployed on-premises, but currently, we are testing it on cloud volumes.
For the initial deployment, I used the NetApp file manager to get it up and running.
Which other solutions did I evaluate?
When it comes to choosing the right solution for our clients, they trust our judgment in recommending something that they know is going to work for them.
Most of our clients are looking for availability of disaster recovery data and centralizing it in one cloud location. In some cases, a customer doesn't want to go with multiple clouds; they want to have it all in one place. They are also looking for simplification in managing the entire solution: provisioning, managing, and copying from a single interface, and one company that can be responsible for the support.
Our customers evaluate other vendors as well. They have looked at AWS, several offerings from Veeam, and ASR for different replication software.
Customers decide to go with NetApp because of our recommendations.
I have experience with other application services including Commvault, Veeam, and ASR.
What other advice do I have?
If Snapshot copies and FlexClone are licensed, they work great. The challenge is that the client will not always get the FlexClone license, and then it is more difficult to provide it in the future.
Some of our older clients do not have a license for FlexClone, so the recovery of snapshot data can be problematic.
In some cases, they use inline encryption using SnapMirror, but not often.
Inline encryption addresses concerns about data security, as does using Snapshots. If the data is encrypted and travels as encrypted traffic, it has less chance of being accessed by someone.
I don't work with application development, so I can't address whether or not Snapshot copies and FlexClone affect their applications, but for testing environments where we have to update with patches for maintenance, yes, it allows you to provision and test, and it validates the stability of the testing and update releases.
The clients included me in the decision making.
Each has its pros and cons, but with NetApp, this is a NetApp-to-NetApp product. With Windows backup solutions, it can be from any storage platform to any cloud as well. They have different workflows with different approaches, but each of them meets its business objective, giving you a good balance.
My advice would be to try it first, figure out all of the kinks that might come up, have the proper resources from NetApp lined up to provide you support, and don't give up because it works in the end.
I would rate this solution a six out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
