Senior Systems Administrator at a computer software company with 10,001+ employees
Real User
Simplifies our tasks, provides good storage savings, and offers a standard storage interface
Pros and Cons
  • "This solution has made everything easier to do."
  • "Multipathing for iSCSI LUNs is difficult to deal with from the client-side and I'd love to see a single entry point that can be moved around within the cluster to simplify the client configuration."

What is our primary use case?

We use this solution both on-premises and in the cloud.

Our primary use case for our on-premises implementation is production data and DR. In our cloud implementation, we use this solution for DR.

Moving to the cloud version was different for us, but it was a fairly easy transition. Now that we're comfortable with it, it's second nature. There are many new features and we find it even more valuable.

In terms of operational recovery, the solution’s Snapshot copies and thin clones are easy to create and use. They greatly simplify DR testing and application testing because we can very quickly clone a volume and provide it to the application team. They can use it, and if they want to keep it then we'll split it off and they have their own volume. Or, if they don't want to use it then we just throw it away.
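
As an illustration of how light that clone-and-discard workflow can be, here is a minimal Python sketch against the ONTAP REST API (available in ONTAP 9.6 and later). The cluster address, credentials, and volume names are placeholders, and the request shape should be checked against the API reference for your version:

    import requests
    from requests.auth import HTTPBasicAuth

    # All connection details below are placeholders for illustration.
    CLUSTER = "https://cluster.example.com"
    AUTH = HTTPBasicAuth("admin", "password")

    # Create a thin FlexClone of a production volume for the application team.
    clone_spec = {
        "name": "app_vol_clone",
        "svm": {"name": "svm_prod"},
        "clone": {
            "is_flexclone": True,
            "parent_volume": {"name": "app_vol"},
        },
    }
    r = requests.post(f"{CLUSTER}/api/storage/volumes",
                      json=clone_spec, auth=AUTH, verify=False)
    r.raise_for_status()

    # Keep it: split the clone into an independent volume.
    #   PATCH /api/storage/volumes/{uuid}  {"clone": {"split_initiated": true}}
    # Throw it away:
    #   DELETE /api/storage/volumes/{uuid}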

With respect to inline encryption using SnapMirror, it is something that we are interested in, but our version does not support it. Once we upgrade to a supporting version, we plan to deploy it.

The solution's unified file and block storage access gives us a standard common interface and a set of tools that we use regardless of whether we're dealing with the cloud or on-premises.

The solution’s Snapshot copies and thin clones have greatly improved our application development speed. The DBAs can create clones on their own and do whatever they want with them. They can keep them, destroy them, split them, etc. It takes a load off of the storage administrators and puts it where it really should be.

The consistency of storage management across clouds has made our storage operations a lot simpler. We didn't have to learn new interfaces and new command sets. Everything that we're used to using on-premises works for us in the cloud.

With respect to our data footprint in the cloud, we are seeing all of the storage benefits being extended from what we have on-premises. We're just getting into the cloud now, and we're probably seeing between a 30 and 50 percent reduction in our data footprint using compression, compaction, and deduplication.

How has it helped my organization?

This solution has made everything easier to do. The most basic operations are very simple and we've been using NetApp tools, plus some of our in-house tools, to automate a lot of the processes. It saves us a lot of time and effort.

What is most valuable?

ONTAP is extremely reliable.

What needs improvement?

The inclusion of onboard key management in Cloud Volumes ONTAP would simplify the way we have to do our security.

Multipathing for iSCSI LUNs is difficult to deal with from the client-side and I'd love to see a single entry point that can be moved around within the cluster to simplify the client configuration.


For how long have I used the solution?

I have been using this solution for eighteen years.

What do I think about the stability of the solution?

In terms of stability, this is a rock-solid solution.

What do I think about the scalability of the solution?

The scalability is great. You don't have to add controllers to add storage space and you can scale out if you need to add more horsepower to your cluster.

How are customer service and support?

NetApp's technical support is outstanding.

Which solution did I use previously and why did I switch?

We have not moved off of another solution. Rather, we are expanding to implement a new solution for a problem that hasn't been addressed yet. Specifically, we are looking to use Cloud Volumes ONTAP for replication, which up to this point had not been done.

How was the initial setup?

The initial setup of this solution is very simple. I don't remember there being any problems that we looked at and had to research an answer for. It just worked.

What about the implementation team?

We use Tego Data to assist us with this solution. They've been working with us for years on NetApp, and they're just great. They work with us hand in glove on any projects that we reach out to them for, and they know our environment just about as well as we do.

What's my experience with pricing, setup cost, and licensing?

Our licensing costs are folded into the hardware purchases and I have never differentiated between the two.

Which other solutions did I evaluate?

We've looked at other storage solutions and we just keep coming back to NetApp because they provide us with everything we need. They have great support and the hardware has drastically improved in horsepower and capacity, so we're happy to stay with them.

What other advice do I have?

I have no problems with this solution at all.

My advice for anybody who is researching this type of solution is to take a serious look at NetApp. They have products that are very flexible, extremely reliable, they're cost-competitive with other storage solutions, and their support is outstanding.

There is always room for enhancement, but what it does, it does very well.

I would rate this solution a ten out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
reviewer1223382 - PeerSpot reviewer
Sr Systems Engineer at a healthcare company with 1,001-5,000 employees
Real User
The native filer capabilities are baked right there on the system
Pros and Cons
  • "The solution’s Snapshot copies and thin clones in terms of operational recovery are the best thing since sliced bread. Rollback is super easy. It's just simple, and it works. It's very efficient."

    What is our primary use case?

    The primary use is virtualization as well as filer storage, pretty much all the features of the ONTAP suite.

    We don't have any cloud footprint for contractual obligations. So, it's all pretty much on-prem, but it's in a co-location.

    How has it helped my organization?

    We use it to replicate between data centers. It is for our DR site as well. We use it to create redundancy.

We do on-prem S3 with StorageGRID. The on-prem infrastructure is cheap, and it works just the same. It's S3, so it integrates very well with anything that uses S3 in our environment.
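
    Because StorageGRID presents a standard S3 endpoint, anything that already speaks S3 can be pointed at it. A minimal Python sketch with boto3; the endpoint URL, credentials, and bucket name are placeholders:

        import boto3

        # Point a standard S3 client at the on-prem StorageGRID gateway
        # (endpoint, keys, and bucket below are placeholders).
        s3 = boto3.client(
            "s3",
            endpoint_url="https://storagegrid.example.com",
            aws_access_key_id="GRID_ACCESS_KEY",
            aws_secret_access_key="GRID_SECRET_KEY",
        )
        s3.upload_file("backup.tar", "dr-bucket", "backups/backup.tar")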

    What is most valuable?

    The most valuable features are the native filer capabilities, because a lot of SAN providers don't do that. When they do it, they do it with an appliance or a secondary system. With this, it is baked right into the system. You don't need anything extra.

    The solution’s Snapshot copies and thin clones in terms of operational recovery are the best thing since sliced bread. Rollback is super easy. It's just simple, and it works. It's very efficient.

    What do I think about the stability of the solution?

    The stability is good. I've been with NetApp for a long time, so I've seen them fall and come back. However, with cDOT and all this new stuff, it is great. It just works.

    What do I think about the scalability of the solution?

    We're not that big, storage footprint-wise. However, it's simple. You just add nodes. So, it works.

    How are customer service and technical support?

    We have not really used the technical support.

    Which solution did I use previously and why did I switch?

    We had previous experiences with deploying ONTAP at other companies successfully.

    ONTAP makes our storage solutions more flexible. Traditionally, that's hard to do. ONTAP gives you those features which you typically have to build yourself.

    How was the initial setup?

    It's straightforward. But you do have to know what you're doing. Things do what you expect them to do. There is quite a bit of initial setup, but with things like Ansible and all this new stuff that they're doing, it makes it much easier and automated. So, it's simple.

    What about the implementation team?

    I did the deployment myself with a little help from our vendor's professional services.

    What was our ROI?

    We have had less downtime.

    What's my experience with pricing, setup cost, and licensing?

    Cost is a big factor, because a lot of companies can't afford enterprise grade equipment all the time. They skimp where they can. I would recommend that they improve the cost.

    What other advice do I have?

    This company that I work for now is just acquiring quite a bit of NetApp equipment. We will be doing SnapMirror. I have done it in the past at another company.

    It does exactly what it does, and it does it well. It works, and that's what really matters at the end of the day: uptime, functionality, and scalability.

    I would rate it a nine out of 10. There is always room for improvement. No one is ever going to be a 10.

    Which deployment model are you using for this solution?

    Hybrid Cloud
    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    Storage Architect at NIH
    Real User
    Critical data is snapshotted more frequently making it easier to restore
    Pros and Cons
    • "The solution’s Snapshot copies and thin clones in terms of operational recovery are good. Snapshot copies are pretty much the write-in time data backups. Obviously, critical data is snapshotted a lot more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based, statical, etc."
    • "How it handles erasure coding. I feel it the improvement should be there. Basically, it should be seamless. You don't want to have an underlying hardware issue or something, then suddenly there's no reads or writes. Luckily, it's at a replication site, so our main production site is still working and writing to it. But, the replication site has stopped right now while we try to bring that node back. Since we implemented in bare-metal, not in appliance, we had to go back to the original vendor. They didn't send it in time, and we had a hardware memory issue. Then, we had a hard disk issue, which brought the node down physically."

    What is our primary use case?

    The primary use case is to move age-old data to the cloud.

    It is deployed on the cloud.

    How has it helped my organization?

    The tool saves us time and money. Now, it's easy to retrieve data back, and you can go back and look at the statistics to study them. Because my company is focused on healthcare, there's no time limit on the retention of information. It's infinite. So, instead of having all our data on tapes, which took many hours to retrieve information from, this is a good solution.

    What is most valuable?

    The migration is seamless. Basically, we shouldn't be spending a whole lot budget-wise; we would like to have something reasonable. What's happening right now is that when we try to develop a cloud solution, we don't see the fine print. Then, at the end of the day, we get a long bill that says, "Okay, this is this, that is that." We don't want those unanticipated costs.

    We use the solution’s inline encryption using SnapMirror. We did get Geoaudits and things like that. In other words, security is everything put together. It's not just storage talking to the cloud; it's everything else too: network, PCs, clients, etc. It's a cumulative effort to secure. That's where we are trying to make sure there are no vulnerabilities. Any vulnerabilities are addressed right away and fixed.

    The solution’s Snapshot copies and thin clones in terms of operational recovery are good. Snapshot copies are pretty much point-in-time data backups. Obviously, critical data is snapshotted more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based, static, etc.

    The solution’s Snapshot copies and thin clones have affected our application development speed in a very positive way. From Snapshots, copies, and clones, they were able to develop applications, doing pretty much in-house development. They were able to roll it out first in the test environment of the R&D department. The R&D department uses it a lot. It's easy for them because they can simulate production issues while they are still in production. So, they love it. We create and clone for them all the time.

    The solution helped reduce our company's data footprint in the cloud. We're reducing it by two petabytes of data in the cloud. All of the tape data, they are now writing to the cloud. It's like we have almost reached the capacity that we bought even before we knew we were going to reach it. So it's good. It reduces labor because, with fewer tapes, you don't have to go around buying tapes, maintaining them, and sending them offsite. All that has been eliminated.

    What needs improvement?

    Right now, we're using StorageGRID. Obviously, it is a challenge. Anything that you're writing to the cloud, or when you get things from the cloud, is a challenge. When we implemented StorageGRID nodes, we implemented them on our own bare metal. So the issue is that they're trying to implement features like erasure coding, and it is a huge challenge. It's still a challenge because we have a five-node bare-metal Docker implementation, so if you lose a node for some reason, then it stops reading from it or writing to it. This is because of limitations within the infrastructure and within ONTAP.

    How it handles erasure coding needs improvement. Basically, it should be seamless. You don't want to have an underlying hardware issue and suddenly there are no reads or writes. Luckily, it's at a replication site, so our main production site is still working and writing to it. But the replication site has stopped right now while we try to bring that node back. Since we implemented on bare metal, not on an appliance, we had to go back to the original vendor. They didn't send it in time, and we had a hardware memory issue. Then, we had a hard disk issue, which brought the node down physically.

    It needs better reporting. Right now, we have to piece everything together just to figure out what the issue could be. We get a random error saying, "This is an error," and we have to literally dig into it, look through log files, look through our logs, and look through the Docker log files, then verify, "Okay, this is the issue." We just want it to be better at alerting and error-handling reports. Once you get an error, you don't want to spend the first two hours trying to figure out what that error means; you want to be working on the fix right away. That's where we see the drawbacks. Overall, the product is good and serves a purpose, but as an administrator and architect, nothing is perfect.

    What do I think about the stability of the solution?

    There's always room for improvement. Overall, it's still stable.

    What do I think about the scalability of the solution?

    60 percent of our tape data is sitting in the cloud now.

    There's a limitation to scalability. Right now, when we want to expand the initial architecture, we have to add additional nodes just so it can handle the data without hurting the performance. Then, we have to go back and request more licensing. It adds to our licensing, thus adding to the cost. In regards to scalability, unless you have a five to six year plan ahead, we can't just say, "Great, we have run out of space. Okay, let's try to increase space." It's not like increasing a volume.

    How are customer service and technical support?

    Unless a much more experienced person comes on, the frontline tech is only reading what he sees on the website. He just pulls up what's already there, because when we open a case, an automatic case has already been opened. We see generic questionnaires, but nothing pertaining to the case. For example, if you run out of space or have HA nodes down, the technical support is sitting there asking us something else, nothing to do with the HA nodes and the volume being down or offline. It's not relevant. It is a generalized thing. You have to sit down and explain to them, "This has nothing to do with the questions you're asking. It's out of context, so you might want to look again and get back with the proper input." That's a pain.

    However, the minute we say, "It's very critical," we see a good, solid SME on the line who is helping us.

    I'm not as experienced as many of my colleagues. They're really frustrated. We did convey this concern to our account person and have since seen a lot of change.

    Which solution did I use previously and why did I switch?

    The company has always been a NetApp shop even before I entered the company. We continue to use it because of the good products. We do market research, obviously. We do see good products, and every year there is improvement. When we want to do hardware upgrades, it's still very good. The way we are trying to develop, it's very seamless for us and not a pain. 

    We have never felt, "We are done with NetApp. Let's move onto something else." I love to introduce other vendors into the mix, just so it's not a monopoly. We still love NetApp as our primary.

    How was the initial setup?

    It is a little complex. It's completely different from regular standard ONTAP in how you manage it, and there is a learning curve. Half the time you get confused and try to compare it with standard ONTAP. You start to say, "Oh, this feature was here. How come it's not there? That was very good there. How come it's not here?"

    We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. The configuration wizards and the ability to automate the process were good. We liked it. It's all in one place, so you don't have to go around trying to use multiple tools just to get things worked out. You see what you have on the other side plus what you have on your end, and you're able to access it.

    What about the implementation team?

    Mostly, we did it ourselves. When we went to MetroCluster, we used their Professional Services. For the rest of ONTAP, we deployed it ourselves. It is pretty much self-explanatory and has good training.

    What's my experience with pricing, setup cost, and licensing?

    Cloud is cloud. It's still expensive. Any good solution comes with a price tag. That's where we are looking to see how well we can manage our data in the cloud by trying to optimize the costs.

    I do know our licensing cost to some extent, but not fully. E.g., I don't know overall how much we have gone over budget, or where we put costs down just to maintain licensing on it. That part of it, I don't know.

    I know the licensing is a bit on the high-end. That's when we had to downsize our MetroCluster disks and just migrate to disks that were half used. We migrated into those just to reduce maintenance costs.

    Which other solutions did I evaluate?

    We use Caringo. It's object storage for migrating age-old data. It is a cheap solution for us, so that's why we use it. When we compared prices, Caringo was much cheaper.

    Once we migrated everything to Caringo, there were challenges because it's another vendor, and then you're working with two different vendors. We started having issues, so now we use StorageGRID.

    We chose NetApp because we already had the infrastructure. Adding additional resources and features into the mix is much easier because it's one vendor, and they understand the product. If we needed to add something and improve on the solution, it's much easier.

    What other advice do I have?

    I would recommend NetApp any day, at any time, because there's so much hard work in it. It's more open and transparent. Nobody is coming from NetApp, saying, "We're going to sell this gimmick." Then, you view all the good stuff but begin to realize, "This is not what they promised." For this reason, I would recommend NetApp.

    They make sure the solution fits our needs. It's not, "Okay, we'll go to the customer site and pitch whatever we feel like regarding our products," whether it fits or not, just to get through the door. A lot of vendors do that. NetApp makes an assessment, then they make sure it actually fits.

    The product: I would give it an eight (out of 10). The company: It's a six (out of 10).

    We have not yet implemented the solution to move data between hyperscalers and our on-premises environment. It's just from our NetApps to the cloud, not from the hybrid. The RVM team is planning on that. So, they can have the whole untouched thing put on the cloud rather than being hosted on our data stores.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    SeniorMa9b1f - PeerSpot reviewer
    Senior Manager, IT CloudX at a manufacturing company with 1,001-5,000 employees
    Real User
    Cloud Manager enables us to automate scheduling of data synchronization
    Pros and Cons
    • "We're using snapshots as well and it's a pretty useful feature. That is one of the main NetApp benefits. Knowing how to use snapshots in the on-prem environment, using snapshots on the cloud solution was natural for us."
    • "The DR has room for improvement. For example, we now have NetApp in Western Europe and we would like to back up the information to another region. It's impossible. We need to bring up an additional NetApp in that other region and create a Cloud Manager automation to copy the data... I would prefer it to be a more integrated solution like it was in the NetApp solution about a year ago. I would like to see something like AltaVault but in the cloud."

    What is our primary use case?

    We are using it for storing files, to get high-performance access to files. We are also using NetApp for DR. We copy the information to the same system in other regions.

    How has it helped my organization?

    The solution's high-availability features are cost-effective for us because we are able to use the cloud benefits to reduce the cost of DR. For example, if we have it in one region, we can copy the data to another region. We keep the DR instance powered off and then power it on for a few minutes, sync the data, and shut it down again. That reduces the costs by approximately 80 percent.
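
    As a rough illustration of that math: a DR instance that only runs for about an hour a day to receive its sync, instead of 24 hours, cuts its compute hours by roughly 95 percent; since the underlying storage is still billed around the clock, the blended saving comes out near the 80 percent described above. (The one-hour figure is an assumption for illustration.)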

    Similarly, the data protection provided by the solution's disaster recovery technology is cost-effective and simple.

    We're using Cloud Manager to automate some of the management. We use it for bringing the DR environment up and down as well as for scheduling data synchronization between different regions, worldwide. It's almost impossible to do that manually. Compared to an engineer doing it manually, it's about 90 percent faster. That's specifically for this kind of operation. In reality, the automation is enabling such capabilities. It's not actually reducing the time taken. If it didn't exist, we would never do it. That's even better than saving time.
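
    To make the shape of that automation concrete, here is a minimal Python sketch of such an up-sync-down cycle. The Cloud Manager endpoint paths, working-environment ID, and token below are hypothetical placeholders, not documented Cloud Manager routes; the real calls should be taken from the Cloud Manager API reference:

        import requests

        # Hypothetical Cloud Manager base URL and routes -- placeholders only.
        CM = "https://cloudmanager.example.com/api"
        HEADERS = {"Authorization": "Bearer <token>"}   # placeholder token
        WE_ID = "working-environment-id-placeholder"

        def dr_sync_cycle():
            # 1. Power the DR working environment on.
            requests.post(f"{CM}/working-environments/{WE_ID}/start", headers=HEADERS)
            # 2. Trigger the scheduled data synchronization and let it finish.
            requests.post(f"{CM}/working-environments/{WE_ID}/sync", headers=HEADERS)
            # 3. Power it back off so it stops accruing compute charges.
            requests.post(f"{CM}/working-environments/{WE_ID}/stop", headers=HEADERS)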

    Overall, NetApp has standardized and certified file services, both on-prem and in the cloud, corporate-wide. In addition, by using the automation, it has provided us cost-effective DR and management. In the cloud it has enabled us to provide tailor-made storage solutions for each of our cloud customers. The storage efficiency has reduced our storage footprint because we are offloading all the data to the storage account. So it has reduced the cost of corporate storage. And the data-tiering has also saved us money.

    What is most valuable?

    What is most valuable is that the system is the same as what we use on-prem. So the guys who are responsible here for managing NetApp feel comfortable with it and have enough knowledge to manage the system in the cloud. We are able to keep the same standards that we have on-prem in the cloud.

    The usability is great. We don't have any issues with it.

    We're using snapshots as well and it's a pretty useful feature. That is one of the main NetApp benefits. Knowing how to use snapshots in the on-prem environment, using snapshots on the cloud solution was natural for us.

    What needs improvement?

    The DR has room for improvement. For example, we now have NetApp in Western Europe and we would like to back up the information to another region. It's impossible. We need to bring up an additional NetApp in that other region and create a Cloud Manager automation to copy the data. So we do that once, at night, to another region and then shut down the destination. It's good because it's using Cloud Manager and its automation, but I would prefer it to be a more integrated solution like it was in the NetApp solution about a year ago. I would like to see something like AltaVault but in the cloud.

    For how long have I used the solution?

    We have been using it for about half a year in production; longer when we include the PoC.

    What do I think about the stability of the solution?

    The stability has been great. We haven't had any issues.

    What do I think about the scalability of the solution?

    We still haven't needed to scale up, but I think the scalability is good.

    We are using it for a system which stores files and parts of databases, but the system is used by hundreds of customers. NetApp is not used directly by them, rather through the system. We may plan to increase NetApp according to the usage of the system but we still have no specific plans.

    How are customer service and technical support?

    We are using NetApp engineers and they are great.

    Which solution did I use previously and why did I switch?

    Before NetApp we used a home-grown server in the cloud, a Linux server with a big disk. It was less simple to manage.

    We're also using Avere, a storage solution that was purchased by Microsoft a month or two ago. It's mainly responsible for real-time data synchronization between on-prem and the cloud environment. It's different than NetApp which doesn't provide the kind of synchronization solution that Avere does. It's two-way, real-time data synchronization between the Oracle storage solutions which we have on-prem and the Avere solution that we have in Azure. NetApp does not help with such requirements.

    How was the initial setup?

    The initial setup was very simple. It was quite easy to set up the environment in just one day. We started with a small implementation and then added more and more parts of the solution. We started with just one desktop and then added additional ones and then added tiering.

    It required a small number of staff members; that's all we needed because it was pretty simple. We did a few sessions online and one or two onsite for the entire solution. For our specific case, it requires almost no maintenance; it only requires someone to expand the disk capacity or perform management operations, per request. Generally, we wouldn't require an increase to our storage team to manage the solution.

    What about the implementation team?

    We used a NetApp engineer to help us.

    What's my experience with pricing, setup cost, and licensing?

    In addition to the standard licensing fees, there are fees for Azure, the VMs themselves and for data transfer. The DR environment is billed by the hour and paid to Azure directly and NetApp is paid on a yearly license.

    Which other solutions did I evaluate?

    We checked Dell EMC and HPE but we chose NetApp. The Storage team made the decision. One of the main reasons they chose NetApp was the existence of NetApp on-prem and the knowledge of it the team had. We are familiar with NetApp and the products are good, so we decided to extend the success to the cloud as well.

    What other advice do I have?

    Implement it. Do not think about it. It's very simple and very useful.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner.
    Lead Storage Engineer at a insurance company with 5,001-10,000 employees
    Real User
    Enables us to manage multiple petabytes of storage with a small team, including single node and HA instances
    Pros and Cons
    • "Unified Manager, System Manager, and Cloud Manager are all GUI-based. It's easy for somebody who has not been exposed to this for years to pick it up and work with it."
    • "We use the mirroring to mirror our volumes to our DR location. We also create snapshots for backups. Snapshots will create a specified snapshot to be able to do a DR test without disrupting our standard mirrors. That means we can create a point-in-time snapshot, then use the ability of FlexClones to make a writeable volume to test with, and then blow it away after the DR test."
    • "Some of the licensing is a little kludgy. We just created an HA environment in Azure and their licensing for SVMs per node is a little kludgy. They're working on it right now."

    What is our primary use case?

    For the most part, we're using it to move data off-prem. We have the ability to do mirrors from on-prem to Cloud Volumes ONTAP and we also have both single-node instances and HA instances. We are running it in both AWS and Azure.

    We're using all of the management tools that go along with it. We're using both OnCommand Cloud Manager and OnCommand Unified Manager, which means we can launch System Manager as well.

    Unified Manager is what monitors the environment. OnCommand Cloud Manager allows you to deploy and it does have some monitoring capabilities, but it's not like Unified Manager. And from OnCommand Cloud Manager you can launch System Manager, which gives you the lower-level details of the environment.

    Cloud Manager will allow you to create volumes, do CIFS shares, NFS mounts, and create aggregates. But the rest of the networking components and other work for the SVMs and doing other configurations are normally done at that lower level. System Manager is where you would do that, whereas Unified Manager allows you to monitor the entire environment.

    Say I have 30 instances running out there. Unified Manager allows me to monitor all 30 instances for things like volume-full alerts, near-volume-full alerts, inodes full, network components being offline, back-end storage paths, and aggregates full. All those items that you would want to monitor for a healthy environment are handled through Unified Manager.

    How has it helped my organization?

    We're sitting at multiple petabytes of storage on our NetApp infrastructure. We're talking hundreds of thousands of shares across thousands of volumes. Even with that size of infrastructure, it's being supported by three people. And it's not like we're working 24/7. It gives us the ability to do a lot, to do more with less. Those three people manage our entire NAS environment. I've got two intermediate and one senior storage engineer in our environment who handle things. They're handling those multiple petabytes of on-prem and I'm just starting to get them involved in the cloud version, Cloud Volumes ONTAP. So, for the most part, it's just me on the Cloud Volume side.

    In terms of the storage efficiency reducing our storage footprint, the answer I'd like to say is "yes." The problem I have is that nobody ever wants to delete anything. We have terabytes of data on-prem in multiple locations, in both primary and DR backed-up. And now, we're migrating it to the cloud. But eventually, the answer will be yes.

    What is most valuable?

    I'm very familiar with working from the command line, but Unified Manager, System Manager, and Cloud Manager are all GUI-based. It's easy for somebody who has not been exposed to this for years to pick it up and work with it. Personally, for the most part, I like to get in with SecureCRT and do everything from the command line.

    We do a lot of DR testing of our environment, so we're using a couple of components. We use Unified Manager to link with WFA, Workflow Automation, and we do scripted cut-overs to build out. We use the mirroring to mirror our volumes to our DR location. We also create snapshots for backups. Snapshots will create a specified snapshot to be able to do a DR test without disrupting our standard mirrors. That means we can create a point-in-time snapshot, then use the ability of FlexClones to make a writeable volume to test with, and then blow it away after the DR test.

    We could also do that in an actual disaster. All we would do is quiesce and break our mirrors, our volumes would become writeable, and then we would deploy our CIFS shares and our NFS mounts. We would have a full working environment in a different geographic location. Whether you're doing it on-prem or in the cloud, those capabilities are there. But that's all done at a lower level.
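
    At the storage layer, the "quiesce and break" step corresponds to a state change on the SnapMirror relationship. A minimal sketch against the ONTAP REST API (ONTAP 9.6+); the cluster address, credentials, and relationship UUID are placeholders, and the call should be verified against the API reference for your version:

        import requests
        from requests.auth import HTTPBasicAuth

        DEST = "https://dr-cluster.example.com"      # placeholder DR cluster
        AUTH = HTTPBasicAuth("admin", "password")    # placeholder credentials
        REL_UUID = "relationship-uuid-placeholder"

        # Breaking the relationship stops replication and makes the DR
        # volume writable, after which shares and mounts can be deployed.
        r = requests.patch(
            f"{DEST}/api/snapmirror/relationships/{REL_UUID}",
            json={"state": "broken_off"},
            auth=AUTH, verify=False,
        )
        r.raise_for_status()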

    The data protection provided by the Snapshot feature is a crucial part of being able to maintain our environment. We stopped doing tape-based backups of our NAS systems. We do 35 days of snapshots. We keep four "hourlies," two dailies, and 35 nightly snapshots. This gives us the ability to recover any data that's been accidentally deleted or corrupted, from an application perspective, and to pull it out of a snapshot. And then there are the point-in-time snapshots, being able to create one at a given point in time. If I want to use a FlexClone to get at data, which is just pointers to the back-end data, right now, and use that as a writeable volume without interrupting my backup and DR capabilities, those point-in-time snapshots are crucial.
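
    That retention scheme maps onto an ONTAP snapshot policy. A hedged sketch of creating one through the REST API; the policy and SVM names are placeholders, and the "nightly" schedule is assumed to exist as a custom cron schedule on the cluster:

        import requests
        from requests.auth import HTTPBasicAuth

        CLUSTER = "https://cluster.example.com"      # placeholder
        AUTH = HTTPBasicAuth("admin", "password")    # placeholder

        policy = {
            "name": "nas_35day",
            "svm": {"name": "svm_nas"},              # placeholder SVM
            "copies": [
                {"count": 4,  "schedule": {"name": "hourly"}},
                {"count": 2,  "schedule": {"name": "daily"}},
                # "nightly" is assumed to be a custom cron schedule.
                {"count": 35, "schedule": {"name": "nightly"}},
            ],
        }
        r = requests.post(f"{CLUSTER}/api/storage/snapshot-policies",
                          json=policy, auth=AUTH, verify=False)
        r.raise_for_status()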

    The user can go and recover the file himself, so we don't have to have a huge number of people working on recovering things. The user has the ability to get to that snapshot location to recover the file and go back however many days. Being that it's read-only to the user community, users can get at that data as long as they have proper rights to the file. Somebody else could not get to a file for which they don't have rights. There's no security breach or vulnerability. It just provides the ability for a user who owns data to get to a backup copy of it, to recover it in case they've deleted it or had a file corruption.
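
    Mechanically, that self-service restore is just a copy out of the hidden snapshot directory that ONTAP exposes on every share (.snapshot over NFS, ~snapshot over CIFS). A tiny illustration in Python, with purely illustrative paths:

        import shutil

        # A user restoring yesterday's copy of a file they own; the mount
        # point, snapshot name, and file name are purely illustrative.
        src = "/mnt/projects/.snapshot/nightly.2019-10-02_0010/report.xlsx"
        dst = "/mnt/projects/report.xlsx"
        shutil.copy2(src, dst)  # normal NFS/CIFS permissions still apply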

    We also use their File Services Solutions in the cloud, CIFS and NFS. It works just as well as on-prem. The way we configure an environment, we have the ability to talk back to our domain controllers, and then it uses the standard AD credentials and DNS from our on-prem environments.

    Cloud Volumes ONTAP in the cloud, versus Data ONTAP on-prem, are the exact same products. If you have systems on-prem that you're migrating to the cloud, you won't have to retrain your workforce because they'll be used to everything that they'll be doing in the cloud as a result of what they've been doing on-prem. In that sense, Cloud Volumes ONTAP is the exact same product, unless you're using a really old version of Data ONTAP on-prem. Then there's the standard change between Data ONTAP versions.

    What needs improvement?

    Some of the licensing is a little kludgy. We just created an HA environment in Azure and their licensing for SVMs per node is a little kludgy. They're working on it right now. We're working with them on straightening it out.

    We're moving a grid environment to Azure and the way it was set up is that we have eight SVMs, which are virtual environments. Each of those has its own CIFS servers, all their CIFS and NFS mounts. The reason they're independent of one another is that different groups of business got pulled together, so they had specific CIFS share names and you can't have the same name in the same server more than once on the network. You can't have CIFS share called "Data" in the same SVM. We have eight SVMs because of the way the data was labeled in the paths. God forbid you change a path because that breaks everything in every application all down the line. It gives you the ability to port existing applications from on-prem into cloud and/or from on-prem into fibre infrastructure.

    But that ability wasn't there in Cloud Volumes ONTAP because they assume that it was going to be a new market and they licensed it for a single SVM per instance built out in the cloud. They were figuring: New market and new people coming to this, not people porting these massive old-volume infrastructures. In our DR infrastructure we have 60 SVMs. That's not how they build out the new environments. 

    We're working with them to improve that and they're making strides. The licensing is the only thing that I can see they can improve on and they're working on it, so I wouldn't even knock them on that.

    For how long have I used the solution?

    I've been using it since its inception. Prior to it being called Cloud Volumes ONTAP, it was named a couple of different things as it went along. I've been working with the on-prem Data ONTAP for about 16 years now. When they first offered Cloud Volumes ONTAP, I started testing that out in a Beta program. It's been a few years now with Cloud Volumes ONTAP. I'm our lead storage engineer, but I'm also on a couple of our cloud teams and I'm a cloud administrator for our organization. We started looking at it when AWS first started coming on the scene, at what we could do in the cloud. And as a company direction, we're implementing cloud-first, where available.

    What do I think about the stability of the solution?

    We've had no issues.

    What do I think about the scalability of the solution?

    In an HA environment, it will scale up to 358 terabytes. That's not bad per-system. We've had no difficulties.

    We will be moving more stuff off-prem into the cloud. Right now it's at about 15 percent of our entire environment, and we plan on at least 10 percent, or more, per quarter, over the next few years.

    We'll be doing the tiering and using the Cloud Sync as well. We're a financial and insurance company, so some things have to remain on-prem, and some things, from a PCI perspective, have a lot of different requirements around them. And because we're across multiple countries worldwide, there are all sorts of HIPAA and other types of legal and financial ramifications from a security perspective. In the UK and in Europe there are the privacy components. There are different things in Hong Kong and Singapore, in Spain, etc. Each country unit requires different types of policies to be adhered to. Everything we have is encrypted at rest, as well as encrypted in-flight.

    Cloud Volumes ONTAP will also support doing data encryption at a volume level, a software encryption. But from a PCI perspective, we use the NSE drives, which give us hardware encryption. So the data is double encrypted: it is hardware encrypted, and the volumes are encrypted as well. We have to use a management appliance to keep and maintain the encryption keys, and we do quarterly encryption-key replacement. We also use TLS for transporting the data, doing encryption in-flight. There are all sorts of things that it supports which allow you to be compliant.

    Another feature it has is disk sanitize, a destruction component which allows you to do a DoD wipe of the data. Once you've decommissioned an environment, it is completely wiped so nobody can get access to the data that was there previously. That's all built into Data ONTAP, including Cloud Volumes.

    NSE drives are a little different because you are not getting physical drives in the cloud environment, so you couldn't do that. But you can do the volume encryption, from Cloud Volumes. In terms of a DoD wipe, you wouldn't be doing that on Azure's or AWS's environments because it's a virtual disk.

    How are customer service and technical support?

    I've rarely used tech support. I've got so much experience deploying these environments that it's like breathing. It's second nature. And when they first came out with OnCommand Cloud Manager, I was doing beta testing and debugging with the group out of Israel to build the product.

    How was the initial setup?

    The initial setup was very straightforward. If you use OnCommand Cloud Manager to deploy it into AWS or Azure, it's point-and-click stupid-simple. It takes less than 15 minutes, depending upon your connectivity and bandwidth. That 15 minutes is to build out a brand-new filer and create CIFS shares on it. It automatically deploys everything for you: the back-end storage and the EC2 instances if you're in AWS. In Azure, it creates the Blob space and the VMs.

    It's all done for you with just a couple of screens. You tell it what you want to call it, you tell it what account or subscription you're using, depending upon whether it's AWS or Azure. You tell it how big you want the device to be, how much storage you want it to have, and what volumes you want it to create; CIFS shares, etc. You click next, next, next. As long as you have the ability to provision in the account you've gone into, whether it's AWS or Azure, and have turned on programmatic deployment, it gives you the access. The only thing you have to do outside Cloud Volumes ONTAP under OnCommand Cloud Manager is turn that on to allow it to run. It picks up everything else. It'll pick up what VPC you have and what subnet you have. You just tell it what security group you want it to use. It's fairly simple.

    If somebody hasn't utilized or isn't familiar with how to deploy anything in either AWS or Azure, it might be a tad more complicated because they'd need to get that information to begin with. You have to have at least moderate experience with your infrastructure to know which VPC and subnet and security group to specify.

    What was our ROI?

    In my opinion, we're getting a good return on investment.

    Which other solutions did I evaluate?

    I always try new products. I've used the SoftNAS product, and a couple of other generic NAS products. They don't even compare. They're not on the same page. They're not even in the same universe. I might be a little biased but they're not even close. 

    I have looked at Azure NetApp Files, which is another product that NetApp is putting out. Instead of Cloud Volumes it's cloud files. You don't have to deploy an entire NetApp infrastructure. It gives you the ability to do CIFS at file level without having to manage any of the overhead. That's pre-managed for you.

    What other advice do I have?

    For somebody who's never used it before, the biggest thing is ease of use. In terms of advice, as long as you design your implementation correctly, it should be fine. I would do the due diligence on the front-end to determine how you want to utilize it before you deploy.

    We have over 3,000 users of the solution who have access to snapshots, etc. but only to their own data. We have multiple SVMs per business unit and a locked-down security on that. Only individuals who own data have access to it. We are officially like a utility. We give them storage space. We give them the ability to use it and then they maintain their data. From an IT perspective, we can't really discern what is business-critical and what isn't to a specific business unit. We're global, we're not just U.S., we're all over the world.

    We've gone into doing HA. It's the same as what's on-prem, and HA on-prem is something we've always done. When we would buy a filer for on-premise, we'd always buy a two-node HA filer with a switch back-end to be able to maintain the environment. The other nice thing, from an on-prem perspective with a switched environment, is that we can inject and eject nodes. We can do a zero-downtime lifecycle. We can inject new nodes and mirror the data to the new nodes. Once everything's on those new nodes, eject the old nodes and we will have effectively lifecycled the environment, without having to take any downtime. Data ONTAP works really well for that. The only thing to be aware of is that to inject new nodes into an existing cluster, they have to be at the same version of Data ONTAP.

    In terms of provisioning, we keep that locked down because we don't want them running us out of space. We have a ticketing system where users request storage allocation and the NAS team, which supports the NetApp infrastructure, will allocate the space with the shares, to start out. After that, our second-level support teams, our DSC (distributed service center) will maintain the volumes from a size perspective. If something starts to get near-full, they will automatically allocate additional space. The reason we have that in place is that if it tries to grow rapidly, like if there's an application that's out of control and just keeps spinning up and eating more and more of the utilization, it gives us the ability to stop that and get with the user before they go from using a couple a hundred gigs to multiple terabytes, which would cost them X amount. There is the ability to auto-grow. We just don't use it in our environment.
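
    For contrast, the auto-grow capability mentioned above is a per-volume autosize setting. A hedged sketch of what enabling it looks like through the ONTAP REST API; the volume UUID, size cap, and credentials are placeholders, and the field names should be checked against your ONTAP version:

        import requests
        from requests.auth import HTTPBasicAuth

        CLUSTER = "https://cluster.example.com"      # placeholder
        AUTH = HTTPBasicAuth("admin", "password")    # placeholder
        VOL_UUID = "volume-uuid-placeholder"

        # Let the volume grow on demand up to a hard ceiling, instead of a
        # ticket-driven manual expansion for every near-full alert.
        r = requests.patch(
            f"{CLUSTER}/api/storage/volumes/{VOL_UUID}",
            json={"autosize": {"mode": "grow", "maximum": 2199023255552}},  # 2 TiB cap
            auth=AUTH, verify=False,
        )
        r.raise_for_status()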

    In terms of the data protection provided by the solution's disaster recovery technology, we use that a lot. Prior to clustered ONTAP - this is going back to 7-Mode - there was the ability to auto-DR with a single command. That gave us the ability to do a cut-over to another environment and automatically fail. We're currently using WFA to do that because, when they first came out with cluster mode, they didn't have the ability to auto-DR. I have not looked into whether they've made auto-DR a feature in these later versions of Data ONTAP.

    OnCommand Cloud Manager doesn't allow you to do DR-type stuff. There are other things within the suite of the cloud environment that you can do: There's Cloud Sync which allows you to create a data broker and sync between CIFS shares or NFS mounts into an S3 bucket back-end. There's a lot of stuff that you can do there, but that's getting into the other product lines.

    As for using it to deploy Kubernetes, we are working through that right now. That process is going well. We've really just started getting into it and it hasn't been overly complicated. Cloud Volumes ONTAP's capabilities for deploying Kubernetes mean it's been fairly easy.

    In terms of the cloud, one thing that has made things a little easier is that previously, within the AWS environment, we used to have to create a virtual filer in each of our subscriptions or accounts because we really wanted the filer to be close to the database instances or the servers within that same account, without traversing VPCs. Now, since they have given us the ability to do VPC peering, we can create an overarching primary account and then have it talk to all the instances within that storage account, or subscription in Azure, without having to have one spun up in every single subscription or account. We have a lot of accounts so it has allowed us to reel that back by creating larger HA components in a single account and then give access through VPCs to the other accounts. All that traffic stays within Azure or AWS. That saves money because we don't have to pay them for multiple subscriptions of Cloud Volumes ONTAP and/or additional virtual filers.
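
    The peering arrangement described above is standard AWS networking. A minimal boto3 sketch of requesting a peering connection from the central storage-account VPC to a workload VPC; all IDs are placeholders, and the accepter side still has to add routes and security-group rules:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

        # Request peering from the central storage-account VPC (where the
        # CVO HA pair lives) to a workload VPC in another account.
        resp = ec2.create_vpc_peering_connection(
            VpcId="vpc-0aaaaaaaaaaaaaaaa",       # placeholder: storage-account VPC
            PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",   # placeholder: consumer VPC
            PeerOwnerId="111111111111",          # placeholder: peer account ID
        )
        pcx = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

        # The peer account then accepts with its own credentials:
        # ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx)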

    For my use, Cloud Volumes ONTAP is a ten out of ten.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Service Architecture at All for One Group AG
    Real User
    High availability enables us to run two instances so there is no downtime when we do maintenance
    Pros and Cons
    • "NetApp's Cloud Manager automation capabilities are very good because it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well."
    • "Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations."
    • "Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair."
    • "One difficulty is that it has no SAP HANA certification. The asset performance restrictions create challenges with the infrastructure underneath: The disks and stuff like that often have lower latencies than SAP HANA itself has to have."

    What is our primary use case?

    The primary use case is for SAP production environments. We are running the shared file systems for our SAP systems on it.

    How has it helped my organization?

    It's helped us to dive into the cloud very fast. We didn't have to change any automations which we already had. We didn't have to change any processes we already had. We were able to adopt it very fast. It was a huge benefit for us to use the same concepts in the cloud as we do on-premise. We're running our environment very efficiently, and it was very helpful that our staff, our operators, didn't have to learn new systems. They have the same processes, all the same knowledge they had before. It was very easy and fast.

    We did a comparison, of course, and it was cheaper to have Cloud Volumes ONTAP running with the deduplication and compression, compared to storing everything on, for example, Azure disks and having a server running all the time as well. And that was not even for the biggest environment.

    The data tiering saves us money because it offloads all the cold data to the Blob storage. However, we use the HA version, and data tiering only came to HA with version 9.6, which we are not on in our production environment. It's still an RC, the pre-release, not a GA release. In our testing we have seen that it saves a lot of money, but our production systems are not there yet.

    What is most valuable?

    The high availability of the service is a valuable feature. We use the HA version to run two instances. That way there is no downtime for our services when we do any maintenance on the system itself.

    For normal upgrades or updates of the system - updates for security fixes, for example - it helps that the systems and the service itself stay online. For one of our customers, we have 20 systems attached, and if we had to go to that customer all the time and say, "Oh, sorry, we have to take your 20 systems down just because we have to do maintenance on your shared file systems," he would not be amused. So that's really a huge benefit.

    And there are the usual NetApp benefits we have had over the last ten years or so, like snapshotting, cloning, and deduplication and compression which make it space-efficient on the cloud as well. We've been taking advantage of the data protection provided by the snapshot feature for many years in our on-prem storage systems. We find it very good. And we offload those snapshots as well to other instances, or to other storage systems.

    The provisioning capability was challenging the first time we used it. You have to find the right way to deploy but, after the first and second try, it was very easy to automate. We are highly automated in our environment, so we use the REST API for deployment. We completely deploy the Cloud Volumes ONTAP instance itself automatically when we have a new customer. Similarly, deployment of the volumes on the Cloud Volumes ONTAP instance, and access to it, are automated as well.

    But for that, we still use our on-premise automations with WFA (Workflow Automation). NetApp has a tool which simplifies the automation of NetApp storage systems. We use the same automation for the Cloud Volumes ONTAP instances as we do for our on-premise storage systems. There's no difference, at the end of the day, from the operating system standpoint.
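
    Because a Cloud Volumes ONTAP instance answers the same ONTAP REST API as an on-premise cluster, one provisioning call can serve both. A minimal sketch of creating a customer volume that way; the host, SVM, aggregate, and path names are placeholders, and the body should be checked against the API reference for your ONTAP release:

        import requests
        from requests.auth import HTTPBasicAuth

        # The same call works whether ONTAP_HOST points at an on-premise
        # cluster or a Cloud Volumes ONTAP instance; names are placeholders.
        ONTAP_HOST = "https://cvo-customer1.example.com"
        AUTH = HTTPBasicAuth("admin", "password")

        volume = {
            "name": "sap_shared_customer1",
            "svm": {"name": "svm_customer1"},
            "aggregates": [{"name": "aggr1"}],
            "size": 107374182400,                       # 100 GiB in bytes
            "nas": {"path": "/sap_shared_customer1"},   # NFS junction path
        }
        r = requests.post(f"{ONTAP_HOST}/api/storage/volumes",
                          json=volume, auth=AUTH, verify=False)
        r.raise_for_status()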

    In addition, NetApp's Cloud Manager automation capabilities are very good because, again, it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well. It's pretty good.

    Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a complete, fully-managed service. We don't have to take care of any updates, upgrades, or configurations. We're just using it, deploying volumes and using them. We see that, in some ways, as the future of storage services, for us at least: completely managed.

    What needs improvement?

    Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair. My guess is that those will be the next challenges they have to face.

    One difficulty is that it has no SAP HANA certification. The performance restrictions create challenges with the infrastructure underneath: the disks often have higher latencies than SAP HANA allows. That was something of a challenge for us: where to use Azure disks and where to use Cloud Volumes ONTAP in that environment, instead of just using Cloud Volumes ONTAP.

    For how long have I used the solution?

    We've been using Cloud Volumes for over a year now.

    What do I think about the stability of the solution?

    The stability is very good. We haven't had any outages.

    What do I think about the scalability of the solution?

    Right now, the scalability is sufficient in what it provides for us, but we can see that our customer environments are growing. We can see that it will reach its performance end in around a year or so. They will have to evolve or create some performance improvements or build some scale-up/scale-out capabilities into it.

    In terms of increasing our usage, the tiering will definitely be used in production as soon as it's GA for Azure. They're already playing with the Ultra SSDs for performance improvements on the storage system itself. As soon as those become generally available from Microsoft, that will probably be a feature we'll go to.

    As for end-users, for us they are our customers. But the customers have several hundred or 1,000 users on the system. I don't really know how many end-users are ultimately using it, but we have about ten customers.

    How are customer service and technical support?

    Technical support has been very good. The technical people who are responsible for us at NetApp are very good. If we contact them we get direct feedback. We often have direct contact, in our case at least, to the engineers as well. We have direct contacts with NetApp in Tel Aviv.

    It's worth mentioning that when we started with Cloud Volumes ONTAP in the past, we did an architecture workshop with them in Tel Aviv, to tell them what our deployments look like in our on-premise environment, and to figure out what possibilities Cloud Volumes ONTAP could provide to us as a service provider. What else could we do on it, other than just running several services? For example: disaster recovery or doing our backups. We did that at a very early stage in the process.

    Which solution did I use previously and why did I switch?

We only used native Azure services before. We went with Cloud Volumes ONTAP because it was a natural extension of our NetApp products. We have a huge on-premises storage environment from NetApp and have been familiar with all the benefits of these storage systems for several years. We wanted to have the same benefits in the cloud as we have on-premises. That's why we evaluated it, and we're at a very early stage with it.

    How was the initial setup?

To say the initial setup was complex would be too strong. We had to look into it and find the right way to do it. It wasn't that complex; it was just a matter of understanding what was and wasn't supported from the SAP side. As soon as we figured that out, it was very straightforward to build our environment.

We had an implementation strategy: determining which SAP systems and services we would like to deploy in the cloud. Our strategy was that if Cloud Volumes ONTAP made sense in a use case, we wanted to use it because, again, it's highly automated and we could use it with our existing scripting. Then we had to look at what is supported by SAP itself. We combined the two, and that gave us our concept.

Our initial deployment took one to two weeks, maximum. It required two people in total, a mixture of SAP and storage colleagues. In terms of maintenance, it doesn't take any people beyond those we already have for our on-premises environment. There was no additional headcount for the cloud environment. The same operating team manages Cloud Volumes ONTAP as well as our on-premises storage systems. It requires almost no maintenance. It just runs, and we don't have to take care of updating it every two months or so for security reasons.

    What about the implementation team?

We didn't use a third party.

    What was our ROI?

We have seen a return on investment, but I don't have the numbers.

    What's my experience with pricing, setup cost, and licensing?

The standard pricing is online. If you're using the PayGo model, it's just the normal costs on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you work with your sales contact at NetApp to figure out the best price for your company. We have an Enterprise Agreement, or something similar to that, so we get a different price.

In terms of additional costs beyond the standard licensing fees, you have to run instances in Azure: virtual machines and disks. You still have to pay for the Azure disks, and for Blob Storage if you're using tiering. What's also important to know is the network bandwidth. That was the most complicated part of our project: figuring out how much data would be streamed out of our data center into the cloud and how much data would have to be sent back into our data center. It's more challenging than if you have a customer who is running only in Azure. It can be expensive if you don't keep an eye on it.

    Which other solutions did I evaluate?

    We have a single-vendor strategy.

    What other advice do I have?

    Don't be afraid of granting permissions because that's one of the most complex parts, but that's Azure. As soon as you've done that, it's easy and straightforward. When you do it the first time you'll think, "Oh, why is it so complicated?" That's native Azure.

The biggest lesson I've learned from using Cloud Volumes ONTAP is that, from an optimization standpoint, our on-premises instance was a lot more complex than it had to be. That was a big lesson, because Cloud Volumes ONTAP is a very easy, lightweight service. You just use it and it doesn't require that much configuring. You can just use the standards which come from NetApp, and that was something we didn't do with our on-premises environment.

In terms of disaster recovery, we have not used Cloud Volumes ONTAP in production yet. We've tested it to see if we could adopt Cloud Volumes ONTAP for that scenario, to migrate all our workloads, or all the storage footprint we have on-premises, to Cloud Volumes ONTAP. We're still evaluating it. We've done a lot of cost comparison, which looks pretty good. But we are still facing a little technical problem because we're a CSP (cloud service provider). We're on the way to having Microsoft fix that. It's a Microsoft issue, not a NetApp Cloud Volumes ONTAP issue.

    I would rate the solution at eight out of ten. There are improvements they need to make for scale-up and scale-out.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Sr Systems Engineer at Ucare
    Real User
    Simple to get up and running, and our data is readily available when we need it
    Pros and Cons
    • "The most valuable feature of this solution is that it makes our data readily available and we don't have to go through a lot of trouble to access it."
    • "We would like to have support for high availability in multi-regions."

    What is our primary use case?

    Our primary use case is data replication to the cloud.

    How has it helped my organization?

    Using Snapshot copies and thin clones for operational recovery is convenient. This technology makes things very easy.

The unified file and block storage access across clouds and on-premises infrastructure has made things easier for us. It means that we do not face significant roadblocks.

    What is most valuable?

    The most valuable feature of this solution is that it makes our data readily available and we don't have to go through a lot of trouble to access it.

    What needs improvement?

We would like to have support for high availability across multiple regions.

    There is no support for Microsoft Azure.

    For how long have I used the solution?

    I have been using this solution for three years.

    What do I think about the stability of the solution?

    The stability is very impressive and we have had no issues with it.

    What do I think about the scalability of the solution?

Scalability is not an issue because it is really expandable. Even if you don't know how the business will be structured, you can scale up, scale down, and do everything graphically.

    How are customer service and technical support?

    We have not used NetApp technical support directly. We have been speaking with partners who are in our region.

    How was the initial setup?

    We used the NetApp Cloud Manager to get up and running, and we found it very simple. It was very easy, and you don't have to be an engineer to get it working.

    What about the implementation team?

Partners from our region assisted us with the deployment. CW did a good job, starting from scratch and getting everything up and running. Whenever I gave a requirement, they would come back with all of the options that were available.

    Which other solutions did I evaluate?

    I have tried Pure Storage and EMC RecoverPoint, but ONTAP is easier to use.

    What other advice do I have?

    I love this solution. They have a lot of features and they explore the market really well, whereas other vendors fail to do those things. ONTAP keeps evolving with the needs of the market and follows the trends.

    I would rate this solution a ten out of ten.

    Which deployment model are you using for this solution?

    Private Cloud
    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    Storage Specialist at a comms service provider with 1,001-5,000 employees
    Real User
    Offers good replication to the cloud and good deduplication
    Pros and Cons
    • "Replication to the cloud is the most valuable feature. Deduplication and compression are also very important to us. We are in the process of adopting to the cloud. We are going to AWS and we are trying to do a safety technician call out with integration to the cloud. NetApp allows us to move some of the volume to the cloud, at the same time that we continue providing the cloud services that we have on premises."
    • "I would like to see something from NetApp about backups. I know that NetApp offers some backup for Office 365, but I would like to see something from NetApp for more backup solutions."

    What is our primary use case?

We use this primarily to consolidate our file services and block services.

    How has it helped my organization?

We are using Linux and, eventually, we are going to use SnapMirror. So far, we have seen benefits from using this solution. When we started this process there were some very specific goals about logs and files being stored in a single storage device, and we achieved that with this solution. We are also able to integrate with the cloud, which is another goal we achieved. The solution has also saved us on costs, of course. We calculated that we are saving $1,000,000 over three years.

The consistency of storage management across clouds has affected our storage operations. Essentially, one of the benefits of NetApp is that ONTAP is pretty much the operating system for any NetApp device, so it doesn't matter if it is in the cloud or on-premises, or whether you use other NetApp products; you pretty much have the same interface with ONTAP. We like that.

One of our goals is to unify our file and block services into a single storage device. At the same time, we want to replicate on-site services to the cloud. That's also a benefit for us, because that way we can move to the cloud if we need to.

    What is most valuable?

Replication to the cloud is the most valuable feature. Deduplication and compression are also very important to us. We are in the process of adopting the cloud. We are going to AWS and we are trying to do a safety technician call out with integration to the cloud. NetApp allows us to move some of the volumes to the cloud while we continue providing the cloud services that we have on-premises.

We are in the process of developing various plans for all of our equipment in order to achieve acceptable recovery in the new environment.

    What needs improvement?

    Maybe I need more speed, but so far, I don't have any feedback for improvements.

    I would like to see something from NetApp about backups. I know that NetApp offers some backup for Office 365, but I would like to see something from NetApp for more backup solutions.

    What do I think about the stability of the solution?

The stability is great. We have been testing different failure scenarios, from controllers to disks, and so far it is very stable. We have not had any issues. We upgraded our ONTAP version and did not have any issues there, either.

    What do I think about the scalability of the solution?

This is another thing that we like about ONTAP. There are products for different scales, and it is very easy to use.

    How are customer service and technical support?

When we deployed everything, we opened a case with support for two minor issues we had with some servers. They're great. They were willing to help, easy to communicate with, and responded very quickly. They found the issues and resolved them.

    How was the initial setup?

We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. That is how we deployed it. The configuration wizards and the ability to automate the process made it very easy. The wizard is very easy to follow, and there are videos, so you don't really need a lot of skill. If you understand integrations and have a basic knowledge of the cloud, you can quickly connect your equipment. It's good.

    Which other solutions did I evaluate?

We did evaluate other solutions, including the main players in this area, like EMC.

There are some features that we really liked from NetApp. One of them is the ability to consolidate files and blocks. Other vendors have some similar solutions, but they are not at the maturity level that NetApp is. We also really like that NetApp has a product for the cloud that is really working, proven, and valuable. Other vendors do not have that, or if they do, you need to deploy something in the middle. That is something that we like: we don't need to deploy anything. We can just run the backup directly from the OS and spin up the solution.

    What other advice do I have?

Try not to focus only on the current issues, but also look into NetApp's innovation process. It is very impressive how they have been able to develop, and continue to develop, products for the cloud. Try to gain a deeper understanding of your established needs and requirements for file and block services.

    I would rate this solution as ten out of ten.

    Which deployment model are you using for this solution?

    Hybrid Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: My company does not have a business relationship with this vendor other than being a customer.