Our primary use case of this solution is for SAN block storage.
We don't use AFF for artificial intelligence or machine learning applications.
It has improved the way my organization functions because it has enabled us to host a very fast, multi-tenant private cloud solution.
AFF has improved application response time by a lot.
This solution has helped us stop worrying about storage as a limiting factor. We know we've got enough storage left, and because it's easy to manage, we can tell exactly how much real storage we have left.
We use SnapMirror a lot, but the speed of the AFF is also very valuable.
The overall latency in our environment is very low because it's All Flash and we've got 10 Gigabit Ethernet dedicated to the storage network.
AFF's simplicity around data protection and data management is pretty good. With the NetApp volume encryption, we're getting data at rest encryption right now. It was very easy to turn on and very easy to manage with the onboard key manager.
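To give a flavor of the data-at-rest encryption setup described here, the sketch below shows roughly what enabling the onboard key manager and creating an encrypted volume looks like through the ONTAP REST API. The endpoint paths, field names, hostname, and credentials are illustrative assumptions, not taken from this reviewer's environment.

```python
# Hedged sketch: onboard key manager + NetApp Volume Encryption via the ONTAP REST API.
# Endpoints and field names are assumptions based on ONTAP 9.7+ documentation.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical cluster management LIF
AUTH = ("admin", "password")                   # placeholder credentials

# 1. Enable the onboard key manager with a cluster-wide passphrase (32+ characters).
requests.post(f"{CLUSTER}/api/security/key-managers", auth=AUTH, verify=False,
              json={"onboard": {"passphrase": "<strong-32-character-passphrase>"}})

# 2. Create a volume with NetApp Volume Encryption (data at rest) enabled.
requests.post(f"{CLUSTER}/api/storage/volumes", auth=AUTH, verify=False,
              json={"name": "secure_vol01",
                    "svm": {"name": "svm1"},
                    "aggregates": [{"name": "aggr1"}],
                    "size": 500 * 1024**3,            # size in bytes (500 GiB)
                    "encryption": {"enabled": True}})
```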
It has enabled us to add new applications, without having to purchase additional storage. We've over-provisioned our storage quite a bit, simply because we know we've got time before people will grow into it.
It has not reduced our data center costs. NetApp charges a pretty penny for their stuff.
The next release desperately needs NFSv4 extended attributes.
In terms of what needs improvement, the NAS areas are a little behind on technologies. For example, SMB 3 support is not quite up to speed with a lot of the Storage Spaces functionality, and NFSv4 doesn't support some of the features that we need.
It's rock solid.
Scalability is expensive.
Their technical support is very good. We use them quite a bit and we have had good experiences with them.
We've been with NetApp since I came onto the project; I had NetApp experience before, so I brought it with me.
I've set up a NetApp network previously. The setup was pretty straightforward.
We used an integrator and we had a very good experience with them.
We've looked at EMC and Microsoft storage spaces. Neither one of them really compares.
My advice to someone considering this solution is that if you can afford it and you will be using it a lot, go for it.
I would rate it an eight out of ten. To make it a perfect ten it would need to be cheaper.
This solution provides storage for our entire company.
We have a unified architecture with NAS and SAN from both NetApp ONTAP AFF clusters.
This solution reduced our costs by consolidating several types of disparate storage. The savings come mostly in power consumption and density. One of our big data center costs, which was clear when we built our recent data center, is that each space basically has a value tied to it. Going to a flash solution enabled us to have a lower power footprint as well as higher density, which essentially means more capacity in a smaller space. When it costs several hundred million dollars to build a data center, you have to remember that each of those spots has a cost associated with it, which means each server rack in there is worth that much in the end. When we look at those costs and everything else, it saved us money to go to AFF where we have that really high density. It's getting even better, because the newer models are coming out with even higher density.
Being able to easily and quickly pull data out of snapshots is something that benefits us. Our times for recovery on a lot of things are going to be in the minutes, rather than in the range of hours. It takes the same amount of time for us to put a FlexClone out with a ten terabyte VM as it does a one terabyte VM. That is really valuable to us. We can provide somebody with a VM, regardless of size, and we can tell them how much time it will take to be able to get on it. This excludes the extra stuff that happens on the back end, like vMotion. They can already touch the VM, so we don't really worry about it.
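As a rough illustration of the FlexClone workflow mentioned above (the payload shape and names are assumptions for illustration, not this reviewer's configuration), cloning a volume from an existing snapshot through the ONTAP REST API looks roughly like this:

```python
# Hedged sketch: create a FlexClone from a snapshot via the ONTAP REST API.
# No data is copied up front, which is why clone time is independent of VM size.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical management LIF

requests.post(f"{CLUSTER}/api/storage/volumes", auth=("admin", "password"), verify=False,
              json={"name": "vm_datastore_clone",
                    "svm": {"name": "svm1"},
                    "clone": {"is_flexclone": True,
                              "parent_volume": {"name": "vm_datastore"},
                              "parent_snapshot": {"name": "hourly.2024-01-01_0000"}}})
```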
One of the other things that helped us out was the inline efficiencies such as the deduplication, compaction, and compression. That made this solution shine in terms of how we're utilizing the environment and minimizing our footprint.
With respect to how simple this solution is around data protection, I would say that it's in the middle. I think that the data protection services that they offer, like SnapCenter, are terrible. There was an issue in our environment where, if you had a fully qualified domain name that was too long or had too many periods in it, then it wouldn't work. They recently fixed this but, clearly, after a problem like this, the solution is not enterprise-ready. Overall, I see NetApp as really good for data protection, but SnapCenter is the weak point. I'd be much more willing to go with something like Veeam, which utilizes those direct NetApp features. They have the technology, but personally, I don't think that their implementation is there yet on the data protection side.
I think that this solution simplifies our IT operations by unifying data services across SAN and NAS environments. In fact, this is one of the reasons that we wanted to switch to this solution, because of the simplicity that it adds.
In terms of being able to leverage data in new ways because of this solution, I cannot think of anything in particular that is not offered by other vendors. One example of something that is game-changing is in-place snapshotting, but we're seeing that from a lot of vendors.
The thin provisioning capability provided by this solution has absolutely allowed us to add new applications without having to purchase additional storage. The thin provisioning coupled with the storage efficiencies is really helpful. The one thing we've had to worry about as a result is our VMware teams, or other teams, thin provisioning on top of our thin provisioning, which you always know is not good, because then you don't really have any insight into how much you're actually utilizing.
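For context on what thin provisioning means at the volume level, here is a minimal sketch against the ONTAP REST API; the names, sizes, and field spellings are illustrative assumptions rather than this environment's actual settings.

```python
# Hedged sketch: thin-provisioned volume via the ONTAP REST API.
# The volume advertises its full size but reserves no space up front (guarantee "none").
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical

requests.post(f"{CLUSTER}/api/storage/volumes", auth=("admin", "password"), verify=False,
              json={"name": "app_vol01",
                    "svm": {"name": "svm1"},
                    "aggregates": [{"name": "aggr1"}],
                    "size": 10 * 1024**4,              # 10 TiB advertised, in bytes
                    "guarantee": {"type": "none"}})    # thin provisioned
```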
This solution has enabled us to move lots of data between the data center and cloud without interruption to the business. We have SVM DR relationships between data centers, so for us, even if we lost the whole data center, we could failover.
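As a hedged sketch of what an SVM DR relationship like this looks like when driven through the ONTAP REST API (paths and field names are assumptions, not the reviewer's actual setup), the whole SVM, rather than a single volume, is mirrored to the second data center:

```python
# Hedged sketch: SVM-level SnapMirror (SVM DR) created on the DR cluster via the REST API.
import requests

DR_CLUSTER = "https://dr-cluster-mgmt.example.com"   # hypothetical DR management LIF

requests.post(f"{DR_CLUSTER}/api/snapmirror/relationships",
              auth=("admin", "password"), verify=False,
              json={"source": {"path": "prod_svm:"},        # trailing colon = entire SVM
                    "destination": {"path": "prod_svm_dr:"},
                    "identity_preservation": "full"})       # replicate network/protocol config too
```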
This solution has improved our application response time, but I was not with the company prior to implementation so I do not have specific metrics.
We have been using this solution's feature that automatically tiers data to the cloud, but it is not to a public cloud. Rather, we store cold data on our private cloud. It's still using object storage, but not on a public cloud.
I would say that this solution has, in a way, freed us from worrying about storage as a limiting factor. The main reason, as funny as it sounds, is that our network is now the limiting factor. We can easily max out links with the all-flash array. Now we are looking at going back and upgrading the rest of the infrastructure to be able to keep up with the flash. I think that right now we don't even have a strong NDMP footprint because we couldn't support it, as we would need far too much speed.
The most valuable features of this solution are snapshotting and cloning. For example, we make use of FlexClone. We're also making more use of FabricPool, which is basically storage tiering. That way, instead of keeping everything on comparatively expensive ONTAP flash, if we want to roll data off to something cheaper, like object storage, we can do that as well.
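To illustrate the FabricPool idea, here is a rough sketch of attaching an object store to an aggregate as a cloud tier via the ONTAP REST API; the endpoints, field names, bucket, and keys are assumptions for illustration only.

```python
# Hedged sketch: FabricPool setup via the ONTAP REST API.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical
AUTH = ("admin", "password")

# 1. Register an S3-compatible bucket as a FabricPool cloud target.
requests.post(f"{CLUSTER}/api/cloud/targets", auth=AUTH, verify=False,
              json={"name": "cold_tier",
                    "owner": "fabricpool",
                    "provider_type": "AWS_S3",
                    "server": "s3.amazonaws.com",
                    "container": "ontap-cold-data",        # bucket name
                    "access_key": "<access-key>",
                    "secret_password": "<secret-key>"})

# 2. Attach the cloud target to an all-flash aggregate, making it a FabricPool aggregate.
aggr_uuid = "<aggregate-uuid>"   # e.g., from GET /api/storage/aggregates?name=aggr1
requests.post(f"{CLUSTER}/api/storage/aggregates/{aggr_uuid}/cloud-stores",
              auth=AUTH, verify=False,
              json={"target": {"name": "cold_tier"}})   # target may also be referenced by UUID
```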
The cost of this solution should be reduced.
SnapCenter is the weak point of this solution. It would be amazing from a licensing standpoint if they got rid of SnapCenter completely and offered Veeam as an integration.
This solution is very stable. We have had downtime, but only on specific nodes. We were always able to failover to the other nodes. We had downtime from a power outage in our data centers that was mainly because we didn't want the other side to actually have to take a load of an SVM DR takeover because we knew it was going to be back up in a certain amount of time. Other than that, we have had no downtime.
It seems to be almost infinitely scalable. For an organization as large as ours, it definitely meets our needs.
We have onsite staff that is a purchased service from NetApp, so we do not directly deal with technical support.
Prior to this solution, we had all these different disparate types of storage. It was a problem because, for example, we'd be running low on NAS capacity while there was extra storage sitting in our SAN environment. Each separate solution seemed a little cheaper on its own, but when you added the whole cost up, it was cheaper for us to just have a single solution that could do everything.
We have seen ROI, but I can't quantify how much.
This is a really good solution that definitely meets our needs. It integrates well with all of the software that we're using and they have a lot of good partnerships that enable that. There are a lot of things that can bolt right in and talk to it natively, like Veeam and other applications. That can really make the product shine. I just wish that NetApp would buy Veeam.
I would rate this solution an eight out of ten.
We are in the process of moving to AWS and we are using this solution to help move all of our data to the cloud, using the tiering and other functionality.
We have approximately fifty AFF clusters spread across three locations.
We plan to use this solution for artificial intelligence and machine-learning applications, but we are still in the PoC right now. It is something that my team is working on.
Our DR and backup are done using SnapMirror.
This solution has helped simplify our IT operations. We can easily move data from on-premises to the cloud, or from one cloud to another cloud. NetApp SnapShots and SnapMirror are also helpful.
The thin provisioning has allowed us to add new applications without having to purchase additional storage. We are shrinking the data with functions like deduplication, getting almost two hundred percent efficiency. It is very helpful.
This solution has allowed us to move very large amounts of data without affecting IT operations. We have moved four petabytes to the cloud. We have moved data from on-premises to the cloud, and also between clouds. It is easy to do. For example, if you want DR or a backup in a second location, then you just use SnapShot. If you have a database that you want to have available in more than one location then you can synchronize them easily. We are very happy with these features.
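As a small illustration of the snapshot step described here (the hostname, names, and volume UUID are placeholders, not this environment's), taking a point-in-time snapshot before replicating to a second location looks roughly like this through the ONTAP REST API:

```python
# Hedged sketch: create a volume snapshot via the ONTAP REST API.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical
vol_uuid = "<volume-uuid>"   # e.g., from GET /api/storage/volumes?name=db_vol01

requests.post(f"{CLUSTER}/api/storage/volumes/{vol_uuid}/snapshots",
              auth=("admin", "password"), verify=False,
              json={"name": "pre_replication_snap"})
```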
Our application response time has been improved since implementing this solution. The AFF cluster is awesome. Our response time is now below two milliseconds, whereas it used to be four or five milliseconds. This is very useful.
The costs of our data center have definitely been reduced by using this solution. Power consumption and space have obviously been reduced, because this solution is physically very small.
We have been using this solution to automatically tier cold data to the cloud. I would not say that it has affected our TCO.
This solution has not changed our position in terms of worrying about storage as a limiting factor.
The most valuable features of this solution are the deduplication and the ability to move data to different clouds. We have been using Cloud Sync and Cloud Volumes, and we have moved four petabytes using Cloud Sync.
It would be very useful if we could do the NFS to CIFS file transfer, but it is not supported at this time.
We are finding limitations when it comes to moving data to AWS.
We have been using this solution for ten years.
The stability of this solution is fine. We have not experienced any downtime or any issues.
Scalability is something that we are spending time on, but it is an internal issue related to seeking financial approval. The scalability of the solution is not a technical issue.
The technical support for this solution has always been number one. There is no doubt that they are getting more responsive and more technical.
We performed a PoC using Cloud Volumes and Cloud Sync, and we were happy with the time, durability, and availability.
The initial setup of this solution is straightforward.
We can install this solution ourselves.
We have seen ROI from this solution.
We evaluated a solution by EMC, but we found that their filesystem was not as robust. That is the reason we chose NetApp.
We are really happy customers and this is a solution that I can recommend.
I would rate this solution a nine out of ten.
We use it primarily for CIFS and NFS shares, e.g., Windows shares and network shares for Linux-based systems.
It has been very helpful for us. Data mobility is big: being able to move data between different locations quickly and easily, which also applies to data protection and replication. The hardware architecture has been very good in terms of easily being able to refresh environments without any downtime to our applications. That's been the biggest value to us from the NetApp platforms.
The solution simplifies IT operations by unifying data services across SAN and NAS environments on-premise.
We are working on a lot of efforts right now where environments need multiple copies of data. Today, those are full copies of data, which require us to have a lot of storage. Our plans are that you'll be able to leverage NetApp Snapshot technology to lessen the amount of capacity that we require for those environments, primarily like our QA and dev environments.
We've done full data center migrations. The ease of replication and data protection has made moving large amounts of data from one data center to another a completely seamless migration for us.
Early on, the clustered architecture was a little rough, but I know in the last four years, the solution has been absolutely rock solid for us.
Something I've talked to NetApp about in the past is going to more of a node-based architecture, like the hyper-converged solutions we are seeing nowadays. The days of having to buy massive quantities of storage all at one time have given way to being able to grow in smaller increments from a budgetary standpoint. This change would be great for our business; it is what my leadership would like to see in a lot of the things they purchase now. I would like to see that architecture continue to evolve in that clustered environment.
I would like to see them continue to make it simpler, continuing to simplify set up and the operational side of it.
I can't remember the last time we had an issue or an outage.
It is one of the best solutions out there right now. It is extremely simple, reliable, and seldom ever breaks. It's extremely easy to set up. It's reliable, which is important for us in healthcare. It doesn't take a lot of management or support, as it just works correctly.
Our NetApp environment has been so stable and simple that we don't have a lot of resources allocated to support it right now. We probably have three engineers in our entire enterprise supporting the whole NetApp infrastructure. So we haven't necessarily reallocated resources, but we already run pretty thin as it is.
Scalability has been great. There have been some things I would like to see them do differently, but overall, the scalability has been wonderful for us.
The solution's thin provisioning has allowed us to add new applications without having to purchase additional storage. We use thin provisioning for everything, and we use the deduplication and compression functionality on all of our NetApps. If we weren't using thin provisioning, we'd probably have two to three times more storage on our floor right now than we do today.
We use all-flash arrays for our network shares. We have a couple of other platforms that we have also used in the past, and I really wanted to move away from those for simplicity. Another big reason is automation. NetApp has done a great job with their automation: the Ansible modules, along with all the PowerShell cmdlets they have developed, make it very consumable for automation, which is very big for us right now. One of the big driving forces was having a single operating environment, regardless of whether I'm running an all-flash array or a hybrid array. It's the same look and feel, and everything works exactly the same regardless. That definitely speaks to the simplicity and ease of automation. I can automate and use it everywhere, whether it's cloud, on-prem, etc. That was one of the real reasons we decided to go in that direction.
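The reviewer credits the Ansible modules and PowerShell cmdlets; purely as an illustration of the same one-API automation idea, here is a hedged Python sketch that pulls volume capacity from the ONTAP REST API (hostname, credentials, and field names are assumptions):

```python
# Hedged sketch: simple capacity report against the ONTAP REST API.
# The same call shape works whether the backend is an AFF, a hybrid FAS, or Cloud Volumes ONTAP.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical

resp = requests.get(f"{CLUSTER}/api/storage/volumes",
                    params={"fields": "name,svm.name,space.size,space.used"},
                    auth=("admin", "password"), verify=False)

for vol in resp.json().get("records", []):
    used_pct = 100 * vol["space"]["used"] / vol["space"]["size"]
    print(f'{vol["svm"]["name"]}/{vol["name"]}: {used_pct:.1f}% used')
```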
The overall setup is very easy; deploying a new cDOT system is the hardest part. On our business side, because our environment is very complex, some complexity did come up. In general, that is one nice thing about NetApp: regardless of how simple or complex your environment is, it can fit all of those needs. Especially on the network side, it can fit into those environments and take advantage of all the technologies that we have in our data centers, so it's been really nice like that.
We did the deployment ourselves.
The solution has improved application response time. We are using the All Flash FAS boxes, and our primary use case is around file shares, which aren't really that performance-intensive. Therefore, overall, response times have improved, but it's not necessarily something that can be seen.
From a sheer footprint standpoint, we're in the process of moving one of our large Oracle environments, which currently sits on a VMAX array taking up about an entire rack, to an AFF A800 that is 4U. From the power, cooling, and rack-space reductions alone, there have been savings.
I haven't seen ROI on it yet, but we're working on it.
We did RFIs with the different solutions. We were looking at NetApp, Isilon, and Nutanix. NetApp won out primarily around simplicity and ease of automation. The different deployment models, where you can deploy in the cloud or on-prem, speak to its simplicity. Our environment is very complex already, so anything that we can do to simplify it, we will take.
When you are evaluating solutions, you will be looking at things like cloud, automation, and simplicity, regardless of how big you are. The NetApp platform gives you all of these things in a single operating system, regardless of where you deploy.
The solution has freed us from worrying about storage as a limiting factor. I'm very confident that the NetApp platform will do what they say it's going to do. It's very reliable. I know that if there is an issue, I can quickly move that data wherever I need to move it with almost no downtime. It gives me a lot of data flexibility and mobility. In the event that I did need to move my workloads around, I can do that.
I would give it a nine out of 10. The only reason I wouldn't give it a 10 is that I would like to see some architectural changes. Other than that, its simplicity and the ability to automate are probably the two biggest things. Being able to move data in and out of the cloud, if and when we decide to do that, gives us the most flexibility of anything out there.
We do not use this solution for AI or machine learning applications.
We are talking about automatically tiering cold data to the cloud, but we are not doing it yet.
We have a pretty amazing story about using AFF. When I came into this organization, we had a 59% uptime ratio, and at the time we were looking at how to improve efficiency and how to bring good technology initiatives together to make a digital transformation happen. When the Affordable Care Act came out, it started mandating that a lot of health care organizations implement an electronic medical record system. Since health care has been behind the curve when it comes to technology, this was a major problem: the organization wanted to implement an electronic medical record system throughout its facilities, and we didn't have the technology in place.
One of my key initiatives at the time was to determine what we wanted to do as a whole organization. We wanted to focus on the digital transformation, and we needed to find some good business partners, so we selected NetApp. We were trying to create a better, more efficient process with very strong security practices as well. We selected an All-Flash FAS solution because we were starting to implement virtual desktop infrastructure with VMware.
We wanted to roll out zero clients throughout the whole organization for the physicians, which allowed them to do single sign-on. A physician would be able to go to one specific office, tap his badge, and sign in to the specific system from there. That floating profile would come over with him, and that created some great efficiencies. The security practices behind the ONTAP solution and the security that we were experiencing with NetApp were absolutely out of this world; I've been very impressed with it. One of the main reasons I started with NetApp was because they have a strong focus on health care initiatives. I was asked to sit on the neural network, a NetApp-facilitated health care advisory group that looked at the overall roadmap of NetApp. When you have a good business partner like NetApp, versus a vendor who's going to come in, sell me a solution, and just call me a year later to say they want us to sign something, the difference matters. I'm not looking for people like that; I'm looking for business partners. What I like to say is, "My success is your success, and your success is ours." That's really a critical point that NetApp has demonstrated.
Everyone looks at health care because health care has been an amazing industry to be in. We're seeing the transformation of how we're becoming a digital company; every organization is becoming a digital company, and we're starting to see the advancements of technology really come into place. Your new CEO is the patient, and that's the bottom line. That's my CEO. As an organization and as a technologist, I have to build a very strong patient-centric strategy that focuses the technology on the patient's needs, because at the end of the day, that patient could choose to go either to your organization or to another. We want to keep that loyalty and keep that patient in our organization, and we want to make sure that we are creating very strong, asynchronous tools that benefit a patient both inside and outside the organization. That's why I always say patient care is number one. AFF has supported our overall business initiatives.
Applications are a critical point. I think that All Flash FAS is an amazing thing when it comes to speed and efficiency in what it's doing, and we've been very impressed with it as well. We look at different initiatives, and we're starting to focus on initiatives around data analytics and data mining. Having that availability, and making sure that we can focus on those initiatives and strategies, we're very confident that the solutions we are choosing with NetApp are going to give us an edge moving forward into the future.
I think when you look at artificial intelligence and machine learning, you look at predictive analytics. You have to have a very strong data store in order to get clean data. With all the data that we're creating in this health care organization, we need to make sure that we can create well-structured data, which will allow us to mine that information and come out with real value, meaning better patient care, better ways to reduce readmission rates, and better ways to increase revenue. There are so many benefits to good, strong data mining that produces great analytic reports.
Right now we have a very strong cloud initiative. We are moving forward to the cloud because I think the future of health care, and of artificial intelligence improvements, lies in moving a lot of these health care organizations over to the cloud, where there is the data mining capability to bring in all these algorithms and all of this good collaboration, because collaboration is definitely key. We also need to start focusing more on interoperability, meaning that we're sharing information more successfully, because right now health care has no interoperability. Everyone talks about interoperability, but we don't have it. You go from one facility to another, and it's like you're getting completely different services. I want information to be shared from one facility to another, which I think is going to be a success, because today you come to one facility, you get poked for lab results, you get exposed to radiation for radiology results, and then you go to another organization that says it can't retrieve your lab or radiology results, so now they have to re-poke you and re-expose you to radiation. Those are problems.
Another one of my main focuses is on cybersecurity initiatives and cybersecurity improvements. I think NetApp has really focused a lot on cybersecurity. I was really impressed by some of the cybersecurity sessions that they had, because health care is one of the most attacked sectors out there, and we hear about health care organizations being ransomed all of the time. If we do get ransomed, we need to think about how we are going to restore that information and make sure that we have the capabilities in place. NetApp has done a great job with it. They place a huge priority on cybersecurity, so it's very important for them to continue to focus on those initiatives.
The user experience has been absolutely amazing. We're about 80% virtualized from a desktop standpoint, so we use VDI very heavily. With the All-Flash FAS solution, we needed to be sure there would be efficiencies and speed, because we're giving all of these health care users a virtual desktop on top of All-Flash FAS, and we need to make sure their workflow really moves in an efficient way. The health care industry is fast-paced; we're basically taking care of patients' lives. The technology that we bring has to be very efficient to provide the best patient care that we can, and NetApp All-Flash FAS has really proven that point.
Considering that NetApp has a health care focus and that really strong health care initiative, they really need to consider what to do next to improve data sharing and to make sure that the information we share with one another is fully encrypted, meeting HIPAA and HITECH regulations as well.
Stability has been pretty amazing as well. I came to an organization that had 59% uptime across the whole enterprise. That's a major problem, because when you start measuring downtime, that is lost revenue for the organization. Since I've implemented a lot of these new strategies, we have done a complete 180. We've implemented strong technology initiatives that have produced better business efficiencies, and we went from a 59% uptime to a 99.9% uptime ratio, which is absolutely mind-blowing. If you look at the before and after pictures, it will blow minds, because we've been able to do some amazing things. We're a three-time Most Wired winner, which is given to the top health care organizations making the most progress in health information technology. It's been an honor to have been able to build the very strong core team that I have and the good initiatives that we've had together, because I always say that we must leave our egos at home. Collaboration is definitely the key to digital transformation, and we need to come together to make a difference in the future.
Scalability, the improvements that we see with AFF, and the reliability have been critical elements. The technology that NetApp has is especially important from a disaster recovery standpoint, because we're a health care organization and any type of outage is considered revenue loss, so we really want to avoid those situations.
Tech support has been absolutely amazing. I think on the technical aspects as well, my staff is able to get great support from the NetApp technical support resources that we have. What I love about NetApp is they have a health care division. At times, it's such an amazing thing because if we have a healthcare-related issue, there's no one better than having prior CIOs from health care organizations that NetApp has hired, and that are part of the healthcare team, to help out with any of those initiatives and support problems. Support has been absolutely phenomenal.
We could definitely spin something up pretty quickly. It takes about ten minutes which is pretty quick. We have a very good team that does that as well.
The total cost of ownership has increased a little. When I build strong strategies to present to the board of directors and the executive teams, I look at two things: ROI and total cost of ownership. My overall goal is that I want to get out of the data center business. I know that TCO does increase when you have an on-prem solution, but moving forward with the cloud-based initiatives that we have, we're definitely going to start seeing a decrease in TCO because we won't have all of this inventory to take care of. We're being a lot more efficient and a lot more agile as well.
I am part of the NetApp A-Team, and I've been a huge advocate for NetApp. I would say that nothing is perfect, but NetApp is leading the way when it comes to digital transformation and digital efficiencies. Their focus on health care has been out of this world. I would give this product a nine, moving toward an almost perfect ten.
Our primary use case for the All Flash FAS that we have is pretty much everything. It is the go-to storage device that we use for block Fibre Channel devices on our heavy SAP workloads, as well as user-based files and file shares for databases.
AFF improves how our organization functions because of its speed. Reduction in batch times means that we're able to get better information out of SAP and into BW faster. Those kinds of things are a bit hard to put my finger on. Generally, when we start shrinking the times we need to do things, and we're doing them on a regular basis, it has a flow on impact that the rest of the business can enjoy. We also have more capacity to call on for things like stock take.
AFF is supporting new business because we've got the capacity to do more. In the past, with spinning disc and our older FAS units, we had plenty of disc capacity but not enough CPU horsepower in the controllers to drive it, and it was beginning to really hurt. With the All Flash FAS, we could see that there are oodles of power, not only from disc utilization figures on the actual storage backend but also from the CPU consumption of the storage controllers. When somebody says "we want to do this," it's not a problem. The job gets done and we don't have to do a thing. It's all good.
All Flash FAS has improved performance for our enterprise applications, data analytics, and VMs which are enterprise applications. It powers the VM fleet as well. It does provide some of our BW capabilities but that's more of an SAP HANA thing now. Everything runs off it, all of our critical databases also consume storage off of the All Flash FAS for VMs.
For us, TCO has definitely decreased; we pay less in data center fees. We also have the ability with FabricPool to actually save on our storage costs.
The most valuable feature is FabricPool. We are taking our cold data and pumping it straight into an object storage bucket. Efficiency is also valuable: we're getting upwards of two and a half times data efficiency through compaction, compression, and deduplication. Then there's the size. When we refreshed from two or three racks of spinning disc down to 5U of rack space, it not only saved us a whole heap of costs in our data center environment, but it's also nice to be green. The power savings alone equated to about 50 tons of CO2 a year that we no longer emit. It's a big game changer.
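To make the cold-data tiering concrete, here is a hedged sketch of setting a volume's tiering policy through the ONTAP REST API once its aggregate is a FabricPool; the UUID, policy values, and field names are illustrative assumptions rather than this reviewer's settings.

```python
# Hedged sketch: set a FabricPool tiering policy on a volume via the ONTAP REST API,
# so blocks that stay cold long enough are moved out to the object storage bucket.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical
vol_uuid = "<volume-uuid>"   # e.g., from GET /api/storage/volumes?name=sap_data

requests.patch(f"{CLUSTER}/api/storage/volumes/{vol_uuid}",
               auth=("admin", "password"), verify=False,
               json={"tiering": {"policy": "auto",           # tier cold user data and snapshots
                                 "min_cooling_days": 31}})   # days before a block counts as cold
```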
The user experience from my point of view, as the person who drives it most of the time, is a really good one. The toolsets are really easy to use and from the service offered we're able to offer non-disruptive upgrades. It just works and keeps going. It's hard to explain good things when we have so few bad things that actually occur within the environment. From a user's point of view, the file shares work, everyone's happy, and I'm happy because it's usually not storage that's causing the problem.
I would like them to develop the ability to detach the FabricPool. Once we've added it to an aggregate, it's there for life, and it would be nice to be able to disconnect it if we ever had to.
We have used the solution for one to three years.
Stability with AFF has been really great. We blew an SSD drive, which we thought might never actually happen, and it just kept on going. We've not had any issues with it, even though we went to a fairly recent release of ONTAP; it just works.
Scalability is a really cool part of the product in terms of growing, but we don't see that we'll actually need to do much of that. We'll take more advantage of FabricPool and push that data out to a lower tier of storage at AWS; our initial projections suggest that we've got a lot of very cold data that we're storing today.
For AFF tech support, we've had a couple of calls open and it has always been brilliant. I really like the chat feature, because one of the things that annoys me is the conference calls that usually come with contacting a hardware vendor. You get stuck on a WebEx or a conference call for hours on end, when it's just easier to chat with the tech at NetApp in real time. If he isn't able to help you, he'll pass you on to the next one and you stay in the chat, which means I can continue working while dealing with a problem.
We knew it was time to switch to this solution because it was costing us a fortune in maintenance, especially when our hardware was getting over the three to five year old mark. With spinning disc, it's not like we can neglect that because drives fail all the time and the previous iteration of storage we had was a NetApp FAS, so we've gone from NetApp to NetApp.
We implemented in-house. It was dead easy. All you have to do is throw it in the rack, plug in the network and fiber cables, give it a name, and away you go. There is very little that actually needs to happen to make it all work. I think we managed to get one of them up in two or three hours.
We also considered Dell EMC and Pure Storage. The biggest reason we picked NetApp was the ease of getting the data onto the next iteration, but also that the other vendors don't have a product that supports everything we needed, which is both file services and block services. It's a one-stop shop, and I didn't really want to have to manage another box and a storage device at the same time.
I would rate AFF a ten out of ten. If I was in a position to tell someone else about All Flash FAS and why they should get it, I would simply say just do it. I think everybody in the storage community is pressured to deliver more with less, and this product basically enables that to happen.
We have deployed NetApp AFF with four nodes; two of these are in our primary data center, and the remaining two are in the second data center. We are using Cluster Mode configurations.
Our organization has improved because this solution provides a Highly Available storage system with DR configurations, deployed across two data centers.
The features that I found most valuable are SnapMirror and SnapVault; these provide DR and backup for data redundancy. The High Availability and Cluster-mode Setup are also very useful.
I would like to see an improvement in the High Availability of the NFS and CIFS sharing during upgrade and patching; this would help to avoid downtime.
SnapMirror is one of the greatest inventions by NetApp. It is simple to set up and use. We currently have it installed across multiple data centres, where it is used for disaster recovery and for our virtual data center as a traditional datastore and vVols, and we now have the benefit of using StorageGRID to move cold data with auto-tiering.
VMware datastores over NFS for DL585 G7 hosts on a 10G switch.
NetApp FAS was unable to keep up with the I/O. The AFF A200 has performed without a problem.
Having separate storage virtual machines with completely different setups for NFS and Windows solves problems the FAS has when the domain controllers are unreachable.
The OnCommand System Manager web management is good, but it is easy to make bad configurations, and it takes a lot of jumping around to work a single issue.

Great review! Please also consider regular patching, especially patching that resolves security risks. The newly improved Active IQ can help provide this very important dashboard, along with analytics, alerts, etc.