- Reliability
- Rich features
- Ease of management
- Excellent support
A reliable and easily managed storage system is a key performance factor. The system also has more features than we require.
Naturally, there is room for improvement. As I see it, there could be more interfaces, more cache, and so on, but those shortcomings can be addressed simply by moving to another model.
Four years.
None whatsoever.
No issues, as expansion was a breeze.
We do use third party support. On a scale of one to 10, I would rate the support to be an eight.
Over the years, we have had quite a few storage solutions, none of which gave us the same level of performance, reliability, and manageability as the FAS series has.
The initial setup was quite easy and pleasing. Just enter some key values and there you go.
For a number of years now, purchasing a storage system has really meant purchasing software. There is no plain storage anymore, only more or less intelligent software solutions, so licenses are required to meet business demands. Anyone choosing between different storage systems should carefully investigate which software options come bundled in and which optional software they would actually need. Most storage vendors also offer software or licensing bundles, which may provide the required licenses considerably cheaper, but which may also include licenses that are not needed.
No other solutions were evaluated at the time. This system was already familiar to us and fulfilled the business demands.
You really can't go wrong with NetApp products. They perform well, are rock solid, offer good space-saving technologies, and the support is above par.
The performance allows me to provide backend storage for a large number of VMs and databases at a competitive price point.
Unified Monitoring v6.2 loses a bunch of functionality that previous versions had. For example, I took a cluster out of Unified Monitoring, but Storage Monitor was still alerting me about it. Version 6.2 is not as comprehensive, and it will only be useful once it does everything the previous versions did. Insight's price is simply too expensive and unreasonable.
It's pretty stable; even if it runs into something freaky, it keeps going. For example, a mysterious reboot happens and nobody notices. It keeps working.
It scales to a point, and then you buy more hardware. Doing a head swap (swapping out controllers) is not as easy as it used to be.
It's better than Oracle's support, and actually pretty good. They're responsive and help resolve situations. We have had a couple of issues, but 99% of the time they get me an answer. It may not be the answer I'd like, but it's a definite answer within a reasonable time frame.
It's complex, not a trivial task; we can't just unbox it and deploy. There are many unpublished tech tips that NetApp engineers get but customers don't (for example, how to save a disk).
The price-per-gig makes it the most expensive storage, more than EMC VMAX. So I’d like to see more aggressive pricing.
It's losing points on its value. The performance is nearly perfect, but it’s really expensive.
They should make it faster and cheaper, but it does what we need it to do.
Good overall. We've hit some bugs in the ONTAP code that have caused it to crash. We're just coming off of 7-Mode, and I'm looking forward to the capabilities of cDOT.
It's highly scalable, especially with CDOT. We can scale out quickly.
It's very good; generally, first-tier support is willing to help us or gets us to the right person pretty quickly.
It was complicated. We were coming off an IBM system five years ago. We got help with everything from cabling to terminology, and we had to relearn how we configured storage. We got help from both NetApp and our VAR.
Just do it. Chances are the functionality that comes with the ONTAP software will be better than other products at a similar price.
The integration with VMware is the most valuable feature for us because we run a lot of VMs and the backup is very good when you run your VM in NFS.
We had a case when they had to restore a lot of data. We went back one hour and got back everything. The restore itself only took about an hour.
Some of the tools, like NetApp OnCommand, could be improved. It has gotten a lot better recently, but they could make it faster.
We've been using it since 2005.
It's very stable and I’ve never experienced any problems in 10 years.
It scales to our needs.
8/10 - only because it is impossible to have a 10, as there is no one that good. We’ve had a good experience with their customer service.
Technical Support: The solutions on NetApp's website are usually enough, but when an issue is tough for me to resolve on my own, I go to our consultant.
We did a long time ago.
Initial setup was pretty straightforward. We started on a small scale and built it up.
We implemented it in-house, but we use a consulting company to help. Now, we run it on our own.
It fulfills the needs we have for storing data well. We had a lot of storage spread out over many devices from many vendors and now everything is consolidated. It saves a lot of time.
Ask other people who use it, as references are really valuable.
Snapshots, because so much of our end-user storage is on it, and our users often delete things they're not supposed to. Having snapshots to revert these deletions quickly and easily is very valuable.
Our greatest advantages with it are ease of use, flexibility, and reliability.
Knowing what's coming down the pipe, NetApp is headed in the right direction. Their five-year roadmap provides what I need it to do.
It's extraordinarily stable. We had one outage a year and a half ago when batteries were bad, but that was a known defect on that particular model, so it was partly our fault, since we knew it was an issue. We've had two outages in 10 years due to something other than operator error.
Incredibly scalable. We're not even touching what it could do: between scale-up and scale-out, we're not even close to reaching its highest potential. We have a four-node NAS with the potential for 24 nodes.
It's fantastic.
Once you’ve done one, it seems very intuitive. However, the first time seems very complicated.
Of all storage technologies I work on, it’s the easiest to learn and one of the most powerful. But you need to spend your time taking classes before digging in too deep. Get educated.
NAS functions, as it's primarily used for all our file shares. We have other NAS devices, but this is easier.
Also, High Availability is a valuable feature.
Snapshots are good, especially SnapMirror, which we use for disaster recovery and backups. Also, we have a lot of data centers (seven primary centers) and we deploy at each of them.
I miss their old support structure. We used to be able to call up and get an answer pretty quickly, but now it’s more arduous.
It could be cleaner for dedupe, and I wish we could do dedupe for the entire system and not just a specific volume.
It's highly reliable, but has had the occasional bug. We install patches or shut off features.
Depends on how you're scaling. If you scale wide, it works well; vertical scaling, not so well, because we're primarily SMB. No matter how brief, people don't like being offline (e.g., baby monitors).
I've worked with them for over 10 years. They used to be stellar, but in the last three to five years, not as reliable. The quality of information you get from them is less specialized, and they've now broken support up so that you get routed to a particular technology; it used to be one senior guy who knew everything.
There’s always networking issues, but not related to NetApp.
Other than tech support, it loses points because it could always be better.
It depends on what you're implementing. Consider carefully what you want to do. For example, provision enough VLANs, because you don't want to be adding more later.
I think the most valuable features are the flexibility with volumes, resizing, and performance.
I think that our performance has definitely increased.
I think they are upgrading the performance monitoring tool, which is the main thing that needs improvement. It changes from version to version, and you want to see things improve; I think we will continue to see more and more benefits.
We have been using it since 2013.
Pretty solid in terms of stability.
We haven't really grown it, but I see a roadmap; the only problem there may be cost. It's not an expensive product per se, but budgets are an issue, and people sometimes don't evaluate the cost correctly.
NetApp overall has been really good in terms of technical support.
Initial setup was hard a year ago, but now we just did another setup and everything was smooth. It’s gotten a lot better in the last year we’ve been using it.
If you are on the fence: it's been a very good product. You don't want to build your own solution; you want to use the appliance for the flexibility. Overall performance has gotten a lot better.
More information on VVOLs is being released every week, and it is only now that we are getting a chance to play with the full release code that we are able to dig into the details of how it works. Let's start by exploring the benefits of VVOLs that are likely to make it game-changing technology:
Granular Control of VMs
Enhanced Efficiency and Performance
Automated Policy Based Management
To get VVOLs up and running you need cDOT 8.2.3 or above, Virtual Storage Console 6.0 and VASA Provider 6.0 – for more background information see A deeper look into NetApp’s support for VMware Virtual Volumes.
The On-Demand engine
One of the best-kept secrets of cDOT 8.3 was the inclusion of the On-Demand engine, which consists of a set of new data-movement commands.
When a command is triggered, data access at the destination begins immediately, while in the background the data is copied or moved from source to destination. The commands cannot be invoked directly; rather, other operations take advantage of them (e.g. VVOLs and LUN moves). So when the policy of a VVOL is changed such that it needs to be moved from one volume to another (even across controllers), the On-Demand engine non-disruptively and instantly moves data access from the source to the destination. All writes go to the new destination and, while the data is being copied from the source, reads are redirected back to the original volume as required. If a VVOL is migrated elsewhere in the cluster, a rebind operation automatically changes the I/O path to the new closest PE, maintaining optimum performance and reducing complexity and latency.
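The redirect behaviour described above can be sketched as a toy model (hypothetical Python, illustrative names only, not NetApp's actual implementation): writes land on the destination immediately, while reads fall back to the source for any block the background copy has not yet transferred.

```python
class OnDemandMove:
    """Toy model of instant cutover with background data copy."""

    def __init__(self, source):
        self.source = source      # block_id -> data at the old location
        self.dest = {}            # blocks already present at the new location
        self.copied = set()       # blocks the background copier has finished

    def write(self, block_id, data):
        # All new writes go straight to the destination.
        self.dest[block_id] = data
        self.copied.add(block_id)  # no need to copy a block we just overwrote

    def read(self, block_id):
        # Serve from the destination if the block is there;
        # otherwise redirect the read back to the source volume.
        if block_id in self.dest:
            return self.dest[block_id]
        return self.source[block_id]

    def background_copy_step(self, block_id):
        # One step of the background copier: pull a not-yet-copied block over.
        if block_id not in self.copied:
            self.dest[block_id] = self.source[block_id]
            self.copied.add(block_id)


move = OnDemandMove(source={1: "a", 2: "b"})
move.write(2, "b2")            # write goes to the destination
assert move.read(1) == "a"     # read redirected to the source
assert move.read(2) == "b2"    # read served from the destination
move.background_copy_step(1)
assert move.read(1) == "a"     # now served locally at the destination
```

The point of the design is that the cutover is instant: clients switch to the destination before a single block has been copied, and the copy completes at leisure behind the scenes.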
Not all VVOLs implementations will be equal
The interesting thing about VVOLs is that not all implementations will be equal, as VVOLs puts more responsibility on the array by moving many storage operations to it that were previously handled by vSphere. You therefore need an array that provides efficient:
Snapshots
Clones
Thin provisioning
Data movement
The current snapshot technology in VMFS is, to say the least, very poor. Best practice is to have no more than 2-3 snapshots in a chain (even though the maximum is 32) and to keep no single snapshot for more than 24-72 hours. The reason is simple: storage performance suffers when you create a snapshot on a VM. So if an array supports VVOLs and we can off-load snapshot and clone creation to the array, then surely we have solved the problem and can keep hundreds of snapshots. As always, it is not so simple: if the array uses inefficient CoW snapshots, then you will not gain much over standard vSphere snapshots. Thin provisioning is another area where some arrays do it very efficiently, but many suffer a significant performance drop unless thick LUNs are used.
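To see why CoW snapshots cost more than redirect-on-write ones, here is a minimal sketch that simply counts physical writes per logical update (illustrative only, not any vendor's actual on-disk format):

```python
def cow_update(io_counter, active, snap_area, block_id, data):
    # Copy-on-write: the first write to a snapshotted block must first copy
    # the old contents into the snapshot area (an extra write), then overwrite.
    if block_id in active and block_id not in snap_area:
        snap_area[block_id] = active[block_id]
        io_counter["writes"] += 1          # the extra preservation copy
    active[block_id] = data
    io_counter["writes"] += 1              # the actual overwrite


def row_update(io_counter, active, block_id, data):
    # Redirect-on-write: new data goes to a fresh block and the active map
    # is repointed; the snapshot keeps referencing the untouched old block.
    active[block_id] = data                # conceptually a new physical block
    io_counter["writes"] += 1              # just the one write


cow_io = {"writes": 0}
active, snap = {1: "old"}, {}
cow_update(cow_io, active, snap, 1, "new")
assert cow_io["writes"] == 2               # preservation copy + overwrite

row_io = {"writes": 0}
row_update(row_io, {1: "old"}, 1, "new")
assert row_io["writes"] == 1               # no copy of the old data needed
```

The doubling shown for CoW applies to the first write to every snapshotted block, which is why performance degrades as snapshot chains grow, whereas a redirect-on-write design pays no per-write copy penalty no matter how many snapshots exist.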
The nice thing about FAS is that it has excelled at the first three points above for many years, and the last point has been introduced with the On-Demand engine in cDOT 8.3. There are plenty of arrays on the market that will be enabled for VVOLs, but they will not be able to claim efficient support for these features without massive re-engineering work.
Other points of note
It is essential to back up the VASA Provider VM; this can be achieved using the in-built backup capabilities of the array.
NetApp All-Flash FAS has emerged as the first storage array to successfully complete validation testing with Horizon View 6 with VVols.
The VADP APIs that backup vendors use are fully supported on VVOLs; therefore, backup software using VADP should be unaffected.
For a detailed breakdown of vSphere product and feature interoperability with VVOLs click here
Get hands on with VVOLs on FAS
If you would like to gain a detailed understanding of how the technology works, we have created, in conjunction with VMware and NetApp, a series of demo café events. To find out more, click here.
VVOLs is certainly interesting technology, and I am sure what we have today is only the beginning of the journey; it is going to be interesting to see how it develops over the coming years. We know for sure that NetApp will be making improvements to cDOT to enable things like replication to be set at the VVOL level.
What do you think – is VVOLs as game changing as VMware thinks?
Do they support SMB 3, NFS 4, and object-based storage? Is there tiering?