What is our primary use case?
Our primary use case is DR and backup.
The performance has been pretty good. We have installed three of the TSM large Blueprints in the past couple of years, and we are continuing to scale them. It is all disk replication, and we are working on eliminating our tape. We have three tape libraries across three data centers, and we are continuing to reduce the reliance on tapes as we move more things into compression, deduplication, and container pools.
From a Spectrum Protect perspective, AIX has been largely our footprint for a long time with TSM, and those are the large Blueprints that we purchased, the 824s with 256GB of memory. We do F900 for the database, and a V5030 large petabyte back-end on each of those. We have gone into the new Blueprints with a Linux OS. That is a change in direction from our management team which has spun us in a little bit of a different direction than the standard stuff we have done, which is fine. Some of our new Blueprints have been built on Linux, and they are more of a medium scale.
If we back out and we start floating up to a 10,000-foot view of our data centers, we have 4.5 petabytes of SVC Spectrum Virtualize. We have been using it for about 14 years and have been very successful with it. We use Easy Tier with a good healthy mix of flash, in the neighborhood of 400 terabytes. Spinners, 10K drives, and 15K drives are all but gone in our data center at this point. As far as server OS, we are an AIX pSeries shop for our big iron, and VMware is our hypervisor of choice for x86 virtualization across UCS and Dell hardware.
It is used in two data centers in northwest Arkansas, and looked at as a single data center. We own our own dark fiber between the two. We do a stretch cluster topology across a couple of different clusters in that environment, and support everything with VDisk mirroring between the two.
How has it helped my organization?
The storage admins: If we are standardizing on a Blueprint spec, and it has been blessed by IBM, it helps the supportability of everything. We are not "cowboying" things. It is, "This is the spec, it will perform to this standard," and we know what our expectations are going into it.
We have been fortunate to roll out a few of the Blueprints, and they have been successful, so supportability is one of the benefits.
What is most valuable?
Reliability: Our tape library has aged. It is an old 3584. We have had it for many years, probably 10-plus years. It is problematic at times. We have a single robot, so it gives us the ability to get away from that mechanical robot and just do disk-based replication and backup.
Performance and recoveries are better. Our clients and customers are happier with the performance of it. They can just spin something up, take off, and they don't have to say, “Hey, this tape is busted or this tape is marked write-protected.” Operators kick tapes across the floor in the data center every now and then. Now, I don't have to worry about that.
What needs improvement?
Right now, I can't say exactly what the feature would be, but it would be cloud-based. Our new leadership has pushed us to go more toward the cloud, so we are definitely going to leverage anything we can in that direction (public cloud). I imagine that this is a pretty common piece of feedback for this question.
We're just not sure what the real-world results will be when we get there. That is the big question mark. Ideally, I would want to spin up a host on-premises with Spectrum Protect on it and have no storage. I would have my database running locally on flash, and all the pools would be remote. However, I don't think this is realistic from a performance perspective, dumping all my data in the data center over a 40-millisecond hop to the nearest region of whatever public cloud is available.
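To make the 40-millisecond concern concrete, here is a purely illustrative back-of-envelope sketch (the object size and link speed are my assumptions, not figures from the review) showing why WAN round-trip time, not just bandwidth, limits remote storage pools:

```python
# Illustrative latency math for a remote cloud storage pool.
# Assumptions (hypothetical): 40 ms round-trip time, one synchronous
# request per object, 10 MB objects, a 10 Gbit/s link.
rtt_seconds = 0.040

# If each operation waits a full round trip before the next can start,
# latency alone caps serialized operations, regardless of bandwidth.
max_sync_ops_per_second = 1 / rtt_seconds  # 25 ops/s

# Per-object time is wire-transfer time plus the round trip.
object_bytes = 10 * 10**6              # 10 MB object
link_bytes_per_second = 10 * 10**9 / 8  # 10 Gbit/s link
per_object_seconds = object_bytes / link_bytes_per_second + rtt_seconds

print(f"{max_sync_ops_per_second:.0f} synchronous ops/s ceiling")
print(f"{per_object_seconds * 1000:.0f} ms per 10 MB object")  # ~48 ms
```

Under these assumptions, latency adds 40 ms to every synchronous operation, which is why bulk replication tolerates the hop far better than small, chatty database I/O would.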
There are definitely some things in that area that we can address and work toward, but I don't know if they are achievable.
What do I think about the stability of the solution?
We have been happy with the stability. We have been going through a lot of upgrades. There are cycles of upgrades which have been a lot more frequent lately, because we are in a new compression world with the container pools, and we have hit a few APARs working with support.
Regarding the upgrades, it has been alright, though a little bit of a challenge. However, the guys on the team have been well supported by IBM in this endeavor, so that has been nice.
What do I think about the scalability of the solution?
For a large Blueprint, the advertised figures are 80 terabytes of ingest a day, plus the replication pieces, the scalability of your database capacity, etc. We have not quite topped out any of those maximums yet, but we have hit our maximum of what the server will do. So we have started scaling horizontally across a couple of other environments.
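For context, the advertised 80 TB/day figure can be translated into a sustained throughput with simple arithmetic (a rough sketch; it assumes decimal terabytes and a flat 24-hour ingest, which real backup windows rarely are):

```python
# Back-of-envelope: what sustained rate does 80 TB/day of ingest imply?
# Assumptions: decimal TB (10**12 bytes) and ingest spread evenly over 24h.
TB = 10**12
SECONDS_PER_DAY = 24 * 60 * 60

daily_ingest_bytes = 80 * TB
bytes_per_second = daily_ingest_bytes / SECONDS_PER_DAY
gigabits_per_second = bytes_per_second * 8 / 10**9

print(f"{bytes_per_second / 10**9:.2f} GB/s sustained")  # ~0.93 GB/s
print(f"{gigabits_per_second:.1f} Gbit/s sustained")     # ~7.4 Gbit/s
```

In practice ingest is bursty and compressed/deduplicated in flight, so the real demand on the back end varies by workload, which is exactly why the "it depends" answer below keeps coming up.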
As far as how it scales, it maybe didn't quite meet what we thought, but everybody is different. Everybody's shop is different. Our Oracle Databases, our workloads, Exchange Servers, etc. are going to be different than others'. We understand that, which is why you get the “it depends” answer from everybody when you talk to them about how much it can do.
I want to see it work, touch it, feel it, and PoC it, then we can know how it works for us. Everybody is different. All shops are different, even though they run a lot of the same gear.
How is customer service and technical support?
Technical support is getting better. We have given them some good feedback, and they are listening, which is nice. The guys on the team can see that. Overall, there are still areas for them to improve in, as they still have quite a bit of work to do, but they're making steps in the right direction.
This would be a good question for my team to answer. I just see the PMRs bounce back and forth. We have a local rep who is really good about helping quarterback these things and get attention where it is needed, especially if it is a high-priority deal for us.
Overall, I am satisfied with the support. There are definitely places where things could be better, but that is the same with everybody. Nobody is perfect.
How was the initial setup?
Setup was pretty good. With the Blueprint design that they put out, you do the runbook, and run through it. There are a few "gotchas", but overall it was pretty straightforward. We like having a standard setup.
We know with this pool of resources, if you dump it somewhere, it will be protected offsite. That is the mode we want to be in, rather than having to go back and double-check. We do not want to say, “This one thing happened, it should have gone here, and it didn't.” We are trying to get more generic with these services.
What's my experience with pricing, setup cost, and licensing?
When we migrated to this new disk-based footprint, we did a reclassification. We worked our licensing into a capacity model, which is nice. Now, everything is much simpler to manage from a licensing perspective.
Which other solutions did I evaluate?
The evolution off of tape is one that we have looked to do for a long time. We've had VTLs pitched that were never viable solutions. We took knocks for passing on those back then, for cost or other reasons, but it has paid off that we didn't invest in one of those platforms that we would now have to get off of. We kept tape around for a long time.
Yet, now, we can move straight to a disk-based solution with the container pools, which is working well, and we are looking into cloud.
What other advice do I have?
Overall, we are satisfied with it. Our primary tape library was running 72 tape drives, and that solution was busy 24/7. Now, in the past six months since we began this journey with the Blueprints, we are down to about half that: roughly 36 drives in use on a 24-hour basis, which is good. We are going to continue to hopefully reduce this and, eventually, get rid of the hardware.
We have taken a lot of knocks over using TSM. Our customers suggest reasons why it does not work, but it does. It is just not an out-of-the-box solution, and our customers struggle with that at times. For example, Microsoft SQL: the guys on that team pushed back against TSM over the years and finally won, moving to native database dumps and getting away from the platform. However, they have come to find out that the new bed they've made has its own problems.
Overall, it is a well-rounded solution. It provides anything any enterprise would need. We use it, and I would hang my hat on it. I'll even stand by tape for a lot of things. I've done numerous disaster recovery exercises in my career and done them successfully off of tape.
When you need the data, it is there. It is reliable. It is a tool which works. I think people expect easy, and it may not be easy. However, that is what we get paid to do as admins.
This solution is a viable candidate. It depends on your environment. As a platform, we have ridden on it for many years.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Spectrum Protect Plus 10.1.4 is GA as of August 2019. Cloud options exist, including PaaS feeds to SPP and Amazon BYOL for SPP. We support many applications without agents or added complexity. OpenShift and Red Hat will become significant partners following the acquisition, as part of the multicloud strategy.