it_user784038 - PeerSpot reviewer
IT Architect
Real User
We integrated it once and can use it for several technologies: Hadoop, Ceph, and more
Pros and Cons
  • "It's pretty flexible. You can choose how much storage you put on the server. You can have one to three nodes, depending on whether you want more CPU or storage."
  • "we can use the same platform for several use cases: Hadoop, Ceph, and we are considering the server for another use case right now. It's a single solution, we only have to integrate it once and we can use it for several technologies."
  • "There is a shared battery for all cache controllers in the node. When you have to replace that element, you have to take down all three nodes and not just one."

What is our primary use case?

We're using it for big data and storage servers: mostly Hadoop and Elasticsearch for big data, and Ceph storage for our OpenStack private cloud.

The Apollo is performing fairly well. We've run into minor issues, but overall it does the job and we feel it's a good product for the money. 

How has it helped my organization?

It's allowed us to benefit from IP-based storage instead of using only Fibre Channel SAN storage. Also, I don't think we could have afforded that quantity of storage in a SAN array.

What is most valuable?

It's pretty flexible. You can choose how much storage you put on the server. You can have one to three nodes, depending on whether you want more CPU or storage. And we can use the same platform for several use cases: Hadoop, Ceph, and we are considering the server for another use case right now. It's a single solution; we only have to integrate it once and we can use it for several technologies.
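
To make the "integrate once, use for several technologies" point concrete, here is a minimal sketch of how an application would talk to a Ceph cluster like this one from Python, using the standard rados bindings; the pool name and configuration path are illustrative assumptions, not details from this deployment.

```python
# Minimal sketch: store and fetch an object in a Ceph pool via the
# official "rados" Python bindings. The conffile path and the pool
# name ("data") are illustrative assumptions.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')      # I/O context on one pool
    try:
        ioctx.write_full('demo-object', b'stored over plain IP, no FC SAN')
        print(ioctx.read('demo-object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```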

What needs improvement?

The nodes should be truly independent. A rack chassis can contain three different servers, and I want to make sure that when a component fails, I don't have to take down all three nodes. This is especially true as we usually have replication between these nodes. It would be a great asset to be able to contain the downtime to one of the nodes.


For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

It's pretty stable. We've only had very minor issues with it. No major downtime. 

The only issue we've really run into so far is that there is a shared battery for all cache controllers in the node. When you have to replace that element, you have to take down all three nodes, not just one. That's something of a design flaw, but it's the only real issue we've had so far.

How are customer service and support?

Yes, we've called tech support, mostly for hardware faults.

What other advice do I have?

When selecting a vendor, the most important criteria include:

  • overall trust in the company
  • the financial side, of course, the price of the hardware 
  • the quality of the support we can expect.

I rate it at eight out of 10. As I said, true independence between the nodes would be an improvement. At least make sure that the nodes aren't dependent on each other. Also, we had a few difficulties integrating it at first, so I'll stay with an eight.

Test the solution and do a proof of concept until it works with your own integration procedures, the way you install systems, that kind of thing.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
SeniorAc7315 - PeerSpot reviewer
Senior Account Manager
Real User
Certified for use with Linux, it enables us to easily implement software-defined solutions
Pros and Cons
  • "It enables us to implement software defined solutions very easily, because Apollo servers are certified for use with Linux systems"
  • "Apollo Systems provide stuff that standard services do not. More HTDs, more compute power, at very reasonable pricing."
  • "We would like to see improved cooling because that is quite an issue. If you put that much compute power into a single rack, cooling really becomes an issue. And there is room for improvement there."

What is our primary use case?

We primarily use it for high-performance computing. Our customers really do like it because of the density they can achieve in the racks. Apollo provides so much compute power and storage as well.

It's performing extremely well.

How has it helped my organization?

It enables us to implement software-defined solutions very easily, because Apollo servers are certified for use with Linux systems, which is really a big thing for us.

What is most valuable?

High compute density and high storage density at a reasonable cost

What needs improvement?

Obviously I would like to see the cost go down. That speaks for itself. 

We would like to see improved cooling because that is quite an issue. If you put that much compute power into a single rack, cooling really becomes an issue. And there is room for improvement there.

What do I think about the stability of the solution?

Extremely reliable. We've been using it for three years now, and it's been in production without any downtime yet.

What do I think about the scalability of the solution?

If you use software-defined storage, for instance, scalability is just great.

How are customer service and technical support?

We have not used HPE support. We have our own engineers, so we're really proficient enough. And it's really easy to use, so it's not a big deal.

Which solution did I use previously and why did I switch?

We actually had a business case. We were looking to address it with standard IT storage solutions, but they were way too pricey for us. So we figured we needed a way to use standard servers, to make the most of them, and we came across Apollo Systems. Apollo Systems provide things that standard servers do not: more HDDs, more compute power, at very reasonable pricing.

How was the initial setup?

It was straightforward.

Which other solutions did I evaluate?

We do look to Super Micro whenever price is king. But if we are looking for reliability, then HPE is the way to go.

What other advice do I have?

Our most important criterion when selecting a vendor is reliability. We need a vendor to be there for us, even when the product is already three or four years old. That's a big thing for us.

I give it an eight out of 10. It does what we expect it to do. As I said, cooling is still an issue; you really have to keep that in mind if you implement the solution. But aside from that, we're really happy with it.

Talk to a partner who has implemented a solution with HPE Apollo, talk to customers who have actually used it in the field. It's really simple to do.

Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.
PeerSpot user
it_user784059 - PeerSpot reviewer
Data Center Manager at Maples And Calder
Real User
Helped me address a need for DPM, to back up to a specific location in my datacenters
Pros and Cons
  • "It's very reliable. I haven't had a single failure at all in the year and a half; not the slightest problem with it."
  • "One drawback which I had: When I needed to expand storage on the Apollo, I had significant problems getting disks for it. It was a very long wait-time. So, if I were to give any advice in regards to improving this product, I would say make more of the 8TB disks available quicker."

What is our primary use case?

I specifically purchased it to address a need I have for DPM. I needed DPM to back up to a specific location in both of the datacenters I have in Ireland. I just needed a big lump of slow storage to hold 30 days of disk backups before they were offloaded to tape. In that sense, it ticked all the boxes and it's been working fine for that.
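
For illustration only: the staging pattern described here, 30 days on slow disk and then offload to tape, boils down to a simple retention check. The sketch below is hypothetical date math with made-up recovery-point names, not DPM's actual API.

```python
# Hypothetical sketch of the retention split described above: keep 30
# days of backups on the slow disk tier, flag older recovery points
# for offload to tape. Illustrative only; not DPM's actual API.
from datetime import datetime, timedelta

DISK_RETENTION = timedelta(days=30)

def due_for_tape(recovery_points, now):
    """Return recovery points older than the disk retention window."""
    return [rp for rp in recovery_points
            if now - rp["created"] > DISK_RETENTION]

points = [
    {"name": "fileserver-2017-11-01", "created": datetime(2017, 11, 1)},
    {"name": "fileserver-2017-12-05", "created": datetime(2017, 12, 5)},
]
for rp in due_for_tape(points, now=datetime(2017, 12, 10)):
    print("offload to tape:", rp["name"])
```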

Now I'm moving on to Veeam and StoreOnce, but I'm going to repurpose the Apollos after this. I don't know what I'm going to use them for yet, because DPM is gone.

What is most valuable?

It's really very clever the way it manages to hide the disks away. This idea of pulling out the little trays, I just think that's really, really clever. It's very reliable. I haven't had a single failure at all in the year and a half; not the slightest problem with it. It's been a pretty good product so far.

What needs improvement?

One drawback which I had: When I needed to expand storage on the Apollo, I had significant problems getting disks for it. It was a very long wait time. So, if I were to give any advice regarding improving this product, I would say make more of the 8TB disks available more quickly. I ended up having a few issues because I ran out of space. There was a huge lead time while I waited for new disks to arrive here. It left me a bit exposed for a while.

But that's the only criticism. Other than that, I think it's a great product. It's really good, really reliable, and very cleverly designed. I can't think of a better way they could pack more disks into such a small space, so all around it's a good product.

For how long have I used the solution?

One to three years.

How are customer service and technical support?

If I get through to the right person, support is very, very good. If I don't get through to the right person, it can be irritating and cumbersome. So for me, the key is getting straight through to the person who is going to be able to help. I don't ring up for Mickey Mouse things; I only ring up when I need something substantial. I try my best to automate as much of the call logging as I can, because I have a lot of calls and it's much easier for me to do that online.

So that element generally works quite well, and generally I like the way it works. If I get a call logged online, it usually goes through to the right person, and I usually get a call back. I get actions done pretty quickly on that.

If, however, for whatever reason I have to ring up, I might get through to the wrong section. I've had some hit-and-miss affairs that have just irritated me. But when I do get through to the right person, I've found they're very good, generally speaking.

How was the initial setup?

The Apollo was very straightforward. Some of my other products, my 3PARs and so on, were a lot more complex. But the Apollo was nice and easy.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user784011 - PeerSpot reviewer
Network End Data Center Architect at a tech services company with 1,001-5,000 employees
Real User
A compact system with a powerful CPU and powerful hard drives, perfect for our branches
Pros and Cons
  • "We usually use three blades for two-rack units, and with enough storage, it's really a small system with a powerful CPU, powerful hard drives, powerful disks."
  • "We would like to see SimpliVity on top of the Apollo."

What is our primary use case?

We use the Apollo system for most of our branch offices. Our roadmap is to implement Apollo in all our branch offices by the end of 2018. So we will have something like 50 branch offices with Apollo.

We performed a PoC. We were very happy with it, so we decided to implement it in all the branches.

What is most valuable?

It's a compact system. We usually use three blades in two rack units, and with enough storage it's really a small system with a powerful CPU and powerful hard drives. So it provides enough performance in terms of storage. And we are also very happy with the internal network. So, for the branches, for us, it's perfect.

How has it helped my organization?

The benefit is, as I said, that we are consolidating everything. In the past, we used a StorageWorks P2000, plus SAN switches, plus three or four servers and so on. Now, we have two rack units for everything.

For a branch it's perfect because it's simplifying our life.

What needs improvement?

We would like to see SimpliVity on top of the Apollo.

What do I think about the stability of the solution?

Touch wood, it's been perfect so far. Nothing to complain about.

What do I think about the scalability of the solution?

We are not using it in that manner; we are not using it for scalability. One Apollo for each branch is the perfect size for us, so we are not thinking about scalability.

How are customer service and technical support?

As usual with HPE, we are very happy with the support. Honestly, we've only used it once for the Apollo system, but all our kit is HPE, so we use their support often, and we haven't noticed any difference between Apollo and the C7000 or DL servers. It's in line with standard HPE support and we are happy with that.

Which solution did I use previously and why did I switch?

We have a strong relationship with HPE. So HPE was proactive in proposing this solution. We had a PoC, as I said, and we were happy with it and decided to implement it. It satisfies all our needs and is the perfect solution.

How was the initial setup?

It was straightforward.

We always have an HPE engineer on site, close to us. But we prefer to do this kind of setup ourselves, at least the first time, to get our hands on the device itself. So 95% of the setup was done without the support of this engineer, and maybe 5%, for optimization, with his support.

What other advice do I have?

Our most important criteria when selecting a vendor include, of course, the experience of the technicians, then the support. With HPE, as I said, we have a strong relationship, so there is a priority channel for HPE versus other vendors. We always perform a PoC and compare the vendors, but we were happy with HPE, so we have no reason to change right now.

I rate it eight out of 10 right now. It will be a 10 when SimpliVity is available on top of it.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user683202 - PeerSpot reviewer
Professor at a university with 5,001-10,000 employees
Real User
Enables us to do the world's leading superhuman AI research.
Pros and Cons
  • "It's going to meet our needs moving forward, it is scalable."
  • "Lustre seems to be just a little bit unstable overall."

How has it helped my organization?

We have been working with the Pittsburgh Supercomputing Center for around ten years. They pick the hardware, and they had picked this hybrid system, which has several different kinds of components. We had worked with them for a long time and knew that they pick state-of-the-art equipment, so that's why we selected this solution.

What is most valuable?

It's very hard for a professor to amass supercomputing resources, so I've been very fortunate to have that level of supercomputing at our disposal, and that has really enabled us to do the world's leading superhuman AI research. That is what we did: we actually beat the best heads-up no-limit Texas hold'em human players in the world this January. So we're at a superhuman level in strategic reasoning.

What needs improvement?

One thing that we are looking for is better stability of the Lustre file system; it could be improved. I have heard that they are coming out with better memory bandwidth, so that's good; or maybe it's already there in Gen10.

Beyond that, of course, there is always a need for more CPUs, more storage, and all of that.

What do I think about the stability of the solution?

It has been fairly reliable. Not in the beginning, of course, but we were a beta customer, so at the start there was literally nothing in the racks. We've been with it from the beginning, and of course it was less stable early on. However, it became more stable over time.

If there's anything that hasn't been that stable, it is the Lustre file system. I would say they have made some improvements with that, but this is not just a problem with Bridges. We have computed at other supercomputing centers in the past, like the San Diego Supercomputing Center, and Lustre seems to be just a little bit unstable overall.

What do I think about the scalability of the solution?

It's going to meet our needs moving forward, it is scalable. Having said that, our algorithms are very compute-hungry and storage-hungry, so more is more and there's no limit as to how much our algorithms can use. The more compute and the more storage they have, the better they will perform.

How are customer service and technical support?

I would credit the Pittsburgh Supercomputing Center (PSC) support; they gave us the support, and it has been awesome. We don't contact HPE directly; they contact HPE if needed.

How was the initial setup?

The PSC installed everything, i.e., both hardware and software. So we didn't do any of that; from our perspective, it has been easy to use.

What other advice do I have?

Whilst looking for a vendor, we do not look at the brand name at all. Instead, what we look for is just reliability and raw horsepower.

It has been great. The Pittsburgh Supercomputing Center guys have been great in supporting us very quickly, sometimes even at night or on weekends. I've been very fortunate as a professor to get this level of supercomputing, so we've been able to do the world's leading research in this area. The only things that I would improve are the ones I have mentioned before, i.e., the Lustre file system and maybe the memory access from the CPU.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
it_user680184 - PeerSpot reviewer
Senior Director of Research at PSC
Consultant
Has the flexibility to run dual-CPU nodes or add GPUs to other nodes.
Pros and Cons
  • "Absolutely being able to mount into Omni-Path architecture, HFIs on those nodes, because we were the very first site in the world"
  • "What's coming out in Gen 10 is very strong in terms of additional security."

How has it helped my organization?

A primary benefit is high reliability. They have very good price/performance and configuration options. Being able to configure them in different ways, for different node types, was something we needed.

What is most valuable?

In referring to the Apollos, what we liked about them was:

  • A combination of the density
  • The flexibility to run dual-CPU nodes or add GPUs to other nodes
  • Being able to mount Omni-Path Architecture HFIs on those nodes; we were the very first site in the world
  • Being able to connect those in large quantities
  • In Bridges, we have 800 Apollo 2000 nodes, and they have been running extremely well for us

What needs improvement?

I think it's on a good track. What's coming out in Gen 10 is very strong in terms of additional security. Overall, I think those are well architected. They're a very flexible form factor for scale-out. Assuming ongoing support for the latest generation CPUs and accelerators, that will be something we'll keep following for the foreseeable future.

In Bridges we combine the different node types to create a heterogeneous, ideal system. Rather than wishing we had more features in a given node type, we integrate different types. We choose different products from the spectrum of HPE offerings to serve those needs optimally, rather than trying to push any given node in a direction it doesn't belong.

What do I think about the stability of the solution?

Stability has been extremely good. Jobs run for days to many weeks at a time. We recently supported a campaign for a research group in Oklahoma who were forecasting severe storms; they did this for 34 days, running on 205 nodes.

The example we're featuring was a breakthrough in artificial intelligence, where an AI beat the world's best poker players for the first time. For that one, we ran 20 days continuously on 600 Apollo nodes, and of course the nodes had to stay up because players were playing the games throughout. That was just as seamless, and it was a resounding victory. I think that's the strongest win through the Apollos in our system so far.

What do I think about the scalability of the solution?

Scalability for us is limited only by budget. Using Omni-Path, we can scale our topology out with great flexibility, and scaling out workloads across the Apollos has been seamless. We're running various protocols across them: a lot of MPI, and Spark workloads as well. So scalability has been limited only by the size of our system.
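
As a generic illustration of the MPI scale-out described here (a sketch with assumed job sizes, not PSC's actual code), an mpi4py program splits work across ranks and combines the results over the interconnect:

```python
# Generic sketch of an MPI job using mpi4py (not PSC's production
# code). Each rank computes a partial sum; allreduce combines the
# partial results across all nodes over the fabric.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index within the job
size = comm.Get_size()   # total number of ranks in the job

# Each rank handles an equal slice of the work.
local_sum = sum(range(rank * 1_000_000, (rank + 1) * 1_000_000))
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks computed total {total}")
```

Launched with something like mpirun -n 600 python job.py, the same script spans however many nodes the scheduler grants, which is the sense in which scalability is limited only by the size of the system.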

How are customer service and technical support?

We have an arrangement with HPE technical support, and our system does call on them on occasion, but stability has been very high. Over the year and four months that we've been running Bridges, I think we have had under 70 calls on the whole system.

Which solution did I use previously and why did I switch?

We knew we had to invest in a new solution because we were designing a system to serve the national research community. We knew what their application needs were and what their scientific goals would be, so we worked out what the system would have to deliver to meet those needs. That told us the kinds of servers we needed in the system. We have the Apollos, we have DL580s with three terabytes of RAM, we have an Integrity Superdome with 12 terabytes of RAM, and we have a number of DL360 and other service nodes.

But it was really looking at the users' requirements, and at where high-performance computing, high-performance data analytics, and artificial intelligence were going through about 2019, that caused us to select the kinds of servers we did, the ratios we did, and the topology we chose to connect them in.

How was the initial setup?

It was the first Omni-Path installation in the world, so people were very careful. With that caveat, I think it was straightforward.

Which other solutions did I evaluate?

We always look at all vendors before reaching a conclusion. I don't want to name them here, but we're always aware of what's in the market. We evaluate these for each procurement. We pick the solution that's best. The competitive edge for HPE involves several things. These are not in any specific order, as they are hard to rank.

  • HPE's strategic position in the marketplace. Being a close partner with Intel, we trust them when there's a new CPU. We can get it in an HPE server very early on.
  • When something new came out, like Omni-Path, which was brand new then, we trusted that HPE would be able to deliver it in a validated product very, very early.
  • We are always pushing the time envelope. Their strategic alliances with other strong partners gave us confidence that we would be able to deliver on time, and we did. That's unusual in this field.
  • They uniquely had very-large-memory servers, the Superdomes, and the bandwidth in those servers was extremely good compared to anything else on the market. We wanted that especially for large-scale genomics. Putting that in the solution was a big plus. I'd say these items together were the strongest determining factors from a technical perspective.

What other advice do I have?

I think the advice is to look at the workload very closely, understand what you want it to do, look at the product spectrum that's available, and mix and match like we did. Build them up together. There are software frameworks now that make it easier than when we did it to stand up this sort of collection of resources, and to just go with what the workload needs.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
it_user568143 - PeerSpot reviewer
Head of Industrial Automation & Modeling at a mining and metals company with 1,001-5,000 employees
Vendor
Stable solution for management and monitoring.

What is most valuable?

It's a stable product; very reliable. It is a good basis upon which to build further. You see some evolution, but not too much. If you go to their events every year, you see an incremental evolution, which is normal on that road.

How has it helped my organization?

I'm just a general manager and I'm not really technical. However, it gives you a nice flavor of the monitoring. I have heard that it provides better management, and you can see the possibilities.

What needs improvement?

OneView is a new product which does not support older versions of the hardware. This is an issue; that's why we cannot switch to the newer one. We continue using the older product, and that's working fine. I would like to see a bit more integration. This is the major topic.

What do I think about the scalability of the solution?

It is stable and scalable. The new product has some advantages which we like; however, we cannot switch because we have an issue between unsupported and supported devices.

What other advice do I have?

When choosing a vendor, we look at the overall product and then at the software product on top of that. Switching to another vendor is always a big step; we normally don't do that because it presents issues. Every solution will migrate to the same functionality. There is not a great difference between the various solutions, only an incremental one.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user364197 - PeerSpot reviewer
Network Administrator at CSC Finland
Consultant
The storage area density is the best thing about them. Outside connectivity needs to keep pace with network improvements.

What is most valuable?

We are running Apollo with SL-series servers, and the best thing about them is the density of the storage available. Regarding total cost of ownership (TCO) per terabyte, they are now the best on the market.

What needs improvement?

Connectivity to the outside of the server needs to improve at the same pace as the network is improving; this would give us more I/O. There is also a firmware lifecycle management issue; there is work to do there. Vendors should test firmware before it is delivered to customers.

What do I think about the stability of the solution?

Stability is good enough.

What do I think about the scalability of the solution?

Scalability is fine because with this kind of server we can easily scale horizontally. We are more or less satisfied.

How are customer service and technical support?

The technical support in Finland is fine.

Which solution did I use previously and why did I switch?

We made a transformation from enterprise storage to an open-source distributed storage architecture. We switched because the pricing is better.

How was the initial setup?

The initial setup was business as usual. It's not so complicated, but of course it takes time.

What's my experience with pricing, setup cost, and licensing?

The price is not significantly lower than the competition, but it's lower than the standard price.

Which other solutions did I evaluate?

We looked at Dell and Super Micro. They are both on the market in Finland.

What other advice do I have?

You should run stable firmware on a test platform for about a month before you roll it out. This is something we are having to do right now.
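
One way to script part of that check is the DMTF Redfish REST API that HPE iLO exposes. The sketch below is hypothetical in its host names and credentials, and it only compares the reported BIOS version between the test platform and production; treat it as a starting point, not a complete firmware audit.

```python
# Rough sketch: compare the BIOS version on a test node against
# production nodes via the standard Redfish REST API (exposed by HPE
# iLO). Host names and credentials are hypothetical placeholders.
import requests

def bios_version(host, user, password):
    """Return the BiosVersion reported by a server's Redfish endpoint."""
    r = requests.get(
        f"https://{host}/redfish/v1/Systems/1/",
        auth=(user, password),
        verify=False,   # lab iLOs often have self-signed certificates
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["BiosVersion"]

test = bios_version("apollo-test-ilo", "admin", "secret")
for host in ("apollo-prod1-ilo", "apollo-prod2-ilo"):
    prod = bios_version(host, "admin", "secret")
    status = "OK" if prod == test else f"MISMATCH (test platform: {test})"
    print(f"{host}: {prod} {status}")
```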

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user