Having projections as a parallel for indexes in a simple MySQL helped keep our data access fast and optimized.
Software Engineer at a marketing services firm with 51-200 employees
Having projections as a parallel for indexes in a simple MySQL helped keep our data access fast and optimized. More insight into what the product is doing would help debugging.
Pros and Cons
- "This product has enabled us to keep very large amounts of data at hand for fast querying."
- "We have found multiple issues with deployment. Deployment was by far the hardest step in the process."
What is most valuable?
How has it helped my organization?
This product has enabled us to keep very large amounts of data at hand for fast querying. With enough hardware force behind it, we were able to use Vertica as our primary reporting database without having to aggregate data, thus enabling us to provide many reports without having duplicated data or large aggregation steps.
What needs improvement?
We would like to see better documentation and examples as well as further simplicity in creating clusters, adding nodes, etc. I understand the GUI is very simple but sometimes more insight into what the product is doing and where errors are occurring would help debugging.
For how long have I used the solution?
We have used HP Vertica for three years.
Buyer's Guide
OpenText Analytics Database (Vertica)
April 2026
Learn what your peers think about OpenText Analytics Database (Vertica). Get advice and tips from experienced pros sharing their opinions. Updated: April 2026.
893,244 professionals have used our research since 2012.
What was my experience with deployment of the solution?
We have found multiple issues with deployment. Deployment was by far the hardest step in the process. We have very little knowledge of how to set up projections, how they affect query times, and how much additional storage they require.
What do I think about the stability of the solution?
We have had no stability issues.
What do I think about the scalability of the solution?
Scalability was a problem given we had to host the solution ourselves. It would be great to have a cloud-based solution around Vertica. Also, we found it difficult to modify and update our schema as we grew. Part of the problem may have been that when we first started using Vertica we were inexperienced.
How are customer service and support?
We paid for technical support for one year, but we did not use it very much, so we discontinued it.
Which solution did I use previously and why did I switch?
Choosing Vertica was the first time we used a data warehouse solution for handling the large amounts of data we were starting to gather. Since then, we have switched from an internally hosted Vertica to Spark managed externally.
How was the initial setup?
The initial setup was complex.
What about the implementation team?
We implemented it in-house. I would advise anyone to use a vendor unless you have an in-house expert.
What was our ROI?
I do not have an ROI. It is fair to say that we could not have provided our product to customers without Vertica.
What's my experience with pricing, setup cost, and licensing?
Paying for the amount of storage we used seemed simple, but the cost was a surprise: we underestimated how much storage projections use and definitely did not purchase the correct license for the amount of data we expected to handle.
What other advice do I have?
The product is great to use, but there is a steep learning curve initially. Also, we found limited resources for basic operations such as setup and deployment. Most tutorials and documentation were regarding how to run queries and use external tools such as Pentaho, which we weren’t using. We just wanted good explanations of how to optimize using projections, etc. I think it can be a great product if used correctly and implemented by a team who is familiar with the product.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Technical Team Lead, Business Intelligence at a tech company with 501-1,000 employees
The most valuable feature is the merge function, which is essentially the upsert function. We've had issues with query time taking longer than expected for our volume of data.
Pros and Cons
- "The most valuable feature is the merge function, which is essentially the upsert function, and it has become our ELT pattern because, unlike when we used the ETL tool to manage upserts, the load time is now pretty much flat relative to the volume of records processed."
- "I'd rate technical support as low to average. The tech support provides the usual canned response."
What is most valuable?
The most valuable feature is the merge function, which is essentially the upsert function. It's become our ELT pattern. Previously, when we used the ETL tool to manage upserts, the load time was significantly longer. The merge function load time is pretty much flat relative to the volume of records processed.
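A minimal sketch of that upsert pattern in Vertica SQL (the `MERGE` statement is standard Vertica; the table and column names here are illustrative, not from the review):

```sql
-- Upsert staged rows into the warehouse table in one pass.
-- Matched rows are updated; new rows are inserted.
MERGE INTO dw.customers AS tgt
USING staging.customers AS src
   ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
   UPDATE SET name = src.name,
              updated_at = src.updated_at
WHEN NOT MATCHED THEN
   INSERT (customer_id, name, updated_at)
   VALUES (src.customer_id, src.name, src.updated_at);
```

Because the merge runs inside the database (ELT) rather than row by row through an ETL tool, load time stays roughly flat relative to the number of records processed, as the reviewer describes.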
How has it helped my organization?
HP Vertica has helped us democratize data, making it available to users across the organization.
What needs improvement?
We've had issues with query time taking longer than expected for our volume of data. However, this is due to not understanding the characteristics of the database and how to better tune its performance.
For how long have I used the solution?
We've been using HP Vertica for three years, but only in the last year have we really started to leverage it more. We're moving to a clustered environment to support the scale out of our data warehouse.
We use it as the database for our data warehouse. In its current configuration, we use it as a single node, but we're moving to a clustered environment, which is what the vendor recommends.
What was my experience with deployment of the solution?
We had no issues with the deployment.
What do I think about the stability of the solution?
We've had no issues with the stability.
What do I think about the scalability of the solution?
We've had no issues scaling it.
How are customer service and technical support?
I'd rate technical support as low to average. The tech support provides the usual canned response. We've had to learn most of how to harness the tool on our own.
Which solution did I use previously and why did I switch?
I haven't used anything similar.
How was the initial setup?
HP Vertica was in place when I joined the company, but it wasn't used as extensively as it is now.
What about the implementation team?
We implemented it in-house, I believe.
What other advice do I have?
Loading into HP Vertica is straightforward, similar to other data warehouse appliance databases such as Netezza. However, tuning it for querying requires a lot more thought. It uses projections that are similar to indexes. Knowing how to properly use projections does take time. One thing to be mindful of with columnar databases is that the fewer the columns in your query, the faster the performance. The number of rows impacts query time less.
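A rough illustration of the projection tuning mentioned above (table, column, and projection names are hypothetical; the `CREATE PROJECTION` and `REFRESH` syntax follows Vertica's documented forms):

```sql
-- A query-specific projection: only the columns the report needs,
-- sorted to match the common filter/group-by pattern.
CREATE PROJECTION sales_by_region AS
   SELECT region, sale_date, amount
   FROM sales
   ORDER BY region, sale_date
   SEGMENTED BY HASH(region) ALL NODES;

-- Populate the new projection from existing data.
SELECT REFRESH('sales');
```

Because Vertica is columnar, a query that touches only `region` and `amount` reads just those column files; adding more rows widens the scan far less than adding more columns would.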
My advice would be to try out the database connecting to your ETL tools and perform time studies on the load and query times. It's a good database. It works similar to Netezza from my experience but it is a lot cheaper. Pricing is based on the size of the database.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Big Data, Analytics and Hadoop Expert, Vertica DBA (Technical Leader), Architecture Group at a tech vendor with 5,001-10,000 employees
Simple setup and responsive support.
Pros and Cons
- "HP Vertica is an outstanding backend for Big Data-scale interactive dashboards/BI."
- "I really would like to see Vertica able to use heterogeneous storage (RAM, SSD, HDD). Another issue I have seen is that the SQL optimizer fails to make optimizations that competing products are able to do."
Valuable Features
Ability to get top performance for in-advance known aggregative SQL queries.
Improvements to My Organization
HP Vertica is an outstanding backend for Big Data-scale interactive dashboards/BI. Achieving top performance, however, requires a deep understanding of the product architecture and experience in fine-tuning Vertica.
Room for Improvement
I really would like to see Vertica able to use heterogeneous storage (RAM, SSD, HDD). Another issue I have seen is that the SQL optimizer fails to make optimizations that competing products are able to do. That’s something that should be improved as well.
Use of Solution
I've been using it for two years.
Deployment Issues
We have had no issues with deployment.
Stability Issues
HA should be provided with Vertica; the cluster must be put behind load balancers.
Scalability Issues
There have been no issues scaling it for our needs.
Customer Service and Technical Support
I have no complaints; the HP guys were very responsive.
Initial Setup
The initial Vertica setup was really simple.
Implementation Team
In-house. The vendor team had many people working on our project, and we got the impression that it was difficult for them to focus on our requirements.
Other Solutions Considered
I have evaluated numerous competing products. HP Vertica was chosen for the top performance of aggregative queries.
Other Advice
It is very easy to start using Vertica, however getting the maximum performance from it is a fine art.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Architect with 501-1,000 employees
You don’t have to worry about “load time slots” since you can load data into reporting tables at all times without worrying about their query load.
Pros and Cons
- "It provides very fast query performance after good designs of projections."
- "Stability is good, however the database crashed once because a query ran against a large XML data element."
What is most valuable?
It provides very fast query performance after good designs of projections.
It's easy to implement for 24/7 data load and usage because you don’t have to worry about “load time slots” since you can load data into reporting tables at all times without worrying about their query load.
It just keeps up and running all the time.
How has it helped my organization?
We have been able to move from nightly batch loads to continuous data flow and usage. This hasn’t happened just because of Vertica, we have renewed our data platform pretty thoroughly, but definitely Vertica is one major part of our new data platform.
What needs improvement?
We are running our data transformations as an ELT process inside Vertica; we have data at least in the landing area, a temporary staging area, and the final data model. Data transformations require lots of deletes and updates (which are actually delete/inserts in Vertica). A delete in Vertica doesn't actually remove data from tables; it just marks rows as deleted. To keep performance up, purge procedures are needed, and a good delete strategy has to be designed and implemented. This can be time consuming and is a hard task to complete, so more out-of-the-box delete strategies would be a nice improvement.
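A sketch of the kind of purge housekeeping described above (the functions and the `delete_vectors` system table are real Vertica built-ins; the schema and table names are illustrative):

```sql
-- See how many logically deleted rows each projection is carrying.
SELECT projection_name, SUM(deleted_row_count) AS deleted_rows
FROM v_monitor.delete_vectors
GROUP BY projection_name;

-- Advance the Ancient History Mark, then physically remove
-- rows marked deleted in one heavily churned table.
SELECT MAKE_AHM_NOW();
SELECT PURGE_TABLE('staging.orders');
```

In practice a delete strategy schedules this kind of purge against the staging tables that see the most delete/insert churn, so the marked-deleted rows don't accumulate and drag down query performance.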
For how long have I used the solution?
We've been using it since January 2015.
What was my experience with deployment of the solution?
We haven't had any issues with the deployment.
What do I think about the stability of the solution?
Stability is good; however, the database crashed once because a query ran against a large XML data element.
What do I think about the scalability of the solution?
We haven’t yet scaled out our system. So far performance has been good (taking into consideration that delete strategy mentioned in the Areas for Improvement question).
How are customer service and technical support?
We haven’t needed tech support too much. So far so good.
Which solution did I use previously and why did I switch?
We used Oracle for our DWH. When selecting a new database, we evaluated quite a lot of databases, such as Exadata, Teradata, and IBM Netezza, based both on written documentation and hands-on experimenting. We selected HP Vertica because it runs on commodity hardware and has open interfaces. It performed really well during hands-on experimenting, and its theory held up in practice. Performance is excellent, development is easy (however, you need to re-think some things you may have gotten used to with other SQL databases), and its license model is simple.
How was the initial setup?
It seemed to be very straightforward. However, we had an experienced consultant do the setup.
What about the implementation team?
We had a joint team consisting of both an in-house team and external consultants. It’s very important to build up the internal knowledge by participating in actual project work.
What was our ROI?
We have been running in production for so little time that we don't yet have a decent ROI or other calculations done.
What's my experience with pricing, setup cost, and licensing?
The license model of HP Vertica is simple and transparent.
What other advice do I have?
Just go for it and try it out; you can download the free Community edition from the HP Vertica website.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
COO at a tech services company with 11-50 employees
A Good Option for Big Data
Pros and Cons
- "SQLs querys 10 to 1 more fast that another commercial databases"
Valuable Features
Easy installation; easy to add and remove nodes.
Improvements to My Organization
SQL queries run up to ten times faster than on other commercial databases.
Use of Solution
One year.
Deployment Issues
Vertica supports only ANSI SQL-99.
Stability Issues
None
Scalability Issues
None
Customer Service and Technical Support
Customer Service: 10/10.
Technical Support: 5/10.
Initial Setup
Easy
ROI
30%
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Chief Datamonger at a media company with 51-200 employees
100,000x faster: gnarly queries reduced from 22 hours to 800 milliseconds
Part I: The Pilot
A/B testing is part of our company’s DNA; we test every change to our platform and games. When we were small, this was easy, but as we grew into the tens and hundreds of millions of users, query speed ground to a halt. (Familiar story, right?)
So in 2011 we piloted Vertica for our A/B testing suite. Our nastiest query used to take up to 22 hours to run on [name of old vendor - but don't want to mention them and be mean]. On Vertica, it ran in… 800 ms. That’s right, a scan and aggregation of over 100 billion records could be done in under one second. We were hooked!
Part II: The Rollout
Yeah we rolled it out. Boring. No interesting story here.
Part III: The Impact
Not having to worry about speed or data volume changes you. Suddenly we began logging and reporting on everything. Where did users click? How long between clicks? How long does it take to type in a credit card number when you’re ready to pay? How much free memory does an iPad 1 have, and how does that change every second?
Like all software engineers, we solve problems under constraints, and we had conditioned ourselves to think of logged data volume as a constraint. Suddenly that was no longer a constraint, but I would say it took us a full year to fully appreciate how powerful that was.
Part IV: Today
Today we record every customer interaction with our games and platforms – on phones, tablets, Facebook, and the web. Every department at the company consumes this data.
Marketing: Monitor ad campaigns in realtime, and throttle campaigns up/down based on performance of the users who are acquired via those campaigns.
Game design: Monitor game difficulty and tune in realtime.
Operations: Monitor for changes in customer service volume, exception logging, etc.
Creative services: Test different artwork and themes and monitor impact on game KPIs
Finance: How much money did we make in the last 60 seconds? (Bonus tip: finance gets very happy when they see this, and a happy finance department makes for a happy company. Me: “Hey Bob, can I buy an Oculus Rift for my team to play with?” Bob: “Hold on let me check the reports… whoopee! Sure thing, request approved”.)
Part V: Conclusion
We love speed, unlimited data, and Vertica!
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Architect at a tech services company with 51-200 employees
Vertica allows for thousands of users to run an analysis at the same time. Great aggressive compression.
At the tech company where I work, we were looking for new ways to allow end users (a couple of thousand external users) to crunch through their detailed data in real time, as well as enabling internal users and data analysts to gain the information they needed to run and optimize their business processes.
Unfortunately our current system had become slower and slower over time due to the tremendous increase in data to be managed so a new approach had to be taken to accomplish this goal. Our existing data warehouse/data management infrastructure just could not handle big data.
We evaluated a variety of different solutions such as Amazon Redshift, Infobright and Microsoft. Vertica won out above all these other solutions. Our dataset is several hundred million rows and our avg. response time goal was less than 5 secs. We are building our environment for the future so another requirement was to be able to scale horizontally.
Redshift came close in response time but failed in concurrency, meaning multiple users running an analysis at the same time. Infobright came close in response time and concurrency but didn’t provide sufficient scalability. Vertica checked all boxes at a very competitive price-point.
We found that the extreme speed, performance and flexibility is superior to all the other solutions out there. The massive scalability on industry-standard hardware, standard SQL interface and database designer and administration tools are excellent features of Vertica. I also really value the simplicity, concurrency for hundreds or thousands of users, and aggressive compression.
This new environment allowed us to implement applications such as clickstream and predictive analysis, which have added tremendous value for us. Currently there is about 500 GB - 1 TB of data that I am managing, and I have found that Vertica integrates very well with a variety of Business Intelligence (BI), visualization, and ETL tools in our environment. I use Hadoop, Tableau, and Birst, and using all these solutions with Vertica has been quite smooth overall.
Our query performance has increased by 500 – 1,000% through improvements in response time and I am now able to compress our data by more than 50%. The simultaneous loading and querying and aggressive compression has helped us become more efficient and productive. Furthermore the high availability without hardware redundancy, optimizer and execution engine, and high availability for analytics systems has saved us both time and money.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
It seems you were mainly focused on how good Vertica is and did not run a benchmark; it would be nice if you could publish loading and query performance for all the databases above. 500 GB - 1 TB is not a lot of data.
Chief Data Scientist at a tech vendor with 10,001+ employees
We're using Vertica, just because of the performance benefits. On big queries, we're getting sub-10 second latencies.
My company recognized early, near the inception of the product, that if we were able to collect enough operational data about how our products are performing in the field, get it back home and analyze it, we'd be able to dramatically reduce support costs. Also, we can create a feedback loop that allows engineering to improve the product very quickly, according to the demands that are being placed on the product in the field.
Looking at it from that perspective, to get it right, you need to do it from the inception of the product. If you take a look at how much data we get back for every array we sell in the field, we could be receiving anywhere from 10,000 to 100,000 data points per minute from each array. Then, we bring those back home, we put them into a database, and we run a lot of intensive analytics on those data.
Once you're doing that, you realize that as soon as you do something, you have this data you're starting to leverage. You're making support recommendations and so on, but then you realize you could do a lot more with it. We can do dynamic cache sizing. We can figure out how much cache a customer needs based on an analysis of their real workloads.
We found that big data is really paying off for us. We want to continue to increase how much it's paying off for us, but to do that we need to be able to do bigger queries faster. We have a team of data scientists, and we don't want them sitting here twiddling their thumbs. That's what brought us to Vertica.
We have a very tight feedback loop. In one release we put out, we may make some changes in the way certain things happen on the back end, for example, the way NVRAM is drained. There are some very particular details around that, and we can observe very quickly how that performs under different workloads. We can make tweaks and do a lot of tuning.
Without the kind of data we have, we might have to have multiple cases being opened on performance in the field and escalations, looking at cores, and then simulating things in the lab.
It's a very labor-intensive, slow process with very little data to base the decision on. When you bring home operational data from all your products in the field, you're now talking about being able to figure out in near real-time the distribution of workloads in the field and how people access their storage. I think we have a better understanding of the way storage works in the real world than any other storage vendor, simply because we have the data.
I don't remember the exact year, but it may have been roughly eight years ago that I became aware of Vertica. At some point, there was an announcement that Mike Stonebraker was involved in a group that was going to productize the C-Store database, which was sort of an academic experiment at MIT, to understand the benefits and capabilities of a real column store.
I was immediately interested and contacted them. I was working at another storage company at the time. I had a 20 terabyte (TB) data warehouse, which at the time was one of the largest Oracle on Linux data warehouses in the world.
They didn't want to touch that opportunity just yet, because they were just starting out in alpha mode. I hooked up with them again a few years later, when I was CTO at a different company, where we developed what's substantially an extract, transform, and load (ETL) platform.
By then, they were well along the road. They had a great product and it was solid. So we tried it out, and I have to tell you, I fell in love with Vertica because of the performance benefits that it provided.
When you start thinking about collecting as many different data points as we like to collect, you have to recognize that you’re going to end up with a couple choices on a row store. Either you're going to have very narrow tables and a lot of them or else you're going to be wasting a lot of I/O overhead, retrieving entire rows where you just need a couple fields.
That was what piqued my interest at first. But as I began to use it more and more, I realized that the performance benefits you could gain by using Vertica properly were another order of magnitude beyond what you would expect just from column-store efficiency.
That's because of certain features that Vertica allows, such as something called pre-join projections. At a high level, it lets you maintain the normalized logical integrity of your schema while having, under the hood, optimized denormalized query performance physically on disk.
You can be efficient even with a denormalized structure on disk because Vertica allows you to do some very efficient types of encoding on your data. So all of the low-cardinality columns that would have been wasting space in a row store end up taking almost no space at all.
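The encoding effect described above can be sketched in DDL (the `ENCODING RLE` clause is real Vertica syntax; the table and columns are illustrative):

```sql
-- Low-cardinality columns cost almost nothing when run-length
-- encoded and sorted, since long runs of repeated values collapse
-- into (value, count) pairs on disk.
CREATE TABLE events (
    event_time TIMESTAMP,
    country    VARCHAR(2)   ENCODING RLE,  -- few distinct values
    status     VARCHAR(10)  ENCODING RLE,  -- few distinct values
    payload    VARCHAR(1000)
)
ORDER BY country, status, event_time;
```

Sorting on the low-cardinality columns first is what makes RLE effective: each run of identical values is stored once, so the denormalized width costs little physical space.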
It's been my impression that Vertica is the data warehouse that you would have wanted to have built 10 or 20 years ago, but nobody had done it yet.
Nowadays, when I'm evaluating other big data platforms, I always have to look at it from the perspective of it's great, we can get some parallelism here, and there are certain operations that we can do that might be difficult on other platforms, but I always have to compare it to Vertica. Frankly, I always find that Vertica comes out on top in terms of features, performance, and usability.
I built the environment at my current company from the ground up. When I got here, there were roughly 30 people. It's a very small company. We started with Postgres. We started with something free. We didn't want to have a large budget dedicated to the backing infrastructure just yet. We weren't ready to monetize it yet.
So, we started on Postgres, and we've scaled up now to the point where we have about 100 TB on Postgres. We get decent performance out of the database for the things that we absolutely need to do, which are micro-batch updates and transactional activity. We get that performance because the database lives here.
I don't know what the largest unsharded Postgres instance in the world is, but I feel like I have one of them. It's a challenge to manage and leverage. Now we've gotten to the point where we really want to do larger queries, analyses that extend across the entire installed base.
We want to understand the lifecycle of a volume. We want to understand how it grows, how it lives, what its performance characteristics are, and then how it gradually falls into senescence when people stop using it. It turns out there is a lot of really rich information that we now have access to for understanding storage lifecycles in a way I don't think was possible before.
But to do that, we need to take our infrastructure to the next level. So we've been doing that: we've loaded a large amount of our sensor data (the numerical data I talked about) into Vertica, started to compare the queries, and then started to use Vertica more and more for all the analysis we're doing.
Internally, we're using Vertica just because of the performance benefits. I can give you an example. We had a particular query, a particularly large query. It was to look at certain aspects of latency over a month across the entire installed base, to understand a little bit about the distribution depending on different factors, and so on.
We ran that query in Postgres, and depending on how busy the server was, it took anywhere from 12 to 24 hours to run. On Vertica, the same query on the same data takes anywhere from three to seven seconds.
I anticipated that because we were aware upfront of the benefits we'd be getting. I've seen it before. We knew how to structure our projections to get that kind of performance. We knew what kind of infrastructure we'd need under it. I'm really excited. We're getting exactly what we wanted and better.
This is only a three-node cluster. Look at the performance we're getting. On the smaller queries, we're getting sub-second latencies. On the big ones, we're getting sub-10-second latencies. It's absolutely amazing. It's game changing.
People can sit at their desktops now, manipulate data, come up with new ideas, and iterate without having to run a batch and go home. It's a dramatic productivity increase. Data scientists tend to be fairly impatient. They're highly paid people, and you don't want them sitting at their desks waiting to get an answer out of the database. It's not the best use of their time.
When it comes to the cloud model for deployment, there's the ease of adding nodes without downtime and the fact that you can create a K-safe cluster. If my cluster is 16 nodes wide and I want two nodes of redundancy, it's very similar to RAID: you can specify that, and the database will take care of it for you. You don't have to worry about the database going down and losing data every time a node fails.
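Declaring that redundancy level is a one-liner (the `MARK_DESIGN_KSAFE` function and the `system` table are real Vertica built-ins; K-safety of 1 requires at least three nodes, which matches the reviewer's cluster):

```sql
-- Ask the catalog to maintain enough buddy projections to
-- survive the loss of one node.
SELECT MARK_DESIGN_KSAFE(1);

-- Verify what the cluster is actually achieving right now.
SELECT designed_fault_tolerance, current_fault_tolerance
FROM v_monitor.system;
```

With K-safety of 1, each projection has a buddy copy on another node, so when one node drops out (as in the hardware glitch described below), the surviving nodes keep serving queries.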
I love the fact that you don't have to pay extra for that. If I want to put more cores or nodes on it, or I want to put more redundancy into my design, I can do that without paying more for it. Wow! That's kind of revolutionary in itself.
It's great to see a database company incented to give you great performance. They're incented to help you work better with more nodes and more cores. They don't have to worry about people not being able to pay the additional license fees to deploy more resources. In that sense, it's great.
We have our own private cloud -- that's how I like to think of it -- at an offsite colocation facility. We do DR here. At the same time, we have a K-safe cluster. We had a hardware glitch on one of the nodes last week, and the other two nodes stayed up, served data, and everything was fine.
Those kinds of features are critical, and that ability to be flexible and expand is critical for someone who is trying to build a large cloud infrastructure, because you're never going to know in advance exactly how much you're going to need.
If you do your job right as a cloud provider, people just want more and more and more. You want to get them hooked and you want to get them enjoying the experience. Vertica lets you do that.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Buyer's Guide
Download our free OpenText Analytics Database (Vertica) Report and get advice and tips from experienced pros
sharing their opinions.
Updated: April 2026
I think your description is very superficial. Also, Big Data is not all about the database tech sitting in the background.