Enterprise Data Architect at a manufacturing company with 201-500 employees
Real User
Jan 3, 2022
It's flexible and can do almost anything I want it to do
Pros and Cons
  • "Lumada has allowed us to interact with our employees more effectively and compensate them properly. One of the cool things is that we use it to generate commissions for our salespeople and bonuses for our warehouse people. It allows us to get information out to them in a timely fashion. We can also see where they're at and how they're doing."
  • "Some of the scheduling features about Lumada drive me buggy. The one issue that always drives me up the wall is when Daylight Savings Time changes. It doesn't take that into account elegantly. Every time it changes, I have to do something. It's not a big deal, but it's annoying."

What is our primary use case?

We mainly use Lumada to load our operational systems into our data warehouse, but we also use it for monthly reporting out of the data warehouse, so it's to and from. We use some of Lumada's other features within the business to move data around. It's become quite the Swiss army knife.

We're primarily doing batch-type reports that go out. Not many people want to sift through the data themselves and join it with other things. There are a few, but again, I usually wind up doing it. The self-serve feature is not as big a seller to me because of our user base. Most of the people looking at it are salespeople.

Lumada has allowed us to interact with our employees more effectively and compensate them properly. One of the cool aspects is that we use it to generate commissions for our salespeople and bonuses for our warehouse people. It allows us to get information out to them in a timely fashion. We can also see where they're at and how they're doing. 

The process that Lumada replaced was arcane. The sentiment among our employees, particularly the warehouse personnel, was that it was punitive. They would say, "I didn't get a bonus this month because the warehouse manager didn't like me." Now we can show them the numbers and say, "You didn't get a bonus because you were slacking off compared to everybody else." It's allowed us to be very transparent in how we're doing these tasks. Previously, that was all done close to the vest. I want people to trust the numbers, and these tools allow me to do that because I can instantly show that the information is correct.

That is a huge win for us. When we first rolled it out, I spent a third of my time justifying the numbers. Now, I rarely have to do that. It's all there, and they can see it, so they trust what the information is. If something is wrong, it's not a case of "Why is this being computed wrong?" It's more like: "What didn't report?"

We have 200 stores that communicate to our central hub each night. If one of them doesn't send any data, somebody notices now. That wasn't the case in the past. They're saying, "Was there something wrong with the store?" instead of, "There's something wrong with the data."

With Lumada's single end-to-end data management, we no longer need some of the other tools that we developed in-house. Before that, everything was in-house. We had a build-versus-buy mentality. It simplified many aspects that we were already doing and made that process quicker. It has made a world of difference. 

This is primarily anecdotal, but there were times where I'd get an IM from one of the managers saying, "I'm looking at this in the sales meeting and calling out what somebody is saying. I want to make sure that this is what I'm seeing." I made a couple of people mad. Let's say they're no longer working for us, and we'll leave it at that. If you're not making somebody mad, you're not doing BI right. You're not asking the right questions.

Having a single platform for data management experience is crucial for me. It lets me know when something goes wrong from a data standpoint. I know when a load fails due to bad data and don't need to hunt for it. I've got a status board, so I can say, "Everything looks good this morning." I don't have to dig into it, and that has made my job easier. 

What's more, I don't waste time arguing about why the numbers on this report don't match the ones on another because it's all coming from the same place. Before, they were coming from various places, and they wouldn't match for whatever reason. Maybe there's some piece of code in one report that isn't being accounted for in the other. Now, they're all coming from the same place. So everything is on the same level.

What is most valuable?

I'm a database guy, not a programmer, so Lumada's ability to create low-code pipelines without custom coding is crucial for me. I don't need to do any Java customization. I've had to write SQL scripts and occasionally some JavaScript within it, but those are few and far between. I can do everything else within the tool itself. I got into databases because I was sick and tired of getting errors when I compiled something.

What needs improvement?

Some of Lumada's scheduling features drive me buggy. The one issue that always drives me up the wall is when Daylight Saving Time changes. It doesn't take that into account elegantly. Every time it changes, I have to do something. It's not a big deal, but it's annoying. That's the one issue, but I see the limitation, and it might not be easily solvable.

For how long have I used the solution?

I started working with Lumada long before it was acquired by Hitachi. It's been about 11 years now. I'm the primary person in the company who works with it. A few people know the solution tangentially. Aside from very basic elements, most tasks related to Lumada usually fall in my lap.


What do I think about the stability of the solution?

Lumada's stability and performance are pretty good. The limitations I run into are usually with the database that I'm trying to write to rather than read from. The only time I have a real issue is when an incredibly complex query takes 20 minutes to start returning data. It's sitting there going, "All right. Give me something to do." But then again, I've got it running on a machine that's got 64 gigs of memory.

What do I think about the scalability of the solution?

Scaling out our processes hasn't been a big deal. We're a relatively small shop with only a couple of production databases. We're more of a regional enterprise, and I haven't had any issues with performance yet. It's always been some other product or solution that has gotten in the way. Lumada can handle anything we throw at it. Every night I run reports on our part ledger. That includes 200 million records, and Lumada can chew through it in about an hour and a half. 

I know we can extend processing into the Spark realm if we need to. We've thought about that but never really needed it. It's something we keep in our back pocket. Someone suggested trying it out, but it never really got off the ground because other more pressing needs came up. From what I've seen, it'll scale out to whatever I need it to do. Any limitations are in the backend rather than the software. I've done some metrics on it. It's the database that I have to wait on more than the software. It's not doing a whole lot CPU-wise. My limitations are elsewhere, usually.

Right now, we have about 100 users working with Lumada. About 100 people log in to the system, but probably 200 people get reports from it. Only about 50 use the analysis tools, including the top sales managers and all of the buying group. There are also some analysts from various groups who use it constantly. 

How are customer service and support?

I'd give Lumada support a nine out of 10. It has been exceptional historically, but there was a rough patch about a year and a half ago shortly after Hitachi took over. They were in a transition period, but it has been very responsive since. I usually don't need help. When I do, I get a response the same day, and somebody's working on it. I'm not too worried about things going wrong, like an outage. I've never had that happen.

Sometimes when we do upgrades, and I'm in my test environment, I'll contact them and say, "I ran into this weird issue, and it's not doing what it should. What do you make of it?" They'll tell me, "You got to do this, that, and the other thing." They've been good about it.

Which solution did I use previously and why did I switch?

Before Lumada, we had a variety of homegrown solutions. Most of it was centered on our warehouse management system because that was our primary focus. There were also reports within the point of sale system, and the two never crossed paths. Now they're integrated. There was also an analysis tool they had before I came on board. I can't remember the name of it. The company had something, but it didn't do what they thought it would do, and the project fizzled.

Part of the problem was that they didn't have somebody in-house who understood business intelligence until they brought me on. They were very operationally focused before that. The management was like, "We need more insight into what we're doing and how we're doing it." That was phase two of the big data warehouse push. The management here is relatively conservative in that regard, so they're somewhat slow to say, "Hey. We need to do something along these lines." But when they decide to go, get out of the way because here we come.

I used a different tool at my previous job called Informatica. Lumada has less of a learning curve for deployment. Lumada was similar enough to Informatica that it's like, "Okay. This makes sense," but there were a few differences. Once I figured out the difference, it made a lot of sense to me. The entire chain of steps Lumada allows you to do is intuitive.

Informatica was a lot more tedious to use. You had to hook every column up from its source to its target. With Lumada, it's the name that matters and its position. It made aspects a whole lot easier and less tedious. Every so often, it bites me in the butt. If I get a column out of order, it'll let me know I did something wrong. But it's much less error-prone because I don't have to hook every column up from its source to its target anymore. With Informatica, there were times where I spent 20 minutes just sitting there trying not to drool on myself. It was terrible. 

How was the initial setup?

Setting up Lumada was pretty straightforward. We just rolled it out and went from proof of concept to live in about a year. I was relatively new to the organization at the time and was still getting a feel for it — knowing where data was and what all these things mean. My experience at a shoe company didn't exactly translate to an auto parts business. I went to classes down in Orlando to learn the product, then we went from there and just tried it. We had a few faux pas here and there, but we learned as we went.

What was our ROI?

Lumada has also significantly reduced our ETL development time. It depends on the project, but if someone comes to me with a new data source, I can typically integrate it within a week, whereas it used to take a month. It's a 4-to-1 reduction. It's allowed our IT department to stay lean. I worked at another company with 70 IT people, 50 of whom were programmers. My current workplace has 12 people, and six are programmers. The others are UI-type developers, and there are about six database people, including me. We save the equivalent of a full-time employee, so that's anywhere from $50,000 to $75,000 a year.

What's my experience with pricing, setup cost, and licensing?

I think Lumada's price is fair compared to some of the others, like BusinessObjects, which was the other solution that I used at my previous job. BusinessObjects' price was more reasonable before SAP acquired it. They jacked the price up significantly. Oracle's OBIEE tool was also prohibitively expensive. We felt the value was much greater than the cost, and the value for the money was much better than if we had gone with other solutions.

Which other solutions did I evaluate?

We didn't consider other options besides Lumada because we are members of an auto parts trade association, and they were using the Pentaho tool, before it became a Hitachi product, to do some ETL tasks. They recommended it, so we started using it. I evaluated a couple of other ones, but they cost more than we were willing to spend to try out this type of solution. Once we figured out what it could do for us, it was like, "Okay. Now, we can do some real work here."

What other advice do I have?

I rate Lumada nine out of 10. The aspect I like about Lumada is its flexibility. I can make it do pretty much whatever I want. It's not perfect, but I haven't run into a tool that is yet. I haven't used every aspect of it, but there's very little that I can't make it do. I haven't run into a scenario where it couldn't handle a challenge we put in front of it. It's been a solid performer for us. I rarely have a problem that is due to Lumada. The issues I have with my loads are never because of the software.

If you plan to implement Lumada, I recommend going to the classes. Don't be afraid to ask dumb questions of support because many of them used to be consultants. They've all been there, done that. One of the guys I talk to regularly lives about 80 miles to the north of me. I have a rapport with him. They're willing to go above and beyond to make you successful.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Ridwan Saeful Rohman
Data Engineering Associate Manager at a tech services company with 1,001-5,000 employees
Real User
Top 20
Jul 4, 2024
Good abstraction and useful drag-and-drop functionality, but can't handle very large data volumes
Pros and Cons
  • "The abstraction is quite good."
  • "If you develop it on MacBook, it'll be quite a hassle."

What is our primary use case?

I still use this tool on a daily basis. Comparing it to my experience with other ETL tools, the system I created using this tool was quite straightforward. It involves extracting data from MySQL, exporting it to CSV, storing it on S3, and then loading it into Redshift.

The PDI Kettle Job and Kettle Transformation are bundled by a shell script, then scheduled and orchestrated by Jenkins.

We continue to use this tool primarily because many of our legacy systems still rely on it. However, our new solution is mostly based on Airflow, and we are currently in the transition phase. Airflow is a data orchestration tool that predominantly uses Python for ETL processes, scheduling, and issue monitoring—all within a unified system.
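To make that transition concrete, here is a minimal sketch, assuming a recent Airflow 2.x install and hypothetical paths and names: the legacy Kettle job is still invoked through PDI's kitchen.sh CLI, but an Airflow DAG takes over the scheduling that Jenkins used to do.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="mysql_to_redshift_legacy_kettle",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # nightly, like the Jenkins trigger it replaces
    catchup=False,
) as dag:
    # Run the bundled Kettle job exactly as the old shell wrapper did:
    # MySQL -> CSV -> S3 -> Redshift.
    run_kettle_job = BashOperator(
        task_id="run_kettle_job",
        bash_command=(
            "/opt/pentaho/data-integration/kitchen.sh "   # PDI job runner
            "-file=/opt/etl/jobs/mysql_to_redshift.kjb "  # hypothetical path
            "-level=Basic"
        ),
    )
```

Wrapping the existing job this way lets the legacy pipelines keep running unchanged while the scheduling, retries, and monitoring move to the new platform.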


How has it helped my organization?

In my current company, this solution has a limited impact as we predominantly employ it for handling older and simpler ETL tasks.

While it serves well in setting up ETL tools on our dashboard, its functionalities can now be found in several other tools available in the market. Consequently, we are planning a complete transition to Airflow, a more versatile and scalable platform. This shift is scheduled to be implemented over the next six months, aiming to enhance our ETL capabilities and align with modern data management practices.


What is most valuable?

This solution offers drag-and-drop tools with minimal scripting. Even if you do not come from an IT background and have no software engineering experience, it is possible to use. It is quite intuitive, allowing you to drag and drop many functions.

The abstraction is quite good.

If you're familiar with the product itself, it has transformational abstractions and job abstractions. We can create smaller transformations in the Kettle transformation and larger ones in the Kettle job. Whether you're familiar with Python or have no scripting background at all, the product is useful.

For larger data, we use Spark.

The solution enables us to create pipelines with minimal manual or custom coding efforts. Even without advanced scripting experience, it is possible to create ETL tools. I recently trained a graduate from a management major who had no experience with SQL. Within three months, he became quite fluent, despite having no prior experience using ETL tools.

The importance of handling pipeline creation with minimal coding depends on the team. If we switch to Airflow, more time is needed to teach fluency in the ETL tool. With these product abstractions, I can compress the training time to three months. With Airflow, it would take more than six months to reach the same proficiency.

We use the solution's ability to develop and deploy data pipeline templates and reuse them.

The old system, created by someone prior to me in my organization, is still in use. It was developed a long time ago and is also used for some ad hoc reporting.

The ability to develop and deploy data pipeline templates once and reuse them is crucial to us. There are requests to create pipelines, which I then deploy on our server. The system needs to be robust enough to handle scheduling without failure.

We appreciate the automation. It's hard to imagine how data teams would work if everything were done on an ad hoc basis. Automation is essential. In my organization, 95% of our data distributions are automated, and only 5% are ad hoc. Without this solution, we would be querying data manually, processing it in spreadsheets, and then distributing it within the organization. Robust automation is key.

We can easily deploy the solution on the cloud, specifically on AWS. I haven't tried it on another server. We deploy it on our AWS EC2, but we develop it on local computers, including both Windows and MacBooks.

I have personally used it on both. Developing on Windows is easier to navigate. On MacBooks, the display becomes problematic when enabling dark mode.

The solution has reduced our ETL development time compared to scripting. However, this largely depends on your experience.

What needs improvement?

Five years ago, when I had less experience with scripting, I would have definitely used this product over Airflow, as the abstraction is quite intuitive and easier for me to work with. Back then, I would have chosen this product over other tools that use pure scripting, as it would have significantly reduced the time required to develop ETL tools. However, this is no longer the case, as I now have more familiarity with scripting.

When I first joined my organization, I was still using Windows. Developing the ETL system on Windows is quite straightforward. However, when I switched to a MacBook, it became quite a hassle. To open the application, we had to first open the terminal, navigate to the solution's directory, and then run the executable file. Additionally, the display becomes quite problematic when dark mode is enabled on a MacBook.

Therefore, developing on a MacBook is quite a hassle, whereas developing on Windows is not much different from using other ETL tools on the market, like SQL Server Integration Services, Informatica, etc.

For how long have I used the solution?

I have been consistently using this tool since I joined my current company, which was approximately one year ago.

What do I think about the stability of the solution?

The performance is good. I have not tested the product at its bleeding edge. We only perform simple jobs. In terms of data, we extract it from MySQL and export it to CSV. There are only millions of data points, not billions. So far, it has met our expectations and is quite good for a smaller number of data points.

What do I think about the scalability of the solution?

I'm not sure that the product could keep up with significant data growth. It can be useful for millions of data points, but I haven't explored its capability with billions of data points. I think there are better solutions available on the market. This applies to other drag-and-drop ETL tools as well, like SQL Server Integration Services, Informatica, etc.

How are customer service and support?

We don't really use technical support. The current version that we are using is no longer supported by their representatives. We haven't updated to the newer version yet.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We're moving to Airflow. The switch was mostly due to debugging problems. If you're familiar with SQL Server Integration Services, Microsoft's ETL tools have quite intuitive debugging functions. You can easily identify which transformation has failed or where an error has occurred. However, in our current solution, my colleagues have reported that it is difficult to pinpoint the source of errors directly.

Airflow is highly customizable and not as rigid as our current product. We can deploy simple ETL tools as well as machine learning systems on Airflow. Airflow primarily uses Python, which our team is quite familiar with. Currently, only two out of 27 people on our team handle this solution, so not enough people know how to use it.

How was the initial setup?

There is no separation between the deployment team and other teams. Each of our teams acts as individual contributors. We handle the entire implementation process, from face-to-face business meetings, setting timelines, developing the tools, and defining the requirements, to production deployment.

The initial setup is straightforward. Currently, the use of version control in our organization is quite loose; we are not using any version control software. The way we deploy is as simple as putting the Kettle transformation file onto our EC2 server and overwriting the old file; that's it.
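As a hedged illustration of how small that deployment step is (the host and paths below are hypothetical), the whole release amounts to one file copy:

```python
import subprocess

# The entire release is one file transfer that overwrites the old Kettle
# transformation in place, mirroring the copy-and-overwrite deploy described
# above. No build step, no version control checkpoint.
subprocess.run(
    [
        "scp",
        "pipelines/daily_load.ktr",  # the updated Kettle transformation
        "etl@my-etl-host.example.com:/opt/etl/daily_load.ktr",
    ],
    check=True,  # fail loudly if the copy does not succeed
)
```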

What's my experience with pricing, setup cost, and licensing?

I'm not really sure about the pricing of the product. I'm not involved in procurement or commissioning.

What other advice do I have?

We put it on our AWS EC2 server; however, during development, it was on our local server. We deploy it onto our EC2 server. We bundle it in our shell scripts, and the shell scripts are run by Jenkins.

I'd rate the solution seven out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Manager, Systems Development at a manufacturing company with 5,001-10,000 employees
Real User
Aug 7, 2022
An affordable solution that makes it simple to do some fairly complicated things, but it could be improved in terms of consistency of different transformation steps
Pros and Cons
  • "It makes it pretty simple to do some fairly complicated things. Both I and some of our other BI developers have made stabs at using, for example, SQL Server Integration Services, and we found them a little bit frustrating compared to Data Integration. So, its ease of use is right up there."
  • "Its basic functionality doesn't need a whole lot of change. There could be some improvement in the consistency of the behavior of different transformation steps. The software did start as open-source and a lot of the fundamental, everyday transformation steps that you use when building ETL jobs were developed by different people. It is not a seamless paradigm. A table input step has a different way of thinking than a data merge step."

What is our primary use case?

Our primary use case is to populate a data warehouse and data marts, but we also use it for all kinds of data integration scenarios and file movement. It is almost like middleware between different enterprise solutions. We take files from our legacy app system, do some work on them, and then call SAP BAPIs, for example.

It is deployed on-premises. It gives you the flexibility to deploy it in any environment, whether on-premises or in the cloud, but that flexibility is not that important to us. We could deploy it in the cloud by spinning up a new server in AWS or Azure, but as a manufacturing facility, we prefer to deploy things on-premises.

We usually stay one version behind the latest one. We're a manufacturing facility. So, we're very sensitive to any bugs or issues. We don't do automatic upgrades. They're a fairly manual process.

How has it helped my organization?

We've had it for a long time. So, we've realized a lot of the improvements that anybody would realize from almost any data integration product.

The speed of developing solutions has been the best improvement. It has reduced the development time and improved the speed of getting solutions deployed. The reduced ETL development time varies by the size and complexity of the project. We probably spend days or weeks less than we would if we were using a different tool.

It is tremendously flexible in terms of adding custom code by using a variety of different languages if you want to, but we had relatively few scenarios where we needed it. We do very little custom coding. Because of the tool we're using, it is not critical. We have developed thousands of transformations and jobs in the tool.

What is most valuable?

It makes it pretty simple to do some fairly complicated things. Both I and some of our other BI developers have made stabs at using, for example, SQL Server Integration Services, and we found them a little bit frustrating compared to Data Integration. So, its ease of use is right up there.

Its performance is a pretty close second. It is a pretty highly performant system. Its query performance on large data sets is very good.

What needs improvement?

Its basic functionality doesn't need a whole lot of change. There could be some improvement in the consistency of the behavior of different transformation steps. The software did start as open-source and a lot of the fundamental, everyday transformation steps that you use when building ETL jobs were developed by different people. It is not a seamless paradigm. A table input step has a different way of thinking than a data merge step.

For how long have I used the solution?

We have been using this solution for more than 10 years.

What do I think about the stability of the solution?

Its stability is very good.

What do I think about the scalability of the solution?

Its scalability is very good. We've been running it for a long time, and we've got dozens, if not hundreds, of jobs running a day.

We probably have 200 or 300 people using it across all areas of the business. We have people in production control, finance, and what we call materials management. We have people in manufacturing, procurement, and of course, IT. It is very widely and extensively used. We're increasing its usage all the time.

How are customer service and support?

They are very good at quickly and effectively solving the issues we have brought up. Their support is well structured. They're very responsive.

Because we're very experienced in it, when we come to them with a problem, it is usually something very obscure and not necessarily easy to solve. We've had cases where when we were troubleshooting issues, they applied just a remarkable amount of time and effort to troubleshoot them.

Support seems to have very good access to development and product management as a tier-two. So, it is pretty good. I would give their technical support an eight out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We didn't have another data integration product before Pentaho.

How was the initial setup?

I installed it. It was straightforward. It took about a day and a half to get the production environment up and running. That was probably because I was e-learning as I was going. With a services engagement, I bet you would have everything up in a day.

What about the implementation team?

We used Pentaho services for two days. Our experience was very good. We worked with Andy Grohe. I don't know if he is still there or not, but he was excellent.

What was our ROI?

We have absolutely seen an ROI, but I don't have the metrics. There are analytic cases that we just weren't able to do before. Due to the relatively low cost compared to some of the other solutions out there, it has been a no-brainer.

What's my experience with pricing, setup cost, and licensing?

We did a two or three-year deal the last time we did it. As compared to other solutions, at least so far in our experience, it has been very affordable. The licensing is by component. So, you need to make sure you only license the components that you really intend to use.

I am not sure whether we have relicensed since the Hitachi acquisition, but previously, multi-year renewals resulted in a good discount. I don't know if that is still the case.

We've had the full suite for a lot of years, and there is just the initial cost. I am not aware of any additional costs.

What other advice do I have?

If you haven't used it before, it is worth engaging Pentaho services for the initial implementation. They'll point out a number of small foibles, related to things like case sensitivity, and save you a lot of trips through the documentation to identify the different configuration points that might be relevant to you.

I would highly recommend the Data Integration product, particularly to anyone with a Java background, although most of our BI developers at this point do not have one, and it isn't really that important. If you're a Java shop looking for extensibility, though, the whole solution is built in Java, which makes certain aspects of it a little more intuitive at first.

On the data integration side, it is really a good tool. A lot of investment dollars go into big data and new tech, and often, those are not very compelling for us. We're in an environment where we have medium data, not big data.

It provides a single end-to-end data management experience from ingestion to insights, but at this point, that's not critical to us. We mostly do the data integration work in Pentaho, and then we do the visualization in another tool. The single data management experience hasn't enabled us to discontinue the use of other data management analysis delivery tools just because we didn't really have them.

We take an existing job or transformation and use that as a test. It is certainly easy enough to copy one object to another. I am not aware of a specific templating capability, but we are not really missing anything there. It is very easy for us to clone a job or transformation just by doing a Save As, and we do that extensively.

Vantara's roadmap is a little fuzzy for me. There has been quite a bit of turnover in the customer-facing roles over the last five years. We understand that there is a roadmap to move to a pure web-based solution, but it hasn't been well communicated to us.

In terms of our decision to purchase Hitachi's products, services, or solutions, our satisfaction level is, on balance, average.

I would rate this solution a seven out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer1855218
Data Architect at a consumer goods company with 1,001-5,000 employees
Real User
May 12, 2022
I can extend and customize existing pipeline templates for changing requirements, saving time
Pros and Cons
  • "I can use Python, which is open-source, and I can run other scripts, including Linux scripts. It's user-friendly for running any object-based language. That's a very important feature because we live in a world of open-source."
  • "I would like to see improvement when it comes to integrating structured data with text data or anything that is unstructured. Sometimes we get all kinds of different files that we need to integrate into the warehouse."

What is our primary use case?

We use it for orchestration and as an ETL tool to move data from one environment to another, including moving data from on-premises to the cloud and moving operational data from different source systems into the data warehouse.

How has it helped my organization?

People are now able to get access to the data when they need it. That is what is most important. All the reports go out on time.

The solution enables us to use one tool that gives a single, end-to-end data management experience from ingestion to insights. From the reporting point of view, we are able to make our customers happy. Are they able to get their reports in time? Are they able to get access to the data that they need on time? Yes. They're happy, we're happy, that's it.

With the automation of everything, if I start breaking it into numbers, we don't have to hire three or four people to do one simple task. We've been able to develop some generic IT processes so that we don't have to reinvent the wheel. I just have to extend the existing pipeline and customize it to whatever requirements I have at that point in time. Otherwise, whenever we would get a project, we would actually have to reinvent the wheel from scratch. Now, the generic pipeline templates that we can reuse save us so much time and money.

It has also reduced our ETL development time by 40 percent, and that translates into cost savings.

Before we used Pentaho, we used to do some of this stuff manually, and some of the ETL jobs would run for hours, but most of the ETL jobs, like the monthly reports, now run within 45 minutes, which is pretty awesome. Everything that we used to do manually is now orchestrated.

And now, with everything in the cloud, any concerns about hardware are taken care of for us. That helps with maintenance costs.

What is most valuable?

I can use Python, which is open-source, and I can run other scripts, including Linux scripts. It's user-friendly for running any object-based language. That's a very important feature because we live in a world of open-source. With open-source on the table, I am in a position to transform the data where it's actually being moved from one environment to another.
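As an illustration, here is a generic, standalone sketch (not Pentaho's embedded scripting API) of the kind of Python transform one might call from a pipeline step while data is in flight; the file and column names are hypothetical.

```python
import csv

def clean_row(row: dict) -> dict:
    # Normalize a couple of (hypothetical) columns mid-flight.
    row["email"] = row.get("email", "").strip().lower()
    row["amount"] = row.get("amount", "0").replace(",", "") or "0"
    return row

# Read a staged CSV, clean each row, and write it back out, the sort of
# in-flight transform a pipeline step might delegate to a script.
with open("staged_input.csv", newline="") as src, \
     open("staged_output.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow(clean_row(row))
```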

Whether we are working with structured or unstructured data, the tool has been helpful. We are actually able to extend it to read JSON data by creating some Java components.

The solution gives us the flexibility to deploy it in any environment, including on-premises or in the cloud. That is another very important feature.

What needs improvement?

I would like to see improvement when it comes to integrating structured data with text data or anything that is unstructured. Sometimes we get all kinds of different files that we need to integrate into the warehouse. 

By using some of the Python scripts that we have, we are able to extract all of this text data into JSON. Then, from the JSON, we are able to create external tables in the cloud whereby, at any time, somebody has access to the data on S3.
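A minimal sketch of that flow, with hypothetical file, bucket, and prefix names: flatten raw text into JSON Lines, push it to S3, and let an external table (via Redshift Spectrum or Athena, for example) expose it without loading it into the warehouse.

```python
import json

import boto3  # assumes AWS credentials are configured in the environment

# Flatten raw, unstructured text into one JSON object per line (JSON Lines).
with open("raw_notes.txt") as src, open("notes.jsonl", "w") as out:
    for line in src:
        line = line.strip()
        if line:
            out.write(json.dumps({"text": line}) + "\n")

# Land the file on S3, where an external table can point at the prefix.
s3 = boto3.client("s3")
s3.upload_file("notes.jsonl", "my-data-lake", "external/notes/notes.jsonl")
```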

For how long have I used the solution?

I've been using Hitachi Lumada Data Integration since 2014.

What do I think about the stability of the solution?

It's been stable.

What do I think about the scalability of the solution?

We are able to scale our environment. For example, if I had that many workloads, I could scale the tool to run on three instances, and all the workloads would be distributed equally.

How are customer service and support?

Their tech support is awesome. They always answer and attend to any incidents that we raise.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

Everything was done manually in Excel. The main reason we went with Pentaho is that it's open-source.

How was the initial setup?

The deployment was like any other deployment. All the steps are written down in a document and you just have to follow those steps. It was simple for us.

What other advice do I have?

The performance of Pentaho, like any other ETL tool, starts from the database side, once you write good, optimized scripts. The optimization of Pentaho depends on the hardware it's sitting on. Once you have enough RAM on your VM, you are in a position to run any workloads.

Overall it is an awesome tool. We are satisfied with our decision to go with Hitachi's product. It's comparable to any other ETL tool, such as SQL Server Integration Services, Informatica, or DataStage. On a scale of one to 10, where 10 is best, I would give it a nine in terms of recommending it to a colleague.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Renan Guedert
Business Intelligence Specialist at a recruiting/HR firm with 11-50 employees
Real User
Apr 20, 2022
Creates a good, visual pipeline that is easy to understand, but doesn't handle big data well
Pros and Cons
  • "Sometimes, it took a whole team about two weeks to get all the data to prepare and present it. After the optimization of the data, it took about one to two hours to do the whole process. Therefore, it has helped a lot when you talk about money, because it doesn't take a whole team to do it, just one person to do one project at a time and run it when you want to run it. So, it has helped a lot on that side."
  • "A big problem after deploying something that we do in Lumada is with Git. You get a binary file to do a code review. So, if you need to do a review, you have to take pictures of the screen to show each step. That is the biggest bug if you are using Git."

What is our primary use case?

Its principal use was to build the whole ETL and data warehousing layer on our projects. We created a whole set of steps for collecting all the raw data from APIs, other databases, and flat files, like Excel, CSV, and JSON files, doing the whole transformation and data preparation, then modeling the data and putting it into SQL Server and Integration Services.

For business intelligence projects, it is pretty handy, when you are extracting something from an API, to have a step that transforms the API's JSON into an SQL table.
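For illustration, here is a hedged sketch of that API-to-SQL-table step; the URL and field names are hypothetical, and sqlite3 stands in for the SQL Server target so the example stays self-contained.

```python
import json
import sqlite3
from urllib.request import urlopen

# Pull JSON from the (hypothetical) API endpoint.
with urlopen("https://api.example.com/v1/orders") as resp:
    payload = json.load(resp)

# Flatten the nested JSON into plain rows.
rows = [(o["id"], o["customer"], o["total"]) for o in payload["orders"]]

# sqlite3 stands in for SQL Server here so the sketch is runnable as-is.
conn = sqlite3.connect("staging.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER, customer TEXT, total REAL)"
)
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```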

We run it heavily on a Windows virtual machine. We have also installed the open-source version on our desktops.

How has it helped my organization?

Lumada provides us with a single, end-to-end data management experience from ingestion to insights. This single data management experience is pretty good because then you don't have every analyst doing their own stuff. When you have one unique tool to do that, you can keep improving as well as have good practices and a solid process to do the projects.

What is most valuable?

It has many useful features and a variety of things you can do. It is also pretty open, since you can put in a Python script or JavaScript for everything. If the application doesn't have a native tool for something, you can build your own using scripts. You can build your own steps and jobs in the application. The liberty the application gives you has been pretty good.

Lumada enables us to create pipelines with minimal manual coding efforts, which is the most important thing. When creating a pipeline, you can see which steps are failing in the process. You can keep up the process and debug, if you have problems. So, it creates a good, visual pipeline that makes it easy to understand what you are doing during the entire process.

What needs improvement?

There is no straightforward explanation of the bugs and errors that happen in the software. I have to search heavily on the Internet, through YouTube videos and other forums, to understand what is happening. Hitachi's and Lumada's own sites don't have the best explanations of bugs, errors, and functions, so I have to rely on other sources. Usually, it is some guy in India or Russia who knows the answer.

A big problem after deploying something that we do in Lumada is with Git. You get a binary file to do a code review. So, if you need to do a review, you have to take pictures of the screen to show each step. That is the biggest bug if you are using Git.

After you create a data pipeline, if the platform could generate a JSON file, or something in another language, that describes it, reviewing what we have built would be much simpler. A simple flat text file could be even better, as long as it is generated by the platform itself so people can look at it and see what is happening. You shouldn't need to download the whole project into your own Pentaho; I would like to just look at the code and see if there is something wrong.

When I use the open-source version, it doesn't handle big data too well. Therefore, we have to use other kinds of technologies to manage that.

I would like it to be more accessible on Macs. Previously, I always used Linux, but some companies that I worked for used MacBooks. It would be good if I could use Pentaho there too, since currently I need to use other tools or create a virtual machine to run it. So, it would be pretty good if the solution had a friendly version for macOS or Linux-based systems, like Ubuntu.

For how long have I used the solution?

I have been using it for six years, but more heavily over the last two years.

How are customer service and support?

I don't bring issues to Hitachi, since Lumada is, in a way, open source.

Once, when I had a connection problem caused by the software, I found the issue discussed in forums on the Internet because it was a known bug.

Which solution did I use previously and why did I switch?

At my first company, we used just Lumada. At my second company, we used a lot of QlikView, SQL, Python, and Lumada. At my third company, we used Python and SQL much more. I used Lumada just once at that company. At my new company, I don't use it at all. I just use Azure Data Factory and SQL.

With Pentaho, we finally had solid data pipelines, which we didn't have before. Once the pipelines became very solid, the team that created them became very popular in the company.

How was the initial setup?

To set things up, we used a virtual machine, with a version we could download and unpack on the machine. You can practically copy and paste a Pentaho installation, because all you need is the newest version of Java. So, the setup was pretty smooth. It took an hour maximum to deploy.

What was our ROI?

Sometimes, it took a whole team about two weeks to get all the data to prepare and present it. After the optimization of the data, it took about one to two hours to do the whole process. Therefore, it has helped a lot when you talk about money, because it doesn't take a whole team to do it, just one person to do one project at a time and run it when you want to run it. So, it has helped a lot on that side.

The solution reduced our ETL development time by a lot because a whole project used to take about a month to get done previously. After having Lumada, it took just a week. For a big company in Brazil, it saves a team at least $10,000 a month.

Which other solutions did I evaluate?

I just use the ETL tool. For data visualization, we are using Power BI. For data storage, we use SQL Server, Azure, or Google BigQuery.

We are just using the open-source application for ETL. We have never looked into other tools of Hitachi because they are paid.

I know other companies that are using Alteryx, which has a friendlier user interface but fewer tools, and it is more difficult to utilize. My wife uses Alteryx and, having used Lumada, I find Alteryx is not as good, because Lumada has more solutions and is open-source. Alteryx does have more security and better support, though.

What other advice do I have?

For someone who wants simple solutions and isn't a programmer or knowledgeable about technology, open-source tools like this are close to perfect. In one week, you can get to grips with this solution and do your first project. In my opinion, it is the best tool for people starting out.

Lumada is a great tool. I would rate it as a straight seven out of 10. It gets the work done. The open-source version doesn't work well with big data sources, but there is a lot of flexibility and liberty to do everything you want and need. If the open-source version worked better with big data, I would give it a straight eight, since there is always room for improvement. Sometimes, when debugging, some errors can be pretty difficult. Above all, it is a tool for understanding everything that is going on when you are starting out in business intelligence and data engineering.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer1772286
Director of Software Engineering at a healthcare company with 10,001+ employees
Real User
Feb 3, 2022
Reports on the predictions our product makes, but it would be nice if analytics performed well on large volumes
Pros and Cons
  • "The way it has improved our product is by giving our users the ability to do ad hoc reports, which is very important to our users. We can do predictive analysis on trends coming in for contracts, which is what our product does. The product helps users decide which way to go based on the predictive analysis done by Pentaho. Pentaho is not doing predictions, but reporting on the predictions that our product is doing. This is a big part of our product."
  • "The performance could be improved. If they could have analytics perform well on large volumes, that would be a big deal for our products."

What is our primary use case?

We started using Pentaho for two purposes:

  1. As an ETL tool to bring data in. 
  2. As an analytics tool. 

As our solution progressed, we dropped the ETL piece of Pentaho. We didn't end up using it. What remains in our product today is the analytics tool.

We do a lot of simulations on our data with Pentaho reports. We use Pentaho's reporting capabilities to tell us how contracts need to be negotiated for optimal results by using the analytics tool within Pentaho.

How has it helped my organization?

This was an OEM solution for our product. The way it has improved our product is by giving our users the ability to do ad hoc reports, which is very important to our users. We can do predictive analysis on trends coming in for contracts, which is what our product does. The product helps users decide which way to go based on the predictive analysis done by Pentaho. Pentaho is not doing predictions, but reporting on the predictions that our product makes. This is a big part of our product.

What is most valuable?

There is an end-to-end flow, where a user can say, "I am looking at this field and want to slice and dice my data based on these parameters." That flexibility is provided by Pentaho. This minimal manual coding is important to us.

What needs improvement?

The performance could be improved. If they could have analytics perform well on large volumes, that would be a big deal for our products.  

For how long have I used the solution?

I have been using it for eight years.

What do I think about the stability of the solution?

We are on-prem. Once the product was installed and up and running, I haven't had issues with the product going down or not being responsive.

We have one technical lead who is responsible for making sure that we keep upgrading the solution so we are not on a version that is not supported anymore. In general, it is low maintenance.

What do I think about the scalability of the solution?

The only complaint that I have with Pentaho has been with scaling. As our data grew, we tested it with millions of records. When we started to implement it, we had clients that went from 80 million to 100 million records. I think scale did present a problem for those clients. I know that Pentaho talks about being able to manage big data, which is much more data than what we have. I don't know if it was our architecture versus the product's limitations, but we did have issues with scaling.

Our product doesn't deal with big data at large. There are probably 17 million records. With those 17 million records, it performs well when it has been internally cached within Pentaho. However, if you are loading the dataset or querying it for the first time, it does take a while. Once it has been cached in Pentaho, the subsequent queries are reasonably fast.

How are customer service and support?

We haven't had a lot of functional issues. We had performance issues, especially early on, as we were trying to spin up this product. The response time from the support group has been a three on a scale of one to five.

We had trouble with the performance and had their engineers come in. We shared our troubles and problems, then those engineers had brainstorming sessions. Their ability to solve problems was really good and I would rate that as four out of five.

A lot of the problems were with the performance and scale of data that we had. It could have been that we didn't have a lot of upfront clean architecture. With the brainstorming sessions, we tried giving two sets of reports to users: 

  1. One was more summary level, which was quick, and that is what 80% of our clients use. 
  2. For 20% of our clients, we provided detailed reports that do take a while. However, you are then not impacting performance for 80% of your clients. 

This was a good solution or compromise that we reached from both a business and technology perspective. 

Now, I feel like the product is doing well. It is almost like their team helped us with rearchitecting and building product expectations.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

Previously, we used something called QlikView, which is almost obsolete now. We had a lot of trouble with QlikView. Anytime processing was done, it would take a long time for the processed results to be loaded into QlikView's memory, so it would take a couple of hours before users could see results or reports. We didn't want that lag.

Pentaho offered an option not to have that lag. It did not have its own in-memory database, where everything had to be loaded. That was one of the big reasons why we wanted to switch away from QlikView, and Pentaho fit that need.

How was the initial setup?

I would say the deployment/implementation process was straightforward enough for both data ingestion and analytics.

When we started with the data ingestion, we went with something called Spoon. Then we realized, while it was a Pentaho product, Spoon was open source. We had integrated with the open source version of it, but later found that it didn't work for commercialization. 

For us to integrate Pentaho and get it working, it took a couple of months because we needed to figure out authentication with Pentaho. So, learning and deployment within our environment took a couple of months. This includes the actual implementation and figuring out how to do what we wanted to do.

Because this is a licensed product, the deployment for the client was a small part of the product's deployment. So, on an individual client basis, the deployment is easy and a small piece. 

It gives us the flexibility to deploy it in any environment, which is important to us.

If we went to the cloud version of Pentaho, that would be a big maintenance relief. We wouldn't have to worry about getting the latest version, installing it, and sending it out to our clients.

What about the implementation team?

For the deployment, we had people come in from Pentaho for a week or two. They were there with us through the process.

Which other solutions did I evaluate?

We looked at Tableau, Pentaho, and an IBM solution. In the absence of Pentaho, we would have gone with either Tableau or building our own custom solution. When we were figuring out which third-party tool to use, we did an analysis in which a bunch of other tools were compared. Ultimately, we went with Pentaho because it had a wide variety of features and functionalities within its reports. Though I wasn't involved, there was a cost analysis done, and Pentaho fared favorably in terms of cost.

For the product that we use Pentaho for, I think we're happy with their decision. There are a few other products in our product suite. Those products ended up using Tableau. I know that there have been discussions about considering Tableau over Pentaho in the future. 

What other advice do I have?

Engage Pentaho's architects early on, so you know what data architecture works best with the product. We built our database and structures, then had performance issues. However, it was too late when we brought in the Pentaho architects, because our data structure was out in the field with multiple clients. Therefore, I think engaging them early on in the data architecture process would be wise.

I am not very familiar with Hitachi's roadmap and what is coming up for them. I know that they are good with sending out newsletters and keeping their customers in the know, but unfortunately, I am unaware of their roadmap.

I feel like this product is doing well. There haven't been complaints and things are moving along. I would rate it as seven out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Senior Engineer at a comms service provider with 501-1,000 employees
Real User
Jan 3, 2022
Saves time and makes it easy for our mixed-skilled team to support the product, but more guidance and better error messages are required in the UI
Pros and Cons
  • "The graphical nature of the development interface is most useful because we've got people with quite mixed skills in the team. We've got some very junior, apprentice-level people, and we've got support analysts who don't have an IT background. It allows us to have quite complicated data flows and embed logic in them. Rather than having to troll through lines and lines of code and try and work out what it's doing, you get a visual representation, which makes it quite easy for people with mixed skills to support and maintain the product. That's one side of it."
  • "Although it is a low-code solution with a graphical interface, often the error messages that you get are of the type that a developer would be happy with. You get a big stack of red text and Java errors displayed on the screen, and less technical people can get intimidated by that. It can be a bit intimidating to get a wall of red error messages displayed. Other graphical tools that are focused at the power user level provide a much more user-friendly experience in dealing with your exceptions and guiding the user into where they've made the mistake."

What is our primary use case?

We're using it for data warehousing. Typically, we collect data from numerous source systems, structure it, and then make it available to drive business intelligence, dashboard reporting, and things like that. That's the main use of it. 

We also do a little bit of moving of data from one system to another, but the data doesn't go into the warehouse. For instance, we sync the data from one of our line of business systems into our support help desk system so that it has extra information there. So, we do a few point-to-point transfers, but mainly, it is for centralizing data for data warehousing.

We use it just as a data integration tool, and we haven't found any problems. When we have big data processing, we use Amazon Redshift; we use Pentaho to load the data into Redshift and then use that for big data processing. We use Tableau for our reporting platform. We've got quite a number of users who are experienced in it, so it is our chosen reporting platform. So, we use Pentaho for the data collection and data modeling aspect of things, such as developing facts and dimensions, but we then export that data to Redshift as a database platform, and we use Tableau as our reporting platform.

I am using version 8.3, which was the latest long-term support version when I last checked. Because this is something we use in production, and it is quite core to our operations, we've been advised to stick with the long-term support versions of the product.

It is in the cloud on AWS. It is running on an EC2 instance in AWS Cloud.

How has it helped my organization?

It enables us to create low-code pipelines without custom coding efforts. A lot of transformations are quite straightforward because there are a lot of built-in connectors, which is really good. It has got connectors to Salesforce, which makes it very easy for us to wire up a connection to Salesforce and scrape all of that data into another table. Those flows have absolutely no code in them. It has a Python integrator, and if you want to go into a coding environment, you've got your choice of writing in Java or Python.
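To give a flavour of what dropping into Java looks like, below is a minimal sketch of a User Defined Java Class step body. The email and domain field names are hypothetical, the output field still has to be declared in the step's Fields tab, and PDI itself generates the surrounding class and supplies the helpers used here.

```java
// Sketch of a User Defined Java Class step body. PDI provides getRow, putRow,
// createOutputRow, get, setOutputDone, and the data/first fields around this code.
public boolean processRow(StepMetaInterface smi, StepDataInterface sdi) throws KettleException {
    Object[] r = getRow();                  // fetch the next incoming row
    if (r == null) {                        // no more rows: signal we're done
        setOutputDone();
        return false;
    }

    // Make room for the extra output field declared in the step's Fields tab.
    Object[] outputRow = createOutputRow(r, data.outputRowMeta.size());

    // Hypothetical logic: derive a domain from an incoming "email" field.
    String email = get(Fields.In, "email").getString(r);
    String domain = (email != null && email.indexOf('@') >= 0)
        ? email.substring(email.indexOf('@') + 1)
        : null;
    get(Fields.Out, "domain").setValue(outputRow, domain);

    putRow(data.outputRowMeta, outputRow);  // pass the row downstream
    return true;                            // ask to be called for the next row
}
```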

The creation of low-code pipelines is quite important. We have around 200 external data sets that we query and pull data from on a daily basis. The low-code environment makes it easier for our support function to maintain because they can open up a transformation and very easily see what that transformation is doing, rather than having to trawl through reams and reams of code. ETLs written purely in code become very difficult to trace very quickly; you spend a lot of time trying to unpick them, and they never get commented as well as you'd expect. With a low-code environment, you have your transformation there, and it almost documents itself. So, it is much easier for somebody who didn't write the original transformation to pick it up later on.

We reuse various components. For instance, we might develop a transformation that does a lookup based on the domain name to match to a consumer record, and then we can repeat that bit of code in multiple transformations. 

We have a metadata-driven framework. Most of what we do is metadata-driven, which is quite important because that allows us to describe all of our data flows. For example, Table one moves to Table two, Table two moves to Table three, etc. Because we've got metadata that explains all of those steps, it helps people investigate where the data comes from and allows us to publish reports that show, "You've got this end metric here, and this is where the data that drives that metric came from." The variable substitution that Pentaho provides to enable metadata-driven frameworks is definitely a key feature.
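As a rough sketch of how that variable substitution can be driven, a transformation can reference parameters such as ${SOURCE_TABLE} in its steps (for example, a Table Input step reading "SELECT * FROM ${SOURCE_TABLE}" with variable replacement enabled), and a small driver can set the values at run time through the standard Kettle API. The file name, parameter names, and values below are hypothetical:

```java
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunParameterized {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();

        // Hypothetical template transformation whose steps reference
        // ${SOURCE_TABLE} and ${TARGET_TABLE} as named parameters.
        TransMeta meta = new TransMeta("load_table.ktr");
        meta.setParameterValue("SOURCE_TABLE", "crm_contacts");
        meta.setParameterValue("TARGET_TABLE", "stg_crm_contacts");

        Trans trans = new Trans(meta);
        trans.execute(null);        // start the transformation
        trans.waitUntilFinished();  // block until it completes

        if (trans.getErrors() > 0) {
            throw new IllegalStateException("Transformation finished with errors");
        }
    }
}
```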

The ability to automate data pipeline templates affects our productivity and costs. We run a lot of processes, and if it wasn't reliable, it would take a lot more effort; we would need a much bigger team to support the 200 integrations that we run every day. Because it is a low-code environment, we don't have to have support incidents escalated to third-line support to be investigated, which affects the cost. Very often, our support analysts or more junior members are able to look into an issue and fix it themselves without having to escalate it to a more senior developer.

The automation of data pipeline templates affects our ability to scale the onboarding of data because, after we've tried a few different approaches, new requirements fit into a standard approach. It gives us the ability to scale through code reuse, which also ties in with the metadata aspect of things. A lot of our intermediate stages of processing data are purely configured in metadata, so no custom coding is required to implement a transformation. It is really just writing a few lines of metadata to drive the process, and that gives us quite a big efficiency.
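To make "a few lines of metadata" concrete, here is a hypothetical sketch: a control table holds one row per data movement, and a driver loops over it, running the same reusable template transformation for each row. The table name, columns, and connection details are all assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class MetadataDrivenRunner {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();
        // Hypothetical control table: one row per table-to-table movement.
        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://meta-db/etl", "etl_user", "etl_password");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                 "SELECT source_table, target_table FROM etl_control WHERE enabled")) {
            while (rs.next()) {
                run(rs.getString("source_table"), rs.getString("target_table"));
            }
        }
    }

    // Runs one instance of the reusable template, as in the earlier sketch.
    static void run(String source, String target) throws Exception {
        TransMeta meta = new TransMeta("stage_to_model.ktr");
        meta.setParameterValue("SOURCE_TABLE", source);
        meta.setParameterValue("TARGET_TABLE", target);
        Trans trans = new Trans(meta);
        trans.execute(null);
        trans.waitUntilFinished();
        if (trans.getErrors() > 0) {
            throw new IllegalStateException("Load failed for " + source);
        }
    }
}
```

Onboarding a new data set then really is just inserting a row of metadata rather than writing a new transformation.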

It has certainly reduced our ETL development time. I've worked at other places that had a similar-sized team managing a system with far fewer integrations. We've certainly managed to scale Pentaho not just for the number of things we do but also for the type of things we do.

We do the obvious direct database connections, but there is a whole raft of different types of integrations that we've developed over time. We have REST APIs, and we download data from Excel files that are hosted in SharePoint. We collect data from S3 buckets in Amazon, and we collect data from Google Analytics and other Google services. We've not come across anything that we've not been able to do with Pentaho. It has proved to be a very flexible way of getting data from anywhere.

Our time savings are probably quite significant. By using some of the components that we've already got written, our developers are able to, for instance, put in a transformation from a staging area to its model data area in an hour or a couple of hours. If they were starting from a blank piece of paper, that would be several days' worth of work.

What is most valuable?

The graphical nature of the development interface is most useful because we've got people with quite mixed skills in the team. We've got some very junior, apprentice-level people, and we've got support analysts who don't have an IT background. It allows us to have quite complicated data flows and embed logic in them. Rather than having to trawl through lines and lines of code and try to work out what it's doing, you get a visual representation, which makes it quite easy for people with mixed skills to support and maintain the product. That's one side of it.

The other side is that it is quite a modular program. I've worked with other ETL tools, and it is quite difficult to get component reuse with them. With tools like SSIS, you can develop your packages for moving data from one place to another, but it is really difficult to reuse much of it, so you have to implement the same code again. Pentaho is quite amenable to reusable components or sections of code that you can use in different transformations, and that has helped us quite a lot.

One of the things that Pentaho does is provide the ability to expose a transformation as if it were a database connection (a data service); for instance, when you have a REST API that you want to be read by something like Tableau, which needs a JDBC connection. Pentaho was really helpful in getting that driver enabled for us to do some proof-of-concept work on that approach.
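From the client side, it looks roughly like the sketch below. The driver class and URL format are as described in the Pentaho Data Services documentation, and the host, port, credentials, and service name here are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataServiceClient {
    public static void main(String[] args) throws Exception {
        // Pentaho's thin JDBC driver makes a transformation look like a database.
        Class.forName("org.pentaho.di.trans.dataservice.jdbc.ThinDriver");

        String url = "jdbc:pdi://di-server.example.com:9080/kettle?webappname=pentaho-di";
        try (Connection c = DriverManager.getConnection(url, "user", "password");
             Statement s = c.createStatement();
             // "customer_api" is a hypothetical data service name published
             // from a transformation that reads a REST API.
             ResultSet rs = s.executeQuery("SELECT * FROM customer_api")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```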

What needs improvement?

Although it is a low-code solution with a graphical interface, the error messages that you get are often of the type that a developer would be happy with: a big stack of red text and Java errors displayed on the screen. Less technical people can be intimidated by that wall of red error messages. Other graphical tools aimed at the power-user level provide a much more user-friendly experience in dealing with exceptions, guiding the user to where they've made the mistake.

Sometimes, there are so many options in some of the components. Some guidance embedded in the interface about when to use certain options would be good, so that people know what a setting will do and when they should use it. It is quite light on that aspect.

For how long have I used the solution?

I have been using this solution since the beginning of 2016, so it has been about six years.

What do I think about the stability of the solution?

We haven't had any problems in particular that I can think of. It is quite a workhorse; it just sits there running reliably, and it has got a lot to do every day. We have occasional memory issues if some transformations haven't been written in the best way possible, and we obviously get our own bugs that we introduce into transformations, but generally, we don't have any problems with the product.

What do I think about the scalability of the solution?

It meets our purposes. It does have horizontal scaling capability, but it is not something that we have needed to use. We have lots of small-sized and medium-sized data sets; we don't have to deal with super large data sets. Where we do have such requirements, it works quite well because we can push some of that processing down onto our cloud provider. We've dealt with those cases by using S3, Athena, and Redshift; you can offload some of the big data processing to those platforms.

How are customer service and support?

I've contacted them a few times. In terms of Lumada's ability to quickly and effectively solve issues that we brought up, we get a very good response rate. They provide very prompt responses and are quite engaging. You don't have to wait long, and you can get into a dialogue with the support team with back and forth emails in just an hour or so. You don't have to wait a week for each response cycle, which is something I've seen with some of the other support functions. 

I would rate them an eight out of 10. We've got quite a complicated framework, so it is not possible for us to send the whole thing over for them to look into, but they certainly help with tweaks to server settings and memory configurations to try to get things going. We run a codebase that is quite big and complicated, so sometimes it can be difficult to produce something you can send over to show what the errors are. They won't log in and look at your actual environment; it has to be based on the log files, so it is a bit abstract. If something is occurring on just one very specific transformation of yours, it might be difficult for them to drill in and see why it is causing a problem on our system.

Which solution did I use previously and why did I switch?

I have a little bit of experience with AWS Glue. Its advantage is that it is tied natively into the AWS PySpark processing. Its disadvantage is that it writes some really difficult-to-maintain lines of code for all of its transformations, which might work fine if you have just a dozen or so transformations, but if you have a lot of transformations going on, it can be quite difficult to maintain.

We've also got quite a lot of experience working with SSIS, and I much prefer Pentaho to SSIS. SSIS ties you rigidly to the data flow structure that exists at design time, whereas Pentaho is very flexible. If, for instance, you wanted to move 15 columns to another table, in SSIS you'd have to configure the flow with those 15 columns, and if a 16th column appeared, it would break that flow. With Pentaho, without amending your ETL, you can just amend your end data set to accept the 16th column, and it will flow through. This, and the fact that the transformation isn't tied down at design time, makes it much more flexible than SSIS.

In terms of component reuse, other ETL tools are not nearly as good at being able to just pick up a transformation or a sub-transformation and drop it into your pipelines. You do tend to keep rewriting things again and again to get the same functionality.

What about the implementation team?

I was here during the initial setup, but I wasn't involved in it. We used an external company, and they do our upgrades, etc. The reason for that is that we tend to stick with just the long-term support versions of the product, so apart from service packs, we don't do upgrades very often. We never build up deep experience with upgrades, so it is more efficient for us to bring in the external company that we work with to do them.

What was our ROI?

It is always difficult to quantify a return on investment for data warehousing and business intelligence projects. It is a cost center rather than a profit center, but if you take as your starting point that this is something that needs to be done, you could pick up other tools to do it, and you wouldn't necessarily find that they are much cheaper in the long run. If you went for more of a coded approach, it might be cheaper in terms of licensing, but then you might have higher costs maintaining it.

What's my experience with pricing, setup cost, and licensing?

It does seem a bit expensive compared to serverless product offerings. Tools such as SQL Server Integration Services are "almost" free with a database engine. It is comparable to products like Alteryx, which is also very expensive.

It would be great if we could use our enterprise license to distribute the tool to analysts and people around the business in place of Tableau Prep, etc., but its UI is probably a bit too confusing for that level of user. So, it doesn't let us distribute the tool across the organization to non-technical users as widely as we would like.

What other advice do I have?

I would advise taking advantage of metadata to drive your transformations. You should take advantage of the very nice and easy way in which variable substitution works in a lot of components. If you use a metadata-driven framework in Pentaho, it will allow you to self-document your process flows. At some point, that always becomes a critical aspect of a project. Often, it doesn't crop up until a year or so later, but somebody always comes asking for proof or documentation of exactly what is happening: how data gets to where it is and how it drives a metric. If you start from the beginning with a metadata framework that self-documents those flows, you'll be 90% of the way to answering those questions when you need to.
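As a small illustration of that self-documenting payoff, if the metadata records every table-to-table hop, answering "where did this metric's data come from?" is just a matter of walking those links backwards. Everything in this sketch (table name, columns, connection) is hypothetical, and it assumes each target has a single source and the lineage is acyclic:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class LineageWalker {
    public static void main(String[] args) throws Exception {
        Map<String, String> sourceOf = new HashMap<>(); // target table -> source table
        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://meta-db/etl", "etl_user", "etl_password");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                 "SELECT source_table, target_table FROM etl_control")) {
            while (rs.next()) {
                sourceOf.put(rs.getString("target_table"), rs.getString("source_table"));
            }
        }

        // Walk backwards from the reporting table named on the command line,
        // printing the whole chain, e.g. "crm_contacts -> stg_crm_contacts -> dim_customer".
        String table = args[0];
        StringBuilder chain = new StringBuilder(table);
        for (String src = sourceOf.get(table); src != null; src = sourceOf.get(src)) {
            chain.insert(0, src + " -> ");
        }
        System.out.println(chain);
    }
}
```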

We are satisfied with our decision to purchase Hitachi's products, services, or solutions. In the low-code space, they're probably reasonably priced. With the serverless architectures out there, there is some competition, and you can do things differently using a serverless architecture, which would have a lower overall running cost. However, we run a great many transformations, and they can be maintained by a team of people who aren't Python or Java developers; even our apprentices can use this tool quite easily. That is its advantage.

I'm not too familiar with the overall roadmap for Hitachi Vantara. We're just using the Pentaho data integration products. We don't use the metadata injection aspects of Pentaho, mainly because we haven't had a need for them, but we know they're there.

I would rate it a seven out of 10. Its UI is a bit techy and more confusing than some of the other graphical ETL tools, and that's where improvements could be made.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Anton Abrarov - PeerSpot reviewer
Project Leader at a mining and metals company with 10,001+ employees
Real User
Jun 8, 2022
Speeds up data flow processes and has a user-friendly interface
Pros and Cons
  • "It has a really friendly user interface, which is its main feature. The process of automating or combining SQL code with some databases and doing the automation is great and really convenient."
  • "As far as I remember, not all connectors worked very well. They can add more connectors and more drivers to the process to integrate with more flows."

What is our primary use case?

The company where I was working previously used this product. We were using it for ETL process management; it was a kind of data flow automation.

In terms of deployment, we were using an on-premise model because we had sensitive data, and there were some restrictions related to information security.

How has it helped my organization?

Our data flow processes became faster with this solution.

What is most valuable?

It has a really friendly user interface, which is its main feature. The process of automating or combining SQL code with some databases is great and really convenient.

What needs improvement?

As far as I remember, not all connectors worked very well. They could add more connectors and more drivers to integrate with more flows.

The last time I saw this product, the onboarding instructions were not clear. Making the onboarding process clearer would take the product to the next level. It is possible that onboarding has already improved and I haven't seen it.

For how long have I used the solution?

I have used this solution for two or three years.

What do I think about the stability of the solution?

I would rate it an eight out of ten in terms of stability.

What do I think about the scalability of the solution?

We didn't have to scale too much. So, I can't evaluate it properly in terms of scalability.

In terms of its users, only our team was using it. There were approximately 20 users. It was not for the whole company.

How are customer service and support?

We didn't use customer support much. We were using open-source resources through Google search, just searching for text. There were some helpful forums where we were able to find the answers to our questions.

Which solution did I use previously and why did I switch?

I didn't use any other solution previously. This was the only one.

How was the initial setup?

I wasn't a part of its deployment. In terms of maintenance, as far as I know, it didn't require much maintenance.

What was our ROI?

We absolutely saw an ROI. It was hard to calculate, but we felt it in terms of the speed of our processes. After using this product, we could do some of the things much faster than before.

What's my experience with pricing, setup cost, and licensing?

I mostly used the open-source version. I didn't work with a license.

Which other solutions did I evaluate?

I did not evaluate other options.

What other advice do I have?

I would recommend using this product for data engineering and Extract, Transform, and Load (ETL) processes.

I would rate it an eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user