reviewer2137491 - PeerSpot reviewer
Associate Consultant at Capgemini
Real User
Apr 9, 2023
Helps schedule and monitor SAP ECC batch jobs and reduces workload
Pros and Cons
  • "It can connect to a number of third-party/legacy systems."
  • "The monitoring dashboard could have been more user-friendly so that in the monitoring dashboard itself we can see the total number of jobs created in the system and how many were currently active/scheduled/chained."

What is our primary use case?

We have connected this automation tool to our SAP ECC system. All the ECC batch jobs are scheduled via this tool.

We configured the alerting mechanism so that whenever a job fails or runs long, we get an immediate email notification, which helps with monitoring the jobs.

The best part of the tool is the submit frame and time window, which let us schedule jobs as per the customer's requirements.

As the tool supports a quality environment, we connected it to ECC QA; before making any changes to production, we can test new requirements in quality and then move them to production.

How has it helped my organization?

It helped to schedule and monitor the SAP ECC batch jobs.

It reduced the workload.

It can connect to a number of third-party/legacy systems. Once a job is scheduled, no manual intervention is required and the job runs without interruption.

As we support different countries in the project, we need to schedule jobs in different time zones. The tool covers almost all time zones, so we can schedule jobs per the respective regional or country time.
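The mechanics behind per-region scheduling can be sketched in a few lines. This is a minimal illustration using Python's `zoneinfo`, not ActiveBatch's actual scheduler; the job times and zone names are made up:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def next_run(run_at: time, tz_name: str, now_utc: datetime) -> datetime:
    """Return the next UTC instant at which a job scheduled for
    `run_at` local wall-clock time in `tz_name` should fire."""
    tz = ZoneInfo(tz_name)
    local_now = now_utc.astimezone(tz)
    candidate = local_now.replace(hour=run_at.hour, minute=run_at.minute,
                                  second=0, microsecond=0)
    if candidate <= local_now:
        candidate += timedelta(days=1)  # already passed today; run tomorrow
    return candidate.astimezone(ZoneInfo("UTC"))

# The same 06:00 wall-clock job in Berlin and Singapore fires at
# different UTC instants, which is what a regional schedule needs.
now = datetime(2023, 4, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
berlin = next_run(time(6, 0), "Europe/Berlin", now)
singapore = next_run(time(6, 0), "Asia/Singapore", now)
```

Doing the conversion per job, rather than storing everything in one server time zone, is what keeps schedules correct across daylight-saving transitions.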

What is most valuable?

The best feature is the alerting mechanism.

We configured email alerts for many scenarios, so whenever a job fails or runs long, we get an immediate email notification. There is no need to log in to the system and monitor manually; based on the email alert, we can inform the respective team and take action immediately. This helps avoid business impact.
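The decision logic behind such alerts is simple to sketch. This is a generic Python illustration with made-up job names, statuses, and thresholds, not ActiveBatch's alerting API; actual delivery would go through the scheduler's own email mechanism:

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class JobRun:
    name: str
    status: str             # "success", "failed", or "running" (illustrative)
    runtime: timedelta

def alert_message(run: JobRun, max_runtime: timedelta) -> Optional[str]:
    """Return an alert body for failed or long-running jobs, else None."""
    if run.status == "failed":
        return f"[ALERT] Job '{run.name}' failed after {run.runtime}."
    if run.status == "running" and run.runtime > max_runtime:
        return (f"[ALERT] Job '{run.name}' has been running for "
                f"{run.runtime}, exceeding the {max_runtime} limit.")
    return None
```

The point is that failure and long-running checks are evaluated per run, so the team only gets mail when something actually needs attention.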

We get the best customer support. Whenever we are testing, facing issues, or working on a new requirement, we can raise a ticket with the support team or schedule a call with them, and we get an immediate response and solutions.

What needs improvement?

The monitoring dashboard could be more user-friendly; it should show the total number of jobs created in the system and how many are currently active, scheduled, or chained.

The reports could have more pre-defined options, such as killed/failed jobs from the current month. That would let us pull reports quickly and help with the audit process.

Whenever a job fails in the system, an incident should be generated based on the job's priority and assigned to the respective job owner.
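That wished-for behavior could look something like the sketch below, with hypothetical job names, owners, and priority values. This is the reviewer's requested feature, not something the product is confirmed to do:

```python
# Hypothetical failed-job records; in practice these would come from
# the scheduler's run history.
failed = [
    {"job": "PAYROLL_EXPORT", "priority": 1, "owner": "finance-team"},
    {"job": "LOG_CLEANUP",    "priority": 3, "owner": "it-ops"},
    {"job": "INVOICE_LOAD",   "priority": 2, "owner": "finance-team"},
]

def build_incidents(failed_jobs):
    """Turn failed jobs into an incident queue ordered by priority
    (1 = highest) and tagged with the responsible owner."""
    queue = sorted(failed_jobs, key=lambda j: j["priority"])
    return [{"incident": f"Job {j['job']} failed",
             "assignee": j["owner"],
             "priority": j["priority"]} for j in queue]

incidents = build_incidents(failed)
```

Sorting before assignment means the highest-priority failure always lands at the top of each owner's queue.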

Buyer's Guide
ActiveBatch by Redwood
March 2026
Learn what your peers think about ActiveBatch by Redwood. Get advice and tips from experienced pros sharing their opinions. Updated: March 2026.
885,264 professionals have used our research since 2012.

For how long have I used the solution?

I've used the solution for one to two years.

Which solution did I use previously and why did I switch?

We did not use a different solution. 

What's my experience with pricing, setup cost, and licensing?

The solution is easy to set up.

It offers very good value for money.

The license renewal activity is easy; the support team provides the query we need to run.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Sr Technical Engineer at Compeer Financial
Real User
Dec 15, 2020
We can automate just about anything
Pros and Cons
  • "ActiveBatch's Self-Service Portal allows our business units to run and monitor their own workloads. They can simply run and review the logs, but they can't modify them. It increases their productivity because they are able to take care of things on their own. It saves us time from having to rerun the scripts, because the business units can just go ahead and log in and and rerun it themselves."
  • "We see ActiveBatch as the Center of Excellence for all things related to automation for our business."
  • "They have some crucial design flaws within the console that still need to be worked out because it is not working exactly how we hoped to see it, e.g., just some minor things where when you hit the save button, then all of a sudden all your job's library items collapse. Then, in order to continue on with your testing, you have to open those back up. I have taken that to them, and they are like, "Yep. We know about it. We know we have some enhancements that need to be taken care of. We have more developers now." They are working towards taking the minor things that annoy us, resolving them, and getting them fixed."

What is our primary use case?

It does a little bit of everything. We have everything from console apps that our developers create to custom jobs built directly in ActiveBatch, which go through the process of moving data off of cloud servers, like SFTP, onto our on-premise servers so we can ingest them into other workflows, console apps, or whatever the business needs.

How has it helped my organization?

We use it company-wide. With us being a financial organization, we rely on a bunch of data from some of our parent companies that process transactions for us. We are able to bring all that data into our system, no matter what department it is from, e.g., we have things from the IT department that we want to do maintenance on, such as clearing out the logs in IAS on the Exchange Server, to being able to move millions of dollars with automation.

If there is a native tool for it, then we try to use it. We have purchased the SharePoint, VMware, and ServiceNow modules. Wherever we find that we can't connect because the native APIs aren't there, we have been using PowerShell to strip those rows out into an array of variables, which has worked pretty well. So far, we have not found a spot where we can't hook in to have it do the tasks that we are asking it to do.

We have only really tapped into the SharePoint native integration because we haven't gotten to the depths of using ServiceNow and some of the other integrations. However, being able to use the native plugins has been very helpful. It saves us from having to write a PowerShell script to do the functionality we are looking for. We got plenty of practice writing PowerShell because, under the old process, the old tool just wouldn't do what we asked of it. We are finding that a lot of processes within ActiveBatch are now replacing those PowerShell scripts because ActiveBatch can just do it. We don't have to teach it how.

We can do things within ActiveBatch without having to teach it everything. That is the biggest thing we've been learning: it's easy to use, and its workflows work a lot better. The other day, we ran into a problem where Citrix ShareFile, one of our SFTP locations, kept disconnecting from the SFTP server; it was all just a timeout. ActiveBatch includes a process where we can troubleshoot the connection failures and have it self-heal enough to get the data off of the SFTP server. Discovering ActiveBatch's self-healing functionality has been a lifesaver for us.
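The retry-with-backoff pattern behind that kind of self-healing can be sketched generically. In the Python sketch below, `flaky_download` merely simulates a transfer that times out twice before succeeding; it stands in for a real SFTP call and is not ActiveBatch's implementation:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(op: Callable[[], T], attempts: int = 3,
                 delay: float = 0.0, backoff: float = 2.0) -> T:
    """Retry a flaky operation (e.g. an SFTP download) with exponential
    backoff between attempts; re-raise if every attempt fails."""
    last_exc = None
    for i in range(attempts):
        try:
            return op()
        except ConnectionError as exc:
            last_exc = exc
            time.sleep(delay * (backoff ** i))
    raise last_exc

# Simulate a transfer that times out twice, then succeeds on attempt 3.
calls = {"n": 0}
def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("SFTP timed out")
    return "payload"

result = with_retries(flaky_download, attempts=4)
```

With this pattern, a 15-minute network blip at 2:00 AM becomes a few silent retries instead of a failed job waiting for someone to notice it in the morning.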

We have so many different processes out there with so many different schedules. My boss looked at it one day and noticed there was somewhere between 1,000 and 2,000 processes a day. The solution gives us that single pane of glass to see everything under one spot because we have four execution agents constantly running, so there are processes happening at all times of the day and night.

We actively monitor all our ActiveBatch processes using SolarWinds Orion. If a process doesn't run, or a service is not running on one particular execution agent, Orion will alert us. We haven't set up anything too major within ActiveBatch itself to figure out what is going on. We have HA across everything: four execution agents and two job schedulers. With all that in place, it fails over to the other location if there is a problem at one of the sites.

What is most valuable?

The most valuable feature is being able to ingest PowerShell scripting output into variables that we can then use in loops. Our first rendition was pulling some Active Directory computers using a PowerShell script and the Active Directory PowerShell modules, then dumping that into a SharePoint list, because we keep an inventory of all our servers. Working out how to get something out of PowerShell into an array, and how to process that into something else, is what made it useful down the road.
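The general pattern of turning a script's line-oriented output into a list a downstream step can loop over looks roughly like this Python sketch. The host names and columns are invented for illustration; the review's actual pipeline uses PowerShell and a SharePoint list:

```python
# Output as it might come back from an external inventory command
# (hypothetical hosts, purely for illustration).
raw_output = """\
APP01  Windows Server 2019  10.0.1.21
APP02  Windows Server 2019  10.0.1.22
DB01   Windows Server 2016  10.0.2.11
"""

def parse_hosts(text):
    """Split line-oriented tool output into a list of dicts that a
    downstream step (e.g. a SharePoint list update) can loop over."""
    rows = []
    for line in text.strip().splitlines():
        name, *os_parts, ip = line.split()
        rows.append({"name": name, "os": " ".join(os_parts), "ip": ip})
    return rows

servers = parse_hosts(raw_output)
for s in servers:
    pass  # per-item action goes here, e.g. upsert into the inventory list
```

Once the output is structured records rather than raw text, each record can feed a loop step with per-item variables, which is the essence of what the reviewer describes.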

There are some things that ActiveBatch can't do natively, which is no fault to them. It's just the fact that we're trying to do things that just don't exist in ActiveBatch. With us being proficient in PowerShell scripting, we were able to extend the ActiveBatch environment to be able to say, "We'll run this PowerShell script and get the array that we're looking for, but then take that and do something native within ActiveBatch that can ultimately meet our goals."

The ease of use has been pretty good. I have been able to create workflows and utilize different modules within the job library, which has worked out really well. 

ActiveBatch's ability to automate predictable, repeatable processes is good. It does that very nicely. A lot of what we do is we pull files down from SFTP servers and put them onto our local file servers. Based on that, we are able to run a console app that developers have written, which is a lot more complicated, for doing various tasks. Our console apps are easy to set up because we have templates already drawn up. So, if we just right click into our task folder, we can quickly create an item in there that we can start up for doing an automation feature. Just being able to use PowerShell to drop variables into the ActiveBatch process has worked really well now that we understand it.

What needs improvement?

I know that there are some improvements that I have brought back to the development team that they want to work on. The graphical interface has some hiccups that we have been noticing on our side, and it seems a little bit bloated. 

While the console app works well, they have some crucial design flaws within the console that still need to be worked out because it is not working exactly how we hoped to see it, e.g., just some minor things where when you hit the save button, then all of a sudden all your job's library items collapse. Then, in order to continue on with your testing, you have to open those back up. I have taken that to them, and they are like, "Yep. We know about it. We know we have some enhancements that need to be taken care of. We have more developers now." They are working towards taking the minor things that annoy us, resolving them, and getting them fixed.

For how long have I used the solution?

We did a proof of concept back in April.

We are in the process of migrating all our old processes over to ActiveBatch. The solution is in production, and we do have workloads on it.

What do I think about the stability of the solution?

It is pretty stable. Now that we have worked through the details and ensured that we can do a failover to let the process do what it needs to do, we haven't seen any problems with it.

We are about 90 percent done migrating our processes.

What do I think about the scalability of the solution?

Right now, we have four execution agents, and they are sitting pretty idle for the most part. If we find that we're starting to see taxed resources on our execution agents, then we have the capability of spinning up more. So, we can run hundreds of servers and automation, if we wanted to.

There are only three of us who have been working with ActiveBatch, which is a good fit. We have one admin who is a developer first, then admin second. Then, there are two of us, who are server people first and developers second. All three of us manage all the different job libraries out there.

In the entire organization, there are about 1,300 of us using the different processes. A lot of people who would be more hands-on are the IT department, mainly because we are directly involved with all the different console apps. We have actually got a significant number of console apps, just because SCORCH couldn't do some of the things that ActiveBatch can do, so our developer teams went in and created the console app. At this point, all that ActiveBatch really needed to do was to be able to run an executable and provide an exit code on it, then let us know if it fails. There are some other business units who are involved a bit more along the way due to the movement of money, for example.

It is heavily used, at least in terms of what is out there. There is a lot of interest in adoption of using it in the future along with a lot of processes that people are really pushing to get put into ActiveBatch. They still have the mentality that a lot of it needs to be done as a console app. However, with us just ending the migration phase of things, we are trying to just get everything moved over so we can shut down the servers. Then, the next step in the future, probably 2021, we'll end up focusing on what ActiveBatch can do without us having to write a console app. 75 percent of the time, we could have ActiveBatch do it natively. There is just a matter of getting a lot of the IT developers to feel comfortable with adopting it as a platform.

How are customer service and technical support?

I am working with them on their tech support. We have a customer advocate with whom we have been working. She has been awesome. We have had some issues where tech support will suggest one thing, then we are sitting there scratching our heads, going, "Do we really need to go that complicated on a solution?" Then, we reach out to our customer advocate, who comes back, saying, "No, this is how you really need to do it. I'm going to take this ticket and go train that tech support person. So, in the future, you don't get the answer you did." Therefore, their tech support is a bit rough around the edges, but I foresee in the next six months to a year, they will be on their game and able to provide exactly the answers within the timeframe that we expect.

Which solution did I use previously and why did I switch?

We see ActiveBatch as the Center of Excellence for all things related to automation for our business. It is the best solution that we have had compared to what we were running before, which was Microsoft System Center Orchestrator (SCORCH). We don't want to have a whole bunch of different solutions out there. Being able to have one solution that can do all our automation is the best way to do it.

We switched over because of the intelligence. We were right in the middle of trying to decide whether we were going to upgrade SCORCH to the latest version or if it was time for us to go a different path. As we started going down through the different requirements that we needed SCORCH to do, we decided that it was time for us to go in a different direction. SCORCH had to be taught everything you wanted it to do, whereas there are a lot of processes that ActiveBatch will just go ahead and handle.

The performance is about the same between the two solutions in terms of doing what they are supposed to do. Where we really have the advantage is the fact that we don't have to reinvent the wheel, e.g., triggers within ActiveBatch are native and can be set up pretty quickly and easily. With SCORCH, we struggled to get a schedule set up for that trigger or to rely on constraints. For example, if a file doesn't exist, then you really can't do anything. In SCORCH, we had to teach it that if you don't see a file, then hold on a second because we have to wait. ActiveBatch just says, "Oh, okay. I know how to do that."

In certain cases, ActiveBatch has resulted in an improvement in workflow completion times, because of the error retries. We can take care of them by telling ActiveBatch that if you have a problem, go ahead, try it again, and modify this. If the job runs at two o'clock in the morning and it failed with SCORCH, we always had to go back, figure out what happened, and how to get it run again. It might have been something as stupid as no network connection, because one of our upstream providers had an outage. Whereas, at least with ActiveBatch, we have been able to build in that self-healing or error detection. Once it sees the connection, it can go ahead and just correct the problem. For example, the Internet might go down from 2:00 AM to 2:15 AM, then by 2:30 AM, it's all back up and running. ActiveBatch can go ahead and finish the task. Where with SCORCH, we were finding that it would fail. Then, at seven o'clock in the morning, we got to troubleshoot any issues that might have come up. 

A lot of times, troubleshooting did not take very long; it depended on the process. An SFTP download might feed several other steps, so a failure at 2:00 AM could delay five downstream processes scheduled for 3:00 AM. Now, if the Internet is out between 2:00 AM and 2:15 AM, ActiveBatch heals that first process before the second one runs at 3:00 AM, and we don't have to do any added troubleshooting because step one failed and step two failed after it, with no one able to look at it until we get up that day.

How was the initial setup?

The initial setup was straightforward.

It took two to three hours to deploy, by the time we had all the intricacies done that we wanted.

We knew that we wanted it to be highly available in two data centers for DR purposes, because some of these processes move millions of dollars between accounts (in various pieces for wire transfers). HA was the big thing we made sure our strategy was built around.

The only other strategy was the fact that we have multiple environments that we go through to test our solution out first. When we are done, we export/promote it up to the production environment.

What about the implementation team?

The good part was that we really didn't have to do the install because we ended up getting a proof of concept setup with one of their engineers. So, we didn't have to do the initial setup ourselves, but we did build two other environments: one in our test environment and one in our development environment. Based on the fact that we walked through it the first time with the proof of concept, I was able to go back and reproduce every step that they walked us through on day one to build out the test and dev environments.

What was our ROI?

I have absolutely seen ROI. Coming from the admin point of view, it has streamlined the process of being able to just implement something instead of having to teach the software how to do its job. From our point, I know that I have implemented a couple of different processes that were not a migration piece, and it's been fairly easy for us to deploy because we know what the business unit wants to do with it. For us to implement, it takes us about 20 minutes to get it perfected on my side, then I can have developers run with it, test it, and figure out what their code was doing to make it happen. So, the biggest thing is that it is easy to use.

I know that there are enough processes out there that it's worth a gold mine. We can automate just about anything that we would ever want to. If we wanted the lights to turn on at a certain time, we could go ahead and turn the lights on at a certain time, and it would just happen.

ActiveBatch's Self-Service Portal allows our business units to run and monitor their own workloads. They can simply run and review the logs, but they can't modify them. It increases their productivity because they are able to take care of things on their own. It saves us time from having to rerun the scripts, because the business units can just go ahead and log in, then rerun it themselves. 

This solution improves our job success rate percentage. The biggest thing is having built-in capabilities of error detection, retries, and the ability to self-heal.

ActiveBatch has saved us man-hours. We don't have to rerun some of these scripts on behalf of the business unit. Or, if there is a script that fails, it can go ahead and self-heal, fixing itself. That is all unaccounted for troubleshooting time while helping our business units. 

What's my experience with pricing, setup cost, and licensing?

The pricing was fair. 

There are additional costs for the plugins. We have the standard licensing fees for different pieces, then we have the plugins which were add-ons. However, we expected that.

Which other solutions did I evaluate?

We had a consultant come in and try to share with us all the different tools. However, there isn't a lot of competition out there for automation capabilities.

A major component was that the vendor is thinking five years ahead, looking to future-proof our business. When we were making our decision, we were ready to either upgrade SCORCH or go a different path. We wanted to be connected with an organization that had a long-term plan. We didn't want to revisit this one to three years down the road.

What other advice do I have?

We have been able to learn it pretty quickly. We were kind of thrown right in after we got the proof of concept up and going. We had a couple of use cases drawn up and implemented, and they showed us how to do it. Our boss ended up buying the software, and said, "Ready, set, go. We're going to start migrating all these different processes over." We really didn't get time to learn it. Based on what we knew about our previous application that we were using for automation, we were able to step right in and do the best we could. We have been doing weekly, one- to two-hour sessions where three of us get together, just understand the solution, and try to work through all the details. We have been able to learn it pretty quickly without having too much training or knowledge.

We have gone through and given the business units a demo of what the possibilities are for sharing knowledge and ideas. At the end of the day, there is a team of three of us who are actually implementing all the processes so we keep a kind of standard. However, to give a business unit an idea of what the functionality is and how we could best utilize it, we at least give them the 30,000 foot view of what ActiveBatch could do, then we build it.

We mainly use it for console apps, but we haven't explored them in real depth. I know that we could get even deeper. At some point down the road, a lot of the console apps that our developer teams create will more than likely become native ActiveBatch processes which we will no longer need the console apps to run.

For the admins, the biggest lesson learned would be to spend those first 30 days going through the Academy, the online training they have on their website. The biggest struggle we had was that we were trying to do this migration without knowing all the different features of the software. We would try to implement something (by best practices, because we wanted to get it right the first time), but there were features we were discovering along the way that we had no idea about until we suddenly needed them. Then we would look back and say, "That last procedure we just implemented would have gone a lot better if we had known about that feature at the time."

If we had taken the first 30 to 60 days, or even a week-long crash course, in ActiveBatch development to get the highlights of everything that the software can do, that would have helped us immensely, just to make sure that we knew what was going on and how it worked. We probably would have implemented some of our migrations a little differently than we have them today. So, we will have to circle back, revisit some of those processes, and reinvent them.

Take that time and learn the solution. Make sure you understand the software, at least at a higher level, maybe not the 30,000 foot view, but maybe the 1,000 foot view and get through the Academy first. Once you get through the Academy, then you can go ahead and start implementing the job libraries and how you want it to lay out and be implemented. Even after nine months of working with the software, we're still discovering features that we wish we would have known nine months ago coming into the migration.

I would probably rate the software as a nine and a half or 10. I would rate the tech support as probably a six, but they are improving immensely. If I had to give it an overall score, I would go with an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
UI Developer at Gupshup
Consultant
Apr 24, 2023
Good monitoring with a centralized dashboard and helpful support
Pros and Cons
  • "We are able to integrate it into multiple third-party tools like email, backup, tracking systems, SharePoint, Slack alerts, etc."
  • "The help center and documentation are not that helpful."

What is our primary use case?

Mainly, we have used ActiveBatch for automating deployments to different environments like production, staging, and QA. Earlier, we used different software for each environment, which consumed a lot of time; with ActiveBatch, we can manage everything under a single piece of software. The monitoring and alerting features are a great help, giving us complete insights and hassle-free work.

With the help of ActiveBatch, we have built automation ranging from file transfers to pipelining scripts with more test cases, which has helped our company grow many-fold with the same resources.

How has it helped my organization?

Earlier, we had around four to five different tools to manage our automation, all of which were replaced by ActiveBatch. It is great. Even the resources required to manage those tools were reduced to a great extent; now, with only two employees, we manage end-to-end automation.

Our team mainly automates the entire application, which used to take around 20 minutes to complete; with ActiveBatch, it is done in less than five minutes. We were able to finish before our deadlines, and even our clients are happy with the results we produced.

What is most valuable?

Almost all the features are great. That said, if we had to pick the best, it would be the monitoring feature, which gives complete insights in a single dashboard. It helps us detect immediately if something goes wrong instead of waiting for someone to report it.

The ROI of the application is more than what we used to spend for the entire year, and its reasonable pricing has helped us use it to the maximum.

We are able to integrate it into multiple third-party tools like email, backup, tracking systems, SharePoint, Slack alerts, etc.

What needs improvement?

The help center and documentation are not that helpful. If we had some more user-friendly explanations and more video tutorials about how to set up and debug items, that would be ideal. 

The preset job step types make designing easy, while the job steps that allow scripts and code to be run provide a wide range of additional functionality. This could be made better with more example scripts and pre-coded samples.

If a few AI tools could be integrated with the product, it would improve setup time and the debugging of issues.

For how long have I used the solution?

I have used the solution in a previous company for more than a year. In my current company, I've been working with it for the past six months.

What do I think about the stability of the solution?

The product is very mature. There are a few bugs; however, none of them are roadblocks, and they can be resolved with workarounds.

What do I think about the scalability of the solution?

There are no scalability issues. It can easily be used in a company with more than 1,000 employees.

How are customer service and support?

They have very good customer service. We had an issue while setting up and we connected with their support team. They were able to help us and fix it on the same day. Their response time has been great.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I previously used Selenium, however, it's all bits and pieces, so we had to switch to ActiveBatch automation.

How was the initial setup?

It was straightforward to set up. Only in the end, when we were importing files, did we feel a little more documentation would have been required.

What about the implementation team?

We had an in-house team for the implementation. I would rate them eight out of ten.

What was our ROI?

It has helped to achieve a 20% to 30% net revenue increase in the last quarter.

What's my experience with pricing, setup cost, and licensing?

Setup is easy and can be done within one or two days. The pricing is reasonable when compared to competitors. There is no need to worry about licensing as it's taken care of when you choose the plan.

Which other solutions did I evaluate?

Since I have worked with ActiveBatch in my previous organization, it was my go-to option. I did not evaluate others. 

What other advice do I have?

Overall, it's the best product that fits perfectly to most of our use cases. That said, it can be made a little more budget-friendly.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Keerthi R - PeerSpot reviewer
DevOps Engineer at HTC Global Services (INDIA) Private
Real User
Apr 10, 2023
Good workflow management, service management, and proactive workflows
Pros and Cons
  • "Since I started using this product, I have been able to easily track everything as it mainly monitors, alerts, and looks after all the services - even across platform scheduling - which has helped me immensely."
  • "Except for the GUI, everything looks good."

What is our primary use case?

The primary use case is monitoring the servers, along with alerts and logs. Earlier, I had to do a lot of this work manually, and it was very time-consuming. Since ActiveBatch Workload Automation was implemented, everything is smooth.

Also, we use it for job scheduling and server maintenance. This has been good. More than anything, when balancing workloads across multiple platforms, this solution saves me from switching between platforms to keep an eye on them, as the tool automatically takes care of it.

Apart from this, the integrations for APIs have been very helpful as well.

How has it helped my organization?

Implementing this solution has been a real improvement in the work we do. This tool helps reduce the manual workload, and the operational effort required has been reduced as well. Now, the focus is more on development and deployment work.

Also, since this is a multi-platform scheduling tool, jobs across platforms have been easy to handle. This has reduced a lot of micro-managing of these apps, and the amount of manual work is also reduced.

Plus, since the scalability is also automated, the team has benefited as it has grown.

What is most valuable?

I use this product for multi-purpose functions. Some of the best features are:

  • Workload processing
  • Scalability
  • Intelligent automation
  • Administration console
  • Workflow management
  • Service management
  • Proactive workflows
  • Error alerts
  • Job scheduling
  • API integrations
  • Multi-platform scheduling

Since I started using this product, I have been able to easily track everything as it mainly monitors, alerts, and looks after all the services - even across platform scheduling - which has helped me immensely.

What needs improvement?

The only issue I have is the price. It is a bit high compared to the other similar tools, yet the use case has been brilliant compared to others.

An additional feature would be easy downloading of data; I'd like to see that added to the tool.

Except for the GUI, everything looks good.

For how long have I used the solution?

I have been using this product for around nine months.

What do I think about the stability of the solution?

The solution is very stable and dependable.

What do I think about the scalability of the solution?

This is a good product. Since the scalability is automated, we do not have to wait on the alerts and manually increase the size.

How are customer service and support?

Technical support offers great service.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I previously used AWS and ServiceNow; I switched because this solution has made my time management far more efficient.

How was the initial setup?

The initial setup was not much of an issue.

What about the implementation team?

We did the implementation via an in-house team.

What was our ROI?

This solution is really worth the money.

What's my experience with pricing, setup cost, and licensing?

It is worth the money; go for it.

What other advice do I have?

I would ask the company to try to reduce the cost and to provide it as a standalone tool rather than a web interface.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
DBA Individual Contributor at Aristeia Capital
Real User
Oct 17, 2022
Good scheduling tool that has less downtime, even when managing many complex scheduling workflows
Pros and Cons
  • "I found ActiveBatch Workload Automation to be a very good scheduling tool. What I like best about it is that it has very little downtime when managing many complex scheduling workflows, so I'm very impressed with ActiveBatch Workload Automation."
  • "An area for improvement in ActiveBatch Workload Automation is its interface or GUI. It could be a little better. There isn't any additional feature I'd like to see in the tool; except for the GUI, everything looks good."

What is our primary use case?

We use ActiveBatch Workload Automation primarily for managing work schedules.

How has it helped my organization?

ActiveBatch Workload Automation improved the organization I worked in because it's able to manage complex workflow automation even with a lot of cross-dependencies and hundreds of processes running. ActiveBatch Workload Automation is a very good tool in the Windows environment.

What is most valuable?

I found ActiveBatch Workload Automation to be a very good scheduling tool. What I like best about it is that it has very little downtime when managing many complex scheduling workflows, so I'm very impressed with ActiveBatch Workload Automation.

What needs improvement?

An area for improvement in ActiveBatch Workload Automation is its interface or GUI. It could be a little better.

There isn't any additional feature I'd like to see in the tool; except for the GUI, everything looks good.

For how long have I used the solution?

I've been using ActiveBatch Workload Automation since 2009.

What do I think about the stability of the solution?

ActiveBatch Workload Automation has very good stability.

What do I think about the scalability of the solution?

ActiveBatch Workload Automation is a scalable solution.

How are customer service and support?

The technical support for ActiveBatch Workload Automation is very good.

Which solution did I use previously and why did I switch?

We didn't use a different solution before using ActiveBatch Workload Automation.

How was the initial setup?

The initial setup for ActiveBatch Workload Automation was straightforward.

What about the implementation team?

Our deployment for ActiveBatch Workload Automation was done in-house.

What was our ROI?

I've seen ROI from ActiveBatch Workload Automation. It's a very good tool.

What's my experience with pricing, setup cost, and licensing?

I don't have information on the licensing costs of ActiveBatch Workload Automation because a different team handles that.

Which other solutions did I evaluate?

We evaluated other solutions, but we went with ActiveBatch Workload Automation because it suits our environment.

What other advice do I have?

I'm using version 12 of ActiveBatch Workload Automation.

Ten to fifteen people use ActiveBatch Workload Automation within the company. Between three to four people take care of the deployment and maintenance of the solution. Right now, there isn't any plan to increase the usage of ActiveBatch Workload Automation.

My advice to anyone looking to implement ActiveBatch Workload Automation is that it's a good tool for smaller requirements, for example, a few hundred scheduling workflows. At that scale, it's a good tool with good stability.

I'm rating ActiveBatch Workload Automation as eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Senior System Analyst at a insurance company with 5,001-10,000 employees
Real User
May 5, 2022
Native API calls are very good and very easy, enabling us to tie in to a large range of solutions, including Tableau and ServiceNow
Pros and Cons
  • "The most valuable feature is its stability. We've only had very minor issues and generally they have happened because someone has applied a patch on a Windows operating system and it has caused some grief. We've actually been able to resolve those issues quite quickly with ActiveBatch. In all the time that I've had use of ActiveBatch, it hasn't failed completely once. Uptime is almost 100 percent."
  • "Overall, it has helped to improve workflow completion times by 70 to 80 percent, easily."
  • "A nice thing to have would be the ability to comfortably pass variables from one job to another. That was one of the things that I found difficult."
  • "A nice thing to have would be the ability to comfortably pass variables from one job to another."

What is our primary use case?

We have roughly 8,000 jobs that run every day and they manage anything from SaaS to Python to PowerShell to batch, Cognos, and Tableau. We run a lot of plans that involve a lot of constraints requiring them to look at other jobs that have to run before they do. Some of these plans are fairly complicated and others are reasonably simple.

We also pull information from SharePoint and load that data into Greenplum, which is our main database. SharePoint provides the CSV file and we then move it across to Linux, which is where our main agent is that actually loads into the Greenplum environment.

Source systems acquire data that goes into Greenplum. There are a number of materialized views that get populated, and that populating is done through ActiveBatch. ActiveBatch then triggers the Tableau refresh so that the reports that pull from those tables in Greenplum are updated. That means from just a bit after source acquisition, through to the Tableau end report, ActiveBatch is quite involved in that process of moving data.

We have 19 agents if you include the Linux environment, and 23 if you count the dev environments. It's huge.

It's on-prem. We manage the agents and the scheduler on a combination of Windows and Linux.

How has it helped my organization?

We have some critical processes in ActiveBatch that go to finance and to the auditors in our organization. Those processes are highly critical because that allows us to trade. If those reports don't get to them, we get penalized by the government or by APRA or by some financial institutions. ActiveBatch, in this particular case, is absolutely critical for getting those reports out.

We have SLAs requiring us to get reports out by a certain time of day, or by a certain day of the month by a certain time, and we're judged on whether those reports go out. ActiveBatch, being as stable as it is, is only impacted by external factors like network and database performance. Otherwise, we are quite comfortable with the way ActiveBatch is able to handle these jobs without our having to look at them.

Because the connections between ActiveBatch and other tools are automated, it gives us more time to do other things, and more interesting things. If something goes wrong, we can go back and have a look in the logs that are produced and that explain what's going on, and we can then repair it. It's an enabler, and it provides us with more time to get on with other jobs. It's something that's critical and it runs by itself and we're really happy it does that. We have that time available because we're not actually manually babysitting processes.

It provides a central automation hub for scheduling and monitoring, bringing everything together under a single pane of glass, absolutely. There is finance, sales, marketing. Pretty much every department has a job that we deal with. It's quite heavily integrated into our whole stack. As an insurance company, our major events department, for example, is critical because every time there's a storm or a hail event or a cyclone somewhere, those reports must get out in a timely manner. I can't think of any department that isn't impacted by ActiveBatch, running some report for them.

The single pane of glass helps the DataOps team manage all of the processes that are supported by ActiveBatch as the main scheduling tool. We've created a dashboard which pulls information from ActiveBatch, information that we can share with the organization. They can look at jobs and the schedules and, if necessary, run their own jobs from that point. It's like the lungs of our company.

Overall, it has helped to improve workflow completion times by 70 to 80 percent, easily. Once you've built a job, it just runs and no one has to concern themselves with it doing what it's doing. They will get the notification or the file or the email that says it's processed and they move on with their day.

In addition, we had a guy who was spending seven hours in a week to extract, compile, and then export information into a CSV file, and then another few hours to get it transferred to another department. We were able to build a PowerShell script, with a query that could easily be updated, that was automated through ActiveBatch. It takes 10 minutes to run. What that guy was doing in hours, we are now doing within minutes.

What is most valuable?

One of the valuable features is the ability to tie ActiveBatch into other applications using API calls. The native integrations and REST API adapter for orchestrating the entire tech stack are very good and user friendly. We have a product called ServiceNow, which is a call tracking system. If a problem occurs, ActiveBatch will send an API call into ServiceNow, and it will raise a ticket to say that there's a problem. That gives us an auditing process. We're also using API calls for Tableau and we're also using some API calls for SharePoint. We tie ActiveBatch into a lot of different applications.
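As a sketch of what such an integration can look like: the snippet below raises a ServiceNow incident through its Table REST API when a job fails. The instance URL, credentials, and field values here are hypothetical placeholders rather than the reviewer's actual configuration, and the fields your ServiceNow instance expects may differ.

```python
import base64
import json
import urllib.request

# Hypothetical instance -- substitute your own.
SNOW_INSTANCE = "https://example.service-now.com"

def build_incident(job_name: str, error_text: str) -> dict:
    """Build the payload for a ServiceNow incident record."""
    return {
        "short_description": f"ActiveBatch job failed: {job_name}",
        "description": error_text,
        "urgency": "2",
    }

def raise_incident(payload: dict, user: str, password: str) -> str:
    """POST the payload to the ServiceNow Table API; return the new record's sys_id."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]["sys_id"]
```

In practice, the failing job's alert action would invoke a script like this, so the ticket carries the job name and error details for the auditing trail.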

Also, the overall ease of use is brilliant. It's easy to pick up. We can get a newbie up and running within a day, using ActiveBatch. It's not to the extent where that person will know some of the more complicated issues, but in terms of being able to build a job and export or run the job, it's within a couple of hours. Within a day, people are quite comfortable with the application. We've just signed an agreement with ActiveBatch which gives us all the education materials now. That means we'll be applying more advanced features. It's really good as far as ease of use goes.

We use the solution across all sorts of organisational branches. It's used for SaaS and SAP, which is finance. We have fraud and Salesforce, which is for the sales group. It's also used with marketing and major events because, when there's a storm, we need to know what's going on. We also have the ability to pull from external sources, meaning external vendors such as Guidewire. So ActiveBatch is widely utilised and probably more widely utilised than the executives realise. It's well embedded in our company.

What needs improvement?

We have moved to version 12, and I believe that interface has more of a web-style look and feel.

A nice thing to have would be the ability to comfortably pass variables from one job to another. That was one of the things that I found difficult. Other than that, it's all good.

For how long have I used the solution?

I've been with this company for over 10 years and it was already here before I arrived.

What do I think about the stability of the solution?

The most valuable feature is its stability. We've only had very minor issues and generally they have happened because someone has applied a patch on a Windows operating system and it has caused some grief. We've actually been able to resolve those issues quite quickly with ActiveBatch. In all the time that I've had use of ActiveBatch, it hasn't failed completely once. Uptime is almost 100 percent.

With those 8,000 jobs that run in a 16-hour period, the majority of the time we're spending about an hour of the day with ActiveBatch, repairing problems. There are issues where we have to re-run a job because of it exceeding its runtime. Or when a job fails, even though the alert goes out to the end user, we still have to tap the user on the shoulder and say, "Did you look at this alert? We've got a problem here, can you please fix it?" Other than that, it pretty much runs itself. Overall, ActiveBatch saves us a huge amount of time, being as stable as it is.

If we were having to repair everything, on an ongoing basis, we would be spending more than five or six hours a day, so we are saving at least five to six hours a day by using this tool. The improvement to the business is quite substantial. People aren't having to manually do anything that would normally take them two or three hours to do. Those things are being done within a matter of minutes and then passed on. And those five or six hours are just for us in our department. You can multiply that by the number of people who would normally have done something manually and who now have it done through ActiveBatch in minutes.

We're looking at more than a 98 percent success rate for uptime and for running jobs. The only time that something falls over is not to do with ActiveBatch itself, rather it's to do with problems with either the network, the database, or developers.

What do I think about the scalability of the solution?

The scalability is brilliant. We've got 23 machines. We have redundancy integrated into this environment. 

If a server goes down, we can turn that queue off and re-queue those jobs to another server, while we get a new image spun up and restarted. In that situation, the delay is in getting the IT guys to spin up the image. If we could get an image spun up when it failed, it would be a matter of five or 10 minutes to be back in business with that server. As it is, once the IT guys do spin it up, we kick off from there.

The main interface is used by about 12 people. The dashboard that we've built on top of it is probably used by 70 to 80 people. But the number of people it affects is in the thousands across the entire organization.

It's heavily utilized across a number of departments in the organization and they really do rely on ActiveBatch to stay up and stable and to provide their reporting mechanisms.

How are customer service and support?

We've had a couple of issues where we've had to log a defect with ActiveBatch. But the guys at ActiveBatch are really responsive. We had things fixed in 24 hours, and they're in a different time zone. The response time is exceptional. This is one of the few vendors that I can say is highly responsive and that shows a level of commitment that I don't think many other organizations show.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

ActiveBatch replaced Windows Task Scheduler and cron jobs that had been running on some servers. There was also another scheduling tool that popped up somewhere, but that data was moved into ActiveBatch. The scheduling from Cognos was also moved into ActiveBatch because it was more convenient, and some of the Tableau scheduling was moved into ActiveBatch as well.

How was the initial setup?

The initial setup was straightforward. It's super-easy to install and super-easy to set up. Even on the Linux box, it was really easy to install and set up and run. There was no real complexity in the installation process.

Most of the time with setup or upgrades is spent testing. We usually deploy agents within 20 minutes. The scheduler and the database might take an hour and a half, but because the agents are on virtual machines, we have an image and we just spin that image up. If something goes wrong, we can just spin up a new image and get that agent started straight away. In terms of testing, when we do disaster recovery, we redeploy to a disaster recovery environment and then we test that the connections are working, the jobs are running, and that there are no problems. That's where most of the time is spent, not in the deployment itself.

We usually have two people involved in the process, one who is the primary and one who is the secondary. And then we have a couple of people on standby. The primary does the installation and the secondary is looking over their shoulder for learning purposes. Then we have a few people on the IT side in case there is a problem with the operating system or the network that we have to deal with, but they're not involved until there's a problem. The DBA is also on-call just in case there's an issue with the database.

Maintenance-wise, it's only if something happens that we go and look. We have a job that looks at the health of the database that ActiveBatch uses. It's pretty much all automated, so it looks after itself. We have another job that pings the servers to make sure that all the ports that it needs are running and open. We also have jobs that look at the network latency so that if the network latency is beyond a certain point, it notifies IT and us. It also looks at the operating system and the actual directories. Unless we schedule it for an upgrade, which we do every six months, we don't look at maintenance for that six months unless there's a problem.
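A minimal sketch of that kind of self-monitoring job is below. The agent hostnames, port, and latency threshold are hypothetical (none come from the review): it checks that each agent's port is reachable and that TCP connect time stays under an alert threshold.

```python
import socket
import time

# Hypothetical agent list, port, and threshold -- adjust to your environment.
AGENTS = [("abat-agent01", 1551), ("abat-agent02", 1551)]
LATENCY_ALERT_MS = 200.0

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Measure TCP connect time in milliseconds as a rough latency proxy."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0

def run_checks(agents=AGENTS):
    """Return (host, problem) pairs worth alerting IT about; empty means healthy."""
    problems = []
    for host, port in agents:
        if not check_port(host, port):
            problems.append((host, "port closed or host unreachable"))
        elif (ms := tcp_connect_ms(host, port)) > LATENCY_ALERT_MS:
            problems.append((host, f"high latency: {ms:.0f} ms"))
    return problems
```

Scheduled every few minutes, a job like this only produces output (and therefore notifications) when something crosses a threshold, which is what lets the environment look after itself.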

What about the implementation team?

ActiveBatch has been implemented in-house.

What was our ROI?

It pays for itself because it gives the DataOps team more time to be involved in other projects. It allows the organization to move forward without having to worry about doing anything manually. ActiveBatch is performing a huge service to the organization in terms of reducing the number of man-hours required to do manual tasks.

What's my experience with pricing, setup cost, and licensing?

If you compare ActiveBatch licensing to Control-M, you're looking at $50,000 as opposed to millions.

Which other solutions did I evaluate?

ActiveBatch isn't the only scheduling tool that we have. There's also a product called Control-M, but Control-M is a lot more expensive and mostly manages mainframes. ActiveBatch is at a very modest price for running a very complex process.

We can expand ActiveBatch more readily than Control-M because, with Control-M, you pay for X number of runs in a run book. If you want to extend that run book, they want half-a-million dollars, or more, for 500 jobs. We can expand ActiveBatch. We could go to 10,000 jobs and it wouldn't cost us any more. It's only if we were to add more agents to load balance that we would be charged any more, and it wouldn't be anywhere near what Control-M charges.

I've mainly been involved with ActiveBatch and it's hard to compare another vendor when there hasn't been a vendor to compare against. As far as performance is concerned, Control-M and ActiveBatch are on par, but they're not the same because Control-M is really just moving files and running programs on mainframes, whereas we're running against Windows and Linux environments.

The other one that's being utilized at the moment is Apache Airflow, but that's more for the developers because they like to be able to program the backend, rather than to use a frontend interface. We've been looking at how that works, but we haven't seen it to be very stable for a production environment. You can't compare Airflow with ActiveBatch, in effect.

What other advice do I have?

My advice would be to jump on it straight away. With the ease of installation, the expandability or scalability of the product across multiple servers with different agents, the ability to not only use Windows but Linux as well, and the fact that you can build complex plans that have multiple constraints, multiple types of scheduling, and multiple types of alert mechanisms, it's highly expandable. You're going to have a lot of fun with it.

It's highly flexible and easy to use. In terms of what we can do, we still haven't gone to the Nth degree of what's possible with ActiveBatch. It's incredibly flexible. We're running shell scripts that run Python scripts. We've got PowerShell scripts and batch scripts. We tie into different applications. We still haven't exhausted the potential of ActiveBatch. That's what I've learned.

Predictability is something that is out of the control of ActiveBatch. We can set a job to run against a database, but it's really going to be the network or the database that will impact ActiveBatch. ActiveBatch will continue to run. There is an average run time that we look at, but if the network has high latency or the database is under load, the time will increase. ActiveBatch will continue to run as normal. The frequency of ActiveBatch failing is quite rare.

We use the ActiveBatch interface up to a certain point, and then we start looking at running Python and shell scripts. That's why we have the Linux agent. We call a shell script which runs a Python script that does some manipulation and passes that information back. And then there are a number of plans that manipulate the process. In this particular plan, the CSV file is created and it's dropped into a file location. ActiveBatch is polling for that location. It sees that file. Then a Python script runs and creates an MD5 hash. When you download a file from the internet, there's an alphanumeric number that indicates whether that file is valid or not. The MD5 hash is generated on the file and when it's moved to another location, another MD5 hash is generated to determine whether there was a change in that file when it moved from A to B. It's a validation to make sure that no data was corrupted during the movement from where the file was dropped to where the file landed. Once it has been validated, the file is then moved into another location where it's uploaded into the Greenplum database and a notification is sent to whomever was involved in that particular process. It's quite involved.
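The MD5 validation step described above can be sketched as follows. The file paths are hypothetical, and this is an illustration of the technique rather than the team's actual script:

```python
import hashlib
import shutil
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading in chunks so large CSVs fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def move_with_validation(src: Path, dst: Path) -> str:
    """Copy src to dst, verify the two hashes match, and only then delete the original."""
    before = md5_of(src)
    shutil.copy2(src, dst)
    after = md5_of(dst)
    if before != after:
        dst.unlink(missing_ok=True)
        raise IOError(f"checksum mismatch moving {src} -> {dst}")
    src.unlink()  # the original is removed only after validation succeeds
    return after
```

If the hashes differ, the landed copy is discarded and the job fails, which triggers the normal alerting path instead of silently loading corrupted data into the database.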

If a job fails, we have set it to wait for a few minutes and to then re-run. If that fails, we can trigger another job to continue on in that process flow, if the failed job isn't critical. Some of the plans are quite complicated and have a certain amount of logic involved, but that enables us to navigate around problems that might otherwise need a developer's assistance, if it doesn't affect the overall plan process. As long as there are no constraints involved that require the next job to run, and it can move around that job and continue on, that's how we set it up.
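That retry-then-route-around logic can be approximated in a few lines. The command, wait time, and criticality flag below are placeholders, and ActiveBatch expresses the same idea declaratively through job properties and constraints rather than code:

```python
import subprocess
import time

def run_with_retry(cmd, retries=1, wait_seconds=300, critical=True):
    """Run a job command; on failure, wait and retry. A critical job raises after the
    final failure, while a non-critical one returns False so the plan can continue."""
    for attempt in range(retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        if attempt < retries:
            time.sleep(wait_seconds)  # e.g. wait a few minutes before re-running
    if critical:
        raise RuntimeError(f"critical job failed after {retries + 1} attempts: {cmd}")
    return False  # downstream jobs with no constraint on this step may proceed
```

The key design point is the distinction between critical and non-critical steps: only a failure that other jobs are constrained on should stop the overall plan.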

We're looking forward to version 12 to see how that goes as well. We've also mirrored the database, the backend database that ActiveBatch uses. We have a failover process which was just recently installed. If one database fails, we can switch over immediately to the other database in real time.

Overall, we're really comfortable with how ActiveBatch is performing and with what it's doing.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Systems Architect at a insurance company with 201-500 employees
Real User
Nov 30, 2020
Everything runs automatically from start to finish; we don't have to worry about somebody clicking the wrong button
Pros and Cons
  • "Since we are no longer waiting for an operator to see that a job is finished, we have changed our daily cycle from running in eight hours down to about five. We had a third shift-operator retire and that position was never refilled."
  • "The central automation hub for scheduling and monitoring brings everything together under a single pane of glass by streamlining everything: It takes less time to run everything, it's less expensive because we no longer have the extra operator running jobs, there is less chance of an operator clicking the wrong button because we run both a test system and production system side by side, and it automatically will send an output where it needs to go in real-time."
  • "There are some issues with this version and finding the jobs that it ran. If you're looking at 1,000 different jobs, it shows based on the execution time, not necessarily the run time. So, if there was a constraint waiting, you may be looking for it in the wrong time frame. Plus, with thousands of jobs showing up and the way it pages output jobs, sometimes you end up with multiple pages on the screen, then you have to go through to find the specific job you're looking for. On the opposite side, you can limit the daily activity screen to show only jobs that failed or jobs currently running, which will shrink that back down. However, we have operators who are looking at the whole nightly cycle to make sure everything is there and make sure nothing got blocked or was waiting. Sometimes, they have a hard time finding every item within the list."
  • "There are some issues with this version and finding the jobs that it ran. If you're looking at 1,000 different jobs, it shows based on the execution time, not necessarily the run time."

What is our primary use case?

We are using ActiveBatch to automate as many of our processes as we can, limiting the amount of time operators are running recurring jobs. Included in that is about 99.5 percent of our nightly cycles. We call a mixture of executables: SSIS jobs, SQL queries, and PowerShell scripts. We also call processes in both PeopleSoft and another third-party package software.

How has it helped my organization?

As an IT department, we do solutions for the entire business and control everything. Our nightly cycles affect everybody in the company, so we do have some jobs that we run in one department, then create output which goes to another department. Based on email distribution lists, we can let anybody in the company know when things run.

We don't really use ActiveBatch for sharing knowledge. It's more for sharing output. We have some processes that run SSRS reports that distribute links to many people across the organization all at once, so they all get the same data fed to them simultaneously.

The most complex process that we run is our nightly cycle, which is made up of about 230 individual jobs triggered based on other jobs completing or files showing up in the system. It integrates a mixture of executables, a third-party policy system called LifePRO, and PeopleSoft. With all the handshaking back and forth between the systems, it allows an operator to start a job at around eight o'clock at night. Then, at around two in the morning, the last job finishes with very minimal interaction from the operator, who mostly sits there watching to see whether a job fails or not.

The operator used to run a job for our nightly cycles and go off doing something, then they would come back to see if the job was finished. If it was, they would start the next job. With the operator's intervention, this entire process would run for around eight hours. We have managed to streamline that down, because we're no longer waiting for an operator to look for a job completion, to run in about five hours. This allows us to have one nighttime operator instead of two, so we have cut the number of staff at night in half.

Additionally, daytime jobs are what we are starting to focus on now, so that our daytime operators no longer have to babysit different jobs as they run. We'll be retraining both the nighttime and daytime operators to do different jobs. For example, with our nighttime operator, while the job is running in the background, she doesn't have to do anything anymore. She has now been tasked with other systems, like upgrading servers, and doing other things that cannot be done when the majority of our staff are in the building. So, not only have we removed half of our nighttime staff, we have repurposed our one person who's there to do both jobs.

Internally, we ran a number of executables. Our operators used to run these jobs all manually and press buttons within our console. Now, all those processes are automated. The operator doesn't do anything. We have a number of reports that just get generated automatically throughout the course of the night or based on their own dependency. The operators used to have to wait for a specific job to finish before they could do all these pieces. Currently, those just automatically trigger on their own. In addition to that, with our financial system, PeopleSoft, we can call any job within it automatically. This is without our operators even opening up the PeopleSoft console. In our LifePRO policy system, we have about 150 jobs that we can call those automatically as well, including some daytime jobs that run processes every five minutes. Instead of having the operator sit there like Homer Simpson pressing a button, these jobs trigger automatically, ensuring that all the data is kept updated in real-time.

It is a system that calls other jobs. Therefore, it will return error codes from those other systems. If it's a job that is truly an ActiveBatch job doing file manipulation, it will return its own error codes. The logs associated with those error codes are usually pretty in-depth and let you know exactly what happened. This has prevented problems from becoming fires. We have an email that goes out every day with a list of all the jobs that failed to ensure that we hit every single one and can take care of any issues.

We have one job that runs every 30 minutes, handling batch input into our system. If one of those jobs fails, then it keeps the rest of them from working the rest of the day. Then, if one of those fails, the entire team that supports that is notified immediately, giving them the full amount of time to rectify the issue before the next time that process runs. In the past, when this was done manually, we would have to wait for someone to notice that there was an error and then find the right person to deal with it. Now, within 10 seconds, an email has been sent out saying, "There is an issue. Fix it."

For our nightly cycles, we have some cycles that will run from start to finish without a single error because it is controlling when jobs run. It does a lot of clean up before the system starts. Therefore, it knows where certain files are supposed to be and where they are. So, we don't have to worry about somebody clicking the wrong button; everything runs from start to finish.

What is most valuable?

We can control the runtime of jobs based on timing or on a file showing up. They can also be controlled by an email being sent into the system. We get error codes back. Therefore, we have one centralized location where we can see how jobs are running. We have the ability to notify end users when jobs are finished or if there are problems with jobs. It's a very robust system, which allows for a lot of different functionality.

The system is very easy to use. In a short amount of time, we trained a couple people who have been able to create jobs on their own. For the two people whom I have trained so far, I spent about an hour or two with them. They were able to start creating minor jobs themselves by looking at existing jobs. We gave them minor jobs to work on. Then, within a couple hours, they were able to create jobs and processes that work correctly.

A lot of our processes are jobs that we know run one job after another, along with a hierarchical system, e.g., once this one job finishes, it triggers these three. Then, as soon as those three are done, it triggers a fifth job. The scheduling of those in that format is very easy to do. 

You can set up automated controls where, as soon as one job finishes, another one kicks off. You can put in constraints where a job won't start until other conditions are met. It's very easy to use.
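The dependency pattern described above (one job finishing kicks off several, and a final job waits on all of them) can be sketched in plain Python; the job names here are hypothetical stand-ins, not ActiveBatch syntax:

```python
from concurrent.futures import ThreadPoolExecutor, wait

def run(name):
    # Stand-in for a real batch job; returns its name on success.
    return name

completed = []
with ThreadPoolExecutor(max_workers=3) as pool:
    completed.append(run("extract"))                       # first job runs alone
    fanout = [pool.submit(run, n) for n in ("load_a", "load_b", "load_c")]
    wait(fanout)                                           # constraint: all three finish
    completed.extend(f.result() for f in fanout)
    completed.append(run("report"))                        # final job runs last
```

The constraint is expressed as "don't start `report` until all three loads are done," which is exactly the hierarchical triggering the review describes.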

The console is very easy and flexible to use. Whenever we have come up with something that we wanted to try in ActiveBatch, we have managed to find a solution. When you're calling an application, you can call it through a batch job or script. You can also call the executable directly or through PowerShell. Depending on how it's running and how the security needs to pass through the system, there are many different ways to get the processes to work.

ActiveBatch provides a central automation hub for scheduling and monitoring, bringing everything together under a single pane of glass. There is a daily activity screen within ActiveBatch that shows you every job currently running. You can look in the past and future. I think you can set it all the way up to seven days in the future. So, if you have jobs scheduled to run on a timeframe, then those will show up. It will show everything that is on hold. You can limit it down to show only the stuff that has run in the last hour, if you are trying to deal with a specific problem. You can set the ranges, to say, "Okay, show me between 5:00 and 8:00 PM." It is very easy to use in that regard.

It handles a lot of different business-critical systems for us. We have applications that our agents use out in the field which trigger other things that run in the office. Those run every five minutes looking for input to make sure that we can keep things running smoothly. Things that we would have needed the operators, or somebody, to run every couple of minutes, we now have about a dozen of those running automatically, looking for input to keep things moving. It also allows our financial system to integrate with all our other systems without anybody having to do the work. Our whole nightly cycle is automated through this. We just did an inventory, and I think we have about 500 unique jobs that we run through ActiveBatch now. These are things that somebody would have had to run manually in the past.

You can keep a history of run times, so you can start setting up SLAs on job performance. We have one job set up now where, if that job takes more than 15 minutes to run, it automatically aborts the job and sends an email saying, "This job needs to be looked at, as it's running past its run time."
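A runtime SLA of this kind boils down to a timeout plus an alert. This is a minimal, tool-agnostic Python sketch of the idea, not how ActiveBatch implements it internally:

```python
import subprocess
import sys

def run_with_sla(cmd, sla_seconds, alert):
    """Run a job; if it exceeds its SLA, kill it and send an alert instead."""
    try:
        subprocess.run(cmd, timeout=sla_seconds, check=True)
        return "ok"
    except subprocess.TimeoutExpired:
        alert(f"{cmd!r} exceeded its {sla_seconds}s SLA and was aborted")
        return "aborted"

alerts = []
fast = run_with_sla([sys.executable, "-c", "pass"], 30, alerts.append)
slow = run_with_sla([sys.executable, "-c", "import time; time.sleep(10)"], 1, alerts.append)
```

In production, `alert` would be the email notification the review describes, and the SLA threshold would come from the stored run-time history rather than a hard-coded value.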

They have done a pretty good job with the operator interface. There are a number of different screens which can be used to see what is going on. We have chosen the daily activity screen because it gives the most complete view of what's going on: what's finished, what's failed, and what's currently running.

The performance on ActiveBatch has been stellar.

What needs improvement?

There are some issues with this version and finding the jobs that it ran. If you're looking at 1,000 different jobs, it shows based on the execution time, not necessarily the run time. So, if there was a constraint waiting, you may be looking for it in the wrong time frame. Plus, with thousands of jobs showing up and the way it pages output jobs, sometimes you end up with multiple pages on the screen, then you have to go through to find the specific job you're looking for. On the opposite side, you can limit the daily activity screen to show only jobs that failed or jobs currently running, which will shrink that back down. However, we have operators who are looking at the whole nightly cycle to make sure everything is there and make sure nothing got blocked or was waiting. Sometimes, they have a hard time finding every item within the list.

Now, it integrates well with our other solutions. There were some issues initially with getting ActiveBatch to work, but once we found a solution that worked, it was easy to replicate. The initial issues were a mixture of the fact that very few people had done this type of work before, and partly the person we had working on it at the time. We're not sure exactly what the issue was. We actually reached out to ActiveBatch who helped us to get this to work. 

It is a very complex application because the code we are trying to connect to was COBOL based and still dealt with INI files. So, we had to trick the system into thinking it was calling the system the exact same way. Once we did, everything worked fine, including getting the error messages back and being able to display them within ActiveBatch.

It was the connection between systems that became complex. Basically, we had to set about a dozen environment variables within a script in ActiveBatch. So, when we called the outside application, all those variables were set and we could understand what it was trying to do. The complexity was in the actual calling of the third-party application. It was not from the ActiveBatch side.
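Setting a block of environment variables before calling an outside application is a common pattern; a rough Python equivalent of what the ActiveBatch script was doing might look like this (the variable names are hypothetical, standing in for the dozen the review mentions):

```python
import os
import subprocess
import sys

def call_legacy(cmd, extra_env):
    """Launch a third-party executable with the environment it expects."""
    env = os.environ.copy()      # inherit the base environment...
    env.update(extra_env)        # ...then layer on the legacy app's variables
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Hypothetical names standing in for the INI-file settings the COBOL app read.
legacy_env = {"LEGACY_INI_DIR": "/opt/legacy/ini", "LEGACY_REGION": "PROD"}
# Probe command that echoes one variable back, standing in for the real binary.
probe = [sys.executable, "-c", "import os; print(os.environ['LEGACY_INI_DIR'])"]
result = call_legacy(probe, legacy_env)
```

The trick described in the review is the same: the legacy executable sees the environment it would have seen when launched the old way, so it behaves identically.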

You have to be careful with automation tools. We had one job where the person who initially programmed it created an infinite loop, so it kept triggering itself. It ran for less than a second, so we couldn't stop it. 
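One common safeguard against this kind of self-trigger loop is a minimum re-fire interval. This is a generic sketch of that guard, not a built-in ActiveBatch feature:

```python
import time

class RetriggerGuard:
    """Refuse to fire a job again within a minimum interval, which breaks
    accidental self-trigger loops like the one described above."""

    def __init__(self, min_interval_seconds):
        self.min_interval = min_interval_seconds
        self.last_fired = None

    def allow(self):
        now = time.monotonic()
        if self.last_fired is not None and now - self.last_fired < self.min_interval:
            return False         # fired too recently: suppress the trigger
        self.last_fired = now
        return True
```

Even a generous interval (say, one minute) would have stopped a sub-second loop from running away while still allowing legitimate re-runs.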

For how long have I used the solution?

The company has been using ActiveBatch for about five or six years. I have been using it for about three years.

What do I think about the stability of the solution?

Stability-wise, there is only one function that we have had trouble with. We haven't even reached out to ActiveBatch to try and figure it out because we're trying to figure out what is causing it. There is one DLL within the system that gets the current date, but will just stop working from time to time. The rest of the system is very stable. On occasion, we will have to reboot a server to release some locks on things, but that's once a month where we have to do anything like that.

I maintain all the jobs in production. There is nothing out of the ordinary that needs to be done. It does its own self-cleanup. It also deletes history periodically on its own.

What do I think about the scalability of the solution?

We started with just a few jobs and are right now up to 500 jobs that we run. When adding new things, it allows you to put everything in its own folders, so you can keep track of different parts. You can flag them as part of different systems, if you want. As we have added more things, we have seen no degradation in the performance.

We use it more as an automation tool, so it is just running jobs. In terms of people who go into ActiveBatch to look at it, we have our two daily operators who go in and look how things are running. We do have some jobs that they go in and trigger, because we're still automating the actual execution of these jobs, but they're all still controlled from ActiveBatch. We have a number of programmers, probably about a dozen, who will go into ActiveBatch. Some will tinker around with creating jobs that they need in our test system. Some will go into production to see how their jobs ran, if they're supporting the system. They can go in and see what the end result was, if it came back successful, had a warning, or an error. They can look at the logs to see what the problem was, allowing them to fix the process themselves.

Right now, we don't have any end users going into the system directly. We're building them a web interface front end where they will be able to trigger specific jobs, so they can see the jobs that they can control. We have it set up through the ActiveBatch API so it returns the results to that web interface, showing how the job ran the previous time and when it last ran.

Our nightly cycle is 99.5 percent automated right now. We're finishing up the last few pieces of that. We have started looking at all of our daytime operator jobs. Those are being worked on next. All of our reports sent out to users on a daily basis are all automated within ActiveBatch to be triggered at specific times and sent out. The next piece that we will be working on is giving our programmers the ability to bring up Azure sites as needed, then we will be starting to add in all of our FTP jobs into ActiveBatch as well.

How are customer service and technical support?

In the past, we haven't used their technical support that much. The few times that I have called and asked them questions, they have been very easy to work with and get answers from. They are in the process of changing their whole structure for how they support their clients, along with changing their pricing structure.

They are trying to make the system more user-friendly from the support side, so you can go and look for the information yourself as opposed to trying to call someone.

Which solution did I use previously and why did I switch?

Previously, we have only used some scheduling through Microsoft Schedulers and SSRS schedulers.

How was the initial setup?

I was not involved in the initial setup, though the installation of ActiveBatch was straightforward.

I was involved in the last upgrade that we did. Everything was straightforward. Moving the jobs from one version to the next was relatively straightforward. The initial application that they picked to interface with was one of our more complex ones. That may have been why the person doing the programming initially had an issue, because nobody had done this before with this type of system.

There are a lot of APIs for packages that you can get with ActiveBatch for doing connections. We don't use a lot of their integration tools, though it does integrate with a lot of different ones. The one we do use right now is PeopleSoft. The issues with the integration of PeopleSoft have been more on the PeopleSoft side, not the ActiveBatch side. We had to reconfigure how we had PeopleSoft set up, so it would allow outside applications to communicate into it.

Once we decided to do the installation, I think it was done in the course of a day over a weekend.

What about the implementation team?

We did the installation ourselves. It was done by our systems department. One of my coworkers did all the work. She installed the new system and exported everything out of the old version into the new one. On top of that, we broke one system into two, because we used to have our model and production on one server. In the course of upgrading to version 12, we put our test and production systems on different servers.

What was our ROI?

Since we are no longer waiting for an operator to see that a job is finished, we have cut our daily cycle from running in eight hours down to about five. We had a third-shift operator retire, and that position was never refilled.

The person who used to run all these jobs now just watches the system run. She is doing other stuff while she is working. On top of that, with the pandemic, we have managed to be able to allow our second shift operator to run everything remotely from home. They don't even have to be in the building anymore to run our cycles.

The central automation hub for scheduling and monitoring brings everything together under a single pane of glass by streamlining everything:

  1. It takes less time to run everything.
  2. It's less expensive because we no longer have the extra operator running jobs.
  3. There is less chance of an operator clicking the wrong button because we run both a test system and a production system side by side. In the past, they might have run a job in the wrong system; this makes sure that the correct system is running the right jobs.
  4. It automatically will send an output where it needs to go in real-time. We have management reports that used to have to be run by an operator. Now, if management comes in early, the report is there just waiting for them.

What's my experience with pricing, setup cost, and licensing?

Make sure that the pricing is in the contract.

ActiveBatch is currently redesigning themselves. In the past, they were a low-cost solution for automation. They had a nice tool that was very inexpensive. With their five-year plan, they will be more enhancement-driven, so they're trying to improve their software, customer service, and the way that their customers get information from them. In doing that, they're raising the price of their base system. They changed from one pricing model to another, which has caused some friction between ActiveBatch and us. We're working through that right now with them. That's one of the reasons why we were evaluating other software packages. For the time being, we are staying with ActiveBatch because we like it the best of the four.

Up until now, if you wanted to do a training class through them or go into some of their deep dives, you needed to pay extra for that. The new way that they are structuring their agreements includes all of that as part of the contract. Now that we will be paying for it, we will be looking at their deep dives a lot more and reviewing the material they have produced in the past.

Which other solutions did I evaluate?

It is the only automation tool that we're using. We are actually moving items from other automation tools that we have into this, so we have one central location where everything is automated. In the past, we have used some of our Microsoft Servers' scheduling tools and SQL Servers to automate the distribution of reports. Now, we are moving everything into one place so it's all controlled centrally. Then, you can look in one place to see where everything is. 

We have looked at a few different solutions in the past six months to see if they offer the same type of functionality and evaluated three other ones, which are very similar. I like ActiveBatch the best among the four solutions. The other tools seemed to not have the file manipulation tasks, and kept saying, "Well, you can do that in DOS." I thought, "That's okay. Welcome to the eighties." They basically said, "We don't have any file manipulation tools built in because you can do that other ways." However, we're trying to put everything in one place. There is a lot of archiving of files that we do based on different criteria. For example, there was one job that we wrote which looks at the size of an Access database. When the size of the file gets too large, it notifies that team, saying, "You need to go delete data out of it." Those kinds of things were not available within the other solutions.

What other advice do I have?

I would recommend reaching out to a client who has used it, especially if you have questions. While talking with customer support is great, people who build with it have better knowledge of how to use it in a business setting.

We haven't used any of the APIs directly through ActiveBatch yet. We have started playing around with having our own little outside website which allows our end users to trigger jobs directly within ActiveBatch. But, we have not fully implemented that yet.

We have started looking at cloud solutions for bringing Azure sites up and down. We have not implemented that yet.

I would rate this solution as a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Senior IT Architect at a pharma/biotech company with 5,001-10,000 employees
Real User
Apr 5, 2020
Makes the environmental passback of an SDLC process seamless
Pros and Cons
  • "What ActiveBatch allows you to do is develop a more efficient process. It gave me visibility into all my jobs so I could choose which jobs to run in parallel. This is much easier than when I have to try to do it through cron for Windows XP, where you really can't do things in parallel and know what is going on."
  • "Over the years that I have used this, it has probably saved us several hundred hours of development time for other teams and my own."
  • "I can't get the cleaning up of logs to work consistently. Right now, we are not setup correctly, and maybe it is something that I have not effectively communicated to them."

What is our primary use case?

We use it for a variety of different tasks, most of which are related to data management tasks, such as scheduling, processes related to updating business intelligence reporting, or general data management stuff. It's also used for some low level file transfers and mergers in some cases. 

We use the solution for execution on hybrid machines, across on-prem and cloud systems. We have code that is executed in a cloud environment and on various Windows and Unix servers.

We are on version 11, moving to version 12 later this year.

How has it helped my organization?

We found that the solution created simplicity for us with our workflows and process automation. It gives me the folder and job name, then I'm done. I don't have to remember a plethora of things and that makes life a lot easier. Once you get it setup and have it configured, you don't have to remember it anymore. It allows you to focus on doing the right thing. 

I find it super flexible. Every time that I ask if the solution can do something, they say, "Yes." I have not been able to come up with a challenge so far that they have not been able to do.

It definitely allows the ability to develop the workflow. It has reduced the amount of coding. Some groups don't pay attention to that, as they are very much an old school group. I am trying to get people to do things differently, but that's just changing habits.

One process may at some point run across five different servers in parallel before coming back to a final point of finishing. They built that in, where it says, "Every time we do certain things, execute this package." All I have to do is drag that package into the master package and master plan. It's very modular.

All our workflows are efficient. This solution allows for tighter integrations across environments where you don't necessarily want developers cross pollinating each others' code. It's more or less about securing code. I have people who are experts in doing PowerCenter. They don't have any idea what they're doing in other solutions. You don't want them accidentally editing the wrong code. Therefore, it helps keep related things isolated, but allows them to communicate.

For code maintenance, it has really simplified things. For things that are coded, like day-to-day Unix or Windows batch-type jobs, this means I don't have to rewrite the code, and I can easily migrate it from environment to environment. I can do this by leveraging variables and naming practices. I can basically develop code, migrate it through our four environments, and not make changes to the code at all. It makes the environmental passback of an SDLC process seamless.

What is most valuable?

One of the great features that they have implemented is called Job Steps. It is a much more mechanical way to control processes. It allows us to connect to external providers. For example, we were a big Informatica shop. The development time to create a job that can execute a task or workflow (once you have the initial baseline set up) is about a minute: I create a new job in Informatica, create an equivalent job to run the batch, and about a minute later, it's done. It improves development time to market and getting things done.

What ActiveBatch allows you to do is develop a more efficient process. It gave me visibility into all my jobs so I could choose which jobs to run in parallel. This is much easier than when I have to try to do it through cron for Windows XP, where you really can't do things in parallel and know what is going on.

Improvement in workflow completion times comes down to optimization. The ability to do true parallel submittal of jobs, and then pay attention to the status of those jobs simultaneously to know when they are done, is what creates the optimization.
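The parallel-submit-then-watch pattern this optimization relies on can be illustrated with Python's standard library; the job names and durations here are invented for the example:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def job(name, seconds):
    time.sleep(seconds)          # stand-in for real work on a remote server
    return name

statuses = {}
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = {pool.submit(job, n, s): n
               for n, s in [("warehouse_load", 0.02), ("bi_refresh", 0.01), ("extract", 0.0)]}
    for done in as_completed(futures):   # react the moment each job finishes
        statuses[futures[done]] = "failed" if done.exception() else "succeeded"
```

The gain over a serial, operator-watched cycle is that every job's completion is observed immediately, so nothing waits on a human noticing that the previous step finished.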

The solution provides us with a single pane of glass for end-to-end visibility of workflows. It has a very broad, deep-scale vision of what's going on. You can go down to an individual job level or see across the whole system and different groups. Because we roll out by project area, each project has its own root group folder that they use to manage their routines. We don't have a master operational group yet that is managing it. Therefore, each group does its own operational support for it. However, if I look at things in it, there are a lot of shared things that we have put in there. If a machine is taking too long, I can go focus on that. E.g., why is it taking so long? Then, I can let people know that we have a particular routine that is running poorly.

What needs improvement?

I can't get the cleaning up of logs to work consistently. Right now, we are not setup correctly, and maybe it is something that I have not effectively communicated to them. This has been my challenge.

For how long have I used the solution?

I have been using this solution since 2007: 13 years.

What do I think about the stability of the solution?

The stability is rock solid. The four failures that we have had are related to things we've done to our server or environment. Mostly, they are self-inflicted failures. There was a bit of cross-pollination with what we were doing with security procedures, where we experienced an interruption. ActiveBatch hadn't been updated to handle that situation.

We use the solution’s API extensibility. It has helped with the stability. It allows us to know when a job fails. If there's a problem connecting to a server or a job fails because something has gone wrong with a server, then we know very quickly. 

Four people are needed for development and maintenance of this solution. I am the primary admin but I don't support the solution on a day-to-day basis. I have a secondary gentleman, who like me, is also an admin. There are two others who primarily deal with the database. There's not a lot to it, except for the log stuff. When it comes to individual job failures, that's not our domain. That's the domain of each group maintaining their space. We also manage security issues.

What do I think about the scalability of the solution?

We are not the biggest shop out there. In our production environment, there are about 10 groups who are doing work on a daily basis. Our user base is primarily developers and a few technical business analysts. There are approximately 50 to 100 users.

We have administrators, operations people, and developers. Administrators have full control across all environments. Operators have the ability to execute and see things across many of the environments. Developers can only work in a nonproduction environment.

For what we are doing on a relatively modest machine, ActiveBatch hasn't had any issues.

I haven't had to scale it yet. It has been a single server for 13 to 14 years now. I haven't had to go multicluster. We have a failover setup. However, we don't use that for parallel processing; it is just for failover.

How are customer service and technical support?

I'm on a first name basis with many of their engineers and developers. I have passed on some challenging things since my history goes so far back. They have always been very responsive to answering questions and providing the right knowledge base article. They are open to suggestions and very interactive.

Which solution did I use previously and why did I switch?

When we first implemented this a number of years ago, it took our processes from running several hours overnight, without knowing if jobs had failed until we checked in the morning, to having an ActiveBatch overnight team who watched jobs for us. Though, sometimes they would take an hour or two before they realized something had failed. Now, we have it so that team is responding within minutes. The alerting that texts and emails you has improved our ability to respond in a timely fashion.

How was the initial setup?

We installed versions 5, 6, 8, 9, and 11. Upgrades have always been seamless. It has been able to recognize code from previous versions, even 10 years ago, and update it.

Every time we do a redeployment, we go through the same process. We develop, upgrade the dev environment, and have people check to make sure their jobs still work. We then take that environment and migrate it to our test environment, where we totally check it. That usually goes faster because we are just moving the database forward, checking to make sure everything works, and then moving on to the next phase. Typically, we do a new server for production. We don't upgrade in place. I've done the upgrade in place without a problem in the dev environment, and it does go faster. I find it very clean, and I've not had a problem. Most of the issues are related to consumers of the tool.

We have only used it in one scenario. It took us a bit of time to get it set up, as we have two halves to our processes. One is the data management process that happens multiple times a day. When that is completed, we want to see reporting based on those processes. What we have is event-based execution. The viewable data sets are in different folders, so these two groups don't actually see each other's work. That is routine, but they are able to read and have scheduled events.

What about the implementation team?

I installed it. To install it and get the environment up and running, it takes less than a day. Once my database is up and I have access to install the software, it takes an hour or two for me to get it up and running.

What was our ROI?

Over the years that I have used this, it has probably saved us several hundred hours of development time for other teams and my own. 

The solution has absolutely resulted in an improvement in job success rate percentage. We can see what the problems are and isolate them sooner. We are able to catch these problems and alert people.

It allows for lower operational overhead.

What's my experience with pricing, setup cost, and licensing?

I buy features when I have need of them.

What other advice do I have?

Right now, we only use the Informatica AI and Informatica PowerCenter integrations. We are looking at a ServiceNow integration. Some of the other ones, like Azure, we don't need right now as we continue to grow it organically. It's more as teams migrate technologies. We want to have an opportunity to have a conversation with them and say, "Hey, come in and do it this way."

We are not using all the features yet. E.g., we don't use any load-balancing variables.

I would rate the solution as an eight to nine (out of 10).

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Buyer's Guide
Download our free ActiveBatch by Redwood Report and get advice and tips from experienced pros sharing their opinions.
Updated: March 2026