The benefit is that it removes manual intervention. A lot of the time we spent previously was manual: an individual running SQL scripts against databases, or going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts.
Domain Manager at KeyBank National Association
Video Review
Enables us to incorporate automation and self-service to eliminate all of our manual efforts
Pros and Cons
- "It removes manual intervention. A lot of the time that we've spent previously was always manually, as an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts."
- "Core features that we needed were synthetic data creation, and to be able to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met the most of our needs."
- "All financial institutions are based on mainframes, so they're never going to go away. There are ppportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product."
How has it helped my organization?
It removes manual intervention. A lot of the time we spent previously was manual: an individual running SQL scripts against databases, or going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts.
What is most valuable?
Currently, it's the complex data mining that we do. Any financial institution runs into the same challenges we face: maintaining referential integrity across all databases, and finding that one unique piece of customer information that meets all the criteria we're looking for. All the other functions, such as subsetting and data creation, are fabulous as well.
What needs improvement?
I think the biggest one is this: all financial institutions are based on mainframes, so mainframes are never going to go away. There are opportunities to increase functionality and efficiency within the mainframe side of this TDM product. It certainly does what we need, but there are always opportunities to greatly improve it.
What do I think about the stability of the solution?
Stability for the past year and a half has been very good. We have not had an outage that has prevented us from doing anything. It has allowed us to connect to the critical databases that we need, so no challenges.
What do I think about the scalability of the solution?
We haven't run into any issues at this point. So far we think that we're going to be able to get where we need to. In the future, as we expand, we may have a need to increase the hardware associated with it and optimize some query language, but I think we'll be in good shape.
Which solution did I use previously and why did I switch?
We were not using a previous solution. It was all home-grown: a lot of automated scripting and some performance scripting, in addition to manual efforts.
As we looked at the available solutions, some of the core features we needed were synthetic data creation, and being able to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met most of our needs.
What other advice do I have?
I'd rate it about an eight. It provides the functionality we need. There are always opportunities for improvement, and I don't ever give anyone a 10, so it's good for our needs.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.

Practice Manager (Testing Services) at a financial services firm with 1,001-5,000 employees
Video Review
Includes basic services that allow you to mask data and create synthetic data. It also includes test matching, which accelerates test cycles and allows automation to happen.
What is most valuable?
You've got the basic services of the TDM tool, which allow you to mask data and create synthetic data. But I think what really sets TDM apart from the other competitors is the added extras you get with doing true test data management: things like the cubing concepts that Grid Tools' Datamaker really brings to bear within test data management teams. You've also got test matching, which massively accelerates test cycles, gives real stability, and allows automation to happen.
How has it helped my organization?
We've got a centralized COE for test data management within our organization, and the benefits are really threefold: cost, quality, and time to market. In terms of quality: data is the glue that holds systems together, so if I understand my test data, I understand what I'm testing. Through the tooling, and the maturity in the tooling, we're really adding quality in terms of what we test, how we test, and the risk-based testing approach we might take.
In terms of speed to market, because we don't manually produce data anymore - we use intelligent profiling techniques and test data matching - we massively reduce the time we spend finding data, and we can also produce data on the fly, which turns test data cycles around. In terms of cost, because we're doing it a lot quicker, it's a lot cheaper.
We have a centralized test data management team that caters for all development within my organization. We've created an organization that is much more effective and optimized in terms of the time it takes to identify data and get into test execution in the right way.
What needs improvement?
I think the big area of exploitation for us is a feature that already exists within the tool: the TCO element is massive. I talked earlier about the maturity and structure it gives to testing. It's a game changer in terms of articulating the impact of change; no project goes swimmingly the first time, so the ability to assess the impact on testing through simple process changes is a massive benefit.
What do I think about the stability of the solution?
The stability of the solution is really fine. The big question is the stability of the underlying system it's trying to manipulate; the tool is the tool, and it does what it needs to do.
What do I think about the scalability of the solution?
Within our organization we have many, many platforms and many different technologies. One of the interesting challenges we always have, especially when we're doing performance testing, is whether we can get the volumes of data we need in sufficient time. We quite often use things like data explosion, and it does what it needs to do, very quickly.
How are customer service and technical support?
We work in an organization where we use many tools from many different suppliers. The relationship my organization has with CA is a much richer one; it's not just tool support.
Which solution did I use previously and why did I switch?
Originally, we used to spend hours and hours of spreadsheet time manually creating and keying data - massively inefficient and massively error-prone - and, as part of a financial institution, we need to conform to regulations. We therefore needed an enterprise solution to make sure we could deliver regulation-compliant test data to suit our projects.
The initial driver for buying any tooling is the problem statement: what's driving you to bring these things in? Once you realize there is so much more than just the regulatory piece - the time, cost, and quality benefits it can bring to testing - that's really the bigger benefit.
How was the initial setup?
We've had the tool for about four or five years now within the organization. As you might expect, we first got the guys in not knowing anything about the tool or how to deploy it, so we called on the CA guys to come in and show us how the tool works and how to apply it within our organization. We had a problem case we wanted to address, we used that as the proving item, and that's really where we started our journey toward a dedicated test data management function.
Which other solutions did I evaluate?
Important evaluation criteria: to be honest, it's got to be around what the tool does. A lot of the tools on the market do the same thing, so it comes down to what differentiates them and what problem statement the organization is trying to fulfill. Once you've got the tool, that's great, but you need the people and process; without those - and it comes back to the relationship you have with the CA guys - you've just got shelfware and a tool. We went through a proper RFP selection process where we set our criteria, invited a few of the vendors in to demonstrate what they could do for us, and picked the one best suited to us.
What other advice do I have?
Rating: no one's perfect, but you've got to be in the top quartile, so probably eight upwards. In terms of test data management solutions, I think it's the best out there. The way the tool is going - moving into other areas like TCO and integration with SV - is a massive thing for us.
My recommendation: this is absolutely best in breed. But as well as buying the tool, it would be a mistake not to also invest in understanding how the tool integrates into the organization and how to bring it to the tools, testing, and environment teams you need to work with.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Senior Presales Engineer at a computer software company with 51-200 employees
A user-friendly, stable, and scalable solution offering a Kubernetes edition
Pros and Cons
- "The solution is very user-friendly. For instance, if I wanted to create a project with Data Generation, Data sub-setting, Selective sub-setting, and Data Marketing, it can be easily done in TDM. And if you want to export these project definitions from one system to another, it can be done with just one click."
- "I would request to extend the data source support because there are a lot of cloud-native SaaS-based solutions in the market. Like, Azure has its data sources, Amazon-specific data sources, and there are a lot of proprietary data sources outside as well. I recommend the team to expand the data source support."
What is our primary use case?
The solution is used for our DevOps pipeline.
How has it helped my organization?
When we sell TDM, we look for specific key differentiators. For instance, TDM requires only a little infrastructure compared to Informatica and other solutions that demand heavy hardware resources. TDM can be connected to DevOps and easily plugged into the DevOps pipeline with the available APIs. Plus, there is an inbuilt, automation-friendly API and an inbuilt automation tool to automate tedious tasks within TDM.
What is most valuable?
The solution is very user-friendly. For instance, if I wanted to create a project with data generation, data subsetting, selective subsetting, and data masking, it can be easily done in TDM. And if you want to export these project definitions from one system to another, it can be done with just one click.
What needs improvement?
I would request that they extend the data source support, because there are a lot of cloud-native, SaaS-based solutions in the market. Azure has its own data sources, Amazon has its own, and there are a lot of proprietary data sources out there as well. I recommend the team expand the data source support.
What do I think about the stability of the solution?
Most CA products go through a lot of compliance checks and certifications, applying secure coding and secure builds, so the product's stability is very good. Apart from that, there is a publicly available community, and it's very responsive. You don't need to raise a ticket in the portal if you have an issue; you can post your queries in the community and you'll get a response immediately, because a dedicated team handles the community.
What do I think about the scalability of the solution?
It is a scalable solution. If you go with a classic VM-based installation, you can do horizontal scaling - adding resources - without any downtime. With the Kubernetes edition, you get both vertical and horizontal scaling, and auto-scaling is available.
How are customer service and support?
There are no dedicated resources for TDM, but you can post your queries on the community portal or raise a ticket there. The team is very responsive and active. A dedicated support team is in progress.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial setup is straightforward. There is a wizard; you click through it and set up the solution. You can run it on a desktop machine; there is no need for high-end servers to install or run it. The good part is that it has a Kubernetes edition, which is amazing. If you look at the trend, many tools have to sit on-prem, or there may be a SaaS version, but Broadcom TDM has a Kubernetes edition, so you don't need to consume a lot of cloud infrastructure. As and when there is demand, it can scale the number of pods and do scalable masking in Kubernetes, which I don't think any other product on the market offers. TDM is delivered as a Helm chart, so once the prerequisites are ready, it takes just a few minutes to deploy. For the deployment, the main thing is role-based access control: you can have a dedicated person for creating data generation and subsetting definitions, and another dedicated person for creating the masking definitions.
What's my experience with pricing, setup cost, and licensing?
Mainly, we compete with IBM and Informatica, and if you compare the license cost, Broadcom is very flexible. Depending on the customer's budget, we suggest they buy only the respective SKUs, and it's much cheaper compared to the others; we have gone through multiple deals where pricing was never the factor differentiating the products.
What other advice do I have?
TDM provides the option of choosing your Kubernetes environment, whether a managed Kubernetes environment or a classic on-prem Kubernetes environment. It is primarily the customer's preference: banking customers especially want the on-prem version, as they don't want to allocate more resources. If a company has a cloud partnership, it can use Azure, AWS, or any other managed Kubernetes environment. In that case, the idea is that you don't consume the entire cloud resource allocation, because a masking job only runs now and then. So the idea is on-demand: scale it up to the maximum and, once the job is done, scale down. It saves a lot of money.
Users should take a forward-looking view with TDM, because at some point you may need to decide where it can be deployed in Kubernetes, and in which version. Going forward, you may shift to another cloud, or you might have a private cloud within your organisation and not want to use the classic deployment options. So whenever you choose a solution, look at where it can be deployed in the future: if you deploy on a VM today and want to change tomorrow, installing it in a Kubernetes environment should also be viable.
I rate Broadcom Test Data Manager a nine out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller
Senior Specialist at Cox Automotive
Video Review
The data masking is a powerful aspect of the tool and I have found the best success in the data generation features.
What is most valuable?
A lot of people, when they first started looking at the tool, immediately jumped in and looked at the data masking and data subsetting it can do, and it works fantastically to help with compliance issues around masking data. That's a very powerful aspect of the tool.
But the part I found the best success in is actually the data generation features. By really investing in the concept of generating data from the get-go, we can get rid of those concerns right off the bat, since we know it's all made-up data in the first place.
We can fulfill any team's request to very succinct and specific requirements each time. Looking at it as a whole, it's the data generation aspect that is the big win for me.
How has it helped my organization?
When I look at the return on investment, the gains are huge, and not only financial. When I recently ran the numbers, we had about $1.1 million in savings from 2016 alone. What it came down to is that, when we started creating our data using Test Data Manager, we reduced our hours used by about 11,800 in 2016. That's real time. That's a significant, tangible benefit to the company.
When you think about it, that's somewhere around six employees (11,800 hours is roughly 5.7 person-years at about 2,080 working hours each); let alone the chance to focus on all the different testing features instead of worrying about where the test data is going to come from.
What needs improvement?
It's cool that they're doing a lot right now to continuously improve the tool. Test Data Management as a strategy across whole organizations has really picked up momentum, and CA has been intelligent to say, "We have a really great product here, and we can continue to evolve it."
Right now, they're taking everything from a desktop client and moving it into a web portal; I think there's going to be a lot of flexibility in that. If I had to pick one thing I hope they improve, it's this: it is a great database tool, but I'm not always sure about its programmatic abilities. Specifically, it's great at maintaining referential integrity across multiple systems and multiple tables, but every now and then I run into limitations because of that maintained referential integrity, and I have to go in manually when I actually want to break things.
For how long have I used the solution?
I've been using it for about two-and-a-half years at my current position, and I've actually been familiar with the tool for about the last five or six years.
What do I think about the stability of the solution?
The stability has been wonderful. I don't think I've had a showstopper issue with the application at any point. It's never caused any major issues with our systems, and I will give credit where credit's due: even as they continue to enhance the tool, it has stayed wonderfully stable through that process, and everyone on CA's side has been there to support any kind of small bug or enhancement that might come up along the way.
What do I think about the scalability of the solution?
It has scaled tremendously. I don't want to harp on it too much, but when you start looking at data generation, your options for incorporating it into your environment are endless.
I have my manual testers using it to create data on the fly at any moment. I have my automation users, who go through a little more of it, getting daily builds sent to them. I have my performance guys sending in requests for hundreds of thousands of records at a time - something that might have taken them two weeks to build out before, and that I can now do in a couple of hours. It ties in with our pipelines out to production.
It's a wonderful tool when it comes to the scalability.
How are customer service and technical support?
Any time I've had something I questioned - "Could this potentially be a bug?" or, even better, "I would love this possible enhancement" - it's been a quick phone call or email away. They respond immediately, every single time, and they communicate with me, look at what our use case is, and then come up with an answer, typically on the spot. It's great.
Which solution did I use previously and why did I switch?
We knew we needed to invest in a new solution because our company was dealing with a lot of transformations. Not only do we still have deep roots in our legacy systems - the iSeries, DB2 type of systems - but we have tons and tons of applications built on a much larger scale in the past 40 years, since the original solutions were rolled out. Not only did we have a legacy transition occurring within our own company, but we also changed the way our teams were built. We went from teams with a waterfall, iterative, top-down approach to a much more agile shop.
When you look at the two things together, any data solution we were using before - manual hands on keyboards, or automated scripts - just wasn't going to cut it anymore. It wasn't fast enough or able to react quickly enough. We started looking and realized that Test Data Manager by CA was the tool that could actually help us evolve that process.
When selecting a vendor, I wanted someone I'd actually have some kind of personal relationship with. I realize we can't always have that with everyone we work with, but CA has done a wonderful job of continuously reaching out and asking, "How are you doing? How are you using our product? How do you plan on using our product? Here's what we're considering doing; would that work for you?" They've been a wonderful partner in terms of communicating the road map of where this is all going.
How was the initial setup?
It's a great package that they have out there. It's a plug-and-play kind of system, so it executes well on its own to get up and running in the first place. When they do send releases in, it's as simple as loading the new release.
What's kind of neat about it is, if they do have something that needs to be upgraded on an extension of the system, some of the repositories and things like that, it's smart enough to actually let you know that needs to happen. It's going to shut it down, take care of it itself, and then rebuild everything.
Which other solutions did I evaluate?
We evaluated other options when we first brought it in. We looked at a couple of the others. The reason that we ended up choosing Test Data Manager was that it was stronger, at the time at least, in its AS/400 abilities, which is what all of our legacy systems are built on. It was much more advanced than anything else that we were seeing on the market.
What other advice do I have?
It's not something I would often give, but I do give this a perfect rating. We've been able to solve all of the data issues we had when we first brought it in, and it has expanded everything we can do as we look to the future of where we want to go with this. That includes its tie-ins for service virtualization, and it includes the way we can build out our environments in a way we'd never considered before. It's a much more dynamic world that we can react to a lot faster, and I attribute almost all of that to Test Data Manager.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Senior System Engineer at a comms service provider with 10,001+ employees
Can mask data according to your needs and statistical distribution
Pros and Cons
- "The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you manage them - for example, mask them - and then you load this masked data into the destination. With this solution, it's completely different. On the source environment, there are functions compiled inside the environment, which means they are amazingly fast and, on the source environment, data are masked already. So when you take them, you already take masked data from the source. So you can copy them, even with an unencrypted pipe."
- "We are using a specific database. We are not using Oracle or SQL, Microsoft. We are using Teradata. There are some things that they don't have in their software. For example, when delivering data, they are not delivering them in the fastest possible way. There are some things which are faster."
What is our primary use case?
Data masking - exactly what this tool was created for. We are going to use it for incorporation into test and development environments.
We manage a lot of customer data, and the idea is not to have to approve or grant a lot of permissions to read all of it. We need to mask the data, but we still need to work with it, which means developers need access to a lot of data.
We needed a tool that makes the data provided to developers easy to use and anonymized. This is probably the one and only tool with so many sophisticated features. We need those features for masking/anonymizing data with its statistical distribution preserved, and for preparing large volumes of test/dev data.
How has it helped my organization?
This tool is super fast and it has solved many of our issues. It is also much better than many other solutions on the market. We've already tested different ones, but this one currently looks the best.
We can deliver, first, securely; second, safely; and third, without extra permissions. We no longer need to go through a whole procedure for developers to get permission to access production data. And it works like production data, because it's almost the same data - though of course not real. The structure of the data is the same and the context is the same, but the values are different.
The features are very technical and are definitely what we need. We've got rules, especially from security and compliance, and we need to take care of our customer data very securely and carefully. There is no other product that gives you these capabilities.
What is most valuable?
- Masking of data.
- There are lots of filters, templates, vocabularies, and functions (which are very fast) to mask data according to your needs and statistical distribution, too.
The functionality of this tool is something that changed our work. We need to manage the data, and developers need to work on realistic data. On the other hand, you don't want to hand that data to the developers, because it's customer data they shouldn't see. This tool can deliver an environment which is safe for developers. Developers can work on a large amount of proper, realistic data which, despite being realistic, is not true, because it's masked. For the developer it's absolutely proper: instead of a customer's real date of birth, he's got a different date of birth - realistic data, but not the exact data, because it's already masked.
The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you manage them - for example, mask them - and then you load this masked data into the destination. With this solution, it's completely different.
On the source environment, there are functions compiled inside the environment, which means they are amazingly fast and, on the source environment, data are masked already. So when you take them, you already take masked data from the source. So you can copy them, even with an unencrypted pipe.
These are two pros you cannot find anywhere else. Most tools - Informatica, for example - take the data as it is, in its original, unmasked form; then you mask it on the Informatica server; and then you send it to the destination. Here, in TDM, you take data that is already masked.
What needs improvement?
If you want to automate something, you need to figure it out yourself. There is no easy way (the software is Windows-only). I miss having terminal tools, or an API for the software.
The software is working on Windows and, from some perspectives, that might be a problem. From our perspective, it is a problem because we need to have a different team to deploy for our Windows machines. This is a con from our perspective. Not a big one, but still.
They have already improved this product since our testing of it, so it may be that the following no longer applies.
The interface is definitely one you need to get used to. It's not like a modern interface that's really clear and easy to scan; it's from some time ago - an interface you need to get to know.
Also, we are using a specific database: not Oracle or Microsoft SQL, but Teradata. There are some things they don't have in their software. For example, when delivering data, they don't deliver it in the fastest possible way; there are approaches that are faster.
We asked CA if there would be any possibility to implement our suggestions and they promised us they would but I haven't seen this product for some time. Maybe they are already implemented. The requests were very specifically related to the product we have, Teradata. This was one of the real issues.
Overall, there was not much, in fact, to improve.
For how long have I used the solution?
Less than one year.
What do I think about the stability of the solution?
We didn't face any issues with stability.
The only problems we had, which we asked CA to solve, were some very deep things related to our products. They were not core issues; it was more, "We would like to have this because it's faster, or that because it's more robust or valuable."
What do I think about the scalability of the solution?
I cannot answer, because we only did a PoC, so I have no idea how it will work if there are a couple of designers working with the tool.
Still, I don't see any kind of issues because there will be only a few people working with the design of masking and the rest will be done on the scripting level, so it's possible we won't see it at all.
How are customer service and technical support?
During the PoC we had a support person from CA assigned to us who helped in any way we needed.
Which solution did I use previously and why did I switch?
We didn't use any other solution; we simply needed to have this implemented and tried to figure it out. We looked at the market for what we could use, and TDM was our very first choice.
How was the initial setup?
I didn't do the setup by myself, it was done by a person from CA. It didn't look hard. It looked pretty straightforward, even with configuration of the back-end database.
Which other solutions did I evaluate?
After doing our PoC, we tried to figure out if there was any other solution that might fit. From my perspective - and I was responsible for the whole project - there was no solution we could use in the same or a similar way. This product fits our compliance and security requirements very tightly, which is important.
There aren't any real competitors on the market. I think they simply found a niche and started to develop it. We really tried; there are many options out there, but some features are specific to this product, and they're features you might need if you work for a big organization. Those features aren't in any other product.
There are many solutions for masking data - there are even very basic Python modules you can use - but you need to take the data from the source, mask it, and deliver it to the destination. If you have a big organization like ours and you have to copy a terabyte of data, that takes hours. With this solution, that terabyte is done in a couple of minutes.
What other advice do I have?
We did a proof of concept with TDM to see if the solution fits our needs. We did it for a couple of months, did some testing, did some analysis, and tried to determine if it fit our way of working. Now we are going to implement it in production.
If there is a big amount of data to mask and you need to deliver it conveniently and easily, there is no other solution. Configuration is easy. It's built slightly differently - the design is slightly different from any other tool - but the delivery of the masked data is much smoother than in any other solution. You don't need a stepping stone: you don't need to copy the data somewhere, mask it, and then send it, because you copy data that is already masked. Data is masked on the fly, before it is copied to the destination. You don't need a server in the middle. In my opinion, this is the biggest feature this software has.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Senior Test Data Management Specialist at a transportation company with 10,001+ employees
We have moved data creation from manual or limited and costly automated processes to a set of weekly data builds and an On-Demand offering capable of delivering versatile data.
What is most valuable?
Synthetic data creation, with use of the flexible built-in data functions.
How has it helped my organization?
We have moved data creation from manual or limited and costly automated processes to a set of weekly data builds and an On-Demand offering capable of delivering versatile data that meets the needs of our many teams.
What needs improvement?
An increase in the types of programmatic capabilities could make the tool more powerful. For instance, data inserts in one table are often contingent upon entries or flags in another. In those situations, there is no way to include or exclude a row based on the primary table.
For how long have I used the solution?
2.5 years
What was my experience with deployment of the solution?
The tool installs in a snap and includes test repositories that allow for new users to start working with the application immediately.
What do I think about the stability of the solution?
The stability of the tool has never been an issue. Any time a possible defect has surfaced, the support team was quick to respond. Beyond that, there have been constant new versions created that provide optimizations.
What do I think about the scalability of the solution?
The many databases supported and data delivery formats available provide a seemingly endless supply of options to meet the ever growing demand of our testing teams.
How are customer service and technical support?
Customer Service:
Above and beyond that of any company that I’ve worked with before. I’ve never been more than an hour or two without a response to a standard ticket creation.
Technical Support:
Also above the standard. Those who support TDM have intimate knowledge of the product and its many available use cases.
Which solution did I use previously and why did I switch?
All previous solutions were homegrown, and they lacked the completeness of the solution we found in CA's TDM.
What about the implementation team?
Our process had ups and downs as we attempted to get TDM off the ground. The winning combination for us was TDM experts from a vendor-partner, Orasi Software, Inc., working hand in hand with employees who had intimate knowledge of our systems.
Which other solutions did I evaluate?
This tool had been purchased by another group in our company, but its potential was not realized.
What other advice do I have?
Our biggest win in implementing this tool was to start by working with a single team and finding some data-delivery wins. After that internal proof of concept was realized, expansion to other teams became much simpler.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
AVP Quality Assurance at GM Financial
Video Review
Gives you confidence in data that you're creating and keeps you out of the SOX arena, because there's no production data within that environment.
What is most valuable?
Test Data Manager allows you to do synthetic data generation. It gives you a high level of confidence in the data you're creating. It also keeps you out of the SOX arena, because there's no production data within that environment. The more you can put controls in place and keep your data clean, the better off you are. There are some laws coming into effect in the next year or so that are going to really scrutinize production data being in the lower environments.
How has it helped my organization?
We have certain aspects of our data that we have to self-generate. The VIN is one that we have to generate, and we have to be able to generate it on the fly. TDM allows us to generate that VIN based on whether it's a truck, a car, etc. We're in the auto loan business.
What needs improvement?
I would probably like to see improvement in the ease of the rule use. I think sometimes it gets a little cumbersome setting up some of the rules. I'd like to be able to see a rule inside of a rule inside of a rule; kind of an iterative process.
What do I think about the stability of the solution?
TDM has been around for a couple of years; I used it at my previous company as well. It's been really stable. It's a tool that probably doesn't get fully utilized. We intend to take it, partner it with the SV solution, and generate the data for the service virtualization side.
What do I think about the scalability of the solution?
Scalability is similar along the SV lines; it's relatively easy to scale. It's a matter of how you want to set up your data distribution.
How are customer service and technical support?
We were very pleased with the technical support.
Which solution did I use previously and why did I switch?
When you have to generate the loan volume that we need - 50 states, various tax laws, etc. - I needed a solution that could produce quality data to fit the targeted testing we need, any extra test cases, etc. We're concentrated on being very succinct in the delivery and in the time frame we need to get the testing done.
I used CA at my previous company; I have a prior working relationship with them.
How was the initial setup?
The initial setup was done internally. We were able to follow the instructions that were online when we downloaded it and get the installation done. We did have a couple of calls into the technical support area, and they were able to resolve them fairly quickly.
What other advice do I have?
From my synthetic generation standpoint: generating synthetic data can often be cumbersome. With TDM and some of its rules, you can generate it with your rules in place, so you know your data is going to be very consistent. When we want a particular loan to come through with a particular credit score, we can select and generate the data out of TDM, which creates a data file for my front-end script, using DevTest.
I also push the service virtualization recording to respond to the loan request - hitting the credit bureau, returning a certain credit score - which then gets us within the target zone for the loan we're looking for, to trigger a rule.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Along with stability, the solution offers its users a supportive technical support team
Pros and Cons
- "Data generation, data masking, and data subsetting are three key features of this tool."
- "I would say implementing test data management requires a lot of effort and time...Even if you have experienced people, it takes time."
What is our primary use case?
Broadcom Test Data Manager is used for test data management requirements, along with data generation, data masking, data subsetting, and service virtualization. The solution can also be connected to your automation tool.
What is most valuable?
Data generation, data masking, and data subsetting are three key features of this tool, and all three come in a single tool.
What needs improvement?
Regarding improvements, we always need to customize Broadcom Test Data Manager, because every customer and every organization is different. A few changes are always required, and we get support from Broadcom for them.
Stability is an area in the solution that has room for improvement. In general, the price, stability, and scalability could be better.
For how long have I used the solution?
I have been using Broadcom Test Data Manager for a year and a half. Also, I am using the solution's latest version.
What do I think about the stability of the solution?
Stability-wise, I rate the solution a nine out of ten.
What do I think about the scalability of the solution?
Scalability-wise, I rate the solution between seven and eight out of ten, since there are so many systems and databases involved, and the solution's scalability depends on them. But it does support a lot of databases and applications. Right now, three users are using the solution in my company, and we don't plan to increase its usage.
How are customer service and support?
We have contacted Broadcom's customer service and support. The technical support has been really nice; they have been very supportive.
How was the initial setup?
I would say that the initial setup is not that easy, but it's not that difficult either; an average level of complexity is involved. Installation doesn't take much time. However, implementing a full-fledged test data management solution can take years, depending on the organization and its complexity. For the deployment process, we started with data masking and then moved on slowly to synthetic data generation and data subsetting. The steps are difficult to elaborate on, because they depend on the organization's requirements and what it really wants to implement.
The staff required, from deployment through maintenance, really depends on the organization. Typically it takes a minimum of three to five resources, and it can grow from there based on requirements.
What's my experience with pricing, setup cost, and licensing?
My company makes yearly payments toward the licensing costs. I consider the licensing costs expensive, especially considering what I have heard from my colleagues: there are so many vendors in the TDM market now, and Broadcom is one of the costlier ones.
What other advice do I have?
I would say implementing test data management requires a lot of effort and time. Many organizations fail to provide sufficient support and time, since most organizations think that having the tools, licenses, and people will solve their problems. That's not the case; even with experienced people, it takes time. In my first organization, it took almost four years for us to deliver synthetic data generation capabilities. Certainly, you should implement Broadcom Test Data Manager, but the organization doing so should have time and patience.
Overall, I rate the solution a nine out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
