
Matillion Data Productivity Cloud vs Skyvia comparison

 

Comparison Buyer's Guide

Executive Summary
Updated on Jan 18, 2026

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

Matillion Data Productivity Cloud
- Ranking in Cloud Data Integration: 12th
- Average Rating: 8.4
- Reviews Sentiment: 7.4
- Number of Reviews: 28
- Ranking in other categories: AI Data Analysis (17th)

Skyvia
- Ranking in Cloud Data Integration: 26th
- Average Rating: 9.0
- Reviews Sentiment: 7.8
- Number of Reviews: 1
- Ranking in other categories: Data Integration (56th)
 

Mindshare comparison

As of May 2026, in the Cloud Data Integration category, the mindshare of Matillion Data Productivity Cloud is 5.7%, up from 3.2% the previous year. The mindshare of Skyvia is 1.4%, up from 0.2% the previous year. Mindshare is calculated from PeerSpot user engagement data.
Cloud Data Integration Mindshare Distribution
- Matillion Data Productivity Cloud: 5.7%
- Skyvia: 1.4%
- Other: 92.9%
 

Featured Reviews

Jitendra Jena - PeerSpot reviewer
Director at Axtria - Ingenious Insights
Easy integration and workflow proposals streamline processes
The predefined connectors eliminate the need to write code for connectivity. If you have a predefined connector, it is easy to use with plug and play functionality. The processing time and ease of use are significant benefits. As everyone is moving into AI integration, it will definitely help. When creating workflows, they can propose solutions directly.
RH
CTO & Developer at a self-employed consultancy
The product works, is simple to use, and is reliable.
Error handling has caused me many problems in the past. When an error occurs, the event on the connection that is called does not seem to behave as documented. If I attempt a retry or opt not to display an error dialog, it does it anyway. In all fairness, I have never reported this. More importantly, a unique error code should be passed to the error event that identifies a uniform type of error, such as ecDisconnect or eoInvalidField. It is very hard to find what any of the error codes currently passed actually mean; a list for each database engine would be great. Trying to catch an exception without displaying the UniDAC error message is impossible, no matter how you modify the parameters in the OnError of the TUniConnection object.

I have already implemented the following things myself. They are suggestions rather than specific requests:
- Copy Datasets: This contains an abundance of redundant options. A facility to copy one dataset to another in a single call would be handy.
- Redundancy: I am currently working on this. I have extended TUniConnection with an additional property called FallbackConnection. If the TUniConnection goes offline, the connection attempts to connect the FallbackConnection. If successful, it then sets the Connection properties of all live UniDatasets in the app to the FallbackConnection and re-opens them if necessary. The extended TUniConnection holds a list of datasets that were created; each dataset is responsible for registering itself with the connection. This is a highly specific feature that supports the offline mode found in mission-critical/point-of-sale solutions. I have never seen it implemented before in any DACs, but I think it is a really unique feature with a big impact.
- Dataset to JSON/XML: A ToSql function on a dataset that creates a full SQL text statement with all parameters converted to text (excluding blobs) and included in the returned string.
- Extended TUniScript: TMyUniScript allows me to add lines of text to a script using the normal dataset functions: Script.Append, Script.FieldByName('xxx').AsString := 'yyy', Script.AddToScript, and finally Script.Post, then Script.Commit. The AddToScript builds the SQL text statement and appends it to the script using #e above.
- Record Size Calculation: It would be great if UniDAC could estimate the size of a particular record from a query or table. This could be used to automatically set the packet fetch/request count based on the size of the Ethernet packets on the local area network. I believe this would increase performance and reduce network traffic when returning larger datasets. I am aware this would also be a feature unique to UniDAC, but it would gain a massive performance enhancement. I would suggest setting the packet size on the TUniConnection, which would affect all linked datasets.
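The redundancy idea the reviewer describes, a connection that fails over to a fallback and repoints every registered dataset, can be sketched language-agnostically. The following Python is purely illustrative: it is not UniDAC or Delphi code, and all class and attribute names (Connection, FallbackConnection, Dataset, register) are hypothetical stand-ins for the reviewer's extended TUniConnection pattern.

```python
class Connection:
    """Minimal stand-in for a database connection."""
    def __init__(self, dsn, alive=True):
        self.dsn = dsn
        self.alive = alive

    def execute(self, sql):
        if not self.alive:
            raise ConnectionError(f"{self.dsn} is offline")
        return f"ok: {sql} via {self.dsn}"


class FallbackConnection:
    """Routes calls to a primary connection; if the primary is offline,
    switches to the fallback and repoints every registered dataset,
    mirroring the reviewer's extended-TUniConnection idea."""
    def __init__(self, primary, fallback):
        self.primary = primary
        self.fallback = fallback
        self.active = primary
        self.datasets = []  # datasets register themselves here

    def register(self, dataset):
        self.datasets.append(dataset)

    def execute(self, sql):
        try:
            return self.active.execute(sql)
        except ConnectionError:
            self.active = self.fallback      # fail over once
            for ds in self.datasets:         # repoint live datasets
                ds.connection = self.active
            return self.active.execute(sql)


class Dataset:
    """Each dataset registers itself with its connection, as in the review."""
    def __init__(self, conn):
        self.connection = conn
        conn.register(self)
```

In this sketch a query issued while the primary is down transparently lands on the fallback, and every registered dataset's connection reference is updated, which is the offline-mode behaviour the reviewer wants for point-of-sale scenarios.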

Quotes from Members

 

Pros

"Matillion ETL is one hundred percent stable."
"It takes less than five minutes to set up and delivers results. It is much quicker than traditional ETL technologies."
"The tool's middle-dimensional structure significantly simplifies obtaining the right data at the appropriate level. This feature makes deploying our applications easier since we utilize a single source without publishing data from various sources."
"We allow non-technical people to use Matillion to load data into our data warehouse for reporting. Thus, it is easy enough to use that we don't always have to get a technical person involved in setting up a data movement (ETL)."
"Matillion ETL helps manage data movement, ingestion, and transformation through pipelines."
"It's highly scalable. It takes upon itself the Redshift scalability, so it's very good."
"It's been able to do everything we require."
"It has enabled building a data warehouse within three months from the ground up to support WMS reporting."
"For what it offers, I think this solution is a must for any Delphi programmer."
 

Cons

"Our main challenge currently is that Matillion runs on an EC2 instance, limiting us to running only two processes simultaneously at the entry level."
"There are certain functions that are available in other ETL tools which are still not present in Matillion ETL. It would be good to have more features."
"I am looking forward to seeing the expansion of the source range for their data loader product."
"To complete the pipeline, they might want to include some connectors which would put the data into different platforms. This would be helpful."
"It is not an end-to-end platform for ETL. To complete the pipeline, they might want to include some connectors which would put the data into different platforms."
"When using the SQL loader type, there were not a lot of pre-processing features for the data. For example, if there is a table with twenty columns but we only want to load ten, we can use a security script to select the specific columns needed. However, if we want to perform extensive pre-processing of the data, I faced some challenges with Matillion ETL. I did not encounter many challenges, but my overall experience is limited as I only have three years of experience."
"Performance can be improved for efficiency, and it can be made faster."
"The product must enhance its near-real-time data capture feature."
"Error handling has caused me many problems in the past; when an error occurs, the event on the connection that is called does not seem to behave as documented."
 

Pricing and Cost Advice

"The cost of the solution is high and could be reduced."
"The price of Matillion ETL is reasonable."
"The solution is very cheap. You're paying $2.50 an hour, and if you stop your service, which you can do, you're not getting charged. Currently, our ETL process is just an overnight process that runs for about an hour. I can start and stop my server just for an hour if I want to and spend $2.50 a day for an ETL solution. There are no additional costs."
"I have heard from my manager and other higher ups, "This product is cheaper than other things on the market," and they have done the research."
"The price needs to be lower."
"The solution's pricing is not based on the licensing cost but on the running hours when the Matillion instance is up and running."
"It is not necessarily a cheap solution. However, it's reasonably priced, especially with the smaller machines that we run it on."
"It is cost-effective. Based on our use case, it's efficient and cheap. It saves a lot of money and our upfront costs are less."
Information not available
 

Top Industries

By visitors reading reviews

Matillion Data Productivity Cloud
- Financial Services Firm: 12%
- Computer Software Company: 10%
- Manufacturing Company: 8%
- Construction Company: 7%

Skyvia
- Performing Arts: 20%
- Construction Company: 11%
- Outsourcing Company: 8%
- Computer Software Company: 7%
 

Company Size

By reviewers

Matillion Data Productivity Cloud
- Small Business: 6
- Midsize Enterprise: 10
- Large Enterprise: 11

Skyvia
- No data available
 

Questions from the Community

What is your experience regarding pricing and costs for Matillion ETL?
The pricing is managed by the tooling team. The pricing is moderate, neither expensive nor cheap.
What needs improvement with Matillion ETL?
The main areas for improvement are AI features and scalability.
What is your primary use case for Matillion ETL?
For the ETL, we are using Matillion Data Productivity Cloud. We have skilled resources for Matillion Data Productivity Cloud, which is why we are using it. The infrastructure is provided by the cus...
 

Also Known As

Matillion ETL for Redshift, Matillion ETL for Snowflake, Matillion ETL for BigQuery
Skyvia, Skyvia Data Integration
 

Overview

 

Sample Customers

Thrive Market, MarketBot, PWC, Axtria, Field Nation, GE, Superdry, Quantcast, Lightbox, EDF Energy, Finn Air, IPRO, Twist, Penn National Gaming Inc
Boeing, Sony, Honda, Oracle, BMW, Samsung