Lead Software Engineer at a tech vendor with 10,001+ employees
MSP
Top 10
Nov 1, 2025
I have used Apache Kafka on Confluent Cloud on one of my projects for log monitoring. My main use case was streaming logs: I wanted to capture logs coming from various interconnected systems into a unified place, so Confluent helped me streamline all of those logs into one place, and then I consumed the logs that were produced. Having all of our logs unified helped our team a lot, because the main challenge we were trying to solve was that there was no single place where we could view the logs, such as Grafana or anything similar. We had to use multiple systems to check the logs: databases, different applications, and logs for various other APIs. They were not unified in one place. We unified all of those logs by producing them to Apache Kafka on Confluent Cloud and then consuming them. Because I already knew Kafka, using Confluent made it much simpler; it was easier to understand and grasp, and very well structured, down to the JSON responses. We run Apache Kafka on Confluent Cloud on Confluent's own servers; we had taken a subscription with Confluent, so it is effectively a private cloud. In terms of the development effort it took to set all of this up, Confluent let us do it quickly. Plain vanilla Kafka would have required much more manual effort, but the user interface and experience here are largely drag and drop, so we could get it done faster. We did initially have Grafana, but it was limited to certain applications, and ours was not among them. Because of that, we decided to move to a unified log-monitoring system and started with vanilla Kafka installed on our own servers. We found Apache Kafka on Confluent Cloud much more convenient to set up, so to get things done faster, we shifted to Confluent.
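To make the pattern concrete, here is a minimal sketch of that log-streaming setup using the confluent-kafka Python client; the topic name ("app-logs"), the credentials, and the log-record fields are illustrative placeholders, not the reviewer's actual configuration.

```python
# Several services produce structured logs to one unified Kafka topic on
# Confluent Cloud. All names and credentials below are placeholders.
import json
import socket
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<broker>.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",      # Confluent Cloud API key
    "sasl.password": "<api-secret>",   # Confluent Cloud API secret
})

def ship_log(service: str, level: str, message: str) -> None:
    """Publish one structured log record to the unified 'app-logs' topic."""
    record = {"service": service, "host": socket.gethostname(),
              "level": level, "message": message}
    # Keying by service keeps each service's logs ordered within a partition.
    producer.produce("app-logs", key=service, value=json.dumps(record))
    producer.poll(0)  # serve delivery callbacks

ship_log("billing-api", "ERROR", "payment gateway timeout")
producer.flush()  # block until all buffered records are delivered
```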
We need to send a lot of asynchronous messages in this project, and we use Apache Kafka on Confluent Cloud as middleware to guarantee asynchronous messaging between the services. We use Apache Kafka on Confluent Cloud in the cloud, with AWS.
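As a rough sketch of the consuming side of such asynchronous messaging, the snippet below reads events and commits offsets only after each message is handled, giving at-least-once delivery; the group ID, topic name, and handle() stub are assumptions for illustration.

```python
# A consuming service reads events and commits offsets manually after
# processing (at-least-once delivery). Names are placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<broker>.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
    "group.id": "order-service",       # one consumer group per service
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,       # commit manually after processing
})
consumer.subscribe(["service-events"])

def handle(event: dict) -> None:
    """Placeholder for the consuming service's business logic."""
    print("handled", event)

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        handle(json.loads(msg.value()))
        # Commit only once the message has been processed successfully.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```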
VP Engineering at a tech vendor with 1,001-5,000 employees
Real User
Top 10
May 28, 2025
We find that the best features include the CDC functionality: with the connector, we take data from our SQL database and publish it to many consumers. This lets domain teams publish changes to their domain business objects without much code or effort, so we can more easily provide a very robust layer of APIs and events. The second use case is easier projection of data. We found that many teams were struggling to create projections and read stores with regular event buses, and Apache Kafka on Confluent Cloud helped us because of features such as its log architecture; KSQL also helped us there. When ordering matters, we rely on Apache Kafka on Confluent Cloud.
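A minimal sketch of the projection / read-store idea described above, assuming JSON change events with "id", "op", and "data" fields (an illustrative schema, not the reviewer's): a consumer replays the log and folds each event into an in-memory read model.

```python
# Build a read store by folding a stream of change events into a dict keyed
# by primary key. Replaying from the earliest offset rebuilds the projection.
# SASL credentials omitted for brevity; topic and schema are assumptions.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<broker>.confluent.cloud:9092",
    "group.id": "customer-projection",
    "auto.offset.reset": "earliest",  # replay the log to rebuild the store
})
consumer.subscribe(["customers.cdc"])

read_store: dict[str, dict] = {}  # primary key -> latest row state

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    if event["op"] == "delete":
        read_store.pop(event["id"], None)
    else:  # create / update: last write wins
        read_store[event["id"]] = event["data"]
```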
We are building an event-driven system, and we send all the events for microservices communication via Apache Kafka on Confluent Cloud.
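On the ordering point raised above, a short hedged sketch: Kafka guarantees order only within a partition, and records with the same key always land on the same partition, so keying domain events by an aggregate ID keeps each aggregate's events in sequence. The topic and field names are illustrative.

```python
# Publish domain events keyed by aggregate ID so that all events for one
# order stay on one partition and are consumed in order.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "<broker>.confluent.cloud:9092"})

def publish(event_type: str, order_id: str, payload: dict) -> None:
    event = {"type": event_type, "order_id": order_id, **payload}
    producer.produce("order-events", key=order_id, value=json.dumps(event))

publish("OrderPlaced", "o-123", {"total": 99.90})
publish("OrderShipped", "o-123", {"carrier": "DHL"})  # same key -> same partition
producer.flush()
```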
Global Vice President, Product Strategy & GTM at NucleusTeq
Real User
Top 10
Mar 4, 2025
I use Apache Kafka on Confluent Cloud as a streaming platform for enterprises to move data in real time from the point of generation to where it needs to be consumed. Use cases include point of sale, IoT, financial transactions, and any application that benefits from real-time data processing. My work involves using these solutions for industry verticals and customers in the retail and financial services sectors.
We use Apache Kafka on Confluent Cloud for streaming large volumes of data in real time. It's employed in scenarios such as handling events from various countries and streaming them efficiently for our clients. We also utilize it for data analytics and, in client versions, for topic creation, consumer consumption, and ACL provisioning.
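The topic-creation and ACL-provisioning workflow mentioned here can be scripted with the confluent-kafka AdminClient; the sketch below uses placeholder names (the "country-events" topic and "User:sa-analytics" principal are assumptions, and on Confluent Cloud the principal would typically be a service account).

```python
# Create a topic, then grant a consumer principal read access to it.
from confluent_kafka.admin import (AdminClient, NewTopic, AclBinding,
                                   AclOperation, AclPermissionType,
                                   ResourceType, ResourcePatternType)

admin = AdminClient({"bootstrap.servers": "<broker>.confluent.cloud:9092"})

# 1. Create a topic with an explicit partition count and replication factor.
futures = admin.create_topics([NewTopic("country-events", num_partitions=12,
                                        replication_factor=3)])
futures["country-events"].result()  # raises on failure

# 2. Allow the consuming service account to read the topic.
acl = AclBinding(ResourceType.TOPIC, "country-events",
                 ResourcePatternType.LITERAL,
                 "User:sa-analytics", "*",
                 AclOperation.READ, AclPermissionType.ALLOW)
for binding, f in admin.create_acls([acl]).items():
    f.result()  # raises on failure
```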
Whenever you need to handle a huge load of real-time data processing, Kafka is useful. We currently use it for an output management system for insurance, where the system receives data in fixed amounts and has to process it in several steps. We manage these steps with Kafka because the load can be quite big, with millions of XMLs coming into the system that need to be processed in near real time.
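One stage of such a multi-step pipeline might look like the hedged sketch below: consume an XML document from the previous stage's topic, transform it, produce the result to the next stage's topic, and commit only afterwards. The topic names and the transformation are illustrative, not the reviewer's actual system.

```python
# One pipeline stage: consume XML from the upstream topic, transform it,
# publish downstream, then commit the consumed offset.
import xml.etree.ElementTree as ET
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "<broker>:9092",
    "group.id": "stage-2-enrichment",
    "enable.auto.commit": False,
})
consumer.subscribe(["documents.stage1"])
producer = Producer({"bootstrap.servers": "<broker>:9092"})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    doc = ET.fromstring(msg.value())    # parse the incoming XML document
    doc.set("processed-by", "stage-2")  # placeholder transformation
    producer.produce("documents.stage2", value=ET.tostring(doc))
    producer.poll(0)
    # Commit only after the result was handed to the next topic.
    consumer.commit(message=msg, asynchronous=False)
```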
Integration Solution Architect at a consultancy with 11-50 employees
Real User
Top 5
May 24, 2024
In my company, we are not using the tool for analytics; it is more for CDC (change data capture) processes. It is used to extract data from a database and make it available in other parts of our systems, or to produce events that inform us of data updates.
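For context, CDC connectors commonly emit an envelope carrying the row state before and after a change; the exact shape depends on the connector. The sketch below assumes a Debezium-style envelope, purely for illustration.

```python
# Interpret a Debezium-style CDC envelope: "op" is the change type, and
# "before"/"after" hold the row state around the change.
import json

cdc_message = json.loads("""{
    "op": "u",
    "before": {"id": 42, "status": "pending"},
    "after":  {"id": 42, "status": "shipped"}
}""")

OPS = {"c": "created", "u": "updated", "d": "deleted"}
row = cdc_message["after"] or cdc_message["before"]  # deletes carry no "after"
print(f"row {row['id']} was {OPS[cdc_message['op']]}")  # row 42 was updated
```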
Data Architect at a government with 10,001+ employees
Real User
Top 20
Jan 26, 2024
We use Apache Kafka with Confluent Cloud for specific real-time transaction use cases, both on-premises and in the cloud. We have been using Confluent Cloud for about five years. We initially used it for data replication, then expanded to microservices integration and Kubernetes, focusing on improving data quality and enabling real-time location tracking. We configure it for data transactions across various topics and partitions, depending on the specific use case and required throughput. From an IT perspective, I've used this product across all domains: system development, operations, data management, and system quality.
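On the partition-sizing point, a common back-of-the-envelope rule is to provision enough partitions to cover the target throughput at the measured per-partition rate, plus some headroom; the numbers below are purely illustrative assumptions, not the reviewer's figures.

```python
# Rough partition-count estimate from target throughput and the measured
# per-partition consumer rate. All figures are illustrative assumptions.
import math

target_mb_per_s = 120          # required topic throughput
per_partition_mb_per_s = 10    # measured consumer throughput per partition
headroom = 1.5                 # growth / hot-partition safety factor

partitions = math.ceil(target_mb_per_s / per_partition_mb_per_s * headroom)
print(partitions)  # -> 18
```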
Senior Architect at an outsourcing company with 501-1,000 employees
Real User
Nov 29, 2023
Our use case is real-time data integration; it was the preferred tool for this purpose. Additionally, we employed Azure EventHub, another such service, for real-time data in a couple of larger programs focused on integrating and visualizing real-time data.
I use it for real-time processing workloads. In some instances, it's IoT data that we need to put into a data lake. However, we are using Redpanda, which still speaks the Kafka protocol. Lots of real-time processing and high-velocity data are the use cases.
We had a legacy website collecting user data as they logged into the portal. We wanted to capture that information in Snowflake and store it in a mobile app. We used Apache Kafka on Confluent Cloud for real-time data streaming.
Apache Kafka on Confluent Cloud provides real-time data streaming with seamless integration, enhanced scalability, and efficient data processing, recognized for its real-time architecture, ease of use, and reliable multi-cloud operations while effectively managing large data volumes. Apache Kafka on Confluent Cloud is designed to handle large-scale data operations across different cloud environments. It supports real-time data streaming, crucial for applications in transaction processing, ...
My use cases with this product are all about events: events are what I use Apache Kafka on Confluent Cloud for.
We have basically four bands of use cases, in which we publish data to Kafka topics and stream it across microservices.