Sr. DevOps Engineer at a consultancy with 501-1,000 employees
Real User
Top 20
Sep 5, 2025
I have a total of around four years of experience across multiple clouds, especially AWS, and I have used Amazon EKS many times for our various products and projects. In our environment, all of our other infrastructure and services already run on AWS, so we benefit from using Amazon EKS because those services can easily communicate with it. For example, some of our services need to access S3, where our application objects reside, so we can easily integrate them with Amazon EKS. We also use IAM roles for integration to provide granular access to resources. In our environment, most of our clients and resources reside in AWS, which is why we prefer to deploy other services there as well, while most of our development environment uses Lightsail. This gives us an edge, allowing us to easily move from development to staging or production environments within the same cloud.
DevOps Engineer at a consultancy with 201-500 employees
Real User
Top 5
Sep 4, 2025
Amazon EKS is a managed service, a Kubernetes cluster provided by the cloud. Whenever an application is going to be deployed on the cloud and we need seamless operation with zero downtime, we use Amazon EKS. The major advantage of Amazon EKS is that it is a managed service. With on-prem Kubernetes, whatever the error or downtime, we have to find the root cause ourselves; if the control plane is down, understanding and fixing it takes time. With Amazon EKS, we are not worried about any control plane components, such as etcd or the API server. That is a significant advantage, as AWS continuously checks for problems and, if they occur, fixes them immediately. They also handle backups in the background. As end users, we never notice any downtime; for us, the managed service is always working.

When dealing with Amazon EKS, as end users we also configure the IAM policies, roles, and responsibilities for users managing the cluster and node groups. The user-specific permissions determine whether they are able to deploy applications to the managed service, whether at root level, admin level, or developer level with view access. We make decisions accordingly and grant the IAM permissions. RBAC and contexts are two major parts of Kubernetes that provide security and the authentication and authorization process, and we implement both to secure our Amazon EKS cluster.

Because it is Kubernetes, these services need to be integrated with a container registry. ECR, the Elastic Container Registry, is that repository. When we create updated builds, we create updated Docker images, push them into ECR, and integrate our Amazon EKS services with ECR. It syncs with that repository, so whenever it identifies new Docker images, it pulls and deploys them into the Amazon EKS cluster.

When setting up an Amazon EKS cluster, we define the number of nodes with minimum, desired, and maximum parameters. For example, with a minimum of two nodes for normal load, only two nodes are always running. If increasing server load is detected, it automatically adds up to two more nodes if we have set the maximum to four. In the Amazon EKS cluster configuration, we specify load thresholds at 70% or 80%; it identifies that and increases or decreases nodes accordingly. If the load change persists for more than 15 minutes, it takes the appropriate action. We define an auto-healing process there.
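For illustration only, a minimal sketch of the kind of node-group scaling bounds described above, using the boto3 SDK. The cluster name, subnet ID, and role ARN are placeholders, not the reviewer's setup, and the CPU-threshold-based scaling mentioned would typically be handled by the Cluster Autoscaler or a similar component rather than by this call itself.

```python
# Sketch: creating a managed node group with min/desired/max scaling bounds.
# Assumes boto3 credentials are configured and the EKS cluster already exists.
# All names, IDs, and ARNs below are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_nodegroup(
    clusterName="demo-cluster",               # hypothetical cluster name
    nodegroupName="general-purpose",
    scalingConfig={
        "minSize": 2,       # two nodes always running for normal load
        "desiredSize": 2,
        "maxSize": 4,       # scale out to four nodes under heavy load
    },
    subnets=["subnet-0123456789abcdef0"],      # placeholder subnet ID
    instanceTypes=["t3.large"],
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",  # placeholder role ARN
)
print(response["nodegroup"]["status"])
```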
In my recent project, I have used Amazon EKS to deploy and scale machine learning and generative AI applications, containerizing LLM-powered APIs with Docker and deploying them on EKS for high availability and scalability. I also integrated CI/CD pipelines using GitHub Actions to automate deployments into EKS clusters, leveraging IAM roles for service accounts, KMS encryption, and VPC isolation for security. I used CloudWatch, Prometheus, and Grafana for monitoring, and Amazon EKS allowed me to build scalable, compliant, and enterprise-ready AI services without worrying about managing Kubernetes manually.
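As a rough illustration of the "IAM roles for service accounts" (IRSA) pattern mentioned above, here is a sketch using the official kubernetes Python client. The role ARN, namespace, and service account name are placeholders; it assumes an OIDC provider is already associated with the cluster and the IAM role trusts it.

```python
# Sketch: annotating a Kubernetes ServiceAccount so pods using it assume an
# IAM role via IRSA instead of node-level credentials. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

service_account = client.V1ServiceAccount(
    metadata=client.V1ObjectMeta(
        name="llm-api",                        # hypothetical workload name
        namespace="ml-services",               # hypothetical namespace
        annotations={
            # IRSA annotation: pods under this service account receive
            # temporary credentials for the referenced IAM role.
            "eks.amazonaws.com/role-arn":
                "arn:aws:iam::111122223333:role/llm-api-irsa",  # placeholder
        },
    )
)
v1.create_namespaced_service_account(namespace="ml-services", body=service_account)
```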
I am actually the end user myself, without a third party, at my company. I am using Amazon EKS for my Kubernetes workloads with a private ECR on AWS. The application focuses on workforce software as a service. Amazon EKS adapts to and fits my needs. It provides the right tools, although I still need to manually configure some nodes or SSH into them.
We are the end user of Amazon EKS. I will provide context for clarity. In my previous project at Johnson & Johnson, we had assets, including Jenkins, running on an Amazon EKS cluster. This is how we were using it as an end user. One of our use cases for Amazon EKS involved Jenkins running as a pod. We had an EBS volume in AWS that handled all storage-related tasks, while Amazon EKS managed the heavy lifting of ensuring Jenkins was always operational. Additionally, we integrated Prometheus and Grafana for data and metrics, which also ran as pods on Amazon EKS. We have also utilized Amazon EKS's integration with IAM.
Platform DevOps Engineer at a consumer goods company with 201-500 employees
Real User
Top 20
Sep 2, 2025
We have been working on a use case where we need to deploy an application in Amazon EKS, which is a managed Kubernetes service provided by Amazon. If I want to deploy an application, say application A, and I want it to be more scalable and easier to manage, I deploy it in an Amazon EKS cluster. I check for any open-source Helm charts, configure them, and deploy the Helm chart into the Amazon EKS cluster. This is how our day-to-day work goes. In general, we also deploy applications such as Apache Spark in our Amazon EKS clusters.
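For illustration, a minimal sketch of the Helm workflow described above, driving the helm CLI from Python. The release name, chart reference, namespace, and values file are placeholders, and it assumes the kubeconfig already points at the target EKS cluster (for example via `aws eks update-kubeconfig`).

```python
# Sketch: installing an open-source Helm chart into an EKS cluster.
# Assumes helm is installed and the current kube-context targets the cluster.
# All names below are placeholders.
import subprocess

subprocess.run(
    [
        "helm", "upgrade", "--install",
        "spark",                               # hypothetical release name
        "spark-operator/spark-operator",       # hypothetical chart reference
        "--namespace", "data-platform",
        "--create-namespace",
        "--values", "values-production.yaml",  # site-specific overrides
    ],
    check=True,
)
```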
I use Amazon EKS for creating users, depending on the project I've worked on. I use it to update config maps and for role binding in the Kubernetes cluster. Additionally, I use it for creating namespaces and accessing different clusters, such as dev clusters. Regarding integration with IAM in Amazon EKS, I use it primarily for creating IAM users and managing permissions for users.
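As an illustration of the namespace and role-binding work mentioned above, here is a sketch using the kubernetes Python client; the namespace and group names are placeholders, and the RoleBinding is expressed as a plain manifest dict.

```python
# Sketch: creating a namespace and binding a group to the built-in "view"
# ClusterRole within it. Namespace and group names are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

# Create a "dev" namespace.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="dev"))
)

# Grant read-only access in that namespace to a hypothetical group.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "dev-viewers", "namespace": "dev"},
    "subjects": [{"kind": "Group", "name": "dev-readonly",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "ClusterRole", "name": "view",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role_binding(namespace="dev", body=role_binding)
```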
Unitel Group at a comms service provider with 1,001-5,000 employees
Real User
Top 5
Sep 1, 2025
We have integrated with IAM. We haven't used it in production yet; we are still at the research and development stage. We are planning to use the Amazon EKS hybrid solution by running our own data center virtual machines as nodes in our Amazon EKS clusters. A hybrid cluster would be a great solution for us since we have many resources here and we want to utilize the cloud service. The only use case I can remember for integrating IAM with Amazon EKS is the hybrid nodes work. We wanted to easily integrate our on-premises nodes with Amazon EKS to fully utilize the cloud service alongside our existing on-premises nodes. The major use case was that we wanted to host calls where the latency has to be in the single digits. We can't have a call here in Mongolia jump to the Hong Kong AWS server and back to us; that would be a terrible experience for our users. We had to implement the single-digit latency solution within our country, so we have to utilize the on-premises data center nodes. That was a bit of a challenge for us. Other than that, it's usually great.
I use this to develop my products. I use it internally in my company and in other projects I have been working on, for deploying and managing the services that I run on the Amazon EKS infrastructure. I have not actually been involved with automated patching, as my role has predominantly been as a developer setting up how we deploy our applications into Kubernetes. That's primarily where I've gained experience, not on the server management side where the patching is done, so I'm not sure how the patching works or what benefits it could offer in that context. However, I can discuss how I manage my CI/CD pipelines and application deployment, and how I use Amazon EKS for deployment. That is the part I have experience with.
Technical Expert at a computer software company with 201-500 employees
Real User
Top 20
Aug 15, 2025
The main reasons for using Amazon EKS in our case were third-party solutions that were distributed as Helm charts. We were using Rancher to manage multi-cloud deployment for unification. We are also using it for evaluation purposes, building customer pilots and prototypes. Sometimes it is easy to make the build chain run through, come out as images, and deploy them into Kubernetes. It completely depends on the use case. If you have a very dynamic load or a requirement to scale nodes very fast, then Amazon EKS is a very good choice because you have that reach and the ability to scale quickly. But if you have a fairly static load, it becomes quite expensive quite quickly. They are expensive CPU cycles.
For the usual use cases of Amazon EKS, we have been running different kinds of servers, such as web pages, and we have also used it to deliver SaaS solutions to end customers. Basically, I deal with apps, SaaS applications, and websites. We don't use Amazon EKS internally; we usually provide the service to others for their solutions.
I am using Amazon EKS as an integrator. Regarding Amazon EKS integration with IAM, I do not use it. Using Amazon EKS as a Kubernetes cluster managed by the cloud provider offers more benefits because you don't have to configure the cluster on your own. You can use the default configuration and just set the right networking space, the subnets, and a few other things, but you don't have to stand up or configure your own cluster. Self-healing nodes help minimize administrative burdens in the organization. They help keep the nodes up and running. Then you can use other solutions to minimize costs or to keep the nodes running most of the time.
There are migration projects where we need to migrate some on-premises services to Kubernetes, and in those cases we have used Amazon EKS to migrate our workloads. There were two production services that we needed to deploy on the cloud, so we chose Amazon EKS to deploy them. We have multiple node groups within Amazon EKS: some are high performance, some have high-speed disks, and some have high CPU. We have provisioned multiple node groups and also enabled scaling at both the pod level and the cluster level. There are multiple features that we have used along with node management.

To streamline deployments, we apply GitOps within Amazon EKS. Whenever there is a commit to the deployment-related repository, we push the new image to the image registry, and from there we update the Kubernetes YAML manifests with the new image tag. We can also go Helm-chart based, or use Argo CD for automated deployments.

We have implemented two things with Amazon EKS. First, OIDC-based connectivity between AWS services and the Kubernetes workloads. Additionally, we implemented RBAC. We have used RBAC to give IAM users proper security in a constrained environment, so that read-only users can only read. We have used Amazon EKS Anywhere for on-premises deployments, and it supports an air-gapped environment where we can deploy workloads and manage a local flavor of Kubernetes.
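As a rough sketch of the GitOps flow described above (not this reviewer's actual setup), the following registers an application with the Argo CD CLI so that commits to a manifest repository are synced automatically to the cluster. The application name, repository URL, path, and namespace are placeholders, and it assumes the argocd CLI is installed and logged in.

```python
# Sketch: registering an Argo CD application with automated sync, so new image
# tags committed to the manifest repo are rolled out to the EKS cluster.
# All names below are placeholders.
import subprocess

subprocess.run(
    [
        "argocd", "app", "create", "payments-service",                  # hypothetical app name
        "--repo", "https://github.com/example-org/k8s-manifests.git",   # placeholder repo
        "--path", "payments/overlays/production",                       # placeholder path
        "--dest-server", "https://kubernetes.default.svc",
        "--dest-namespace", "payments",
        "--sync-policy", "automated",
    ],
    check=True,
)
```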
From AWS, I use many services, but mostly my work revolves around Cloud Native, specifically Amazon EKS. Kubernetes is my area of focus and expertise. Most of my expertise is around Kubernetes and Cloud Native technologies. This is why I don't call myself a full cloud offering expert, but I mostly focus on the Kubernetes usage with other OSS solutions around K8S. It's not really a niche; it's huge. I handle both application workloads and data ingestion workloads.
Integration Specialist at a financial services firm with 10,001+ employees
Real User
Top 10
Jul 23, 2025
We are going to be using Lambda on the AWS Stack. We are migrating all our on-premises applications to AWS. Eggplant is going to be on the AWS platform soon. We are using Lambda functions and CloudWatch for monitoring. We are also using Elasticsearch and ELK. After we move all the applications to the cloud, we're going to move them into Amazon EKS as containers for easy management. It would be the same as how we are using EC2 instances, but in this case, we're going to move all applications to run on containers, Amazon EKS containers. We use it for common file sharing across applications. I'm not really sure about all aspects as I didn't use it much. We just set it up to make sure all applications can read. The main capability is allowing different applications to access the same file or resources at the same time, providing collaboration capabilities.
The use cases for the product involve provisioning and auto-provisioning of infrastructure. I have managed on-premises deployments in my use case with a Helm chart.
Our typical use case for Amazon EKS is that we have a number of applications and microservices that we host in EKS. We have a separate code base for the infrastructure platform, and the microservice and application teams deploy their microservices on their own. We have configured it in a way that is easily accessible for developers as well as platform engineers; we just platformize things. Earlier, I was using ECS, and the reason we use Amazon EKS is its better adaptation of Kubernetes, fitting our multi-tenant model.
DevOps Engineer | AWS and Terraform Specialist | Multicloud Experience at an agriculture company with 11-50 employees
Real User
Top 20
Apr 28, 2025
I use Amazon EKS to provide the computing power for my applications. We have over thirty clusters in Amazon EKS. Our team uses Amazon EKS to deploy new applications using Helm and to manage our infrastructure. We use Amazon EKS to scale and deploy more applications using different namespaces. Amazon EKS services help us provide clusters where we deploy APIs, services, cron jobs, and other applications to support our services.
Senior Java Consultant at a comms service provider with 1,001-5,000 employees
Real User
Top 5
Mar 21, 2025
I deal with application development. I have used AWS services for configuring Elasticsearch, deploying in pods, and using the CI/CD pipeline with Jenkins to build and deploy applications.
I am working mostly on AWS infrastructure services, such as EC2, EKS, RDS, CloudFormation, IAM, and CloudWatch. I have around one year of experience with Kubernetes and have been using AWS services continuously for three years. My responsibilities include working on server storage, containerization, monitoring, and access policies.
We use EKS in our company to run containerized applications. I work in the container ecosystem team, and we manage EKS clusters for our developer teams so they don't have to. We provide them with the necessary tools to run on top of the cluster. EKS helps us simplify and speed up cluster management. We don't have to take care of cluster updates; we just initiate the update, and AWS handles it. The same goes for some of the AWS-managed add-ons.
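For illustration, a minimal sketch of initiating the kind of cluster update mentioned above, using boto3; the cluster name and target version are placeholders, and AWS performs the actual control-plane upgrade once the update is requested.

```python
# Sketch: kicking off an EKS control-plane version update and polling its status.
# Assumes boto3 credentials; the cluster name and version are placeholders.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

update = eks.update_cluster_version(
    name="platform-cluster",   # hypothetical cluster name
    version="1.30",            # hypothetical target Kubernetes version
)
print(update["update"]["id"], update["update"]["status"])

# Progress can then be checked with describe_update.
status = eks.describe_update(
    name="platform-cluster",
    updateId=update["update"]["id"],
)
print(status["update"]["status"])
```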
For EKS, we deployed a Django application. We built the whole application image and stored it in ECR (Elastic Container Registry). We stored the code repository in GitHub, but the image was in ECR. We also had another repository for the Kubernetes manifest files. So the image lived in one place and the code in another. We had a whole pipeline for deployment, from CodePipeline to ECR, and then from ECR to Kubernetes. I work with different AWS solutions, such as Elastic Beanstalk, AWS Lambda, DynamoDB, and VPC. I use services like EC2, S3, and VPC every day, so I'm not including those. I've also used API Gateway, and currently, I also use AWS Bedrock.
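For illustration, a sketch of the build-and-push half of a pipeline like the one described above: build a Docker image locally, authenticate to ECR with boto3, and push the image. The account ID, region, and repository name are placeholders, not the reviewer's actual pipeline.

```python
# Sketch: build an image, log in to ECR with a temporary token, and push it.
# The Kubernetes manifests in a separate repo would then reference this tag.
# Account ID, region, and repository name are placeholders.
import base64
import subprocess
import boto3

region = "us-east-1"
repo_uri = "111122223333.dkr.ecr.us-east-1.amazonaws.com/django-app"  # placeholder
tag = f"{repo_uri}:latest"

# Build the image from the application's Dockerfile.
subprocess.run(["docker", "build", "-t", tag, "."], check=True)

# Fetch a temporary ECR login and authenticate the local Docker daemon.
ecr = boto3.client("ecr", region_name=region)
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
subprocess.run(
    ["docker", "login", "--username", username, "--password-stdin",
     auth["proxyEndpoint"]],
    input=password.encode(),
    check=True,
)

# Push the image to ECR.
subprocess.run(["docker", "push", tag], check=True)
```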
I used Amazon EKS for my personal learning purposes. I used the solution to learn how to initiate and upgrade the Kubernetes cluster for testing in my own lab.
The use case for Amazon EKS is a payment gateway corporation whose applications run on microservices. Their software team develops cloud-native applications. They use Amazon's public cloud for these applications but find it expensive, and they want a less expensive solution for their customers. We suggest using the open-source Amazon EKS offerings; by running them on-premises, they don't have to pay Amazon.
I use the solution for its microservices. I used the product in some of my personal projects for deploying applications. From an organizational standpoint, the product is useful for its microservices.
We use Amazon EKS as an APM tool for the environment while migrating the monolithic architecture to microservices architecture. It helps us to test product functionality in a particular environment.
The solution can be described as a microservices platform, and it is also fully containerized. It can also be described as a stateless service. Amazon EKS can be a great solution for deployments since it supports autoscaling. In my company, we only pay for the resources we use, and that is why we use the solution.
It's a great service because we can do a lot of things using it. It's easy to create clusters and services in pods there. So, the main purpose is to create clusters and services and define some pods there.
Specialist Data Analysis vehicle safety at Cubeware
Real User
Oct 27, 2022
Our primary case is using Amazon EKS with all of our data in our MapReduce, map clusters, and our data clusters. And from there, we just input the information using Python and do our analysis using that.
Solution Architect Grade I at a tech services company with 5,001-10,000 employees
Real User
Sep 8, 2022
Our client is doing some image analysis, and we need a robust system that won't go down during the image rotation, so we are using Amazon EKS. With this solution, our services will not go down during their work, and data will remain safe and available to the user.
Our client in the healthcare industry has multiple clinics and patients who use the solution to interact with their portal and insert patient details. Patient information is managed via databases created in the solution.
Cloud Architect & Devops engineer at KdmConsulting
Real User
Jul 9, 2022
We use this solution for containerization and push containers into EKS through the CI/CD pipeline in our DevOps workflow. Autoscaling is very easy and well managed, as we can manage our node groups and tailor autoscaling to our needs.
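For illustration, a minimal sketch of tailoring the autoscaling bounds of an existing managed node group with boto3, in the spirit of the node-group tuning described above; the cluster and node group names are placeholders.

```python
# Sketch: adjusting min/desired/max sizes on an existing managed node group.
# Assumes boto3 credentials; names are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.update_nodegroup_config(
    clusterName="apps-cluster",        # hypothetical cluster name
    nodegroupName="ci-workers",        # hypothetical node group name
    scalingConfig={
        "minSize": 1,
        "desiredSize": 3,
        "maxSize": 10,
    },
)
```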
Solutions Architect at a financial services firm with 1,001-5,000 employees
Real User
Top 10
Apr 25, 2022
Amazon EKS is basically a managed offering from Amazon that allows you to create and deploy multiple microservices and manage containers. Once the Kubernetes cluster is set up, we can directly create the containers, set up ports, and set up new services. We currently have Java containers running. We have more than 500 people using this solution. We are on version 21.
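For illustration, a sketch of "create the containers, set up ports, and set up new services" using the kubernetes Python client: a small Deployment exposed through a Service. The image, port, and names are placeholders, not this reviewer's workload.

```python
# Sketch: creating a Deployment and exposing it with a Service on the cluster
# targeted by the current kubeconfig. All names and the image are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-api"),           # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="orders-api",
                    image="111122223333.dkr.ecr.us-east-1.amazonaws.com/orders-api:1.0",  # placeholder
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1ServiceSpec(
        selector={"app": "orders-api"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```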
Solution Architect / Head of DevOps Engineer at a tech services company with 201-500 employees
Real User
Jun 23, 2021
Amazon Elastic Kubernetes Service (EKS) solution integrates AWS cloud with Kubernetes. Kubernetes is an open-source container technology that is popular right now. It has the ability to replicate applications for scaling.
I am an end user of Amazon EKS. As a software engineer, we are using Amazon EKS as a platform for deploying our applications.
For our microservice architecture, where we have multiple services for our business use cases, we have been using Amazon EKS from the very beginning.
My main use cases for Amazon EKS are securing the clusters and providing mesh gateways between the clusters.
We use Amazon EKS to manage containerization within our microservices environment.
The main use case is Cloud and IT applications.
We deploy different solutions on the EKS cluster for our clients to use.
I use EKS as an application management system and a second application server. It's connected to Amazon RDS.
I have clients that run on Kubernetes engines.
The product helps to create a new environment fast.
We run all our microservices across the globe with Amazon EKS. We also use it for development, testing, and maintenance.
I use Amazon EKS for telco event monitoring.
It's mainly deployed on a public cloud.
We just implemented the acquisition project for our cloud application environment, and we implemented Amazon EKS.
I have tried to host the enterprise content management application of IBM FileNet on Amazon EKS. That's the main use case.
We use Amazon Elastic Kubernetes Service (EKS) to manage our containers and to run and scale Kubernetes applications in the cloud.