it_user261489 - PeerSpot reviewer
Program and Project Manager at a tech services company with 10,001+ employees
Real User
Jun 30, 2015
As with all public clouds, there is still a dilemma with security, but it provides a rich set of services for IaaS, PaaS, and SaaS.

AWS is a good platform for NoSQL, MySQL, and Hadoop databases, but it's not as good for RDBMS products like MS SQL. It is still missing many features -- like replication, backup policies, and the ability to store/attach databases from a local drive -- but it has many good features for big data.

Its new AppStream service will pre-process graphics, including 3D renderings, and blast the results to mobile clients. Its Kinesis service for streaming data sets the stage for building big data apps on AWS, the basic architecture for the Internet of Things. As for its Hadoop capabilities, AWS launched Elastic MapReduce (EMR) long ago. It is the best cloud services provider for open-source software, from databases to operating systems hosting apps and many other customized applications.

As I have discussed with many tech professionals, there is still a dilemma with security, much like a decade ago when online e-commerce started and people weren't prepared to share their credit card and bank details. Now that online shopping is common, I expect the same acceptance to grow for the public cloud very soon. AWS security is very good, and they follow all the required security regulations, as most other public cloud providers do, because they know any security breach could impact their entire business.

Pricing is another key concern when the cloud is used for big business with rapidly growing data. Price is a big debate and requires a lot of analysis, as it is a real question for large organizations. But no doubt, AWS is quite good for a small setup, as it's very cost effective and provides a ready-made ecosystem.

AWS is quite good for cloud services such as IaaS, PaaS, and SaaS. It provides a rich set of services and integrated monitoring tools alongside a competitive pricing model. AWS offers a full range of compute and storage offerings, including on-demand instances and specialized services such as Amazon EMR and Cluster GPU instances. AWS CloudTrail and Amazon CloudWatch are very good monitoring services, and AWS Identity and Access Management (IAM) is a good administration and security feature for administrators to use.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user544134 - PeerSpot reviewer
IT Coordinator at a non-tech company with 10,001+ employees
Real User

Thank you for the information on the AWS solution.

PeerSpot user
Independent Analyst and Advisory Consultant at a tech consulting company with 51-200 employees
Consultant
Top 10
May 1, 2015
Lambda and other AWS enhancements

A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools, management of clouds (public, private, hybrid), virtualization and related themes, you should check out AWS re:Invent.

AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One to keep an eye on is the cloud-based Aurora relational database service, which complements the existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

Some recent AWS announcements prior to re:Invent include:

AWS vCenter Portal

Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The plug-in includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and lets you create VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

AWS re:Invent content

November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing RDS (Relational Database Service). In addition to Andy, the keynote sessions also included various special guests, ranging from AWS customers and partners to internal people, in support of the various initiatives and announcements.

November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels made announcements about the new Container and Lambda services.

AWS re:Invent announcements

Announcements and enhancements made by AWS during re:Invent include:

  • Key Management Service (KMS)
  • Amazon RDS for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS Enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

Key Management Service (KMS)

KMS is a hardware security module (HSM) based key management service for creating and controlling the encryption keys that protect the security of your digital assets. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management purposes. Learn more about AWS KMS here.
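To make the workflow concrete, here is a minimal sketch using boto3 (the AWS SDK for Python); the region, key description and payload are illustrative placeholders, not details from the announcement:

```python
# A minimal KMS sketch using boto3; the region, key description
# and payload below are illustrative placeholders.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a master key; KMS creates and stores the key material in its HSMs.
key = kms.create_key(Description="Example key for protecting application data")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt a small payload; the key material itself is never exposed.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"example secret")["CiphertextBlob"]

# Decrypt; KMS finds the right key from metadata embedded in the ciphertext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"example secret"
```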

AWS Database

For those who are not familiar, AWS has a suite of database-related services, both SQL and NoSQL based, from simple to transactional to petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include SimpleDB, MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little-data database and big-data repository offerings include DynamoDB (a NoSQL database), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).
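As a concrete illustration of provisioning one of those RDS instance types, here is a hedged boto3 sketch; the instance identifier, credentials and sizing are hypothetical placeholders:

```python
# Hypothetical sketch: provisioning a small MySQL instance via RDS with boto3.
# The identifier, credentials and sizing are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-mysql-db",
    DBInstanceClass="db.t2.micro",       # small instance class for testing
    Engine="mysql",                      # other engines: postgres, oracle, sqlserver, ...
    AllocatedStorage=20,                 # GB
    MasterUsername="admin",
    MasterUserPassword="change-me-now",  # placeholder; use a real secret in practice
)

# RDS provisions asynchronously; poll until the instance is available.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="example-mysql-db")
```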

In addition to the database services offered by AWS, you can also combine various AWS resources, including EC2 compute, EBS and other storage offerings, to create your own solution. For example, there are various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak and other NoSQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks and my book review of it here.

Amazon RDS for Aurora

Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to high-end commercial databases, Aurora is a cost-effective database engine compatible with MySQL. AWS claims 5x better performance than standard MySQL with Aurora, while remaining resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.

Amazon EC2 C4 instances

AWS will be adding a new C4 instance as the next generation of EC2 compute instance, based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads, including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.

Amazon EC2 Container Service

Containers such as those from Docker have become popular for helping developers rapidly build and deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high-performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.
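To show what the register-and-run task flow might look like from code, here is a hedged boto3 sketch; the cluster, family and image names are invented for illustration:

```python
# Hypothetical sketch: registering and running a Docker container on
# EC2 Container Service. Cluster, family and image names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_cluster(clusterName="example-cluster")

# Describe the Docker container(s) that make up one task.
ecs.register_task_definition(
    family="example-web",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",   # any Docker image
        "cpu": 256,
        "memory": 128,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Schedule one copy of the task onto the cluster's EC2 instances.
ecs.run_task(cluster="example-cluster", taskDefinition="example-web", count=1)
```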

AWS Lambda

In addition to announcing new, higher-performance Elastic Compute Cloud (EC2) instances along with the container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities, or other triggers. In addition to running your code, the Lambda service is billed in 100-millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose the Lambda service with more fine-grained consumption billing.

The Lambda service can be used to have your code functions staged, ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams, or table updates in databases. Some examples include responding to an event such as a web-site click; responding to a data upload (photo, image, audio, file or other object); indexing, streaming or analyzing data; receiving output from a connected device (think Internet of Things, IoT, or Internet of Devices, IoD); or triggering from an in-app event, among others. The basic idea with Lambda is to pay for only the amount of time needed to do a particular function, without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.
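As noted, Lambda initially supports only Node.js; purely for consistency with the other sketches in this post, the following shows the same S3-triggered handler shape in Python (Python support was not part of the initial launch). The event fields follow the standard S3 notification format:

```python
# Sketch of a Lambda-style handler reacting to S3 object-created events.
# Note: at launch Lambda only supported Node.js; Python is used here purely
# for consistency with the other sketches in this post.
def handler(event, context):
    # An S3 notification event carries one or more records, each
    # identifying the bucket and object key that changed.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code might index, resize, transcode or analyze the object here.
        print(f"New object uploaded: s3://{bucket}/{key}")
```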

Various application code deployment models

The Lambda service is pay-for-what-you-consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory, and execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. An example: an application that runs 100,000 times for 1 second each, consuming 128MB of memory, uses 100,000 x 1s x 128MB = 12,800,000 MB-seconds, or 12,500 GB-seconds (at 1,024MB per GB). View various pricing models here on the AWS Lambda site that show examples for different memory sizes, numbers of times a function runs, and run times.

How much memory you select for your application code determines how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, counted from when your code starts running. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests and, beyond that, $0.20 per 1 million requests ($0.0000002 per request). Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms. The Lambda price also depends on the amount of memory you allocate to your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GB-second used.
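Using the fee figures quoted above, here is a short worked example of the billing model; the workload numbers (requests, memory size, duration) are hypothetical:

```python
# Worked example of the Lambda billing model using the fees quoted above.
# The workload numbers (requests, memory, duration) are hypothetical.
requests = 3_000_000          # invocations per month
memory_mb = 128               # memory allocated to the function
duration_s = 1.0              # run time per invocation, already a 100ms multiple

# Compute consumption in GB-seconds (1 GB = 1,024 MB).
gb_seconds = requests * duration_s * memory_mb / 1024.0   # 375,000 GB-s

# Apply the free tier: 1M requests and 400,000 GB-seconds per month.
billable_requests = max(0, requests - 1_000_000)
billable_gb_s = max(0.0, gb_seconds - 400_000)

request_fee = billable_requests * 0.20 / 1_000_000   # $0.20 per million requests
compute_fee = billable_gb_s * 0.00001667             # per GB-second past the free tier

print(f"{gb_seconds:,.0f} GB-s used; request fee ${request_fee:.2f}, "
      f"compute fee ${compute_fee:.2f}")
# -> 375,000 GB-s used; request fee $0.40, compute fee $0.00
```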

Why use AWS Lambda vs. an EC2 instance

Why would you use AWS Lambda vs. provisioning a container or an EC2 instance, or running your application code function on a traditional or virtual machine?

If you need control and can leverage an entire physical server, with its operating system (O.S.), application and support tools, for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (O.S., applications and tools) for your code in a shared virtual on-premises environment, that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an O.S. along with your application, paying for those resources on, say, an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that's where Docker and containers come into play, off-loading some of the traditional application-dependency overhead.

However, if all you want to do is add some code logic, for example to process activity when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, O.S. and complete application stack, that's where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.

View AWS Lambda pricing along with free tier information here.

Amazon EBS Enhancements

AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes of up to 16TB and 10,000 IOPS for AWS EBS General Purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create volumes of up to 16TB and 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your I/O size and check AWS sizing information to avoid surprises, as not all I/O sizes are considered the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
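As a quick illustration of the new ceilings, a hedged boto3 sketch creating volumes at roughly the announced limits; the availability zone is a placeholder, and "gp2"/"io1" are the volume type names boto3 uses for General Purpose SSD and Provisioned IOPS SSD:

```python
# Sketch: creating EBS volumes near the newly announced limits via boto3.
# The availability zone is a placeholder; "gp2"/"io1" are the volume type
# names boto3 uses for General Purpose SSD and Provisioned IOPS SSD.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# General Purpose SSD: up to 16TB and 10,000 IOPS (IOPS scale with size).
gp = ec2.create_volume(AvailabilityZone="us-east-1a", Size=16_000, VolumeType="gp2")

# Provisioned IOPS SSD: up to 16TB and 20,000 provisioned IOPS.
piops = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=16_000,              # GiB
    VolumeType="io1",
    Iops=20_000,
)
print(gp["VolumeId"], piops["VolumeId"])
```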

Application development, deployment and life-cycle management tools

In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development and configuration, along with deployment (life-cycle management). These include tools that AWS itself uses as part of building and maintaining the AWS platform services.

AWS Config (preview, i.e. early access prior to full release)

AWS Config provides management, reporting and monitoring capabilities, including data center infrastructure management (DCIM) style monitoring of your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables capabilities to support DCIM, a Configuration Management Database (CMDB), troubleshooting and diagnostics, auditing, and resource and configuration analysis, among other activities. Learn more about AWS Config here.

AWS Service Catalog

AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources for users to use via their personalized portal. Learn more about AWS service catalog here.

AWS CodeDeploy

To support rapid, automated code deployment to EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deployment when adding new features to your applications, while reducing error-prone manual operations. As part of the announcement, AWS mentioned that they use CodeDeploy in their own application development, maintenance, change-management and deployment operations. While suited to at-scale deployments across many instances, CodeDeploy works with as few as a single EC2 instance. Learn more about AWS CodeDeploy here.

AWS CodeCommit

For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store anything from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

AWS CodePipeline

To support application delivery and release automation, along with the associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, workflow checking, code staging, testing and release to production, including support for third-party tool integration. CodePipeline will be available in early 2015; learn more here.

What this all means

AWS continues to invest and re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors and service providers, adds new check-box features; however, like some of the better ones, they also increase the depth and extensibility of those capabilities.

Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking, along with the associated management tools, while also adding extra developer tools. The developer tools include life-cycle management supporting code creation, testing, tracking and change management, among other management activities.

Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted, you have to listen carefully, as you may not hear 'hybrid cloud' tossed around the way some do; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS Marketplace.

AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while also walking the cloud talk. What this means is that AWS realizes it needs to help existing environments evolve and make the transition to the cloud, which means speaking their language rather than forcing cloud conversations on them first. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.

The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people, although it was not clear whether that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media, among others. A simple validation is that the keynotes, and the expo space, were in the larger rooms used by events such as EMCworld and VMworld when they are hosted in Las Vegas, versus what I saw at last year's re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on labs (HOL), AWS re:Invent, while becoming more crowded, is still easy to get into, leaving time to use the HOL, which is of course powered by AWS, meaning you can later resume what you started while at re:Invent. Overall, a good event and a nice series of enhancements by AWS; I'm looking forward to next year's AWS re:Invent.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Independent Analyst and Advisory Consultant at a tech consulting company with 51-200 employees
Consultant
Top 10
Apr 27, 2015
I like the ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3fs for moving data in and out of AWS.

Cloud Conversations: AWS S3 Cross Region Replication storage enhancements

Amazon Web Services (AWS) recently announced, among other enhancements, new Simple Storage Service (S3) cross-region replication of objects from a bucket (e.g. container) in one region to a bucket in another region. AWS also recently enhanced Elastic Block Storage (EBS), increasing the maximum performance and size of Provisioned IOPS (SSD) and General Purpose (SSD) volumes. The EBS enhancements include the ability to store up to 16 TBytes of data in a single volume and do 20,000 input/output operations per second (IOPS). Read more about EBS and other recent AWS server, storage I/O and application enhancements here.

The Problem, Issue, Challenge, Opportunity and Need

The challenge is being able to move data (e.g. objects) stored in AWS buckets in one region to a bucket in another region in a safe, secure, timely, automated, cost-effective way.

Even though AWS has a global name-space, buckets and their objects (e.g. files, data, videos, images, bit and byte streams) are stored in a specific region designated by the customer or user (AWS S3, EBS, EC2, Glacier, Regions and Availability Zone primer can be found here).

Understanding the challenge and designing a strategy

The following diagram shows the challenge: how to copy or replicate objects in an S3 bucket in one region to a destination bucket in a different region. While objects can be copied or replicated without S3 cross-region replication, that involves essentially reading your objects, pulling that data out via the internet, and then writing it to another place. The catch is that this can add extra costs, take time, consume network bandwidth and require extra tools (Cloudberry, Cyberduck, S3fuse, S3motion, S3browser, S3 tools (not AWS) and a long list of others).

What is AWS S3 Cross-region replication

Highlights of AWS S3 Cross-region replication include:

  • AWS S3 cross-region replication is, as its name implies, replication of S3 objects from a bucket in one region to a destination bucket in another region.
  • S3 replication of new objects added to an existing or new bucket (note: only new objects get replicated)
  • Policy-based replication tied into S3 versioning and life-cycle rules
  • Quick and easy to set up in a matter of minutes via the S3 dashboard or other interfaces
  • Keeps region-to-region data replication and movement within AWS networks (a potential cost advantage)

To activate it, you simply enable versioning on a bucket, enable cross-region replication, indicate the source bucket (or a prefix of objects in the bucket), specify the destination region and target bucket name (or create one), then create or select an IAM (Identity and Access Management) role, and objects should be replicated; a scripted version of the same steps is sketched below.
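Here is a hedged boto3 sketch of those steps; the bucket names and IAM role ARN are placeholders you would substitute:

```python
# Sketch of the cross-region replication setup steps described above, via boto3.
# Bucket names and the IAM role ARN are placeholders.
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on both the source and destination buckets.
for bucket in ("my-source-bucket", "my-destination-bucket"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Attach a replication configuration to the source bucket.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [{
            "Prefix": "",      # empty prefix replicates all new objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }],
    },
)
```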

Some AWS S3 cross-region replication things to keep in mind (e.g. considerations):

  • As with other forms of mirroring and replication, if you add something on one side it gets replicated to the other side
  • As with other forms of mirroring and replication, if you delete something on one side it can be deleted on both (be careful and do some testing)
  • Keep costs in perspective, as you still need to pay for your S3 storage in both locations, as well as applicable internal data transfer and GET fees
  • Click here to see current AWS S3 fees for various regions

S3 Cross-region replication and alternative approaches

There are several regions around the world, and up until today AWS customers could copy, sync or replicate S3 bucket contents between AWS regions manually (or via automation) using various tools such as Cloudberry, Cyberduck, S3browser and S3motion, to name just a few, as well as via various gateways and other technologies. Some of those tools and technologies are open source or free, some are freemium and some are premium, and they also vary by interface (some with a GUI, others with CLIs or APIs), including the ability to mount an S3 bucket as a local network drive and use tools to sync or copy.

However, a catch with the above-mentioned tools (among others) and approaches is that replicating your data (e.g. objects in a bucket) can involve other AWS S3 fees. For example, reading data from one AWS region (e.g. a GET, which has a fee) and then copying it out to the internet incurs fees. Likewise, when copying data into another AWS S3 region (e.g. a PUT, which is free), there is also the cost of storage at the destination.

AWS S3 cross-region hands on experience (first look)

For my first hands-on (first look) experience with AWS cross-region replication today, I enabled a bucket in the US Standard region (e.g. Northern Virginia) and created a new target destination bucket in EU Ireland. Setup and configuration was very quick, literally just a few minutes, with most of the time spent reading the text on the new AWS S3 dashboard properties configuration displays.

I selected an existing test bucket to replicate and noticed that nothing had replicated over to the other bucket, until I realized that only new objects would be replicated. Once some new objects were added to the source bucket, within a matter of moments (e.g. a few minutes) they appeared across the pond in my EU Ireland bucket. When I deleted those replicated objects from my EU Ireland bucket and switched back to my view of the source bucket in the US, those new objects were already deleted from the source. Yes, just like regular mirroring or replication, pay attention to how you have things configured (e.g. synchronized vs. contribute vs. echo of changes, etc.).

While I was not able to do a solid, quantifiable performance test, simply based on some quick copies and my network speed, moving via S3 cross-region replication was faster than using something like S3motion with my server in the middle.

It also appears from some initial testing today that a benefit of AWS S3 cross-region replication (besides being bundled as part of AWS) is that some of the fees to pull data out of AWS and transfer it out via the internet can be avoided.

Where to learn more

Here are some links to learn more about AWS S3 and related topics

What this all means and wrap-up

For those who are looking for a way to streamline replicating data (e.g. objects) from an AWS bucket in one region to a bucket in a different region, you now have a new option. There are potential cost savings, if that is your goal, along with performance benefits, in addition to using whatever might already be working in your environment. Replicating objects provides a way of expanding your business continuance (BC), business resiliency (BR) and disaster recovery (DR) involving S3 across regions, as well as a means for content caching or distribution, among other possible uses.

Overall, I like this ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3fs for moving data in and out of AWS, as well as among other public cloud services and local resources.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user189768 - PeerSpot reviewer
Salesforce/Amazon/AWS Trainer at a tech consulting company with 51-200 employees
Consultant
Feb 9, 2015
The product is simple and can be learned with online documentation.

What is most valuable?

EC2, EBS, Security, and RDS services are all good.

How has it helped my organization?

Currently I'm using the product for learning purposes.

For how long have I used the solution?

I have used it for over a year.

What was my experience with deployment of the solution?

No issues encountered.

What do I think about the stability of the solution?

No issues encountered.

What do I think about the scalability of the solution?

Not yet.

How are customer service and technical support?

Customer Service:

Very nice.

Technical Support:

Very good.

Which solution did I use previously and why did I switch?

Yes, I used Microsoft Azure, but it only provides a free trial for one month. That duration is not sufficient to learn cloud services. Hence, I switched to AWS, as Amazon provides the AWS cloud as a free trial for one year. That is ample time to grasp cloud concepts and gain hands-on experience.

What about the implementation team?

The product is simple and can be learned with online documentation.

What was our ROI?

Very convenient.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
CTO at a healthcare company with 51-200 employees
Vendor
Top 20
Aug 24, 2014
The technical support was a 7 on a scale of 1-10, but it offers dynamic usage and flexibility.

What is most valuable?

Dynamic usage and flexibility in choosing configurations. Also, the fact that Amazon's security team is much larger than anything I could ever assemble gives me confidence that this runtime environment is going to be more secure than anything I can deploy.

How has it helped my organization?

I needed to stand up a prototype server that did not conform to my corporate IT standards. By using AWS I was able to stand up my prototype in a few hours, run my demo and be done.

What needs improvement?

The connection between the billing console and the management console is not obvious, so how to shut down a machine was hard to find initially, which resulted in excess billing.

For how long have I used the solution?

I have been using this solution for 5 years.

What was my experience with deployment of the solution?

No issues with deployment.

What do I think about the stability of the solution?

No issues with stability.

What do I think about the scalability of the solution?

No issues with scalability.

How are customer service and technical support?

Customer Service: Customer service was pretty good. It was responsive, but it took 2-3 iterations on the billing/management issue before they understood the problem I ran into.

Technical Support: The technical support was a 7 on a scale of 1-10.

Which solution did I use previously and why did I switch?

I have used Amazon Elastic Beanstalk and Windows Azure. My primary choice to use AWS was because the prototype server stack was specified as an AMI (Amazon Machine Image).

How was the initial setup?

If you have not used AWS, it's not as straightforward as it could be to choose what stack configuration a particular AMI requires before loading it. On the other hand, the "Amazon Web Service Pricing Calculator" is currently the gold standard among cloud vendors.

What about the implementation team?

We implemented in-house.

What was our ROI?

Not applicable; the ROI came from the agility to quickly stand up the environment I needed.

What's my experience with pricing, setup cost, and licensing?

Approximately $200/mo.

Which other solutions did I evaluate?

I have used Azure and Heroku.

What other advice do I have?

Use the AWS pricing calculator to understand how the services fit together.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Senior Architect at a tech services company with 10,001+ employees
Consultant
Aug 3, 2014
Great platform to spin up servers in minutes; provides multiple tools for administrators; UI can be puzzling sometimes

Valuable Features

EBS, Availability Zones, VPC, Support for high I/O instances

Customer Service and Technical Support

Customer Service: Could be made much better

Technical Support: Excellent

Initial Setup

Slightly complex, because the user interface is not simple for first-time users
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Architect at a consultancy with 501-1,000 employees
Consultant
Top 20
Jul 28, 2014
We have been able to leverage the agility of Amazon to work faster. AWS's customer service is ridiculously good.

What is most valuable?

The features which are most valuable are EC2, S3, and the networking functionality. EC2 allows me to provision new servers in minutes. S3 gives me infinite, redundant, easily accessible storage. The networking functionality (VPC, Security Groups, subnets, etc.) allows me to create robust networks that make sense. Availability Zones allow me to design systems that are resistant to failure.

How has it helped my organization?

One example is our devops people can provision new products and systems almost immediately. They have set up an instance of GitHub Enterprise which has become our "source of truth." We have created proofs-of-concept in hours to days to test and evaluate new products across the enterprise. We have been able to leverage the agility of AWS to work faster and, in some cases, "fail fast" so we can get on to the next thing, which works.

What needs improvement?

Probably customer education and awareness, especially in the Cyber Security area. Many people are mistrustful of public cloud offerings or misunderstand how things work. We'd like to use AWS a lot more for various workloads, but gaining approval to do the things we want to do is currently our biggest roadblock. This isn't necessarily AWS's fault, but hopefully they have the capacity to gain acceptance in the broader Cyber Security community.

For how long have I used the solution?

I have been using the product for two years.

What was my experience with deployment of the solution?

Our main issues are learning how to deploy the most efficiently. We use Puppet and Jenkins to deploy. Other issues are employee awareness and training (which instance type to use, where to put things, which keys or security groups to use, etc).

What do I think about the stability of the solution?

No more than the expected number of stability issues. We have the occasional EBS volume go down, but we expect that.

What do I think about the scalability of the solution?

None at all. AWS is infinitely scalable. Though on one occasion, our preferred instance type was not available in the Availability Zone we wanted it in.

How are customer service and technical support?

Customer Service: AWS's customer service is ridiculously good. Back when I was admin of an account that only used a few hundred dollars a month, I got top-notch support from my account manager. He set up a couple of conference calls with Solution Architects with no hesitation. Now I preside over an account with significantly more usage, and the customer service remains great.

Technical Support: AWS has some really smart people who can analyse my technical questions and give me a cogent, useful answer in short order. Their tech support is top-notch.

Which solution did I use previously and why did I switch?

We have also used Terremark e-Cloud, but their cost and lack of features turned us off. I don't believe it was an either-or situation, though. We used both, but we are moving out of e-Cloud and staying in AWS.

How was the initial setup?

I was not with my current agency during the initial set-up phase.

What about the implementation team?

The agency used a vendor team. I'm with that vendor team and I think we're pretty good, but I'm biased.

What was our ROI?

We don't calculate ROI, but AWS definitely helps us fulfill the agency's mission, which is how we measure things here.

Which other solutions did I evaluate?

I personally have evaluated Azure and Google Cloud Platform. Neither seemed as robust or mature as AWS.

What other advice do I have?

Jump right in and make liberal use of AWS' technical support. Even now, I see people hesitating to ask and trying to figure it out for themselves. AWS is always ready to help. It's both complicated and useful enough that it's very easy to build things in a suboptimal way if you don't think things through and follow their guidance. So get all your hands-on staff to take the training they offer, and don't be shy about asking for help. The training is a big help.
Disclosure: My company has a business relationship with this vendor other than being a customer. AWS premier partner
PeerSpot user