PeerSpot user
System Developer at a tech services company with 5,001-10,000 employees
Real User
Apr 19, 2017
The EC2 Container Service is one of the most valuable features.

What is most valuable?

  • EC2 Container Service
  • RDS
  • SQS
  • SNS
  • SWF
  • DynamoDB
  • Elastic Beanstalk
  • S3
  • CloudWatch

How has it helped my organization?

  • Management of code and assets has become extremely simple.
  • Faster development time.
  • Applications are extremely scalable.
  • Round-the-clock monitoring ability.

What needs improvement?

  • Latency: EC2 Container Service is not quite zero-downtime as claimed.
  • Insufficient or unclear documentation for some products.

For how long have I used the solution?

I have used it for two years.

What was my experience with deployment of the solution?

Deployment was fairly simple.

What do I think about the stability of the solution?

Stability was never an issue.

What do I think about the scalability of the solution?

We have not encountered any scalability issues.

Which solution did I use previously and why did I switch?

Cloud is the way to go, and AWS had more features than the competitors.

Which other solutions did I evaluate?

Before choosing this product, we also evaluated Microsoft Azure.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Independent Analyst and Advisory Consultant at a tech consulting company with 51-200 employees
Consultant
Top 10
Feb 23, 2017
Cloud conversations: Gaining cloud confidence from insights into AWS outages

PART I

In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox (view the Microsoft Azure status dashboard here), whose users were impacted; the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012, along with a Gmail incident that received coverage back in 2011.

For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24, 2012 incident postmortem. In the case of the recent AWS incident, which affected users such as Netflix, the problem (read the AWS postmortem and Netflix postmortem) was tied to a human error. This is not to say AWS has more outages or incidents than others, including Microsoft; it just seems that we hear more about AWS when things happen. That could be due to AWS's size and arguably market-leading status, the diversity of its services, and the scale at which some of its clients use them.

By the way, if you were not aware, Microsoft Azure is about more than just supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines such as Hyper-V, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving, or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers, including Evault, partner with Microsoft Azure as a storage repository target.

When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or by the cloud bashing and uninformed reporting that lacks research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified the myth may assume the status of reality. After all, you may know the expression: if it is on the internet, then it must be true?

Have AWS and public cloud services become a lightning rod for when things go wrong?

Here is some coverage of various cloud incidents:

Huffington Post coverage of the February 2011 Google Gmail incident
Microsoft Azure coverage by Allthingsd.com
Neowin.net covering the Microsoft Xbox incident
Google's Gmail blog coverage of the Gmail outage
Forbes article: Amazon AWS Takes Down Netflix on Christmas Eve
Over at Performance Critical Apps, they assert the AWS incident was Netflix's fault
From The Virtualization Practice: Is Amazon Ruining Public Cloud Computing?
Here is Netflix architect Adrian Cockcroft discussing the recent incident
From StorageIOblog: Amazon Web Services (AWS) and the Netflix Fix?
From CRN, here is some cloud service availability status via Nasuni

The above are a small sampling of different stories, articles, columns, blogs and perspectives about cloud service outages or other incidents. Assuming the services are available, you can Google or Bing many others, along with reading postmortems, to gain insight into what happened, the cause and effect, and how to prevent it in the future.

Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

Perhaps it is a mix of the above; like when a magnetic storage tape gets lost or stolen, it makes for good news or copy, something to write about. Granted, fewer tapes are actually lost than in the past, and far fewer than lost or stolen laptops and other devices with data on them. There are probably other reasons, such as the lightning-rod effect: given how much industry hype surrounds clouds, when something does happen, the cynics or foes come out in force, sometimes with FUD.

Similar to traditional hardware or software product vendors, some service providers have even tried to convince me that they have never had an incident and have never lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and have taken steps to prevent them from recurring. Granted, some of those steps might be made public while others might be under NDA; at least they are learning and implementing improvements.

As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

AWS service status dashboard
Bluehost server status dashboard
Google App status dashboard
HP cloud service status console (requires login)
Microsoft Azure service status dashboard
Microsoft Xbox service status dashboard
Rackspace service status dashboards

PART II

There is good information and insight to be gained, and lessons to be learned, from cloud outages and other incidents.

Sorry, cynics: no, that does not mean an end to clouds, as they are here to stay. However, when and where to use them, along with which best practices to apply and how to be ready and configured for use, are part of the discussion. This means that clouds may not be for everybody or all applications, at least not today. For those who are into clouds for the long haul (either all in or partially), including current skeptics, there are many lessons to be learned and leveraged.

To gain confidence in clouds, one of the questions I am routinely asked is: are clouds more or less reliable than what you are doing? It depends on what you are doing and how you will be using the cloud services. If you apply HA and other BC or resiliency best practices, you may be able to configure around and isolate yourself from the more common situations. On the other hand, if you simply use cloud services as a low-cost alternative, selecting the lowest price and service class (SLAs and SLOs), you might get what you paid for. Thus, clouds are a shared responsibility: the service provider has things they need to do, and the user or person designing how the service will be used has decision-making responsibilities as well.

Keep in mind that high availability (HA), resiliency and business continuance (BC), along with disaster recovery (DR), are the sum of several pieces. These include people, best practices, processes including change management, good design that eliminates points of failure and isolates or contains faults, along with the components or technology used (e.g., hardware, software, networks, services, tools). Good technology used in good ways can be part of a highly resilient, flexible and scalable data infrastructure. Good technology used in the wrong ways may not leverage the solutions to their full potential.

While it is easy to focus on the physical technologies (servers, storage, networks, software, facilities), many of the cloud service incidents or outages have involved people, process and best practices, so those need to be considered.

These incidents or outages bring awareness, a level set, that this is still early in the cloud evolution lifecycle; they are a reminder to move beyond seeing clouds as just a way to cut cost and to see the importance and value of HA, resiliency, BC and DR. Learning from mistakes, taking action to correct or fix errors, and finding and removing points of failure are part of a technology, or the use of it, maturing. These all tie into having services with service level agreements (SLAs) and service level objectives (SLOs) for availability, reliability, durability, accessibility, performance and security, among others, to protect against mayhem or other things that can and do happen.

The reason I mentioned earlier that AWS had another incident is that, like their peers or competitors who have had incidents in the past, AWS appears to be going through some growing, maturing, evolution-related activities. During summer 2012 there was an AWS incident that affected Netflix (read more here: AWS and the Netflix Fix?). It should also be noted that there were earlier AWS outages where Netflix (read about the Netflix architecture here) leveraged resiliency designs to try to prevent mayhem when others were impacted.

Is AWS a lightning rod for things to happen, a point of attraction for Mayhem and others?

Granted, given their size, the scope of their services, and how they are used on a global basis, AWS is blazing new territory and experiences, similar to what other information services delivery platforms did in the past. What I mean is that while taken for granted today, open systems Unix, Linux and Windows-based systems, along with client-server, midrange or distributed systems, not to mention mainframe hardware, software, networks, processes, procedures and best practices, all went through growing pains.

There are a couple of interesting threads going on over in various LinkedIn groups, based on some reporters' stories, that start with speculation on what happened, followed by some good discussions of what actually happened and how to prevent recurrences in the future.

Over in the Cloud Computing, SaaS & Virtualization group forum, this thread is based on a Forbes article (Amazon AWS Takes Down Netflix on Christmas Eve) and involves conversations about SLAs, best practices, HA and related themes. Have a look at the story the thread is based on, some of the assertions being made, and the ensuing discussions.

Also over at LinkedIn, in the Cloud Hosting & Service Providers group forum, this thread is based on a story titled Why Netflix’ Christmas Eve Crash Was Its Own Fault with a good discussion on clouds, HA, BC, DR, resiliency and related themes.

Over at the Virtualization Practice, there is a piece titled Is Amazon Ruining Public Cloud Computing?, with comments from me and Adrian Cockcroft (@Adrianco), a Netflix architect (you can read his blog here). You can also view some presentations about the Netflix architecture here.

What this all means

Saying you get what you pay for would be too easy and perhaps not applicable.

There are good free or low-cost services, just like there is good free content and other things; however, the reverse also holds: just because something costs more does not make it better.

On the other hand, there are services that charge a premium yet have no better, if not worse, reliability, just as there is for-fee or perceived-value content that is no better than what you get free.

Additional related material

Cloud conversations: confidence, certainty and confidentiality
Only you can prevent cloud data loss (shared responsibility)
The blame game: Does cloud storage result in data loss?
Amazon Web Services (AWS) and the Netflix Fix?
Cloud conversations: AWS Government Cloud (GovCloud)
Everything Is Not Equal in the Data center
Cloud and Virtual Data Storage Networking (CRC) – Intel Recommended Reading List

Some closing thoughts:

Clouds are real and can be used safely; however, they are a shared responsibility.
Only you can prevent cloud data loss, which means doing your homework and being ready.
If something can go wrong, it probably will, particularly if humans are involved.
Prepare for the unexpected and clarify assumptions vs. realities of service capabilities.
Leverage fault isolation and containment to prevent rolling or spreading disasters.
Look at cloud services beyond lowest cost or for cost avoidance.
What is your organization's culture for learning from mistakes vs. fixing blame?
Ask yourself if you, your applications and organization are ready for clouds.
Ask your cloud providers if they are ready for you and your applications.
Identify what your cloud concerns are to decide what can be done about them.
Do a proof of concept to decide what types of clouds and services are best for you.

Do not be scared of clouds; however, be ready, do your homework, and learn from the mistakes, misfortunes and errors of others. Establish and leverage known best practices while creating new ones. Look to the past for guidance to the future, however avoid clinging to, and bringing the baggage of, the past into the future. Use new technologies, tools and techniques in new ways vs. using them in old ways.

Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

[To view all of the links mentioned in this post, go to:
storageioblog.com/cloud-conversations-gaining-cloud-confidence-from-insights-into-aws-outages/ ]

Some updates:

storageioblog.com/november-2013-server-storageio-update-newsletter/

storageioblog.com/fall-2013-aws-cloud-storage-compute-enhancements/

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user6186 - PeerSpot reviewer
Independent Analyst and Advisory Consultant at a tech consulting company with 51-200 employees
Top 10, Consultant

AWS EFS (Elastic File System) is now available in AWS clouds.

PeerSpot user
Independent Analyst and Advisory Consultant at a tech consulting company with 51-200 employees
Consultant
Top 10
Feb 22, 2017
Amazon cloud storage options enhanced with Glacier

In case you missed it, Amazon Web Services (AWS) has enhanced its cloud services (Elastic Compute Cloud, or EC2) along with its storage offerings. These include the Relational Database Service (RDS), DynamoDB, Elastic Block Store (EBS), and the Simple Storage Service (S3). The enhancements include new functionality along with improved availability or reliability in the wake of recent events (outages or service disruptions). Earlier this year AWS announced their Cloud Storage Gateway solution, which you can read an analysis of here. More recently AWS announced provisioned IOPS among other enhancements (see the AWS what's new page here).

Before announcing Glacier, the options for Amazon storage services relied on general-purpose S3 or EBS along with other Amazon services. S3 has provided users the ability to select different availability zones (e.g., geographical regions where data is stored) along with levels of reliability at different price points for their applications or services.

Note that AWS S3's flexibility lends itself to individuals or organizations using it for various purposes, ranging from storing backup or file-sharing data to serving as a target for other cloud services. S3 pricing options vary depending on which availability zones you select, as well as whether you choose standard or reduced redundancy. As its name implies, reduced redundancy trades a lower availability recovery time objective (RTO) in exchange for a lower cost per given amount of space capacity.

AWS has now announced a new class or tier of storage service called Glacier, which as its name implies moves very slowly and is capable of supporting large amounts of data. In other words, it targets inactive or seldom-accessed data where the emphasis is on ultra-low cost in exchange for a longer RTO. In exchange for an RTO that AWS states can be measured in hours, your monthly storage cost can be as low as 1 cent per GByte, or about 12 cents per year per GByte, plus any extra fees (see here).
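
A quick back-of-envelope check of that pricing claim, sketched in Python (illustrative only; retrieval, request and data-transfer fees are extra and not modeled here):

    # Glacier storage cost at the announced $0.01/GB/month rate.
    PRICE_PER_GB_MONTH = 0.01  # USD; actual rates vary by region and over time

    def monthly_cost(gigabytes):
        return gigabytes * PRICE_PER_GB_MONTH

    for gb in (100, 1000, 10000):
        print("%6d GB: $%8.2f/month, $%9.2f/year"
              % (gb, monthly_cost(gb), monthly_cost(gb) * 12))

So 1 TByte of archive data stored for a year runs on the order of $120, before any retrieval fees.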

Here is a note that I received from the Amazon Web Services (AWS) team:
----------------------
Dear Amazon Web Services Customer,
We are excited to announce the immediate availability of Amazon Glacier – a secure, reliable and extremely low cost storage service designed for data archiving and backup. Amazon Glacier is designed for data that is infrequently accessed, yet still important to keep for future reference. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance. With Amazon Glacier, customers can reliably and durably store large or small amounts of data for as little as $0.01/GB/month. As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.

Amazon Glacier is:

Low cost- Amazon Glacier is an extremely low-cost, pay-as-you-go storage service that can cost as little as $0.01 per gigabyte per month, irrespective of how much data you store.
Secure – Amazon Glacier supports secure transfer of your data over Secure Sockets Layer (SSL) and automatically stores data encrypted at rest using Advanced Encryption Standard (AES) 256, a secure symmetric-key encryption standard using 256-bit encryption keys.
Durable- Amazon Glacier is designed to give average annual durability of 99.999999999% for each item stored.
Flexible -Amazon Glacier scales to meet your growing and often unpredictable storage requirements. There is no limit to the amount of data you can store in the service.
Simple- Amazon Glacier allows you to offload the administrative burdens of operating and scaling archival storage to AWS, and makes long term data archiving especially simple. You no longer need to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.
Designed for use with other Amazon Web Services – You can use AWS Import/Export to accelerate moving large amounts of data into Amazon Glacier using portable storage devices for transport. In the coming months, Amazon Simple Storage Service (Amazon S3) plans to introduce an option that will allow you to seamlessly move data between Amazon S3 and Amazon Glacier using data lifecycle policies.

Amazon Glacier is currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions.

A few clicks in the AWS Management Console are all it takes to set up Amazon Glacier. You can learn more by visiting the Amazon Glacier detail page, reading Jeff Barr's blog post, or joining our September 19th webinar.
Sincerely,
The Amazon Web Services Team
----------------------

What is AWS Glacier?

Glacier is low-cost, lower-performance (e.g., access time) storage suited to applications such as archiving and inactive or idle data that you are not in a hurry to retrieve. Pricing is pay-as-you-go and can be as low as $0.01 USD per GByte per month (other optional fees may apply; see here) depending on the availability zone. Availability zones or regions include the US West Coast (Oregon or Northern California), the US East Coast (Northern Virginia), Europe (Ireland) and Asia (Tokyo).

Now, what is understood should not have to be discussed; however, just to be safe, pity the fool who complains about signing up for AWS Glacier due to its penny-per-GByte-per-month cost and then finds it too slow for their iTunes or videos, as you know that is going to happen. Likewise, you know that some creative vendor or their surrogate is going to try to show a mismatch of AWS Glacier vs. their faster service that caters to a different usage model; it is just a matter of time.

Let's be clear: Glacier is designed for low-cost, high-capacity, slow access of infrequently accessed data such as archives. This means that you will be more than disappointed if you try to stream a video, or access a document or photo, from Glacier as you would from S3, EBS or any other cloud service. The reason is that Glacier is designed with the premise of low cost, high capacity and high availability at the expense of access time or performance. How slow? AWS states that you may have to wait several hours to reach your data when needed; that is the tradeoff. If you need faster access, pay more or find a different class and tier of storage service to meet that need; perhaps, for those with a real need for speed, AWS SSD capabilities ;).
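
For context on what "slow" means in practice, retrieval is an asynchronous, two-step job: you request the archive, then wait until AWS has staged it for download. Below is a minimal sketch using the modern boto3 SDK, which postdates this article; the vault name and archive ID are hypothetical, so treat this as an assumption-laden illustration rather than the exact workflow AWS documented at the time:

    import time
    import boto3

    glacier = boto3.client("glacier", region_name="us-east-1")

    # Step 1: ask Glacier to stage the archive for retrieval.
    job = glacier.initiate_job(
        vaultName="my-archive-vault",           # hypothetical vault
        jobParameters={
            "Type": "archive-retrieval",
            "ArchiveId": "EXAMPLE-ARCHIVE-ID",  # returned by upload_archive()
        },
    )

    # Step 2: poll until the job completes; expect to wait hours, not seconds.
    while not glacier.describe_job(
        vaultName="my-archive-vault", jobId=job["jobId"]
    )["Completed"]:
        time.sleep(15 * 60)  # check every 15 minutes

    # Only now can the archive bytes actually be downloaded.
    output = glacier.get_job_output(vaultName="my-archive-vault", jobId=job["jobId"])
    data = output["body"].read()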

Here is a link to a good post over at Planforcloud.com comparing Glacier vs. S3, which is like comparing apples and oranges; however, it helps to put things into context.

In terms of functionality, Glacier security includes Secure Sockets Layer (SSL) for data in transit, Advanced Encryption Standard (AES) 256 (256-bit encryption keys) encryption for data at rest, and AWS Identity and Access Management (IAM) policies.

The persistent storage is designed for 99.999999999% durability, with data automatically placed in different facilities on multiple devices for redundancy when it is ingested or uploaded. Self-healing is accomplished with automatic background data integrity checks and repair.

Scale and flexibility are bound only by the size of your budget or credit card spending limit, along with which availability zones and other options you choose. There is integration with other AWS services, including Import/Export, where you can ship large amounts of data to Amazon using different media and mediums. Note that AWS has also made a statement of direction (SOD) that S3 will be enhanced to seamlessly move data in and out of Glacier using data policies.
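
As a hedged illustration of where that statement of direction ended up, here is what a policy-based S3-to-Glacier transition looks like with today's boto3 SDK (this API did not exist when this post was written; the bucket name and prefix are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Move objects under archive/ to the Glacier storage class after 30 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }]
        },
    )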

Part of stretching budgets for organizations of all sizes is to avoid treating all data and applications the same (a key theme of data protection modernization). This means classifying and addressing how and where different applications and data are placed on various types of servers and storage, along with revisiting and modernizing data protection.

While the low cost of Amazon Glacier is an attention-getter, I am looking for more than just the lowest cost; I am also looking for reliability and security, among other things, to gain and keep confidence in my cloud storage service providers. As an example, a few years ago I switched from one cloud backup provider to another based not on cost, but on functionality and the ability to leverage the service more extensively. In fact, I could switch back to the other provider and save money on the monthly bills; however, I would end up paying more in lost time, productivity and other costs.

What do I see as the barrier to AWS Glacier adoption?

Simple: getting vendors and other service providers to enhance their products or services to leverage the new AWS Glacier storage category. This means backup/restore, BC and DR vendors ranging from Amazon itself (e.g., releasing the S3-to-Glacier automated policy-based migration) to Commvault, Dell (via its acquisitions of AppAssure and Quest), EMC (Avamar, NetWorker and other tools), HP, IBM/Tivoli, Jungledisk/Rackspace, NetApp, Symantec and others, not to mention cloud gateway providers, will need to add support for these new capabilities.

As an Amazon EC2 and S3 customer, it is great to see Amazon continue to expand its cloud compute, storage, networking and application service offerings. I look forward to actually trying out Amazon Glacier for storing encrypted archive or inactive data to complement what I am doing. Since I am not using the Amazon Cloud Storage Gateway, I am looking into how I can use Rackspace Jungledisk to manage an Amazon Glacier repository similar to how it manages my S3 stores.

Some more related reading:
Only you can prevent cloud data loss
Data protection modernization, more than swapping out media
Amazon Web Services (AWS) and the NetFlix Fix?
AWS (Amazon) storage gateway, first, second and third impressions

As of now, it looks like I will have to wait until either Jungledisk adds native support, as it has today for managing my S3 storage pool, or the automated policy-based movement between S3 and Glacier is transparently enabled.

[To view all of the links mentioned in this post, go to: storageioblog.com/amazon-cloud-storage-options-enhanced-with-glacier/ ]

Some updates:

storageioblog.com/november-2013-server-storageio-update-newsletter/

storageioblog.com/fall-2013-aws-cloud-storage-compute-enhancements/

Disclosure: My company does not have a business relationship with this vendor other than being a customer.

PeerSpot user
Chief Technology Officer at a tech services company with 51-200 employees
Real User
Jan 9, 2016
An amazing platform to build on, but IAM policies and cross-account access need improvement.

What is most valuable?

The whole IaaS model is an invaluable service. The ease of deployment, maintenance and scalability, and the pay-as-you-go model, make AWS an amazing platform to build on.

How has it helped my organization?

With AWS sitting at the core of our service, we have been able to provide an amazing number of features that were otherwise very expensive and labor-intensive to put in place; these include high availability, business continuity planning, and disaster recovery, among others.

What needs improvement?

AWS has an amazing feature set, but I have not used all of the features, so I cannot have a well-rounded opinion about every area of improvement. However, of the features I have used, I would say IAM policies and cross-account access would probably be the main areas for improvement. Amazon is working on a "Service Catalog" which could potentially fill some of these holes.
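
To make the cross-account pain point concrete: accessing another account's resources typically means an STS assume-role round trip before you can do anything. A minimal sketch with boto3; the role ARN and account number are hypothetical:

    import boto3

    # Assume a role that the other account has granted us.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/PartnerAccess",
        RoleSessionName="cross-account-demo",
    )["Credentials"]

    # Build a client with the temporary credentials to act in that account.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])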

For how long have I used the solution?

I've used it for three years.

What was my experience with deployment of the solution?

Surprisingly, since starting to use AWS, the process has been quite simple and the deployment very smooth. It does take a bit of getting used to when working with VPCs and networking in an AWS context, but that's a fairly quick learning curve.
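
For readers facing that same VPC learning curve, the core moving parts are fewer than they first appear. A minimal boto3 sketch, with illustrative CIDR blocks (my addition, not a prescription from this review):

    import boto3

    ec2 = boto3.client("ec2")

    # A VPC is just an isolated address space; subnets carve it up per AZ.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")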

What do I think about the stability of the solution?

Like anything, failures happen every once in a while. I have experienced some failed hardware under my instances, which caused a brief outage. The stability of the service, however, is also much more reliant on the architecture of the application than on the stability of the AWS infrastructure. In any case, AWS has been quite stable overall.

What do I think about the scalability of the solution?

Scalability is one of AWS's strengths. Scaling resources, be it an AWS EC2 instance or an RDS instance, is a snap. Also, scaling into multiple geographic regions around the world is possible and quite realistic in that environment.
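
As a hedged illustration of how little ceremony that scaling involves, here is a boto3 sketch; the instance IDs and type names are hypothetical, and an EC2 instance must be stopped before its type can change:

    import boto3

    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")

    # Resize an EC2 instance: stop, change type, start again.
    iid = "i-0123456789abcdef0"
    ec2.stop_instances(InstanceIds=[iid])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[iid])
    ec2.modify_instance_attribute(InstanceId=iid, InstanceType={"Value": "m5.xlarge"})
    ec2.start_instances(InstanceIds=[iid])

    # Scale an RDS instance class in place.
    rds.modify_db_instance(
        DBInstanceIdentifier="mydb",
        DBInstanceClass="db.m5.large",
        ApplyImmediately=True,
    )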

How are customer service and technical support?

Customer Service:

My experience with AWS customer service has been stellar. Everyone I come into contact with, from Sales to Technical Support, is always friendly and courteous.

Technical Support:

The technical support team is quite knowledgeable, and no question goes unaddressed; every one gets full attention, complete with references, examples, and a recap of the conversations that were conducted.

Their technical support processes are clearly well thought out. I always know what communication to expect and the level of help that I can expect to receive. I have yet to call them on an issue where a resolution wasn't reached on the first or second contact.

Which solution did I use previously and why did I switch?

Previously, I used co-location services. The reasons I switched are quite obvious:

  • Cost
  • Constant overheads
  • The constant challenge of meeting budgets while keeping technology cutting-edge

AWS has removed all these variables and allowed me to concentrate on growing my services without having to worry about aging servers, under-capacity hardware, etc.

How was the initial setup?

Understanding AWS is actually quite easy. There are some notions that require a bit of previous knowledge to grasp. The good news is that the documentation available about the different services is quite extensive, which can give anyone a head start in launching their AWS services. The complexity of using AWS is directly related to the robustness of the application/service that is being deployed. The more AWS services are integrated together, the more complex the deployment will become.

What about the implementation team?

All AWS services were deployed in-house, with assistance from AWS support teams.

What was our ROI?

Because there is no initial investment in AWS services (it's a pay-as-you-go service in its basic form), the ROI is immediate. And because AWS costs are consistently being reduced, it is a great way to build services offered at affordable prices while still getting good returns on investment.

What's my experience with pricing, setup cost, and licensing?

As mentioned above, AWS does not really have initial setup costs. It's like a utility company: you use the service and pay for your usage. The daily cost depends on the services deployed at that point in time. For the flexibility and consistently cutting-edge technology that AWS operates on, it's well worth the price.

Which other solutions did I evaluate?

I have evaluated Azure and Google as IaaS. Quite honestly, Google was too convoluted for my purposes, and although Azure had some nice "Microsoft-y" features that AWS doesn't necessarily have, I still felt that it was much easier to get started with AWS than with the other services.

What other advice do I have?

Don't be afraid of "The Cloud". As prominent as it is today, a lot of people, and small businesses, are still afraid of storing their data away from their physical office. There are a ton of advantages in using AWS for your infrastructure instead of on-premises equipment. Give it a serious look before dismissing it. There is a lot that can be added here, but that could be an article all on its own.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user194427 - PeerSpot reviewer
Chief Technology Officer at a tech services company with 51-200 employees
Real User

It's not necessarily anything that other products have and AWS lacks; it would actually be more of a "nice to have". It's definitely not a deal breaker by any means. I take the idea from the concept that AWS has with AMIs, for instance, or places where there are public repositories of UDFs or scripts; the same type of thing for IAM. There are a lot of out-of-the-box IAM policies that users can benefit from, and rather than reinventing the wheel, it would be nice if they were compiled in a central place. That said, there's nothing that a Google search can't fix :)

PeerSpot user
Independent Analyst and Advisory Consultant at a tech consulting company with 51-200 employees
Consultant
Top 10
Sep 27, 2015
EFS is NFS version 4-based; however, it does not support Windows SMB/CIFS, HDFS or other NAS access protocols.

Cloud Conversations: AWS EFS Elastic File System (Cloud NAS) First Preview Look

Amazon Web Services (AWS) recently announced the new Elastic File System (EFS), providing Network File System (NFS) NAS (network-attached storage) capabilities for AWS Elastic Compute Cloud (EC2) instances. AWS EFS complements other AWS storage offerings including the Simple Storage Service (S3) along with the Elastic Block Store (EBS), Glacier and the Relational Database Service (RDS), among others.

OK, that's a lot of buzzwords and acronyms, so let's break this down a bit.

AWS EFS and Cloud Storage, Beyond Buzzword Bingo

  • EC2 – Compute instances that exist in various Availability Zones (AZs) in different AWS regions, run various operating systems including Windows and Ubuntu among others, and can be pre-configured with applications such as SQL Server or web services. EC2 instances range from low-cost to high-performance, with compute-, memory-, GPU-, storage- or general-purpose-optimized variants. For example, some EC2 instances rely solely on EBS, S3, RDS or other AWS storage offerings, while others include on-board solid-state disk (SSD) like the DAS SSD found in traditional servers. EC2 instances on EBS volumes can be snapshotted to S3 storage, which in turn can be replicated to another region.
  • EBS – Scalable block-accessible storage for EC2 instances that can be configured for performance or bulk storage, as well as for persistent images of EC2 instances (if you choose to configure your instance to be persistent)
  • EFS – New file (aka NAS) accessible storage service, accessible from EC2 instances in various AZs in a given AWS region
  • Glacier – Cloud-based near-line (or by some comparisons, off-line) cold-storage archives
  • RDS – Relational Database Services for SQL and other data repositories
  • S3 – Provides durable, scalable, low-cost bulk (aka object) storage accessible from inside AWS as well as externally. S3 can be used by EC2 instances for bulk durable storage, as well as being a target for EBS snapshots.
  • Learn more about EC2, EBS, S3, Glacier, Regions, AZs and other AWS topics in this primer here

What is EFS

EFS implements NFS v4 (SNIA NFS v4 primer), providing network-attached storage (NAS), meaning data sharing. AWS is indicating initial pricing for EFS at $0.30 per GByte per month. EFS is designed for storage and data sharing from multiple EC2 instances in different AZs in the same AWS region, with scalability into the PBs.
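
For a sense of the moving parts, here is a minimal sketch using the EFS API as it later shipped in boto3 (an assumption relative to this preview-era post; the subnet and security group IDs are hypothetical):

    import time
    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Create the file system and wait for it to become available.
    fs = efs.create_file_system(CreationToken="demo-efs")
    fs_id = fs["FileSystemId"]
    while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0][
        "LifeCycleState"
    ] != "available":
        time.sleep(5)

    # A mount target exposes it to EC2 instances in one AZ via NFS (TCP 2049).
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow inbound NFS
    )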

What EFS is not

Currently it seems that EFS has an endpoint inside AWS accessible via an EC2 instance, like EBS. In other words, the storage service appears to be accessible only to AWS EC2 instances, unlike S3, which can be accessed from the outside world as well as via EC2 instances.

Note, however, that depending on how you configure your EC2 instance with different software, as well as how you configure a Virtual Private Cloud (VPC) and other settings, it is possible to have an application, software tool or operating system running on EC2 be accessible from the outside world. For example, NAS software such as that from SoftNAS and NetApp, among many others, can be installed on an EC2 instance; with the proper configuration, in addition to being accessible to other EC2 instances, it can also be accessed from outside of AWS (with the proper settings and security).

AWS EFS at this time is NFS version 4-based; however, it does not support Windows SMB/CIFS, HDFS or other NAS access protocols. In addition, AWS EFS is accessible from multiple AZs within a region; to share NAS data across regions, some other software would be required.

As of this writing, EFS has not yet been released; AWS is currently accepting requests to join the EFS preview here.

Where to learn more

Here are some links to learn more about AWS EFS and related topics

What this all means and wrap-up

AWS continues to extend its cloud platform, including both compute and storage offerings. EFS complements EBS along with S3, Glacier and RDS. For many environments NFS support will be welcome, while for others CIFS/SMB would be appreciated, and others are starting to find value in HDFS-accessible NAS. In addition, AWS has also added a new tier for inactive data in S3, for near-line storage, as opposed to having to use Glacier.

Overall I like this announcement and look forward to moving beyond the preview.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Linux administrator with 10,001+ employees
Real User
Sep 25, 2015
We have 750 hours of Amazon EC2 Linux t2.micro instance usage, but it's expensive.

Valuable Features:

750 hours of Amazon EC2 Linux t2.micro instance usage (1 GiB of memory, with 32-bit and 64-bit platform support) -- it's enough hours to run one instance continuously each month, since even a 31-day month has only 744 hours.

Improvements to My Organization:

It gives us time to evaluate the platform.

Room for Improvement:

Charges are high at the moment.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user57903 - PeerSpot reviewer
Principal at a tech company with 51-200 employees
Top 20, Real User

AWS has Trusted Advisor. Also look at ParkMyCloud. And if you use Spot Instances, that will help you reduce costs. TCO = total cost of ownership.

it_user7707 - PeerSpot reviewer
Owner at a tech consulting company with 51-200 employees
Consultant
Aug 23, 2015
Amazon Web Services: Security Processes in the EC2 Cloud

Customer trust and confidence are at the heart of Amazon's business, and with so many customers using Amazon's platforms to run their businesses securely and efficiently, Amazon has gone to great lengths to operate and manage a comprehensive control environment. This environment supports secure Amazon Web Services cloud offerings by ensuring that all necessary policies and processes are in place in compliance with AWS certifications.

Within the last few years, Amazon Web Services security has achieved notable certifications, including SAS 70 Type II audits, PCI DSS Level 1 (meeting the Payment Card Industry Data Security Standards), ISO 27001 for information security management systems, and compliance with the Federal Information Security Management Act (FISMA) to properly serve government agency FedRAMP requirements for AWS GovCloud on the Amazon platform.

When Amazon introduced Amazon EC2, it started a process rolling for business customers to run their applications in Amazon's computing environment. EC2 is the Elastic Compute Cloud, which allows business customers to access Amazon's secure cloud environment through virtual machines. The platform deploys EC2 security, which also supports Amazon Web Services' FedRAMP compliance.

Using Amazon EC2, business customers can create an image of their operating system and applications, known as an Amazon Machine Image (AMI). Once the image is created, it is uploaded to Amazon S3, Amazon's Simple Storage Service. The AMI is then registered in Amazon EC2, allowing the customer to summon virtual machines as they are needed. The result is an AWS Virtual Private Cloud in which business customers can conduct operations without the exorbitant expense of IT infrastructure. For this reason, Amazon must ensure the environment meets all compliance and security standards, hence the acquisition of the certifications described earlier.

Amazon EC2 Security Processes

Amazon’s approach to AWS security involves layered security processes which maintain data integrity and provide secure EC2 instances while still maintaining configuration flexibility to meet the individual requirements of EC2 business customers.

  • Administration Hosts: For business customers who require access to the management platform, Amazon uses a level of security to accommodate administration hosts without posing a risk to data integrity or to other users. Through the use of AWS Identity and Access Management, this is accomplished by auditing all access activity and tracking it in a log. When a user of the management platform has their authentication privileges terminated, access is automatically discontinued, which keeps AWS applications secure.
  • Customer Controlled Instances: Amazon EC2 allows for virtual instances which are solely controlled by the customer. Business customers exercise full control and at no time can Amazon intervene by logging in to the customer’s operating system. For this reason, a set of practices is in place to guide the customer on authentication processes for AWS VPC in order to access the virtual instances. This involves designing an authentication and privilege system which can be enabled and disabled according to changing needs of virtual machine users.
  • Firewall: As part of the AWS Security Center, EC2 business customers have access to a comprehensive firewall solution which can be configured to meet the individual needs of each business customer. For example, the firewall for Amazon EC2 is typically configured by default to block all traffic. If the customer wants to allow inbound traffic, they must open the necessary ports while continuing to block unwanted traffic. The firewall also provides a host of options for setting specific protocols for inbound traffic, such as by IP address and other identifiers. Added security is in place since the business customer must use their X.509 certificate to change firewall configurations (see the sketch after this list).
  • Xen: Another layer of AWS security for EC2 is the Xen Hypervisor which separates different instances running on the same virtual machine. The firewall is situated in the Xen Hypervisor which means packets for instances must pass through the firewall thereby adding enhanced security to isolated instances.
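
The default-deny firewall behavior described in the Firewall item above corresponds to EC2 security groups. A minimal sketch with boto3 (the modern SDK, not part of the original article; the names are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Security groups deny all inbound traffic by default; open ports explicitly.
    sg = ec2.create_security_group(
        GroupName="web-tier",
        Description="Allow inbound HTTPS only",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )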

Finally, Amazon Web Services Cloud uses a layer of security known as Amazon EBS, or Elastic Block Storage, which restricts access to data snapshots to the specific Amazon Web Services account that created them. Business customers can make data snapshots available to other AWS accounts; however, this process should be carefully considered since there may be files with sensitive information.

Prior to releasing Elastic Block Storage to the customer, Amazon wipes old data in accordance with National Industrial Security Program guidelines. Plus, EBS allows business customers to encrypt their data on the block device using algorithms that comply with individual security standards.
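
A hedged sketch of the two EBS controls just described, sharing a snapshot with one specific account and creating an encrypted volume, using boto3 (the IDs and account number are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Grant exactly one other account permission to create volumes from a snapshot.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=["111122223333"],
    )

    # Create an EBS volume encrypted at rest.
    ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, Encrypted=True)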

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user7707 - PeerSpot reviewer
Owner at a tech consulting company with 51-200 employees
Consultant

Hi Henry,

We'll post something on S3 security as well soon: aws.amazon.com/s3/faqs/

it_user234747 - PeerSpot reviewer
Practice Manager - Cloud, Automation & DevOps at a tech services company with 501-1,000 employees
Real User
Jun 30, 2015
It has a massive library of services for you to use in developing cloud-based solutions.

Originally posted at https://vcdx133.com/2015/06/12/tech101-amazon-web-services

As part of my NPX preparation (AWS Certified Solutions Architect – Professional is one of the recommended qualifications) and my RapidMatter GitHub project (which will run from AWS), I have been delving into the world of Amazon Web Services. One statement: "Wow!" I can see why they are the world leader in public cloud services.

Here is the cool thing: as an Enterprise/Cloud Architect, you have a MASSIVE library of services (40+ at the time of writing) that you can use to develop cloud-based solutions for your customers. As you read through the list below, you will see the fundamental building blocks for every solution. By having this service matrix, you do not have to reinvent the wheel; it already exists and is ready to go. Thus, you can focus on making sure your customers' requirements are met with elegant and innovative designs.

Getting Started (takes 5 minutes)

  1. You have a PC with a responsive and usable Internet connection
  2. Create an AWS account
  3. Provide a valid Credit Card
  4. Provide a valid phone number that must be verified
  5. Start using AWS immediately – there is a free tier (1-year trial period) for some services in some regions (micro instances)
  6. The UI is very intuitive and easy to use
  7. WARNING: You can spin up most of the service catalogue. Do not forget and leave services running; your credit card will be charged (a quick clean-up sketch follows this list)
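
In the spirit of that warning, here is a small boto3 sketch (my addition; the article itself shows no code) that finds anything still running and stops it so the meter stops too:

    import boto3

    ec2 = boto3.client("ec2")

    # List running instances, then stop them all.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
        print("Stopping:", ids)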

Core Services of AWS

  • EC2 – Elastic Compute Cloud – Virtual Machines (Instances) you can provision from a massive library of templates (AMIs – Amazon Machine Images, free and paid, from the AWS Community/Marketplace)
  • EBS – Elastic Block Store – Persistent Virtual Disks for your VMs (Instances)
  • S3 – Simple Storage Service – Scalable, Object-based Storage in the Cloud
  • Glacier – Archive Storage in the Cloud

Under The Hood: AWS uses a heavily customised version of Xen as its hypervisor.

Pricing Models

  • On-Demand – Pay-as-you-go
  • Reserved Instances – Pay up front
  • Spot Requests – Bid for excess AWS resources against other AWS users

Compute

  • EC2 Container Service – Run and Manage Docker Containers
  • Lambda – Run Code in Response to Events

Storage & Content Delivery

  • Storage Gateway – Integrates On-Premises IT Environments with Cloud Storage
  • Elastic File System – Fully Managed File System for EC2

Edge Services (to be close to all of your customers around the world)

  • Route53 – Scalable DNS and Domain Name Registration
  • CloudFront – Global Content Delivery Network – Caches static content regionally

Simple Micro-services that just work

  • SQS – Simple Message Queue Service
  • SES – Simple Email Service
  • SWF – Simple Workflow Service
  • AppStream – Low Latency Application Streaming
  • Elastic Transcoder – Easy-to-use Scalable Media Transcoding
  • CloudSearch – Managed Search Service

Databases

  • RDS – Relational Database Service – MySQL, Oracle, SQL Server & Amazon Aurora
  • DynamoDB – Predictable and Scalable NoSQL Data Store
  • ElastiCache – In-Memory Cache
  • Redshift – Managed Petabyte-Scale Data Warehouse Service

Networking

  • VPC – Virtual Private Cloud – Isolated Cloud Resources
  • Direct Connect – Dedicated Network Connection to AWS

Administration & Security

  • Directory Service – Managed Directory Services in the cloud
  • Identity & Access Management – Access Control and Key Management
  • Trusted Advisor – AWS Cloud Optimisation Expert
  • CloudTrail – User Activity and Change Tracking
  • Config – Resource Configurations and Inventory
  • CloudWatch – Resource and Application Monitoring

Deployment & Management

  • CloudFormation – Templated AWS Resource Creation (for Sysadmins)
  • Elastic Beanstalk – AWS Application Container (for Developers)
  • OpsWorks – DevOps Application Management Service
  • CodeDeploy – Automated Deployments

Analytics

  • EMR – Managed Hadoop Framework
  • Kinesis – Real-time Processing of Streaming Big Data
  • Data Pipeline – Orchestration for Data-Driven Workflows
  • Machine Learning – Build Smart Applications Quickly and Easily

Mobile Services

  • Cognito – User Identity and App Data Synchronisation
  • Mobile Analytics – Understand App Usage Data at Scale
  • SNS – Simple Notification Service – Push Notification Service

Enterprise Applications

  • WorkSpaces – Desktops in the Cloud (VDI)
  • WorkDocs – Secure Enterprise Storage and Sharing
  • WorkMail – Secure Email and Calendaring Service

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user