
Q:  How much does Amazon S3 cost?

With Amazon S3, you pay only for what you use. There is no minimum charge. You can estimate your monthly bill using the AWS Pricing Calculator.

We charge less where our costs are less. Some prices vary across Amazon S3 Regions. Billing prices are based on the location of your S3 bucket. There is no Data Transfer charge for data transferred within an Amazon S3 Region via a COPY request. Data transferred via a COPY request between AWS Regions is charged at rates specified in the pricing section of the Amazon S3 detail page. There is no Data Transfer charge for data transferred between Amazon EC2 (or any AWS service) and Amazon S3 within the same region, for example, data transferred within the US East (Northern Virginia) Region. However, data transferred between Amazon EC2 (or any AWS service) and Amazon S3 across all other regions is charged at rates specified on the Amazon S3 pricing page, for example, data transferred between Amazon EC2 US East (Northern Virginia) and Amazon S3 US West (Northern California). For S3 on Outposts pricing, please visit the Outposts pricing page.

Q:  How will I be charged and billed for my use of Amazon S3?

There are no setup charges or commitments to begin using the service. At the end of the month, you will automatically be charged for that month’s usage. You can view your charges for the current billing period at any time on the Amazon Web Services website by logging into your Amazon Web Services account and clicking “Billing and Cost Management console” under “Your Web Services Account.”

With the AWS Free Usage Tier*, you can get started with Amazon S3 for free in all regions except the AWS GovCloud Regions. Upon sign up, new AWS customers receive 5 GB of Amazon S3 Standard storage, 20,000 Get Requests, 2,000 Put Requests, and 100 GB of data transfer out (to internet, other AWS regions, or CloudFront) each month for one year. Unused monthly usage will not roll over to the next month.

Amazon S3 charges you for the following types of usage. Note that the calculations below assume there is no AWS Free Tier in place.

Storage Used:

Amazon S3 storage pricing is summarized on the Amazon S3 Pricing page.

The volume of storage billed in a month is based on the average storage used throughout the month. This includes all object data and metadata stored in buckets that you created under your AWS account. We measure your storage usage in “TimedStorage-ByteHrs,” which are added up at the end of the month to generate your monthly charges.

Storage Example:

Assume you store 100 GB (107,374,182,400 bytes) of data in Amazon S3 Standard in your bucket for 15 days in March, and 100 TB (109,951,162,777,600 bytes) of data in Amazon S3 Standard for the final 16 days in March.

At the end of March, you would have the following usage in Byte-Hours: Total Byte-Hour usage = [107,374,182,400 bytes x 15 days x (24 hours / day)] + [109,951,162,777,600 bytes x 16 days x (24 hours / day)] = 42,259,901,212,262,400 Byte-Hours. Please calculate hours based on the actual number of days in a given month; in this example, March has 31 days, or 744 hours.

Let's convert this to GB-Months: 42,259,901,212,262,400 Byte-Hours / 1,073,741,824 bytes per GB / 744 hours per month = 52,900 GB-Months

This usage volume crosses two different volume tiers. The monthly storage price is calculated below, assuming the data is stored in the US East (Northern Virginia) Region:

50 TB Tier: 51,200 GB x $0.023 = $1,177.60
50 TB to 450 TB Tier: 1,700 GB x $0.022 = $37.40

Total Storage cost = $1,177.60 + $37.40 = $1,215.00
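For reference, the short Python sketch below reproduces this arithmetic, assuming the US East (Northern Virginia) S3 Standard rates quoted in this example ($0.023/GB for the first 50 TB, $0.022/GB for the next 450 TB). It is only an illustration, not an official calculator; confirm current rates on the Amazon S3 pricing page or with the AWS Pricing Calculator.

```python
# Illustrative sketch of the storage example above; rates are the ones quoted
# in this FAQ and may not match current pricing.
GB = 1024 ** 3
HOURS_IN_MARCH = 31 * 24  # 744 hours

byte_hours = (100 * GB * 15 * 24) + (100 * 1024 * GB * 16 * 24)
gb_months = byte_hours / GB / HOURS_IN_MARCH        # 52,900 GB-Months

first_tier = min(gb_months, 50 * 1024)              # 51,200 GB at $0.023
second_tier = gb_months - first_tier                # 1,700 GB at $0.022
total = first_tier * 0.023 + second_tier * 0.022

print(gb_months, round(total, 2))                   # 52900.0 1215.0
```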

Network Data Transferred In:

Amazon S3 Data Transfer In pricing is summarized on the Amazon S3 Pricing page. This represents the amount of data sent to your Amazon S3 buckets. 

Network Data Transferred Out:

Amazon S3 Data Transfer Out pricing is summarized on the Amazon S3 Pricing page. For Amazon S3, this charge applies whenever data is read from any of your buckets from a location outside of the given Amazon S3 Region.

Data Transfer Out pricing rate tiers take into account your aggregate Data Transfer Out from a given region to the internet across Amazon EC2, Amazon S3, Amazon RDS, Amazon SimpleDB, Amazon SQS, Amazon SNS and Amazon VPC. These tiers do not apply to Data Transfer Out from Amazon S3 in one AWS Region to another AWS Region.

Data Transfer Out Example:
Assume you transfer 1 TB of data out of Amazon S3 from the US East (Northern Virginia) Region to the internet every day for a given 31-day month. Assume you also transfer 1 TB of data out of an Amazon EC2 instance from the same region to the internet over the same 31-day month.

Your aggregate Data Transfer would be 62 TB (31 TB from Amazon S3 and 31 TB from Amazon EC2). This equates to 63,488 GB (62 TB * 1024 GB/TB).

This usage volume crosses three different volume tiers. The monthly Data Transfer Out charge is calculated below, assuming the Data Transfer occurs in the US East (Northern Virginia) Region:

10 TB Tier: 10,239 GB (10 x 1,024 GB/TB, minus 1 GB free) x $0.09 = $921.51
10 TB to 50 TB Tier: 40,960 GB (40 x 1,024 GB/TB) x $0.085 = $3,481.60

50 TB to 150 TB Tier: 12,288 GB (remainder) x $0.070 = $860.16

Total Data Transfer Out charge = $921.51 + $3,481.60 + $860.16 = $5,263.27
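The tiering logic can be expressed compactly. The Python sketch below reproduces the numbers above using the US East (Northern Virginia) rates quoted in this example; it is an illustration only, not an official billing tool.

```python
# Illustrative Data Transfer Out tiering, using the rates quoted in this FAQ.
TRANSFER_OUT_TIERS = [
    (1, 0.00),              # first 1 GB each month is free
    (10 * 1024 - 1, 0.09),  # remainder of the first 10 TB
    (40 * 1024, 0.085),     # next 40 TB (10 TB to 50 TB)
    (100 * 1024, 0.07),     # next 100 TB (50 TB to 150 TB)
]

def transfer_out_charge(total_gb):
    """Apply the tiered rates to an aggregate monthly Data Transfer Out volume."""
    remaining, charge = total_gb, 0.0
    for tier_gb, rate in TRANSFER_OUT_TIERS:
        billed = min(remaining, tier_gb)
        charge += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return charge

print(round(transfer_out_charge(62 * 1024), 2))  # 63,488 GB -> 5263.27
```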

Data Requests:

Amazon S3 Request pricing is summarized on the Amazon S3 Pricing Chart.

Request Example: Assume you transfer 10,000 files into Amazon S3 and transfer 20,000 files out of Amazon S3 each day during the month of March. Then, you delete 5,000 files on March 31st.

Total PUT requests = 10,000 requests x 31 days = 310,000 requests
Total GET requests = 20,000 requests x 31 days = 620,000 requests
Total DELETE requests = 5,000 requests x 1 day = 5,000 requests

Assuming your bucket is in the US East (Northern Virginia) Region, the Request charges are calculated below:

310,000 PUT Requests: 310,000 requests x $0.005/1,000 = $1.55
620,000 GET Requests: 620,000 requests x $0.004/10,000 = $0.25

5,000 DELETE requests = 5,000 requests x $0.00 (no charge) = $0.00
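The same request charges can be reproduced with a few lines of arithmetic, using the US East (Northern Virginia) request rates quoted above (illustrative only).

```python
# Illustrative request-charge arithmetic for the example above.
put_charge = 310_000 * 0.005 / 1_000     # $1.55
get_charge = 620_000 * 0.004 / 10_000    # $0.248, shown rounded to $0.25
delete_charge = 0.0                      # DELETE requests are free

print(put_charge, round(get_charge, 2), delete_charge)
```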

Data Retrieval:

Amazon S3 data retrieval pricing applies for the S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-IA storage classes and is summarized on the Amazon S3 Pricing page.

Data Retrieval Example:
Assume in one month you retrieve 300 GB of S3 Standard-IA, with 100 GB going out to the internet, 100 GB going to EC2 in the same AWS region, and 100 GB going to CloudFront in the same AWS Region.

Your data retrieval charges for the month would be calculated as 300 GB x $0.01/GB = $3.00. Note that you would also pay network data transfer charges for the portion that went out to the internet.

Please see here for details on billing of objects archived to Amazon S3 Glacier.

* Your usage for the free tier is calculated each month across all regions except the AWS GovCloud Regions and automatically applied to your bill; unused monthly usage will not roll over. Restrictions apply. See offer terms for more details.

Q:  Why do prices vary depending on which Amazon S3 Region I choose?

We charge less where our costs are less. For example, our costs are lower in the US East (Northern Virginia) Region than in the US West (Northern California) Region.

Q: How am I charged for using Versioning?

Normal Amazon S3 rates apply for every version of an object stored or requested. For example, let’s look at the following scenario to illustrate storage costs when utilizing Versioning (let’s assume the current month is 31 days long):

1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.

When analyzing the storage costs of the above operations, please note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:

Total Byte-Hour usage
[4,294,967,296 bytes x 31 days x (24 hours / day)] + [5,368,709,120 bytes x 16 days x (24 hours / day)] = 5,257,039,970,304 Byte-Hours.

Conversion to Total GB-Months
5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Month

The cost is calculated based on the current rates for your region on the Amazon S3 Pricing page.
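For illustration, a minimal boto3 sketch of the scenario above is shown below. The bucket name, key, and payloads are placeholders, and real 4 GB and 5 GB uploads would normally use multipart upload rather than a single PUT.

```python
# Illustrative sketch: with Versioning enabled, writing the same key twice
# keeps both versions, and both versions accrue storage charges.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="example-bucket",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# Day 1: first object, Day 16: new object under the same key.
s3.put_object(Bucket="example-bucket", Key="data.bin", Body=b"placeholder for the 4 GB payload")
s3.put_object(Bucket="example-bucket", Key="data.bin", Body=b"placeholder for the 5 GB payload")

# Both versions remain in the bucket and are billed until deleted.
versions = s3.list_object_versions(Bucket="example-bucket", Prefix="data.bin")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["Size"], v["IsLatest"])
```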

Q:  How am I charged for accessing Amazon S3 through the AWS Management Console?

Normal Amazon S3 pricing applies when accessing the service through the AWS Management Console. To provide an optimized experience, the AWS Management Console may proactively execute requests. Also, some interactive operations result in more than one request to the service.

Q:  How am I charged if my Amazon S3 buckets are accessed from another AWS account?

Normal Amazon S3 pricing applies when your storage is accessed by another AWS Account. Alternatively, you may choose to configure your bucket as a Requester Pays bucket, in which case the requester will pay the cost of requests and downloads of your Amazon S3 data.

You can find more information on Requester Pays bucket configurations in the Amazon S3 Documentation.
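As a rough illustration, the boto3 sketch below (bucket, key, and account details are placeholders) shows how a bucket owner might enable Requester Pays, and how a requester in another account then acknowledges the charges on each request.

```python
# Illustrative Requester Pays sketch; names are placeholders.
import boto3

s3 = boto3.client("s3")

# Bucket owner: shift request and download costs to the requester.
s3.put_bucket_request_payment(
    Bucket="example-bucket",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Requester (in another AWS account): must acknowledge the charge explicitly.
obj = s3.get_object(Bucket="example-bucket", Key="report.csv", RequestPayer="requester")
```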

Q:  Do your prices include taxes?

Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax.

Learn more about taxes on AWS services »


Page 2

Amazon S3 has various features you can use to organize and manage your data in ways that support specific use cases, enable cost efficiencies, enforce security, and meet compliance requirements. Data is stored as objects within resources called “buckets”, and a single object can be up to 5 terabytes in size. S3 features include capabilities to append metadata tags to objects, move and store data across the S3 Storage Classes, configure and enforce data access controls, secure data against unauthorized users, run big data analytics, monitor data at the object and bucket levels, and view storage usage and activity trends across your organization. Objects can be accessed through S3 Access Points or directly through the bucket hostname.

Amazon S3’s flat, non-hierarchical structure and various management features are helping customers of all sizes and industries organize their data in ways that are valuable to their businesses and teams. All objects are stored in S3 buckets and can be organized with shared names called prefixes. You can also append up to 10 key-value pairs called S3 object tags to each object, which can be created, updated, and deleted throughout an object’s lifecycle. To keep track of objects and their respective tags, buckets, and prefixes, you can use an S3 Inventory report that lists your stored objects within an S3 bucket or with a specific prefix, and their respective metadata and encryption status. S3 Inventory can be configured to generate reports on a daily or a weekly basis.
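As an illustration, the boto3 sketch below (bucket, key, and tag names are hypothetical) shows tags being attached to an object and read back later.

```python
# Illustrative object-tagging sketch; up to 10 key-value pairs per object.
import boto3

s3 = boto3.client("s3")

s3.put_object_tagging(
    Bucket="example-bucket",
    Key="logs/2024/01/app.log",
    Tagging={
        "TagSet": [
            {"Key": "project", "Value": "alpha"},
            {"Key": "classification", "Value": "internal"},
        ]
    },
)

# Tags can be read back (and later updated or deleted) throughout the object's lifecycle.
print(s3.get_object_tagging(Bucket="example-bucket", Key="logs/2024/01/app.log")["TagSet"])
```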

With S3 bucket names, prefixes, object tags, and S3 Inventory, you have a range of ways to categorize and report on your data, and subsequently can configure other S3 features to take action. Whether you store thousands of objects or a billion, S3 Batch Operations makes it simple to manage your data in Amazon S3 at any scale. With S3 Batch Operations, you can copy objects between buckets, replace object tag sets, modify access controls, and restore archived objects from S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, with a single S3 API request or a few clicks in the S3 console. You can also use S3 Batch Operations to run AWS Lambda functions across your objects to execute custom business logic, such as processing data or transcoding image files. To get started, specify a list of target objects by using an S3 Inventory report or by providing a custom list, and then select the desired operation from a pre-populated menu. When an S3 Batch Operation request is done, you'll receive a notification and a completion report of all changes made. Learn more about S3 Batch Operations by watching the video tutorials. 
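As a rough sketch of what submitting such a job through the S3 Control API can look like with boto3, the example below replaces the tag set on every object listed in a CSV manifest; the account ID, IAM role, ARNs, and manifest ETag are all placeholders.

```python
# Illustrative S3 Batch Operations job submission; all identifiers are placeholders.
import boto3

s3control = boto3.client("s3control")

s3control.create_job(
    AccountId="111122223333",
    ConfirmationRequired=False,
    Priority=10,
    RoleArn="arn:aws:iam::111122223333:role/batch-operations-role",
    Operation={
        "S3PutObjectTagging": {
            "TagSet": [{"Key": "project", "Value": "alpha"}]
        }
    },
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::example-manifests/objects.csv",
            "ETag": "example-manifest-etag",
        },
    },
    Report={
        "Bucket": "arn:aws:s3:::example-reports",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-op-reports",
        "ReportScope": "AllTasks",
    },
)
```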

Amazon S3 also supports features that help maintain data version control, prevent accidental deletions, and replicate data to the same or different AWS Region. With S3 Versioning, you can easily preserve, retrieve, and restore every version of an object stored in Amazon S3, which allows you to recover from unintended user actions and application failures. To prevent accidental deletions, enable Multi-Factor Authentication (MFA) Delete on an S3 bucket. If you try to delete an object stored in an MFA Delete-enabled bucket, it will require two forms of authentication: your AWS account credentials and the concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device, like a hardware key fob or a Universal 2nd Factor (U2F) security key.
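A hedged boto3 sketch of that flow is shown below; the bucket name, MFA device ARN, token code, and version ID are placeholders, and enabling MFA Delete must be done with the bucket owner's root credentials.

```python
# Illustrative MFA Delete sketch. The MFA argument is the device serial number
# (or ARN), a space, and the current six-digit code; values are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-device 123456",
)

# Permanently deleting a specific object version now requires the MFA token as well.
s3.delete_object(
    Bucket="example-bucket",
    Key="data.bin",
    VersionId="example-version-id",
    MFA="arn:aws:iam::111122223333:mfa/root-device 123456",
)
```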

With S3 Replication, you can replicate objects (and their respective metadata and object tags) to one or more destination buckets into the same or different AWS Regions for reduced latency, compliance, security, disaster recovery, and other use cases. You can configure S3 Cross-Region Replication (CRR) to replicate objects from a source S3 bucket to one or more destination buckets in different AWS Regions. S3 Same-Region Replication (SRR) replicates objects between buckets in the same AWS Region. While live replication like CRR and SRR automatically replicates newly uploaded objects as they are written to your bucket, S3 Batch Replication allows you to replicate existing objects. You can use S3 Batch Replication to backfill a newly created bucket with existing objects, retry objects that were previously unable to replicate, migrate data across accounts, or add new buckets to your data lake. Amazon S3 Replication Time Control (S3 RTC) helps you meet compliance requirements for data replication by providing an SLA and visibility into replication times.
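For illustration, the boto3 sketch below configures a simple Cross-Region Replication rule; the IAM role and bucket names are placeholders, and both buckets must already have Versioning enabled.

```python
# Illustrative CRR configuration sketch; ARNs and names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
            }
        ],
    },
)
```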

Amazon S3 Multi-Region Access Points accelerate performance by up to 60% when accessing data sets that are replicated across multiple AWS Regions. Based on AWS Global Accelerator, S3 Multi-Region Access Points consider factors like network congestion and the location of the requesting application to dynamically route your requests over the AWS network to the lowest latency copy of your data. S3 Multi-Region Access Points provide a single global endpoint that you can use to access a replicated data set, spanning multiple buckets in S3. This allows you to build multi-region applications with the same simple architecture that you would use in a single region, and then to run those applications anywhere in the world.

You can also enforce write-once-read-many (WORM) policies with S3 Object Lock. This S3 management feature blocks object version deletion during a customer-defined retention period so that you can enforce retention policies as an added layer of data protection or to meet compliance obligations. You can migrate workloads from existing WORM systems into Amazon S3, and configure S3 Object Lock at the object- and bucket-levels to prevent object version deletions prior to a pre-defined Retain Until Date or Legal Hold Date. Objects with S3 Object Lock retain WORM protection, even if they are moved to different storage classes with an S3 Lifecycle policy. To track what objects have S3 Object Lock, you can refer to an S3 Inventory report that includes the WORM status of objects. S3 Object Lock can be configured in one of two modes. When deployed in Governance mode, AWS accounts with specific IAM permissions are able to remove S3 Object Lock from objects. If you require stronger immutability in order to comply with regulations, you can use Compliance Mode. In Compliance Mode, the protection cannot be removed by any user, including the root account.
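As an illustration, the boto3 sketch below sets a default Compliance-mode retention rule on a bucket and an explicit Retain Until Date on a single upload; the bucket, key, and dates are placeholders, and the bucket must have been created with Object Lock enabled.

```python
# Illustrative S3 Object Lock sketch; names and dates are placeholders.
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Default bucket-level retention in Compliance mode: protection cannot be
# removed by any user, including the root account, until the period expires.
s3.put_object_lock_configuration(
    Bucket="example-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)

# Object-level setting: an explicit Retain Until Date applied at upload time.
s3.put_object(
    Bucket="example-locked-bucket",
    Key="records/audit-2024.csv",
    Body=b"example bytes",
    ObjectLockMode="GOVERNANCE",
    ObjectLockRetainUntilDate=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
```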

In addition to these management capabilities, use Amazon S3 features and other AWS services to monitor and control your S3 resources. Apply tags to S3 buckets to allocate costs across multiple business dimensions (such as cost centers, application names, or owners), then use AWS Cost Allocation Reports to view the usage and costs aggregated by the bucket tags. You can also use Amazon CloudWatch to track the operational health of your AWS resources and configure billing alerts for estimated charges that reach a user-defined threshold. Use AWS CloudTrail to track and report on bucket- and object-level activities, and configure S3 Event Notifications to trigger workflows and alerts or invoke AWS Lambda when a specific change is made to your S3 resources. S3 Event Notifications can be used to automatically transcode media files as they’re uploaded to S3, process data files as they become available, and synchronize objects with other data stores. Additionally, you can verify the integrity of data transferred to and from Amazon S3, and can access the checksum information at any time using the GetObjectAttributes S3 API or an S3 Inventory report. You can choose from four supported checksum algorithms (SHA-1, SHA-256, CRC32, or CRC32C) for data integrity checking on your upload and download requests depending on your application needs.
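For example, the boto3 sketch below (with placeholder names) requests a SHA-256 checksum at upload time and reads it back with GetObjectAttributes.

```python
# Illustrative checksum workflow; bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",
    Key="videos/raw/clip.mp4",
    Body=b"example bytes",
    ChecksumAlgorithm="SHA256",   # also supports CRC32, CRC32C, and SHA1
)

attrs = s3.get_object_attributes(
    Bucket="example-bucket",
    Key="videos/raw/clip.mp4",
    ObjectAttributes=["Checksum", "ObjectSize"],
)
print(attrs["Checksum"]["ChecksumSHA256"], attrs["ObjectSize"])
```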



Page 3

Amazon S3 was launched 15 years ago on Pi Day, March 14, 2006, as the first generally available AWS service. Over that time, data storage and usage have exploded, and the world has never been the same. Amazon S3 has virtually unlimited scalability and unmatched availability, durability, security, and performance.

Watch the videos below to hear from AWS leaders and experts as they take you back in time reviewing the history of AWS and the key decisions involved in the building and evolution of Amazon S3. 

15 years of Amazon S3 - Foundations of Cloud Infrastructure

15 years of Amazon S3 - Security is Job Zero

15 years of Amazon S3 - Building an Evolvable System

Watch the videos below to hear how Amazon S3 was architected for availability and durability inside AWS Regions and Availability Zones. Understand how to get started with Amazon S3, best practices, and how to optimize for costs, data protection, and more.

Beyond eleven nines - How Amazon S3 is built for durability

Architecting for high availability on Amazon S3

Back to the basics: Getting started with Amazon S3

Amazon S3 foundations: Best practices for Amazon S3

Amazon S3 storage classes primer

Optimize and manage data on Amazon S3

Amazon S3 Replication: For data protection & application acceleration

Backup to Amazon S3 and Amazon S3 Glacier

Modernizing your data archive with Amazon S3 Glacier

Using AWS, you gain the control and confidence you need to securely run your business with the most flexible and secure cloud computing environment available today. With AWS, you can improve your ability to meet core security and compliance requirements, such as data locality, protection, and confidentiality with our comprehensive services and features. 

Watch the security videos to learn about the foundations of data security featuring Amazon S3.

Foundations of data security

Securing Amazon S3 with guardrails and fine-grained access controls

15 years of Amazon S3 - Security is Job Zero

Managing access to your Amazon S3 buckets and objects

Demo - Amazon S3 security posture management and threat detection

Protecting your data in Amazon S3

Advanced networking with Amazon S3 and AWS PrivateLink

Proactively identifying threats & protecting sensitive data in S3

S3 Block Public Access overview and demo

Demo - Monitor your Amazon S3 inventory & identify sensitive data

Demo - Amazon S3 and VPC Endpoints

Serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. Watch the serverless and S3 videos to see how AWS Lambda can be the best compute option when working with data in Amazon S3.

Serverless on Amazon S3 - Introducing S3 Object Lambda

S3 Object Lambda - add code to Amazon S3 GET requests to process data

Building serverless applications with Amazon S3

S3 & Lambda - flexible pattern at the core of serverless applications

The best compute for your storage - Amazon S3 & AWS Lambda

Live coding - Uploading media to Amazon S3 from web & mobile apps

Amazon S3 is the largest and most performant object storage service for structured and unstructured data—and the storage service of choice to build a data lake. Watch these videos to learn how to build a data lake on Amazon S3, and how you can use native AWS services or leverage AWS partners to run data analytics, artificial intelligence (AI), machine learning (ML), high-performance computing (HPC), and media data processing applications to gain insights from your unstructured datasets.

Building modern data lakes on Amazon S3

Harness the power of your data with the AWS Lake House Architecture

Amazon S3 Strong Consistency

Build a data lake on Amazon S3

AWS offers a wide variety of services and partner tools to help you migrate your data to Amazon S3. Learn how AWS Storage Gateway and AWS DataSync can remove the friction out of the data migration process as you dive into the solutions and architectural considerations for accelerating data migration to the cloud from on-premises systems.

Accelerating your migration to S3

Managed file transfers to S3 over SFTP, FTPS, and FTP

Optimize and manage data on Amazon S3


Page 4

With Amazon S3’s wide range of features, you can quickly and centrally manage data at scale, enforce finely-tuned access policies, protect data from errors and threats, store data across numerous storage classes to optimize cost and performance, and audit and report on numerous aspects of your stored datasets (such as access requests, usage, and billing). Watch these videos for an introduction to some of the most used S3 features and visit the developer resources to get started.

Amazon S3 is designed to deliver 99.999999999% data durability. S3 automatically creates copies of all uploaded objects and stores them across at least three Availability Zones (AZs). This multi-AZ resiliency model protects your data against site-level failures. Watch the video to learn more about what the 11 9's of durability means for your data and global resiliency.


Learn more about product pricing

Pay only for what you use. There is no minimum fee.

Learn more 


Sign up for a free account

Instantly get access to the AWS Free Tier and start experimenting with Amazon S3. 

Sign up 


Start building in the console

Get started building with Amazon S3 in the AWS Console.

Get started 


Page 5

Amazon S3 is the most secure, durable, and scalable storage to build your data lake. S3 hosts tens of thousands of data lakes for customers such as Sysco, Bristol Myers Squibb, GE, and Siemens, who are using them to securely scale with their needs and to discover business insights every minute.


Georgia-Pacific built a central data lake based on Amazon S3, allowing it to efficiently ingest and analyze structured and unstructured data at scale.

"AWS enables us to source, store, enrich, and deliver data in a centralized way, which we couldn’t do previously. Using this new model, we believe we can run more production lines in a more predictable manner. Using AWS, we can ensure the highest quality product running at the fastest possible rate, so we can best serve our customers.”"

Steve Bakalar, Vice President of IT/Digital Transformation, Georgia-Pacific

Read the case study >>


Sysco consolidated its data into a single data lake built on Amazon S3 and Amazon S3 Glacier to run analytics on its data and gain business insights.

"We're using S3 as our main data lake repository and S3 Glacier for archival. Our data lake in S3 allows us to transform, data load, extract, and query the data directly on S3. With S3 Glacier, we were able to reduce storage costs by over 40%."

Wesley Story, VP Business Technology Americas - Sysco 

Read the case study >>


With a data lake based on Amazon S3 capable of collecting 6 TB of log data per day, the security staff at Siemens can perform forensic analysis on years' worth of data without compromising the performance or availability of the Siemens security incident and event management (SIEM) solution.

"Our goal was to use cloud-based artificial intelligence to process these huge amounts of data and make immediate decisions about how best to counter any detected threats," says Jan Pospisil, a senior data scientist at CDC. "Given the objective of an AI-enabled, high-speed, fully automated, and highly scalable platform, the decision to use AWS was easy."

Jan Pospisil, Senior Data Scientist - Siemens Cyber Defense Center

Read case study >>


Bristol Myers Squibb collects a variety of clinical data from external sources, such as academic medical centers, healthcare providers, and other collaborations. The wide assortment of sources for data results in broad variations in data formats. Amazon S3 & AWS Storage Gateway play central roles at Bristol Myers Squibb by moving scientific data into clinical data lakes.

“Bristol Myers Squibb has been using Amazon S3 and Storage Gateway for years, moving hundreds of terabytes of scientific data from our local premises to the AWS Cloud, daily. We have found AWS services to be efficient, reliable, and cost-effective, often bringing in more flexibility and scalability while reducing our dependency on hardware infrastructure.”

Oleg Moiseyenko, Senior Cloud Architect - Bristol Myers Squibb

Read the customer blog post >>


GE Healthcare is known for its medical imaging equipment and diagnostic imaging agents, but has—over the last several years—continued in its digital transformation. The company launched the GE Health Cloud to provide radiologists and other healthcare professionals with a single portal to access enterprise imaging applications to view, process, and easily share images and patient cases.  

“Every day, healthcare data flows through millions of medical devices, including more than 500,000 GE Healthcare medical imaging devices globally. Amazon S3 is the cornerstone of our solution, and it gives us the durability and reliability we need for storing critical data.”

Mitch Jackson, VP of cloud strategy and technology - GE Healthcare Digital 

Read the case study >>


INVISTA is one of the world’s largest integrated producers of chemical intermediates, polymers, and fibers. INVISTA data is no longer siloed at sites around the world because of an ambitious initiative to transform its operations by moving from business intelligence (BI) to artificial intelligence (AI). The data now resides in an Amazon Web Services (AWS) data lake. 

“With our old solution, it took us two months the first time we tried to get just one plant site's historical data into a data scientist's hands for analysis. Through our optimization and right-sizing efforts, migrating our data centers to AWS is saving us more than $2 million a year.”

Tanner Gonzalez, Analytics Leader - INVISTA

Read the case study >>

AWS offers a complete set of cloud storage services for backup and archiving. Amazon S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup.


Celgene is a global biopharmaceutical company that develops drug therapies for cancer and inflammatory disorders. Celgene runs many HPC workloads on hundreds of Amazon EC2 instances and uses Amazon S3 and Amazon S3 Glacier for durable long-term storage of petabytes of genomic data. 

"Some of our genomic files are very large in size, even after compression, so we need the robust storage capabilities of Amazon S3 and Amazon Glacier.” 

Lance Smith, Associate Director of IT - Celgene

Read the case study >>


Ryanair switched tape backups to the cloud using AWS Storage Gateway’s Tape Gateway and stored them in Amazon S3 Glacier and Amazon S3 Glacier Deep Archive for long-term storage. Ryanair eliminated the need for resources for ongoing support and management of physical tapes, and realized 65% savings in backup costs. Ryanair is Europe's largest airline group, flying more than 150 million passengers per year to more than 200 destinations on 2,400 daily flights.

Watch the case study video >>


Autodesk is a leader in 3D design, engineering, and entertainment software. Autodesk makes software for people who make things. Autodesk needed to back up 2.4 petabytes of data to Amazon S3 Glacier to reduce on-premises storage costs.

Autodesk decided to use Amazon S3 because of the low cost, pay-as-you-go model, high durability, and availability. It also has lifecycle management capabilities for long-term archival storage to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive. The goal was to move this dataset to S3 as soon as possible and eventually lifecycle it to Amazon S3 Glacier for long-term retention. This petabyte scale data migration from on-premises to S3 was accomplished swiftly with minimal effort and was completely self-managed with AWS DataSync.

Read the customer blog post >>


Nasdaq is home to over 4,000 company listings and is the market technology provider to over 100 marketplaces around the globe in 50 countries.

Nasdaq has some of its most critical data on Amazon S3 and S3 Glacier and AWS has been a trusted partner for many years. Watch this video to learn how AWS enables Nasdaq to meet their long-term data retention policies, lifecycle management needs for all data types, compliance and security requirements, and scaling demands in a highly regulated industry.

Watch the case study video >>


Growth is Ambra Health's hallmark. Since its founding, the medical data and image-management software-as-a-service (SaaS) provider has grown to manage more than five billion medical images.

“Using AWS, we can easily scale our medical image management platform to meet the needs of healthcare customers worldwide. It was very easy to deploy and be operational globally. We didn’t have to put a lot of resources into building new data centers and training people to manage them.”

Andrew Duckworth, Vice President of Business Development - Ambra Health

Read the case study >>

Many customers use Amazon S3 to store enterprise application data, as well as to store cloud native application production data. With Amazon S3, you can upload any amount of data and access it anywhere in order to deploy applications faster and reach more end users.


Nielsen is a global measurement and data analytics company, measuring what consumers watch and the advertising they’re exposed to. In 2019, Nielsen migrated its National Television Audience Measurement platform to AWS, and built a cloud-native local television rating application. To do so, the company built a data lake capable of storing 30 petabytes of data in Amazon S3, increasing their scale from measuring 40,000 households to more than 30 million households each day.

"We drastically increased the amount of data ingested, processed, and reported to our clients each day. Working with AWS and the services they provide allows us to do all of that at a much faster pace, with much greater velocity than we could have ever achieved before."

Scott Brown, general manager of TV & Audio - Nielsen

Read the case study >>


Fileforce provides more than 300 domestic and global corporate customers with cloud file storage and document management services. Customers use the Fileforce cloud-based application to securely store and manage their business content in the same folder structure as their on-premises file storage solutions. Fileforce began running its application on Amazon EC2 instances and using Amazon S3 for data storage.

Read the case study >>


3M Health Information Systems needed the agility to develop and deploy new applications faster, and determined that moving to the cloud was the best way to address its challenges of scalability, speed, and security.

3M HIS's applications run on hundreds of Amazon EC2 instances and use Amazon S3 for application data storage.

Read the case study >>


Myriota was founded to revolutionize the Internet of Things (IoT) by offering disruptively low-cost and long-battery-life global connectivity. Based in Adelaide, Australia, a focal point of the Australian space industry and home of the Australian Space Agency, Myriota has a growing portfolio of more than 20 patents, and support from major Australian and international investors. Drawing on its deep heritage in telecommunications research, Myriota achieved the world-first transmission of IoT data direct to a nanosatellite in 2013, and has made this ground-breaking technology commercially available for partners worldwide.

“We depend on Amazon S3 as a critical staging area where we hold data for processing the core of our data platform. AWS Transfer for SFTP has helped us simplify the security of sensor data coming over from our customers' widespread sites over the Myriota Network of satellites to Amazon S3.”

Andrew Beck, Director of Service Delivery – Myriota


Thousands of hiring managers worldwide rely on Lever software to find, nurture, and manage their job candidates in a central location. Since its founding, Lever has run its application environment on the AWS Cloud, taking advantage of services including Amazon EC2 for on-demand compute capacity, and Amazon S3 for storing customer data.

Read the case study >>


Monzo has grown from an idea to a fully regulated bank on the AWS Cloud. A bank that “lives on your smartphone,” Monzo has already handled £1 billion worth of transactions for half a million customers in the UK. Monzo runs more than 1,600 core-banking microservices on AWS, using services including Amazon EC2 and Amazon S3.

"By using AWS, we can run a bank with more than 4 million customers with just eight people on our infrastructure and reliability team."

Matt Heath, Distributed Systems Engineer - Monzo Bank

Read the case study >>

Customers use Amazon S3 to save on storage costs, and new storage classes and features continually help optimize storage costs even further. With S3 Intelligent-Tiering and S3 Glacier Deep Archive, customers can automate storage cost savings, or use the lowest-cost cloud storage across three availability zones.


Founded in 2008, Zalando is Europe’s leading online platform for fashion and lifestyle with over 32 million active customers. Amazon S3 is the cornerstone of the data infrastructure of Zalando, and they have utilized S3 Storage Classes to optimize storage costs.

"We are saving 37% annually in storage costs by using Amazon S3 Intelligent-Tiering to automatically move objects that have not been touched within 30 days to the infrequent-access tier."

Max Schultze, Lead Data Engineer - Zalando

Read the customer blog post >>


Teespring, an online platform that lets creators turn unique ideas into custom merchandise, experienced rapid business growth, and the company’s data also grew exponentially—to a petabyte—and continued to increase. Like many cloud native companies, Teespring addressed the problem by using AWS, specifically storing data on Amazon S3. 

By using Amazon S3 Glacier and S3 Intelligent-Tiering, Teespring now saves more than 30 percent on its monthly storage costs.

"Just as Teespring simplified the process for bringing physical products to market, AWS simplified how businesses approach cloud and infrastructure. Because of the services AWS provides and the ease with which we can implement them."

James Brady, Vice President of Engineering - Teespring

Read the customer blog post >>


AppsFlyer, a marketing analytics and attribution platform, built its data lake on Amazon S3 to collect terabytes of data each day, enabling the company to improve their analytics products and increase customer satisfaction. AppsFlyer further optimizes its storage costs using Amazon S3 Intelligent-Tiering.

Watch the case study video >>


Using AWS, SimilarWeb manages large volumes of data, with which its data scientists build algorithms to improve its market-intelligence platform. By using Amazon S3 Intelligent-Tiering, SimilarWeb is able to democratize that data for its employees and save 20 percent on storage costs.

Watch the case study video >>


Photobox wanted to get out of the business of owning and maintaining its own IT infrastructure so it could redeploy resources toward innovation in artificial intelligence and other areas to create a better customer experience. Photobox is an online, personalized photo-products company that serves millions of customers each year in over ten markets.

By migrating from its EMC Isilon and IBM Cleversafe on-premises storage arrays to Amazon S3 using AWS Snowball Edge, Photobox saved a significant amount on storage costs for its 10 PB of photo storage.

Watch the case study video >>


Union Bank of the Philippines (UnionBank) aims to improve what it calls “prosperity inclusion,” by attracting a total of 50 million customers by the year 2020. Key to this objective is its digital transformation on AWS.

Since moving to Amazon S3 and S3 Glacier, the bank is saving 20 million pesos (US$380,500) annually, a figure that will double once it completely migrates its Tier 1 workloads. This excludes savings in electricity for cooling backup tapes and the reduction in staff hours required to monitor, change, and store the tapes.

Read the case study >>


Page 6


Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts environment to meet local data processing and data residency needs. Using the S3 APIs and features, S3 on Outposts makes it easy to store, secure, tag, retrieve, report on, and control access to the data on your Outpost. AWS Outposts is a fully managed service that extends AWS infrastructure, services, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience.

In addition to helping you meet data residency requirements, you can use S3 on Outposts to satisfy demanding performance needs by keeping data close to on-premises applications. S3 on Outposts provides a new Amazon S3 storage class, named ‘S3 Outposts’, which uses the same S3 APIs, and is designed to durably and redundantly store data across multiple devices and servers on your Outposts. You can add 26 TB, 48 TB, 96 TB, 240 TB, or 380 TB of S3 storage capacity to your Outposts (the 26 TB S3 option is only supported on Outposts with 11 TB EBS configured). You can create up to 100 buckets per AWS account on each Outpost. AWS DataSync, a service that makes it easy to move data to and from AWS Storage services, supports S3 on Outposts, so you can automate data transfer between your Outposts and AWS Regions, choosing what to transfer, when to transfer, and how much network bandwidth to use.

S3 on Outposts: Bringing S3 on-premises (4:03)

S3 on Outposts makes it easy to deploy object storage on-premises because your Outpost comes delivered with S3 capacity installed and is monitored, patched, and updated by AWS. Capacity can be selected in 26 TB, 48 TB, 96 TB, 240 TB, or 380 TB. With S3 on Outposts you can reduce the time, resources, operational risk, and maintenance downtime required for managing storage.

Process and securely store data locally in your on-premises environment and transfer data to S3 in an AWS Region for further processing or archival. S3 on Outposts provides on-premises object storage to minimize data transfers and buffer from network variations, while providing you the ability to easily transfer data between Outposts and AWS Regions by using AWS DataSync.

S3 on Outposts uses the same S3 APIs on-premises as in the cloud for features like policy-based access control, encryption, lifecycle expiration actions, and tagging. Unlike other hybrid solutions that require use of different APIs, manual software updates, and purchase of third-party hardware and support, S3 on Outposts delivers a consistent hybrid experience.

Meet data residency, regulatory, or compliance requirements by storing and retrieving your data on-premises using S3 on Outposts, while still taking advantage of the agility, cost model, and innovation that AWS delivers. S3 on Outposts lets you keep data on an Outpost on-premises within a country, state/province, or location where there is no AWS Region today.

For on-premises applications that require high-throughput local processing, such as medical imaging in hospitals, autonomous vehicle data capture, and manufacturing processes, S3 on Outposts can process and store data locally to satisfy these demanding workloads. S3 on Outposts provides local object storage to minimize data transfers and buffer from network variations, and with DataSync you can easily schedule data transfers between an AWS Region and an Outpost.

Customers building or testing applications on-premises before eventually moving them to an AWS Region can now minimize the changes required to their applications. S3 on Outposts provides an intermediate step in the cloud migration journey, because it delivers the ability to build portable applications on Outposts on-premises that can easily be moved to AWS.

To use S3 on Outposts, you select an AWS Outposts configuration that supports S3 on Outposts or add S3 storage to an existing Outpost. Next, you use the Outposts console or API to create buckets and S3 Access Points on your Outpost.

You then use the AWS CLI or SDK to store and retrieve objects from the S3 buckets on your Outposts just as you do for S3 buckets in an AWS Region. You can view and manage your buckets from the Outposts console and use AWS DataSync to manage object transfers between an Outpost and Amazon S3 in an AWS Region.
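A rough boto3 sketch of that workflow is shown below; the Outpost ID, account ID, VPC ID, and resource names are placeholders, and it assumes an SDK version that can route S3 on Outposts Access Point ARNs to the s3-outposts endpoint from within the Outposts VPC.

```python
# Illustrative S3 on Outposts sketch; all identifiers are placeholders.
import boto3

s3control = boto3.client("s3control")

# Create an S3 on Outposts bucket through the S3 Control API.
bucket = s3control.create_bucket(
    Bucket="example-outposts-bucket",
    OutpostId="op-0123456789abcdef0",
)

# Create an Access Point tied to a VPC on the Outpost.
ap = s3control.create_access_point(
    AccountId="111122223333",
    Name="example-outposts-ap",
    Bucket=bucket["BucketArn"],
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)

# Store and retrieve objects with the regular S3 client, using the
# Access Point ARN in place of a bucket name.
s3 = boto3.client("s3")
s3.put_object(Bucket=ap["AccessPointArn"], Key="sensor/readings.json", Body=b"{}")
```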

S3 on Outposts now supports sharing across multiple accounts

Read the what's new post to learn how to share across accounts.

S3 on Outposts supports access for applications outside of the VPC 

Read the what's new post covering S3 on Outposts feature update for access for applications outside the Outposts VPC. 

Amazon S3 on Outposts now available

Read the AWS News blog post with a demo and details on S3 on Outposts

What's New: S3 on Outposts

Read the what's new post covering S3 on Outposts general availability. 


Check out the S3 on Outposts FAQs

Learn more about S3 on Outposts by reading the FAQs.

Learn more 


Learn more about AWS Outposts

Instantly get access to the AWS Free Tier. 

Learn more 


Get started with S3 on Outposts

Get started with AWS Outposts in the AWS Management Console.

Sign up