How can a company protect their Amazon S3 data from a regional disaster?

S3 Replication is a fully managed feature of Amazon Simple Storage Service (S3). It can automatically replicate S3 objects to help you protect your data, reduce data-access latency, and achieve compliance with regulatory requirements.

Here are the two S3 storage replication options:

  • Cross-Region Replication (CRR)—copies S3 objects to buckets in different AWS Regions, which are geographically separate clusters of AWS data centers.
  • Same-Region Replication (SRR)—copies S3 objects between buckets within the same AWS Region. Each Region comprises multiple Availability Zones (AZs), which are physically separate data centers.

AWS also offers Replication Time Control (RTC), which is designed to replicate 99.99% of objects within 15 minutes and is backed by a Service Level Agreement (SLA).

Amazon S3 Cross-Region Replication (CRR) and Same-Region Replication (SRR)

AWS S3 Cross-Region Replication (CRR)

CRR can help you reduce latency, maintain compliance, enforce security, and implement disaster recovery. The feature lets you replicate objects, including their metadata and object tags, into other AWS Regions.

How CRR works

You can configure S3 CRR to copy objects into one or more buckets in a different AWS Region, for improved resilience. The feature lets you set up replication at the bucket level, a shared-prefix level, or the individual-object level. You can manage replication for individual objects using object tags.

CRR Use Cases

Here are key use cases for S3 CRR:

  • Compliance—by default, S3 stores data across multiple Availability Zones (AZs) within a single Region. However, compliance requirements often mandate storing copies of data at greater geographical distances, or in specific locations—CRR can help achieve this.
  • Latency performance—customers and end-users are sometimes located in several geographic locations. CRR can help you improve their user experience by minimizing latency for data access.

Amazon S3 Same-Region Replication (SRR)

Amazon S3 Same-Region Replication (SRR) provides fully automated replication of S3 objects to another bucket within the same AWS Region. It is available in all AWS commercial Regions as well as AWS GovCloud (US).

How SRR works

SRR uses asynchronous replication, meaning that objects are not copied to the destination bucket immediately after they are created or modified. You can configure SRR using the S3 Management Console, API, or SDK.

SRR identifies objects for which you requested replication at the prefix, bucket, or tag level, and starts replication. You can set the AWS account that owns the original copy to own the replicated object. Alternatively, you can use a different account for the copies to protect them from accidental deletion.

SRR Use Cases

Here are key use cases for SRR:

  • Aggregate logs into a single bucket—in some cases, you may need to store logs in several buckets or across different accounts. SRR lets you easily replicate these logs into one bucket located in one Region. You can then process logs in one location.
  • Replication between developer and test accounts—sometimes, you may need to share data between test and developer accounts. SRR lets you use rules to replicate objects and metadata between multiple accounts.
  • Abide by data sovereignty laws—some laws require storing data in separate AWS accounts and prohibit moving the data out of a specific Region. SRR lets you back up critical data even when compliance requirements do not allow moving the data across Regions.


Related content: Read our guide to S3 buckets

How to Set Up AWS S3 Replication

You can set up S3 replication from one bucket to another by adding a replication rule to your source bucket. Here is a quick step-by-step tutorial on how to set up this kind of replication:

1. Go to the AWS S3 management console, sign in to your account, and select the name of the source bucket.

2. Go to the Management tab in the menu, and choose the Replication option. Next, choose Add rule.

Image Source: AWS

3. Under Set source, choose the Entire bucket option. Note that when replicating buckets encrypted with AWS Key Management Service (KMS), this step also requires choosing the correct key.

Image Source: AWS

4. Under Set destination, choose the Buckets in this account option. If you wish to replicate to another account, choose the Buckets in another account option instead and specify a bucket policy for the destination.

5. To change the storage class of the object after replication, go to the Destination options configuration, and select a different storage class for the destination objects.

6. You can also set the replication time. Go to Replication time control settings and select the Replication time control option. This feature is designed to replicate 99.99% of new objects within 15 minutes and is backed by a service level agreement (SLA). Note that RTC incurs additional costs.

7. Step 3 (Configure options) lets you create a new AWS Identity and Access Management (IAM) role. However, if you already have an existing role with replication permissions, you can use it instead of creating a new one.

8. Go to the Status configuration and choose Enabled. To create the rule, choose Next. The replication should now start working. To verify this, you can wait several minutes and then check the destination bucket.
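
If you prefer to script this setup instead of using the console, the same rule can be created with an AWS SDK. Below is a minimal sketch using boto3 (the AWS SDK for Python); the bucket names and IAM role ARN are placeholders, and versioning must already be enabled on both buckets:

    import boto3

    s3 = boto3.client("s3")

    # Replicate the entire source bucket to a destination bucket,
    # changing the storage class of the replicas (step 5 above).
    s3.put_bucket_replication(
        Bucket="my-source-bucket",  # hypothetical source bucket
        ReplicationConfiguration={
            # IAM role that grants S3 permission to replicate on your behalf
            "Role": "arn:aws:iam::123456789012:role/my-replication-role",
            "Rules": [
                {
                    "ID": "replicate-entire-bucket",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # an empty filter applies the rule to the whole bucket
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::my-destination-bucket",
                        "StorageClass": "STANDARD_IA",  # optional
                    },
                }
            ],
        },
    )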

Limitations of AWS S3 Replication using Replication Rule

Here are key limitations of replication rules:

  • Difficult to set up for sources outside S3—it is relatively easy to set up S3 replication for S3 sources. However, configuring replication from sources outside S3—elsewhere in AWS or in another cloud—may require writing custom modules.
  • Limited ability to apply transformations—enterprises often need to transform data before replicating it, but replication rules copy objects as-is.
  • Non-transparent pricing—it can be difficult to understand how Replication Time Control is priced and implemented, and this may result in operational overhead.

S3-Compatible Storage On-Premises with Cloudian

Cloudian® HyperStore® is a massive-capacity object storage device that is fully compatible with Amazon S3. It can store up to 1.5 petabytes in a 4U chassis, allowing you to store up to 18 petabytes in a single data center rack. HyperStore comes with fully redundant power and cooling, and performance features including 1.92TB SSD drives for metadata and 10Gb Ethernet ports for fast data transfer.

HyperStore is an object storage solution you can plug in and start using with no complex deployment. It also offers advanced data protection features, supporting use cases like compliance, healthcare data storage, disaster recovery, ransomware protection and data lifecycle management.

Learn more about Cloudian® HyperStore®.

Q: How do I get started with S3 CloudWatch Metrics?

You can use the Amazon Web Services Management Console to enable the generation of 1-minute CloudWatch metrics for your S3 bucket, or configure filters for the metrics using a prefix, object tag, or access point. Alternatively, you can call the S3 PUT Bucket Metrics API to enable and configure publication of S3 storage metrics. Storage metrics will be available in CloudWatch within 15 minutes of being enabled.
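
As an illustration, the same configuration can also be applied with an SDK. Here is a minimal sketch using boto3 (Python); the bucket name, filter ID, and prefix are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Enable 1-minute CloudWatch request metrics for objects under a prefix.
    s3.put_bucket_metrics_configuration(
        Bucket="my-bucket",    # hypothetical bucket name
        Id="big-data-prefix",  # filter ID, referenced by CloudWatch dimensions
        MetricsConfiguration={
            "Id": "big-data-prefix",
            "Filter": {"Prefix": "BigData/SparkCluster"},
        },
    )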

Q: Can I align storage metrics to my applications or business organizations?

Yes, you can configure S3 CloudWatch metrics to generate metrics for your S3 bucket or configure filters for the metrics using a prefix or object tag. For example, you can monitor a Spark application that accesses data under the prefix “/Bucket01/BigData/SparkCluster” as metrics filter 1, and define a second metrics filter with the tag “Dept,1234” as metrics filter 2. An object can be a member of multiple filters; e.g., an object within the prefix “/Bucket01/BigData/SparkCluster” that also carries the tag “Dept,1234” will be in both metrics filters 1 and 2. In this way, metrics filters can be aligned to business applications, team structures, or organizational budgets, allowing you to monitor and alert on multiple workloads separately within the same S3 bucket.

Q: What alarms can I set on my storage metrics?

You can use CloudWatch to set thresholds on any of the storage metrics counts, timers, or rates and trigger an action when the threshold is breached. For example, you can set a threshold on the count of 4xx error responses and, when at least 3 data points are above the threshold, fire a CloudWatch alarm to alert a DevOps engineer.
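
As a sketch of such an alarm with boto3 (Python), where the bucket name, filter ID, threshold, and SNS topic ARN are all hypothetical:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when the 4xx error count breaches the threshold
    # for at least 3 consecutive 1-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="s3-4xx-errors",
        Namespace="AWS/S3",
        MetricName="4xxErrors",
        Dimensions=[
            {"Name": "BucketName", "Value": "my-bucket"},
            {"Name": "FilterId", "Value": "big-data-prefix"},
        ],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=3,
        Threshold=10.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )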

Q: How am I charged for using S3 CloudWatch Metrics?

S3 CloudWatch Metrics are priced as custom metrics for Amazon CloudWatch. Please see the Amazon CloudWatch pricing page for general information about S3 CloudWatch metrics pricing.

Q: What are Object Tags?

S3 Object Tags are key-value pairs applied to S3 objects that can be created, updated, or deleted at any time during the lifetime of the object. With these, you have the ability to create Identity and Access Management (IAM) policies, set up S3 Lifecycle policies, and customize storage metrics. These object-level tags can then drive transitions between storage classes and expire objects in the background.

Q: How do I apply Object Tags to my objects?

You can add tags to new objects when you upload them or you can add them to existing objects. Up to ten tags can be added to each S3 object and you can use either the Amazon Web Services Management Console, the REST API, the Amazon CLI, or the Amazon SDKs to add object tags.
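
As a quick sketch with boto3 (Python), using hypothetical bucket and key names, adding tags to an existing object looks like this:

    import boto3

    s3 = boto3.client("s3")

    # Apply a tag set to an existing object (up to 10 tags per object).
    s3.put_object_tagging(
        Bucket="my-bucket",
        Key="reports/q1.csv",
        Tagging={
            "TagSet": [
                {"Key": "Dept", "Value": "1234"},
                {"Key": "Project", "Value": "analytics"},
            ]
        },
    )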

Q: Why should I use Object Tags?

Object Tags are a new tool you can use to enable simple management of your S3 storage. With the ability to create, update, and delete tags at any time during the lifetime of your object, your storage can adapt to the needs of your business. These tags allow you to control access to objects tagged with specific key-value pairs, allowing you to further secure confidential data for only a select group or user. Object tags can also be used to label objects that belong to a specific project or business unit, which could be used in conjunction with lifecycle policies to manage transitions to the S3 Standard – Infrequent Access and Amazon S3 Glacier storage classes.

Q: How can I update the Object Tags on my objects?

Object Tags can be changed at any time during the lifetime of your S3 object. You can use the Amazon Web Services Management Console, the REST API, the Amazon CLI, or the Amazon SDKs to change your object tags. Note that all changes to tags outside of the Amazon Web Services Management Console are made to the full tag set. If you have five tags attached to a particular object and want to add a sixth, you need to include the original five tags in that request.
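
Because a PUT replaces the full tag set, a script that adds one tag must first read the existing tags. A minimal boto3 (Python) sketch, with hypothetical names:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "reports/q1.csv"

    # Read the current tag set, append a new tag, and write the full set back.
    tags = s3.get_object_tagging(Bucket=bucket, Key=key)["TagSet"]
    tags.append({"Key": "Reviewed", "Value": "true"})
    s3.put_object_tagging(Bucket=bucket, Key=key, Tagging={"TagSet": tags})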

Q: Will my Object Tags be replicated if I use Cross-Region Replication?

Object Tags can be replicated across regions using Cross-Region Replication. For more information about setting up Cross-Region Replication, please visit How to Set Up Cross-Region Replication in the Amazon S3 Developer Guide.

For customers with Cross-Region Replication already enabled, new permissions are required in order for tags to replicate. For more information on the policies required, please visit "How to Set Up Cross-Region Replication" in the Amazon S3 Developer Guide.

Q: How much do Object Tags cost?

Please see the Amazon S3 pricing page for more information.

Q: What is Lifecycle Management?

S3 Lifecycle management provides the ability to define the lifecycle of your objects with a predefined policy and reduce your cost of storage. You can set a lifecycle transition policy to automatically migrate Amazon S3 objects to Standard - Infrequent Access (Standard - IA), Amazon S3 Glacier Flexible Retrieval, and/or Amazon S3 Glacier Deep Archive based on the age of the data. You can also set lifecycle expiration policies to automatically remove objects based on the age of the object, as well as a policy for multipart upload expiration, which expires incomplete multipart uploads based on the age of the upload.

Q: How do I set up a lifecycle management policy?

You can set up and manage lifecycle policies in the S3 Console, S3 REST API, Amazon SDKs, or Amazon Command Line Interface (CLI). You can specify the policy at the prefix or at the bucket level.

Q: How much does it cost to use lifecycle management?

There is no additional cost to set up and apply lifecycle policies. A transition request is charged per object when an object becomes eligible for transition according to the lifecycle rule.

Q: What can I do with Lifecycle Management Policies?

As data matures, it can become less critical and less valuable, and subject to compliance requirements. Amazon S3 lifecycle policies help you automate data migration between the S3 storage classes. For example, you can set infrequently accessed objects to move into a lower-cost storage tier (like Standard-Infrequent Access) after a period of time. After another period, objects can be moved into Amazon S3 Glacier Flexible Retrieval for archive and compliance, and eventually deleted. These rules run in the background, lowering storage costs and simplifying management efforts. Lifecycle policies also support good stewardship practices, removing objects and attributes that are no longer needed, to manage cost and optimize performance.

Q: How can I use Amazon S3’s lifecycle policy to lower my Amazon S3 storage costs?

With Amazon S3’s lifecycle policies, you can configure your objects to be migrated to Standard - Infrequent Access (Standard - IA), archived to Amazon S3 Glacier Flexible Retrieval or Amazon S3 Glacier Deep Archive, or deleted after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period, a transition to Standard - IA or Amazon S3 Glacier Flexible Retrieval, and/or an expiration. For example, you could create a rule that archives into Amazon S3 Glacier Flexible Retrieval all objects with the common prefix “logs/” 30 days from creation, and expires these objects after 365 days from creation. You can also create a separate rule that only expires all objects with the prefix “backups/” 90 days from creation.

Lifecycle policies apply to both existing and new S3 objects, ensuring that you can optimize storage and maximize cost savings for all current data and any new data placed in S3, without time-consuming manual data review and migration.

Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For the time period, provide the creation date (e.g. January 31, 2015) or the number of days from the creation date (e.g. 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes. Finally, you may use lifecycle policies to automatically expire incomplete uploads, preventing billing on partial file uploads.
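
The example rules above can also be expressed through an SDK. Below is a minimal sketch with boto3 (Python), using a hypothetical bucket name; note that this call replaces the bucket’s entire lifecycle configuration, so all desired rules must be included in one request:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    # Archive "logs/" objects after 30 days, delete after 365.
                    "ID": "archive-then-expire-logs",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "logs/"},
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                },
                {
                    # Delete "backups/" objects after 90 days.
                    "ID": "expire-backups",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "backups/"},
                    "Expiration": {"Days": 90},
                },
            ]
        },
    )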

Q: How can I configure my objects to be deleted after a specific time period?

You can set a lifecycle expiration policy to remove objects from your buckets after a specified number of days. You can define the expiration rules for a set of objects in your bucket through the Lifecycle Configuration policy that you apply to the bucket. Each Object Expiration rule allows you to specify a prefix and an expiration period. The prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). For expiration period, provide the number of days from creation date (i.e. age) after which you want your objects removed. You may create multiple rules for different prefixes. For example, you could create a rule that removes all objects with the prefix “logs/” 30 days from creation, and a separate rule that removes all objects with the prefix “backups/” 90 days from creation.

After an Object Expiration rule is added, the rule is applied to objects that already exist in the bucket as well as new objects added to the bucket. Once objects are past their expiration date, they are identified and queued for removal. You will not be billed for storage for objects on or after their expiration date, though you may still be able to access those objects while they are in queue before they are removed. As with standard delete requests, Amazon S3 doesn’t charge you for removing objects using Object Expiration. You can set Expiration rules for your versioning-enabled or versioning-suspended buckets as well.

Q: Why would I use a lifecycle policy to expire incomplete multipart uploads?

The lifecycle policy that expires incomplete multipart uploads allows you to save on costs by limiting the time incomplete multipart uploads are stored. For example, if your application uploads several multipart object parts but never commits them, you will still be charged for that storage. This policy lowers your S3 storage bill by automatically removing incomplete multipart uploads and the associated storage after a predefined number of days.
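
Such a rule might look like the following boto3 (Python) sketch, with a hypothetical bucket name and a 7-day window; as above, this call overwrites the bucket’s existing lifecycle rules, so include any other rules alongside it:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    # Abort multipart uploads still incomplete 7 days after initiation.
                    "ID": "abort-stale-multipart-uploads",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }
            ]
        },
    )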

Q: Can I set up Amazon S3 Event Notifications to send notifications when S3 Lifecycle transitions or expires objects?

Yes, you can set up Amazon S3 Event Notifications to notify you when S3 Lifecycle transitions or expires objects. For example, you can send S3 Event Notifications to an Amazon SNS topic, Amazon SQS queue, or Amazon Lambda function when S3 Lifecycle moves objects to a different S3 storage class or expires objects.
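
For illustration, a notification configuration along these lines could be applied with boto3 (Python); the bucket name and SNS topic ARN are hypothetical, and the topic’s access policy must allow S3 to publish to it:

    import boto3

    s3 = boto3.client("s3")

    # Publish lifecycle transition and expiration events to an SNS topic.
    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",
        NotificationConfiguration={
            "TopicConfigurations": [
                {
                    "TopicArn": "arn:aws:sns:us-east-1:123456789012:lifecycle-events",
                    "Events": [
                        "s3:LifecycleTransition",
                        "s3:LifecycleExpiration:*",
                    ],
                }
            ]
        },
    )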

Q: What is Amazon S3 Replication?

Amazon S3 Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same Amazon Web Services account or by different accounts. You can replicate new objects written into the bucket to one or more destination buckets between different Amazon Web Services China Regions (S3 Cross-Region Replication), or within the same Amazon Web Services Region (S3 Same-Region Replication). You can also replicate existing bucket contents (S3 Batch Replication), including existing objects, objects that previously failed to replicate, and objects replicated from another source.

Q: What is Amazon S3 Cross-Region Replication (CRR)?

CRR is an Amazon S3 feature that automatically replicates data between buckets across different Amazon Web Services China Regions. With CRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use CRR to provide lower-latency data access to users within the Amazon Web Services China Regions. CRR can also help if you have a compliance requirement to store copies of data hundreds of miles apart. You can use CRR to change account ownership for the replicated objects to protect data from accidental deletion. To learn more about CRR, please visit the replication developer guide.

Q: What is Amazon S3 Same-Region Replication (SRR)?

SRR is an Amazon S3 feature that automatically replicates data between buckets within the same Amazon Web Services Region. With SRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use SRR to create one or more copies of your data in the same Amazon Web Services Region. SRR helps you address data sovereignty and compliance requirements by keeping a copy of your data in a separate Amazon Web Services account in the same region as the original. You can use SRR to change account ownership for the replicated objects to protect data from accidental deletion. You can also use SRR to easily aggregate logs from different S3 buckets for in-region processing, or to configure live replication between test and development environments. To learn more about SRR, please visit the replication developer guide.

Q: What is Amazon S3 Batch Replication?

Amazon S3 Batch Replication replicates existing objects between buckets. You can use S3 Batch Replication to backfill a newly created bucket with existing objects, retry objects that were previously unable to replicate, migrate data across accounts, or add new buckets to your data lake. You can get started with S3 Batch Replication with just a few clicks in the S3 console or a single API request.

Q: How do I enable Amazon S3 Replication (Cross-Region Replication and Same-Region Replication)?

Amazon S3 Replication (CRR and SRR) is configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags. You add a replication configuration on your source bucket by specifying a destination bucket in the same or different Amazon Web Services China Regions for replication.

You can use the S3 Management Console, API, Amazon CLI, Amazon SDKs, or Amazon CloudFormation to enable replication. Versioning must be enabled for both the source and destination buckets to enable replication.
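
As a minimal sketch of the versioning prerequisite with boto3 (Python), assuming hypothetical bucket names (in practice, use a client in each bucket’s Region):

    import boto3

    s3 = boto3.client("s3")

    # Versioning is required on both the source and destination buckets.
    for bucket in ("my-source-bucket", "my-destination-bucket"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )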

Q: How do I use S3 Batch Replication?

You would first need to enable S3 Replication at the bucket level; see the previous question for how to do so. You can then initiate an S3 Batch Replication job in the S3 console after creating a new replication configuration, changing a replication destination in a replication rule from the replication configuration page, or from the S3 Batch Operations Create Job page. Alternatively, you can initiate an S3 Batch Replication job via the Amazon CLI or SDKs.
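
As a hedged sketch of the SDK route, the following boto3 (Python) call starts a Batch Replication job whose manifest is generated automatically from the source bucket’s replication configuration; the account ID, role ARN, and bucket name are hypothetical:

    import boto3

    s3control = boto3.client("s3control")

    response = s3control.create_job(
        AccountId="123456789012",
        Operation={"S3ReplicateObject": {}},
        Priority=1,
        RoleArn="arn:aws:iam::123456789012:role/batch-replication-role",
        # Generate the manifest from objects eligible for replication,
        # instead of supplying a CSV manifest.
        ManifestGenerator={
            "S3JobManifestGenerator": {
                "SourceBucket": "arn:aws:s3:::my-source-bucket",
                "EnableManifestOutput": False,
                "Filter": {"EligibleForReplication": True},
            }
        },
        Report={"Enabled": False},
        ConfirmationRequired=False,
    )
    print(response["JobId"])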

Q: Can I use S3 Replication with S3 Lifecycle rules?

With S3 Replication, you can establish replication rules to make copies of your objects into another storage class, in the same Region or a different Region within China. Lifecycle actions are not replicated; if you want the same lifecycle configuration applied to both source and destination buckets, enable the same lifecycle configuration on both.

For example, you can configure a lifecycle rule to migrate data from the S3 Standard storage class to the S3 Standard-IA on the destination bucket.

With S3 Batch Replication, in addition to Lifecycle actions not being replicated from the source, we recommend pausing Lifecycle on the destination bucket while the Batch Replication job is active, if there are active Lifecycle rules on the destination. This is because certain Lifecycle policies depend on the state of the version stack to transition objects. While Batch Replication is still replicating objects, the version stack in the destination bucket will differ from the one in the source bucket, and Lifecycle can incorrectly rely on the incomplete version stack to transition objects.

You can find more information about lifecycle configuration and replication on the S3 Replication developer guide.

Q: Can I use S3 Replication to replicate to more than one destination bucket?

Yes. S3 Replication allows customers to replicate their data to multiple destination buckets in the same or different Amazon Web Services China Regions. When setting up replication, you simply specify the new destination bucket in your existing replication configuration or create a new replication configuration with multiple destination buckets. For each new destination you specify, you have the flexibility to choose the storage class of the destination bucket, the encryption type, replication metrics and notifications, and other properties.

Q: Can I use S3 Replication to set up two-way replication between S3 buckets?

Yes. To set up two-way replication, you create a replication rule from S3 bucket A to S3 bucket B and set up another replication rule from S3 bucket B to S3 bucket A. When setting up the replication rule from S3 bucket B to S3 bucket A, please enable Sync Replica Modifications to replicate replica metadata changes. With replica modification sync, you can easily replicate metadata changes like object access control lists (ACLs), object tags, or object locks on the replicated objects.
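
A minimal boto3 (Python) sketch of the rule on bucket B follows; the bucket names and role ARN are hypothetical, and a mirror-image rule would be applied to bucket A:

    import boto3

    s3 = boto3.client("s3")

    # Replication rule from bucket B back to bucket A,
    # with replica modification sync enabled.
    s3.put_bucket_replication(
        Bucket="bucket-b",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/replication-role",
            "Rules": [
                {
                    "ID": "b-to-a",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    # Replicate metadata changes (ACLs, tags, locks) made on replicas.
                    "SourceSelectionCriteria": {
                        "ReplicaModifications": {"Status": "Enabled"}
                    },
                    "Destination": {"Bucket": "arn:aws:s3:::bucket-a"},
                }
            ],
        },
    )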

Q: Are objects securely transferred and encrypted throughout replication process?

Yes, objects remain encrypted throughout the replication process. The encrypted objects are transmitted securely via SSL from the source region to the destination region (CRR) or within the same region (SRR).

Q: Can I use replication across Amazon Web Services China accounts to protect against malicious or accidental deletion?

Yes, for CRR and SRR, you can set up replication across Amazon Web Services China accounts to store your replicated data in a different account in the target region. You can use Ownership Overwrite in your replication configuration to maintain a distinct ownership stack between source and destination, and grant destination account ownership to the replicated storage.

Q: Can I replicate delete markers from one bucket to another?

Yes, you can replicate delete markers from source to destination if you have delete marker replication enabled in your replication configuration. When you replicate delete markers, Amazon S3 will behave as if the object was deleted in both buckets. You can enable delete marker replication for a new or existing replication rule. You can apply delete marker replication to the entire bucket or to Amazon S3 objects that have a specific prefix, with prefix based replication rules. Amazon S3 Replication does not support delete marker replication for object tag based replication rules. To learn more about enabling delete marker replication see Replicating delete markers from one bucket to another.

Q: Can I replicate data from other Amazon Web Services Regions to China? Can a customer replicate from one China Region bucket outside of China Regions?

No, Amazon S3 Replication is not available between Amazon Web Services China Regions and Amazon Web Services Regions outside of China. You can only replicate within the Amazon Web Services China Regions.

Q: Can I replicate existing objects?

Yes, you can use S3 Batch Replication to replicate existing objects between buckets.

Q: Can I retry replication if objects fail to replicate initially?

Yes, you can use S3 Batch Replication to retry objects that fail to replicate initially.

Q: What encryption types does S3 Replication support?

S3 Replication supports all encryption types that S3 offers. S3 offers both server-side encryption and client-side encryption – the former requests S3 to encrypt the objects for you, and the latter is for you to encrypt data on the client-side before uploading it to S3. For server-side encryption, S3 offers server-side encryption with Amazon S3-managed keys (SSE-S3), server-side encryption with KMS keys stored in Amazon Key Management Service (SSE-KMS), and server-side encryption with customer-provided keys (SSE-C). For further details on these encryption types and how they work, visit the S3 documentation on using encryption.

Q: What is the pricing for S3 Replication (CRR and SRR)?

You pay the Amazon S3 charges for storage, copy requests, and for CRR you pay the inter-region data transfer OUT for the replicated copy of data to the destination region. Copy requests and inter-region data transfer are charged based on the source region. Storage for replicated data is charged based on the target region. If the source object is uploaded using the multipart upload feature, then it is replicated using the same number of parts and part size. For example, a 100 GB object uploaded using the multipart upload feature (800 parts of 128 MB each) will incur request cost associated with 802 requests (800 Upload Part requests + 1 Initiate Multipart Upload request + 1 Complete Multipart Upload request) when replicated. After replication, the 100 GB will incur storage charges based on the destination region. Please visit the S3 pricing page for pricing. 

If you are using S3 Batch Replication to replicate objects across accounts, you will incur the S3 Batch Operations charges, in addition to the replication PUT requests and Data Transfer OUT charges (note that S3 RTC is not applicable to Batch Replication). The Batch Operations charges include the Job and Object charges, which are based respectively on the number of jobs and the number of objects processed.

Q: What is Amazon S3 Replication Time Control?

Amazon S3 Replication Time Control provides predictable replication performance and helps you meet compliance or business requirements. S3 Replication Time Control is designed to replicate most objects in seconds, and 99.99% of objects within 15 minutes. S3 Replication Time Control is backed by a Service Level Agreement (SLA) commitment that 99.9% of objects will be replicated in 15 minutes for each replication region pair during any billing month. Replication Time Control works with all S3 Replication features. To learn more, please visit the replication developer guide.

Q: How do I enable Amazon S3 Replication Time Control?

You can enable S3 Replication Time Control as an option for each replication rule. You can create a new S3 Replication policy with S3 Replication Time Control, or enable the feature on an existing policy.

You can use the S3 Management Console, API, Amazon Web Services CLI, Amazon Web Services SDKs, or Amazon Web Services CloudFormation to configure replication. To learn more, please visit overview of setting up S3 Replication in the Amazon S3 Developer Guide.
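
As a sketch, enabling RTC on a replication rule with boto3 (Python) looks like the following; the bucket names and role ARN are hypothetical, and replication metrics are enabled alongside RTC:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/replication-role",
            "Rules": [
                {
                    "ID": "rtc-rule",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::my-destination-bucket",
                        # Replication Time Control with the 15-minute target.
                        "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                        # Replication metrics, required alongside RTC.
                        "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                    },
                }
            ],
        },
    )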

Q: What are Amazon S3 Replication metrics and events?

Amazon S3 Replication metrics and events provide visibility into Amazon S3 Replication. With S3 Replication metrics, you can monitor the total number of operations pending replication, the size of objects pending replication, and the replication latency between source and destination buckets for each S3 Replication rule. S3 Replication metrics are enabled by default when S3 RTC is enabled on a replication rule. For S3 CRR and S3 SRR, you have the option to enable S3 Replication metrics and events for each replication rule. Replication metrics are available through the Amazon S3 console and through Amazon CloudWatch.

S3 Replication events will notify you of replication failures so you can quickly diagnose and correct issues. If you have S3 Replication Time Control (S3 RTC) enabled, you will also receive notifications when an object takes more than 15 minutes to replicate, and when that object replicates successfully to its destination. Like other Amazon S3 events, S3 Replication events are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or Amazon Lambda.

Q: How do I enable Amazon S3 Replication metrics and events?

You can enable Amazon S3 Replication metrics and events for new or existing replication rules, and they are enabled by default for S3 Replication Time Control enabled rules. You can access S3 Replication metrics through the Amazon S3 console and Amazon CloudWatch. Like other Amazon S3 events, S3 Replication events are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or Amazon Lambda. To learn more, please visit the documentation on monitoring progress with replication metrics and Amazon S3 Event Notifications in the Amazon S3 Developer Guide.

Q: What is the Amazon S3 Replication Time Control Service Level Agreement (SLA)?

Amazon S3 Replication Time Control is designed to replicate 99.99% of your objects within 15 minutes, and is backed by a Service Level Agreement. If fewer than 99.9% of your objects are replicated in 15 minutes for each replication region pair during a monthly billing cycle, the S3 RTC SLA provides a service credit on any object that takes longer than 15 minutes to replicate. The service credit is divided into a Source Region Service Credit and a Destination Region Service Credit. The Source Region Service Credit covers a percentage of all the charges specific to inter-region data transfer and the RTC feature fee associated with any object affected in the affected monthly billing cycle. The Destination Region Service Credit covers a percentage of the charges specific to replication bandwidth and requests, and the cost associated with storing your replica in the destination region in the affected monthly billing cycle. To learn more, read the S3 Replication Time Control SLA.

Q: What is the pricing for S3 Replication and S3 Replication Time Control?

For S3 Replication, Cross-Region Replication (CRR) and Same-Region Replication (SRR), you pay the S3 charges for storage in the selected destination S3 storage classes, the storage charges for the primary copy, replication PUT requests, and applicable infrequent access storage retrieval charges. For CRR, you also pay for inter-Region Data Transfer OUT from S3 to each destination Region. When you use S3 Replication Time Control, you also pay a Replication Time Control Data Transfer charge and S3 Replication Metrics charges that are billed at the same rate as Amazon CloudWatch custom metrics. For more information, please visit the S3 pricing page.

If the source object is uploaded using the multipart upload feature, then it is replicated using the same number of parts and part size. For example, a 100-GB object uploaded using the multipart upload feature (800 parts of 128 MB each) will incur request cost associated with 802 requests (800 Upload Part requests + 1 Initiate Multipart Upload request + 1 Complete Multipart Upload request) when replicated. You will incur a request charge of ¥ 0.00405 (802 requests x ¥ 0.00405 per 1,000 requests) and (if the replication was between different Amazon Web Services Regions) a charge of ¥ 60.03 (¥ 0.6003 per GB transferred x 100 GB) for inter-region data transfer. After replication, the 100 GB will incur storage charges based on the destination Region.