diff --git a/doc_source/LogFormat.md b/doc_source/LogFormat.md index a84c710..26e5638 100644 --- a/doc_source/LogFormat.md +++ b/doc_source/LogFormat.md @@ -22,9 +22,6 @@ The following is an example log consisting of five log records\. **Note** Any field can be set to `-` to indicate that the data was unknown or unavailable, or that the field was not applicable to this request\. -**Important** -During the next few weeks, we are adding a new field, `aclRequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclRequired` field, the rollout hasn't been completed in your Region\. - **Topics** + [Log record fields](#log-record-fields) + [Additional logging for copy operations](#AdditionalLoggingforCopyOperations) @@ -244,9 +241,6 @@ A string that indicates whether the request required an access control list \(AC Yes ``` -**Important** -During the next few weeks, we are adding a new field, `aclRequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclRequired` field, the rollout hasn't been completed in your Region\. - ## Additional logging for copy operations A copy operation involves a `GET` and a `PUT`\. For that reason, we log two records when performing a copy operation\. The previous section describes the fields related to the `PUT` part of the operation\. The following list describes the fields in the record that relate to the `GET` part of the copy operation\. @@ -460,9 +454,6 @@ A string that indicates whether the request required an access control list \(AC Yes ``` -**Important** -During the next few weeks, we are adding a new field, `aclRequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclRequired` field, the rollout hasn't been completed in your Region\. - ## Custom access log information You can include custom information to be stored in the access log record for a request\. To do this, add a custom query\-string parameter to the URL for the request\. Amazon S3 ignores query\-string parameters that begin with `x-`, but includes those parameters in the access log record for the request, as part of the `Request-URI` field of the log record\. 
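For example, the following minimal sketch \(using the Python `requests` library, with a hypothetical bucket, object key, and parameter name\) tags a download request so that it can be found later in your server access logs\. Amazon S3 ignores the `x-` parameter but records it as part of the `Request-URI` field of the matching log record\.

```python
import requests

# Hypothetical public object; the bucket, key, and parameter are placeholders.
url = "https://DOC-EXAMPLE-BUCKET.s3.amazonaws.com/photos/puppy.jpg"

# Amazon S3 ignores query-string parameters that begin with "x-", but it
# includes them in the Request-URI field of the access log record.
response = requests.get(url, params={"x-session-id": "report-run-42"})
print(response.status_code)
```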
diff --git a/doc_source/MakingRequests.md b/doc_source/MakingRequests.md index 78583a6..6eba37f 100644 --- a/doc_source/MakingRequests.md +++ b/doc_source/MakingRequests.md @@ -47,7 +47,7 @@ For information on signing requests using temporary security credentials in your For more information about IAM support for temporary security credentials, see [Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) in the *IAM User Guide*\. -For added security, you can require multifactor authentication \(MFA\) when accessing your Amazon S3 resources by configuring a bucket policy\. For information, see [Adding a bucket policy to require MFA](example-bucket-policies.md#example-bucket-policies-use-case-7)\. After you require MFA to access your Amazon S3 resources, the only way you can access these resources is by providing temporary credentials that are created with an MFA key\. For more information, see the [AWS Multi\-Factor Authentication](https://aws.amazon.com/mfa/) detail page and [Configuring MFA\-Protected API Access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*\. +For added security, you can require multifactor authentication \(MFA\) when accessing your Amazon S3 resources by configuring a bucket policy\. For information, see [Requiring MFA](example-bucket-policies.md#example-bucket-policies-MFA)\. After you require MFA to access your Amazon S3 resources, the only way you can access these resources is by providing temporary credentials that are created with an MFA key\. For more information, see the [AWS Multi\-Factor Authentication](https://aws.amazon.com/mfa/) detail page and [Configuring MFA\-Protected API Access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*\. diff --git a/doc_source/MultiFactorAuthenticationDelete.md b/doc_source/MultiFactorAuthenticationDelete.md index 1c90d67..4e2e6fd 100644 --- a/doc_source/MultiFactorAuthenticationDelete.md +++ b/doc_source/MultiFactorAuthenticationDelete.md @@ -34,7 +34,7 @@ To use MFA delete, you can use either a hardware or virtual MFA device to genera ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/MFADevice.png) -MFA delete and MFA\-protected API access are features intended to provide protection for different scenarios\. You configure MFA delete on a bucket to help ensure that the data in your bucket cannot be accidentally deleted\. MFA\-protected API access is used to enforce another authentication factor \(MFA code\) when accessing sensitive Amazon S3 resources\. You can require any operations against these Amazon S3 resources to be done with temporary credentials created using MFA\. For an example, see [Adding a bucket policy to require MFA](example-bucket-policies.md#example-bucket-policies-use-case-7)\. +MFA delete and MFA\-protected API access are features intended to provide protection for different scenarios\. You configure MFA delete on a bucket to help ensure that the data in your bucket cannot be accidentally deleted\. MFA\-protected API access is used to enforce another authentication factor \(MFA code\) when accessing sensitive Amazon S3 resources\. You can require any operations against these Amazon S3 resources to be done with temporary credentials created using MFA\. For an example, see [Requiring MFA](example-bucket-policies.md#example-bucket-policies-MFA)\. 
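As a rough sketch of how such MFA\-backed temporary credentials might be obtained \(using the Python SDK, Boto3, with hypothetical placeholders for the MFA device ARN and token code\):

```python
import boto3

sts = boto3.client("sts")

# Prove possession of the MFA device to get temporary credentials.
# The serial number and token code are hypothetical placeholders.
response = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::111122223333:mfa/JohnDoe",
    TokenCode="123456",
)
creds = response["Credentials"]

# Use the MFA-backed temporary credentials for subsequent Amazon S3 requests.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```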
For more information about how to purchase and activate an authentication device, see [Multi\-factor authentication](http://aws.amazon.com/iam/details/mfa/)\.
diff --git a/doc_source/MultiRegionAccessPointPermissions.md b/doc_source/MultiRegionAccessPointPermissions.md
index 9ce0a6f..28a99ac 100644
--- a/doc_source/MultiRegionAccessPointPermissions.md
+++ b/doc_source/MultiRegionAccessPointPermissions.md
@@ -51,18 +51,24 @@ You can't edit the Block Public Access settings after the Multi\-Region Access P
The following example Multi\-Region Access Point policy grants an AWS Identity and Access Management \(IAM\) user access to list and download files from your Multi\-Region Access Point\. To use this example policy, replace the `user input placeholders` with your own information\.

```
-{
-  "Version": "2012-10-17",
-  "Statement" : [
-  {
-    "Effect": "Allow",
-    "Principal": { "AWS": "arn:aws:iam::111122223333:JohnDoe" },
-    "Action": ["s3:ListBucket", "s3:GetObject"],
-    "Resource": [
-      "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias",
-      "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias/object/*"
-    ]
-  }
+ {
+   "Version":"2012-10-17",
+   "Statement":[
+      {
+         "Effect":"Allow",
+         "Principal":{
+            "AWS":"arn:aws:iam::111122223333:user/JohnDoe"
+         },
+         "Action":[
+            "s3:ListBucket",
+            "s3:GetObject"
+         ],
+         "Resource":[
+            "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias",
+            "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias/object/*"
+         ]
+      }
+   ]
 }
```
diff --git a/doc_source/S3OutpostsCapacity.md b/doc_source/S3OutpostsCapacity.md
index e419be8..afaa69b 100644
--- a/doc_source/S3OutpostsCapacity.md
+++ b/doc_source/S3OutpostsCapacity.md
@@ -1,21 +1,19 @@
# Managing S3 on Outposts capacity with Amazon CloudWatch metrics

-If there is not enough space to store an object on your Outpost, the API returns an insufficient capacity exemption \(ICE\)\. To avoid this, you can create CloudWatch alerts that tell you when storage utilization exceeds a certain threshold\. For more information, see [Amazon S3 on Outposts metrics in CloudWatch](metrics-dimensions.md#s3-outposts-cloudwatch-metrics)\.
-
-You can use this method to free up space by explicitly deleting data, using a lifecycle expiration policy, or copying data from your Amazon S3 on Outposts bucket to an S3 bucket in an AWS Region by using AWS DataSync\. For more information about using DataSync, see [Getting Started with AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/getting-started.html) in the *AWS DataSync User Guide*\.
+To help manage the fixed S3 capacity on your Outpost, we recommend that you create CloudWatch alerts that tell you when your storage utilization exceeds a certain threshold\. For more information about the CloudWatch metrics for S3 on Outposts, see [CloudWatch metrics](#S3OutpostsCloudWatchMetrics)\. If there is not enough space to store an object on your Outpost, the API returns an insufficient capacity error \(ICE\)\. To free up space, you can create CloudWatch alarms that notify you so that you can explicitly delete data, or you can use a lifecycle expiration policy to expire objects\. To save data before deletion, you can use AWS DataSync to copy data from your Amazon S3 on Outposts bucket to an S3 bucket in an AWS Region\. For more information about using DataSync, see [Getting Started with AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/getting-started.html) in the *AWS DataSync User Guide*\.
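The following is one possible sketch of such an alarm, using the Python SDK \(Boto3\)\. It watches the `OutpostFreeBytes` metric that's described in the next section\. The Outpost ID, the Amazon SNS topic ARN, and the `OutpostId` dimension name are illustrative assumptions, so verify the dimensions that your metrics actually report\.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when less than roughly 10 GB of free space remains on the Outpost.
# The Outpost ID, SNS topic ARN, and "OutpostId" dimension name are
# illustrative assumptions; check the dimensions on your own metrics.
cloudwatch.put_metric_alarm(
    AlarmName="s3-outposts-low-free-capacity",
    Namespace="S3Outposts",
    MetricName="OutpostFreeBytes",
    Dimensions=[{"Name": "OutpostId", "Value": "op-01ac5d28a6a232904"}],
    Statistic="Average",
    Period=300,  # S3 on Outposts metrics are emitted every 5 minutes
    EvaluationPeriods=1,
    Threshold=10 * 1024**3,  # bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:111122223333:storage-alerts"],
)
```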
## CloudWatch metrics -The `S3Outposts` namespace includes the following metrics for Amazon S3 on Outposts buckets\. You can monitor the total number of S3 on Outposts bytes provisioned, the total free bytes available for objects, and the total size of all objects for a given bucket\. +The `S3Outposts` namespace includes the following metrics for Amazon S3 on Outposts buckets\. You can monitor the total number of S3 on Outposts bytes provisioned, the total free bytes available for objects, and the total size of all objects for a given bucket\. Bucket or account\-related metrics exist for all direct S3 usage\. Indirect S3 usage, such as storing Amazon Elastic Block Store local snapshots or Amazon Relational Database Service backups on an Outpost, consumes S3 capacity, but is not included in bucket or account\-related metrics\. For more information about Amazon EBS local snapshots, see [ Amazon EBS local snapshots on Outposts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshots-outposts.html)\. To see your Amazon EBS cost report, visit [https://console\.aws\.amazon\.com/billing/](https://console.aws.amazon.com/billing/)\. **Note** S3 on Outposts supports only the following metrics, and no other Amazon S3 metrics\. -Because S3 on Outposts has fixed capacity, you can create CloudWatch alerts that alert you when your storage utilization exceeds a certain threshold\. +Because S3 on Outposts has a fixed capacity limit, we recommend creating CloudWatch alarms to notify you when your storage utilization exceeds a certain threshold\. -| Metric | Description | -| --- | --- | -| OutpostTotalBytes | The total provisioned capacity in bytes for an Outpost\. Units: Bytes Period: 5 minutes | -| OutpostFreeBytes | The count of free bytes available on an Outpost to store customer data\. Units: Bytes Period: 5 minutes | -| BucketUsedBytes | The total size of all objects for the given bucket\. Units: Counts Period: 5 minutes | -| AccountUsedBytes | The total size of all objects for the specified Outposts account\. Units: Bytes Period: 5 minutes | \ No newline at end of file +| Metric | Description | Time Period | Units | Type | +| --- | --- | --- | --- | --- | +| OutpostTotalBytes | The total provisioned capacity in bytes for an Outpost\. | 5 minutes | Bytes | S3 on Outposts | +| OutpostFreeBytes | The count of free bytes available on an Outpost to store customer data\. | 5 minutes | Bytes | S3 on Outposts | +| BucketUsedBytes | The total size of all objects for the given bucket\. | 5 minutes | Bytes | S3 on Outposts\. Direct S3 usage only\. | +| AccountUsedBytes | The total size of all objects for the specified Outposts account\. | 5 minutes | Bytes | S3 on Outposts\. Direct S3 usage only\. | \ No newline at end of file diff --git a/doc_source/ShareObjectPreSignedURL.md b/doc_source/ShareObjectPreSignedURL.md index 985edbe..01f64b3 100644 --- a/doc_source/ShareObjectPreSignedURL.md +++ b/doc_source/ShareObjectPreSignedURL.md @@ -12,6 +12,9 @@ For more information about who can create a presigned URL, see [Who can create a You can generate a presigned URL for an object without writing any code by using the S3 console or AWS Explorer for Visual Studio\. 
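A minimal programmatic sketch with the Python SDK \(Boto3\), using a hypothetical bucket and object key, looks like the following\. As the note that follows explains, any parameters that are signed into the URL must be repeated exactly when the URL is used\.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key. Anyone with the resulting URL can download
# the object until the URL expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "DOC-EXAMPLE-BUCKET", "Key": "photos/puppy.jpg"},
    ExpiresIn=3600,  # seconds
)
print(url)
```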
You can also generate a presigned URL programmatically using the AWS SDKs for Java, \.NET, [Ruby](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html), [PHP](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.S3Client.html#_createPresignedRequest), [Node\.js](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property), [Python](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.generate_presigned_url), and [Go](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/s3-example-presigned-urls.html)\. +**Note** +When you generate a presigned URL, make sure that the parameters in your request match the signature exactly\. For example, if you don't specify a content type when generating the URL, you must omit the content type when uploading an object\. Also, wildcards are not supported, and using one in your presigned URL will result in an error\. + ### Using the S3 console You can use the AWS Management Console to generate a presigned URL for an object by following these steps\. diff --git a/doc_source/UsingClientSideEncryption.md b/doc_source/UsingClientSideEncryption.md index 6b8efbf..6ed245e 100644 --- a/doc_source/UsingClientSideEncryption.md +++ b/doc_source/UsingClientSideEncryption.md @@ -7,7 +7,7 @@ To enable client\-side encryption, you have the following options: + Use a key that you store within your application\. **Note** -Amazon S3 supports only symmetric encryption KMS keys or HMAC keys\. For more information about these keys, see [Special\-purpose keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-types.html) in the *AWS Key Management Service Developer Guide*\. +Amazon S3 supports only symmetric encryption KMS keys\. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*\. **AWS Encryption SDK** The [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/) is a client\-side encryption library that is separate from the language–specific SDKs\. You can use this encryption library to more easily implement encryption best practices in Amazon S3\. Unlike the Amazon S3 encryption clients in the language–specific AWS SDKs, the AWS Encryption SDK is not tied to Amazon S3 and can be used to encrypt or decrypt data to be stored anywhere\. diff --git a/doc_source/UsingEncryption.md b/doc_source/UsingEncryption.md index f3bb83a..9a4957e 100644 --- a/doc_source/UsingEncryption.md +++ b/doc_source/UsingEncryption.md @@ -1,5 +1,8 @@ # Protecting data using encryption +**Important** +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. 
For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\. + Data protection refers to protecting data while in\-transit \(as it travels to and from Amazon S3\) and at rest \(while it is stored on disks in Amazon S3 data centers\)\. You can protect data in transit using Secure Socket Layer/Transport Layer Security \(SSL/TLS\) or client\-side encryption\. You have the following options for protecting data at rest in Amazon S3: + **Server\-Side Encryption** – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects\. diff --git a/doc_source/UsingKMSEncryption.md b/doc_source/UsingKMSEncryption.md index 1c69662..1f3b893 100644 --- a/doc_source/UsingKMSEncryption.md +++ b/doc_source/UsingKMSEncryption.md @@ -1,5 +1,8 @@ # Using server\-side encryption with AWS Key Management Service \(SSE\-KMS\) +**Important** +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\. + Server\-side encryption is the encryption of data at its destination by the application or service that receives it\. AWS Key Management Service \(AWS KMS\) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud\. Amazon S3 uses server\-side encryption with AWS KMS \(SSE\-KMS\) to encrypt your S3 object data\. Also, when SSE\-KMS is requested for the object, the S3 checksum as part of the object's metadata, is stored in encrypted form\. For more information about checksum, see [Checking object integrity](checking-object-integrity.md)\. If you use KMS keys, you can use AWS KMS through the [AWS Management Console](https://console.aws.amazon.com/kms) or the [AWS KMS API](https://docs.aws.amazon.com/kms/latest/APIReference/) to do the following: @@ -57,7 +60,7 @@ When you request that your data be decrypted, Amazon S3 and AWS KMS perform the 1. Amazon S3 decrypts the encrypted data key, using the plaintext data key, and removes the plaintext data key from memory as soon as possible\. **Important** -When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 supports only symmetric encryption KMS keys or HMAC keys\. For more information about these keys, see [Special\-purpose keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-types.html) in the *AWS Key Management Service Developer Guide*\. +When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 supports only symmetric encryption KMS keys\. 
For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*\. To identify requests that specify SSE\-KMS, you can use the **All SSE\-KMS requests** and **% all SSE\-KMS requests** metrics in Amazon S3 Storage Lens metrics\. S3 Storage Lens is a cloud\-storage analytics feature that you can use to gain organization\-wide visibility into object\-storage usage and activity\. For more information, see [ Assessing your storage activity and usage with S3 Storage Lens](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens.html?icmpid=docs_s3_user_guide_UsingKMSEncryption.html)\. For a complete list of metrics, see [ S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_UsingKMSEncryption.html)\. diff --git a/doc_source/UsingServerSideEncryption.md b/doc_source/UsingServerSideEncryption.md index 2c4cb9f..3d78659 100644 --- a/doc_source/UsingServerSideEncryption.md +++ b/doc_source/UsingServerSideEncryption.md @@ -1,5 +1,8 @@ # Using server\-side encryption with Amazon S3\-managed encryption keys \(SSE\-S3\) +**Important** +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\. + Server\-side encryption protects data at rest\. Amazon S3 encrypts each object with a unique key\. As an additional safeguard, it encrypts the key itself with a key that it rotates regularly\. Amazon S3 server\-side encryption uses one of the strongest block ciphers available to encrypt your data, 256\-bit Advanced Encryption Standard \(AES\-256\)\. There are no additional fees for using server\-side encryption with Amazon S3\-managed keys \(SSE\-S3\)\. However, requests to configure the default encryption feature incur standard Amazon S3 request charges\. For information about pricing, see [Amazon S3 pricing](http://aws.amazon.com/s3/pricing/)\. diff --git a/doc_source/WhatsNew.md b/doc_source/WhatsNew.md index 7c9148e..4cb9cba 100644 --- a/doc_source/WhatsNew.md +++ b/doc_source/WhatsNew.md @@ -136,7 +136,7 @@ The following table describes the important changes in each release of the *Amaz | Support for Website Page Redirects | For a bucket that is configured as a website, Amazon S3 now supports redirecting a request for an object to another object in the same bucket or to an external URL\. For more information, see [\(Optional\) Configuring a webpage redirect](how-to-page-redirect.md)\. For information about hosting websites, see [Hosting a static website using Amazon S3](WebsiteHosting.md)\. 
| October 4, 2012 | | Support for Cross\-Origin Resource Sharing \(CORS\) | Amazon S3 now supports Cross\-Origin Resource Sharing \(CORS\)\. CORS defines a way in which client web applications that are loaded in one domain can interact with or access resources in a different domain\. With CORS support in Amazon S3, you can build rich client\-side web applications on top of Amazon S3 and selectively allow cross\-domain access to your Amazon S3 resources\. For more information, see [Using cross\-origin resource sharing \(CORS\)](cors.md)\. | August 31, 2012 | | Support for Cost Allocation Tags | Amazon S3 now supports cost allocation tagging, which allows you to label S3 buckets so you can more easily track their cost against projects or other criteria\. For more information about using tagging for buckets, see [Using cost allocation S3 bucket tags](CostAllocTagging.md)\. | August 21, 2012 | -| Support for MFA\-protected API access in bucket policies | Amazon S3 now supports MFA\-protected API access, a feature that can enforce AWS Multi\-Factor Authentication for an extra level of security when accessing your Amazon S3 resources\. It is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code\. For more information, go to [AWS Multi\-Factor Authentication](https://aws.amazon.com/iam/details/mfa/)\. You can now require MFA authentication for any requests to access your Amazon S3 resources\. To enforce MFA authentication, Amazon S3 now supports the `aws:MultiFactorAuthAge` key in a bucket policy\. For an example bucket policy, see [Adding a bucket policy to require MFA](example-bucket-policies.md#example-bucket-policies-use-case-7)\. | July 10, 2012 | +| Support for MFA\-protected API access in bucket policies | Amazon S3 now supports MFA\-protected API access, a feature that can enforce AWS Multi\-Factor Authentication for an extra level of security when accessing your Amazon S3 resources\. It is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code\. For more information, go to [AWS Multi\-Factor Authentication](https://aws.amazon.com/iam/details/mfa/)\. You can now require MFA authentication for any requests to access your Amazon S3 resources\. To enforce MFA authentication, Amazon S3 now supports the `aws:MultiFactorAuthAge` key in a bucket policy\. For an example bucket policy, see [Requiring MFA](example-bucket-policies.md#example-bucket-policies-MFA)\. | July 10, 2012 | | Object Expiration support | You can use Object Expiration to schedule automatic removal of data after a configured time period\. You set object expiration by adding lifecycle configuration to a bucket\. | 27 December 2011 | | New Region supported | Amazon S3 now supports the South America \(São Paulo\) Region\. For more information, see [Methods for accessing a bucket](access-bucket-intro.md)\. | December 14, 2011 | | Multi\-Object Delete | Amazon S3 now supports Multi\-Object Delete API that enables you to delete multiple objects in a single request\. With this feature, you can remove large numbers of objects from Amazon S3 more quickly than using multiple individual DELETE requests\. For more information, see [Deleting Amazon S3 objects](DeletingObjects.md)\. 
| December 7, 2011 | diff --git a/doc_source/about-object-ownership.md b/doc_source/about-object-ownership.md index e16713a..2bd52b9 100644 --- a/doc_source/about-object-ownership.md +++ b/doc_source/about-object-ownership.md @@ -80,9 +80,6 @@ Before you disable ACLs for an existing bucket, complete the following prerequis When you disable ACLs, permissions granted by bucket and object ACLs no longer affect access\. Before you disable ACLs, review your bucket and object ACLs\. To identify Amazon S3 requests that required ACLs for authorization, you can use the `aclRequired` field in Amazon S3 server access logs or AWS CloudTrail\. For more information, see [Using Amazon S3 access logs to identify requests](using-s3-access-logs-to-identify-requests.md) and [Identifying Amazon S3 requests using CloudTrail](cloudtrail-request-identification.md)\. -**Important** -During the next few weeks, we are adding a new field, `aclRequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclRequired` field, the rollout hasn't been completed in your Region\. - If your bucket ACLs grant read or write permissions to others outside of your account, you must migrate these permissions to your bucket policy before you can apply the bucket owner enforced setting\. If you don't migrate bucket ACLs that grant read or write access outside of your account, your request to apply the bucket owner enforced setting fails and returns the [InvalidBucketAclWithObjectOwnership](object-ownership-error-responses.md#object-ownership-error-responses-invalid-acl) error code\. For example, if you want to disable ACLs for a bucket that receives server access logs, you must migrate the bucket ACL permissions for the S3 log delivery group to the logging service principal in a bucket policy\. For more information, see [Grant access to S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs)\. diff --git a/doc_source/access-analyzer.md b/doc_source/access-analyzer.md index ec07282..7a460bc 100644 --- a/doc_source/access-analyzer.md +++ b/doc_source/access-analyzer.md @@ -1,6 +1,6 @@ # Reviewing bucket access using Access Analyzer for S3 -Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization\. For each public or shared bucket, you receive findings into the source and level of public or shared access\. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list \(ACL\), a bucket policy, a Multi\-Region Access Point policy, or an access point policy\. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended\. +Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization\. For each public or shared bucket, you receive findings into the source and level of public or shared access\. 
For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list \(ACL\), a bucket policy, a Multi\-Region Access Point policy, or an access point policy\. With these findings, you can take immediate and precise corrective action to restore your bucket access to what you intended\. When reviewing an at\-risk bucket in Access Analyzer for S3, you can block all public access to the bucket with a single click\. We recommend that you block all access to your buckets unless you require public access to support a specific use case\. Before you block all public access, ensure that your applications will continue to work correctly without public access\. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md)\. @@ -110,7 +110,7 @@ If you did not intend to grant access to the public or other AWS accounts, inclu 1. Review or change your bucket policy as required\. - For more information, see [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)\. + For more information, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)\. 1. If you want to change or view a Multi\-Region Access Point policy: diff --git a/doc_source/add-bucket-policy.md b/doc_source/add-bucket-policy.md index d3d9fb9..b42aa04 100644 --- a/doc_source/add-bucket-policy.md +++ b/doc_source/add-bucket-policy.md @@ -1,37 +1,41 @@ -# Adding a bucket policy using the Amazon S3 console +# Adding a bucket policy by using the Amazon S3 console -You can use the Amazon S3 console to add a new bucket policy or edit an existing bucket policy\. A bucket policy is a resource\-based AWS Identity and Access Management \(IAM\) policy\. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it\. Object permissions apply only to the objects that the bucket owner creates\. For more information about bucket policies, see [Overview of managing access](access-control-overview.md)\. +You can use the [AWS Policy Generator](http://aws.amazon.com/blogs/aws/aws-policy-generator/) and the Amazon S3 console to add a new bucket policy or edit an existing bucket policy\. A bucket policy is a resource\-based AWS Identity and Access Management \(IAM\) policy\. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it\. Object permissions apply only to the objects that the bucket owner creates\. For more information about bucket policies, see [Overview of managing access](access-control-overview.md)\. -By default, when another AWS account uploads an object to your S3 bucket, that account \(the object writer\) owns the object, has access to it, and can grant other users access to it through access control lists \(ACLs\)\. You can use Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner, automatically own every object in your bucket\. As a result, access control for your data is based on policies, such as IAM policies, S3 bucket policies, virtual private cloud \(VPC\) endpoint policies, and AWS Organizations service control policies \(SCPs\)\. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md)\. 
-
-Make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy\. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)\. These checks generate findings and provide actionable recommendations to help you author policies that are functional and conform to security best practices\. To learn more about validating policies using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*\. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html)\.
+Make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy\. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)\. These checks generate findings and provide actionable recommendations to help you author policies that are functional and conform to security best practices\. To learn more about validating policies by using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*\. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html)\.

**To create or edit a bucket policy**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console\.aws\.amazon\.com/s3/](https://console.aws.amazon.com/s3/)\.

+1. In the left navigation pane, choose **Buckets**\.
+
1. In the **Buckets** list, choose the name of the bucket that you want to create a bucket policy for or whose bucket policy you want to edit\.

-1. Choose **Permissions**\.
+1. Choose the **Permissions** tab\.

-1. Under **Bucket policy**, choose **Edit**\. This opens the Edit bucket policy page\.
+1. Under **Bucket policy**, choose **Edit**\. The **Edit bucket policy** page appears\.

-1. On the **Edit bucket policy **page, explore **Policy examples** in the *Amazon S3 User Guide*, choose **Policy generator** to generate a policy automatically, or edit the JSON in the **Policy** section\.
+1. On the **Edit bucket policy** page, do one of the following:
+   + To see examples of bucket policies in the *Amazon S3 User Guide*, choose **Policy examples**\.
+   + To generate a policy automatically, choose **Policy generator**\.
+   + To write or edit a policy yourself, edit the JSON in the **Policy** section\.

-   If you choose **Policy generator**, the AWS Policy Generator opens in a new window:
+   If you choose **Policy generator**, the AWS Policy Generator opens in a new window\.

-   1. On the **AWS Policy Generator** page, in **Select Type of Policy**, choose **S3 Bucket Policy**\.
+   1. On the **AWS Policy Generator** page, for **Select Type of Policy**, choose **S3 Bucket Policy**\.

-   1. Add a statement by entering the information in the provided fields, and then choose **Add Statement**\. Repeat for as many statements as you would like to add\. For more information about these fields, see the [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*\.
+   1. Add a statement by entering the information in the provided fields, and then choose **Add Statement**\. Repeat this step for as many statements as you would like to add\. For more information about these fields, see the [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*\.
**Note**
-For convenience, the **Edit bucket policy** page displays the **Bucket ARN **\(Amazon Resource Name\) of the current bucket above the **Policy** text field\. You can copy this ARN for use in the statements on the **AWS Policy Generator** page\.
+For your convenience, the **Edit bucket policy** page displays the **Bucket ARN** \(Amazon Resource Name\) of the current bucket above the **Policy** text field\. You can copy this ARN for use in the statements on the **AWS Policy Generator** page\.

   1. After you finish adding statements, choose **Generate Policy**\.

   1. Copy the generated policy text, choose **Close**, and return to the **Edit bucket policy** page in the Amazon S3 console\.

-1. In the **Policy** box, edit the existing policy or paste the bucket policy from the Policy generator\. Make sure to resolve security warnings, errors, general warnings, and suggestions before you save your policy\.
+1. In the **Policy** box, edit the existing policy or paste the bucket policy from the AWS Policy Generator\. Make sure to resolve security warnings, errors, general warnings, and suggestions before you save your policy\.
+**Note**
+Bucket policies are limited to 20 KB in size\.

-1. \(Optional\) Preview how your new policy affects public and cross\-account access to your resource\. Before you save your policy, you can check whether it introduces new IAM Access Analyzer findings or resolves existing findings\. If you don’t see an active analyzer, [create an account analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-enabling) in IAM Access Analyzer\. For more information, see [Preview access](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-access-preview.html) in the *IAM User Guide*\.
+1. \(Optional\) Choose **Preview external access** in the lower\-right corner to preview how your new policy affects public and cross\-account access to your resource\. Before you save your policy, you can check whether it introduces new IAM Access Analyzer findings or resolves existing findings\. If you don’t see an active analyzer, choose **Go to Access Analyzer** to [create an account analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-enabling) in IAM Access Analyzer\. For more information, see [Preview access](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-access-preview.html) in the *IAM User Guide*\.

-1. Choose **Save changes**, which returns you to the Bucket Permissions page\.
\ No newline at end of file
+1. Choose **Save changes**, which returns you to the **Permissions** tab\.
\ No newline at end of file diff --git a/doc_source/analytics-storage-class.md b/doc_source/analytics-storage-class.md index 569d3f2..f3e1bba 100644 --- a/doc_source/analytics-storage-class.md +++ b/doc_source/analytics-storage-class.md @@ -83,7 +83,7 @@ The Amazon S3 console shows the access patterns grouped by the predefined object You can choose to have storage class analysis export analysis reports to a comma\-separated values \(CSV\) flat file\. Reports are updated daily and are based on the object age group filters you configure\. When using the Amazon S3 console you can choose the export report option when you create a filter\. When selecting data export you specify a destination bucket and optional destination prefix where the file is written\. You can export the data to a destination bucket in a different account\. The destination bucket must be in the same region as the bucket that you configure to be analyzed\. -You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to verify what AWS account owns the bucket and to write objects to the bucket in the defined location\. For an example policy, see [Granting permissions for Amazon S3 Inventory and Amazon S3 analytics](example-bucket-policies.md#example-bucket-policies-use-case-9)\. +You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to verify what AWS account owns the bucket and to write objects to the bucket in the defined location\. For an example policy, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1)\. After you configure storage class analysis reports, you start getting the exported report daily after 24 hours\. After that, Amazon S3 continues monitoring and providing daily exports\. diff --git a/doc_source/batch-ops-copy-example-bucket-key.md b/doc_source/batch-ops-copy-example-bucket-key.md index 712daf8..ab88632 100644 --- a/doc_source/batch-ops-copy-example-bucket-key.md +++ b/doc_source/batch-ops-copy-example-bucket-key.md @@ -23,7 +23,7 @@ To follow along with the steps in this procedure, you need an AWS account and at To get started, identify the S3 bucket that contains the objects to encrypt, and get a list of its contents\. An Amazon S3 Inventory report is the most convenient and affordable way to do this\. The report provides the list of the objects in a bucket along with associated metadata\. The **source bucket** refers to the inventoried bucket, and the **destination bucket** refers to the bucket where you store the inventory report file\. For more information about Amazon S3 Inventory source and destination buckets, see [Amazon S3 Inventory](storage-inventory.md)\. -The easiest way to set up an inventory is by using the AWS Management Console\. But you can also use the REST API, AWS Command Line Interface \(AWS CLI\), or AWS SDKs\. Before following these steps, be sure to sign in to the console and open the Amazon S3 console at [https://console\.aws\.amazon\.com/s3/](https://console.aws.amazon.com/s3/)\. If you encounter permission denied errors, add a bucket policy to your destination bucket\. For more information, see [Granting permissions for Amazon S3 Inventory and Amazon S3 analytics](example-bucket-policies.md#example-bucket-policies-use-case-9)\. +The easiest way to set up an inventory is by using the AWS Management Console\. But you can also use the REST API, AWS Command Line Interface \(AWS CLI\), or AWS SDKs\. 
Before following these steps, be sure to sign in to the console and open the Amazon S3 console at [https://console\.aws\.amazon\.com/s3/](https://console.aws.amazon.com/s3/)\. If you encounter permission denied errors, add a bucket policy to your destination bucket\. For more information, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1)\. **To get a list of objects using S3 Inventory** diff --git a/doc_source/bucket-encryption-tracking.md b/doc_source/bucket-encryption-tracking.md index b4997ed..2050a85 100644 --- a/doc_source/bucket-encryption-tracking.md +++ b/doc_source/bucket-encryption-tracking.md @@ -1,5 +1,8 @@ # Monitoring default encryption with CloudTrail and CloudWatch +**Important** +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\. + You can track default encryption configuration requests for Amazon S3 buckets using AWS CloudTrail events\. The following API event names are used in CloudTrail logs: + `PutBucketEncryption` + `GetBucketEncryption` diff --git a/doc_source/bucket-encryption.md b/doc_source/bucket-encryption.md index 3c0fa7d..93bab9c 100644 --- a/doc_source/bucket-encryption.md +++ b/doc_source/bucket-encryption.md @@ -1,5 +1,8 @@ # Setting default server\-side encryption behavior for Amazon S3 buckets +**Important** +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\. + With Amazon S3 default encryption, you can set the default encryption behavior for an S3 bucket so that all new objects are encrypted when they are stored in the bucket\. The objects are encrypted using server\-side encryption with either Amazon S3 managed keys \(SSE\-S3\) or AWS KMS keys stored in AWS Key Management Service \(AWS KMS\) \(SSE\-KMS\)\. 
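As a minimal sketch \(using the Python SDK, Boto3, with a hypothetical bucket name and KMS key ARN\), a default encryption configuration that uses SSE\-KMS might be applied like this:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and KMS key ARN.
s3.put_bucket_encryption(
    Bucket="DOC-EXAMPLE-BUCKET",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": (
                        "arn:aws:kms:us-west-2:111122223333:"
                        "key/1234abcd-12ab-34cd-56ef-1234567890ab"
                    ),
                },
                # Reduces request traffic from Amazon S3 to AWS KMS.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```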
When you configure your bucket to use default encryption with SSE\-KMS, you can also enable S3 Bucket Keys to decrease request traffic from Amazon S3 to AWS Key Management Service \(AWS KMS\) and reduce the cost of encryption\. For more information, see [Reducing the cost of SSE\-KMS with Amazon S3 Bucket Keys](bucket-key.md)\. diff --git a/doc_source/bucket-policies.md b/doc_source/bucket-policies.md index 90d2573..648cebc 100644 --- a/doc_source/bucket-policies.md +++ b/doc_source/bucket-policies.md @@ -1,21 +1,20 @@ # Using bucket policies -You can create and configure bucket policies to grant permission to your Amazon S3 resources\. - -A bucket policy is a resource\-based policy that you can use to grant access permissions to your bucket and the objects in it\. Only the bucket owner can associate a policy with a bucket\. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner\. These permissions do not apply to objects owned by other AWS accounts\. +A bucket policy is a resource\-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it\. Only the bucket owner can associate a policy with a bucket\. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner\. These permissions do not apply to objects that are owned by other AWS accounts\. By default, when another AWS account uploads an object to your S3 bucket, that account \(the object writer\) owns the object, has access to it, and can grant other users access to it through access control lists \(ACLs\)\. You can use Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner, automatically own every object in your bucket\. As a result, access control for your data is based on policies, such as IAM policies, S3 bucket policies, virtual private cloud \(VPC\) endpoint policies, and AWS Organizations service control policies \(SCPs\)\. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md)\. -Bucket policies use JSON\-based access policy language\. You can use bucket policies to add or deny permissions for the objects in a bucket\. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request \(for example, the IP address used to make the request\)\. For example, you can create a bucket policy that grants cross\-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects\. For more information, see [Bucket policy examples](example-bucket-policies.md)\. +Bucket policies use JSON\-based IAM policy language\. You can use bucket policies to add or deny permissions for the objects in a bucket\. Bucket policies can allow or deny requests based on the elements in the policy\. These elements include the requester, S3 actions, resources, and aspects or conditions of the request \(such as the IP address that's used to make the request\)\. -In your bucket policy, you can use wildcard characters on Amazon Resource Names \(ARNs\) and other values to grant permissions to a subset of objects\. For example, you can control access to groups of objects that begin with a common [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) or end with a given extension, such as `.html`\. 
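A minimal sketch of such a policy \(using the Python SDK, Boto3, with hypothetical account and bucket names\) grants another account permission to upload objects while requiring that each upload gives the bucket owner full control, which is the pattern that the list below describes:

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and grantee account ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountUpload",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
            # Require that each upload grants the bucket owner full control.
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="DOC-EXAMPLE-BUCKET", Policy=json.dumps(policy))
```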
+
+For example, you can create a bucket policy that does the following:
++ Grants other accounts cross\-account permissions to upload objects to your S3 bucket
++ Makes sure that you, the bucket owner, have full control of the uploaded objects
+
+For more information, see [Bucket policy examples](example-bucket-policies.md)\.

-The topics in this section provide examples and show you how to add a bucket policy in the S3 console\. For information about IAM user policies, see [Using IAM user policies](user-policies.md)\. For information about bucket policy language, see [Policies and Permissions in Amazon S3](access-policy-language-overview.md)

-**Important**
-Bucket policies are limited to 20 KB in size\.
+The topics in this section provide examples and show you how to add a bucket policy in the S3 console\. For information about IAM user policies, see [Using IAM user policies](user-policies.md)\. For information about bucket policy language, see [Policies and Permissions in Amazon S3](access-policy-language-overview.md)\.

**Topics**
-+ [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)
++ [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)
+ [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md)
+ [Bucket policy examples](example-bucket-policies.md)
\ No newline at end of file
diff --git a/doc_source/cloudtrail-logging-understanding-s3-entries.md b/doc_source/cloudtrail-logging-understanding-s3-entries.md
index 7bacd1f..a6ab54e 100644
--- a/doc_source/cloudtrail-logging-understanding-s3-entries.md
+++ b/doc_source/cloudtrail-logging-understanding-s3-entries.md
@@ -140,9 +140,6 @@ The following example shows a CloudTrail log entry that demonstrates the [GET Se
}
```

-**Important**
-During the next few weeks, we are adding a new field, `aclRequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclRequired` field, the rollout hasn't been completed in your Region\.
-
## Example: Amazon S3 on Outposts log file entries

Amazon S3 on Outposts management events are available via AWS CloudTrail\. For more information, see [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md)\. In addition, you can optionally [enable logging for data events in AWS CloudTrail](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html)\.
diff --git a/doc_source/cloudtrail-request-identification.md b/doc_source/cloudtrail-request-identification.md
index 6251e47..9b7517c 100644
--- a/doc_source/cloudtrail-request-identification.md
+++ b/doc_source/cloudtrail-request-identification.md
@@ -318,9 +318,6 @@ WHERE
    AND eventTime BETWEEN '2022-05-10T00:00:00Z' and '2022-08-10T00:00:00Z'
```

-**Important**
-During the next few weeks, we are adding a new field, `aclRequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\.
This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclRequired` field, the rollout hasn't been completed in your Region\. - **Note** These query examples can also be useful for security monitoring\. You can review the results for `PutObject` or `GetObject` calls from unexpected or unauthorized IP addresses or requesters and for identifying any anonymous requests to your buckets\. This query only retrieves information from the time at which logging was enabled\. diff --git a/doc_source/configure-inventory.md b/doc_source/configure-inventory.md index 7ed84d6..e0a5d7c 100644 --- a/doc_source/configure-inventory.md +++ b/doc_source/configure-inventory.md @@ -20,7 +20,7 @@ The easiest way to set up an inventory is by using the AWS Management Console, b 1. **Add a bucket policy for the destination bucket\.** - You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to write objects to the bucket in the defined location\. For an example policy, see [Granting permissions for Amazon S3 Inventory and Amazon S3 analytics](example-bucket-policies.md#example-bucket-policies-use-case-9)\. + You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to write objects to the bucket in the defined location\. For an example policy, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1)\. 1. **Configure an inventory to list the objects in a source bucket and publish the list to a destination bucket\.** @@ -40,7 +40,7 @@ The easiest way to set up an inventory is by using the AWS Management Console, b ## Creating a destination bucket policy -If you create the inventory configuration through the S3 console, Amazon S3 automatically creates a bucket policy on the destination bucket that grants Amazon S3 write permission\. If you create the inventory configuration through the AWS CLI, SDKs, or the REST API, you must manually add a bucket policy on the destination bucket\. For more information, see [Granting permissions for Amazon S3 Inventory and Amazon S3 analytics](example-bucket-policies.md#example-bucket-policies-use-case-9)\. The Amazon S3 Inventory destination bucket policy allows Amazon S3 to write data for the inventory reports to the bucket\. +If you create the inventory configuration through the S3 console, Amazon S3 automatically creates a bucket policy on the destination bucket that grants Amazon S3 write permission\. If you create the inventory configuration through the AWS CLI, SDKs, or the REST API, you must manually add a bucket policy on the destination bucket\. For more information, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1)\. The Amazon S3 Inventory destination bucket policy allows Amazon S3 to write data for the inventory reports to the bucket\. If an error occurs when you try to create the bucket policy, you are given instructions on how to fix it\. For example, if you choose a destination bucket in another AWS account and don't have permissions to read and write to the bucket policy, you see an error message\. @@ -159,7 +159,7 @@ To encrypt the inventory list file with SSE\-KMS, you must grant Amazon S3 permi + **Legal hold status** – The legal hold status of the locked object\. For information about S3 Object Lock, see [How S3 Object Lock works](object-lock-overview.md)\. 
- + **Intelligent\-Tiering access tier** – Indicates the access tier \(frequent or infrequent\) of the object if it was stored in Intelligent\-Tiering\. For more information, see [Storage class for automatically optimizing data with changing or unknown access patterns](storage-class-intro.md#sc-dynamic-data-access)\. + + **Intelligent\-Tiering access tier** – Indicates the access tier of the object if it was stored in Intelligent\-Tiering\. For more information, see [Storage class for automatically optimizing data with changing or unknown access patterns](storage-class-intro.md#sc-dynamic-data-access)\. + **S3 Bucket Key status** – Indicates whether a bucket\-level key generated by AWS KMS applies to the object\. For more information, see [Reducing the cost of SSE\-KMS with Amazon S3 Bucket Keys](bucket-key.md)\. + **Checksum Algorithm** – Indicates the algorithm used to create the checksum for the object\. diff --git a/doc_source/default-bucket-encryption.md b/doc_source/default-bucket-encryption.md index 386ce94..210f3d4 100644 --- a/doc_source/default-bucket-encryption.md +++ b/doc_source/default-bucket-encryption.md @@ -1,5 +1,8 @@ # Enabling Amazon S3 default bucket encryption +**Important** +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\. + You can set the default encryption behavior on an Amazon S3 bucket so that all objects are encrypted when they are stored in the bucket\. The objects are encrypted using server\-side encryption with either Amazon S3\-managed keys \(SSE\-S3\) or AWS Key Management Service \(AWS KMS\) keys\. When you configure default encryption using AWS KMS, you can also configure S3 Bucket Key\. For more information, see [Reducing the cost of SSE\-KMS with Amazon S3 Bucket Keys](bucket-key.md)\. @@ -46,7 +49,7 @@ If you use the AWS KMS option for your default encryption configuration, you are + **Enter KMS root key ARN**, and enter your AWS KMS key ARN\. **Important** You can only use KMS keys that are enabled in the same AWS Region as the bucket\. When you choose **Choose from your KMS keys**, the S3 console only lists 100 KMS keys per Region\. If you have more than 100 KMS keys in the same Region, you can only see the first 100 KMS keys in the S3 console\. To use a KMS key that is not listed in the console, choose **Custom KMS ARN**, and enter the KMS key ARN\. -When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 only supports symmetric encryption KMS keys or HMAC keys\. 
For more information about these keys, see [Special\-purpose keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-types.html) in the *AWS Key Management Service Developer Guide*\. +When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 only supports symmetric encryption KMS keys\. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*\. For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*\. For more information about using AWS KMS with Amazon S3, see [Using server\-side encryption with AWS Key Management Service \(SSE\-KMS\)](UsingKMSEncryption.md)\. diff --git a/doc_source/default-encryption-faq.md b/doc_source/default-encryption-faq.md new file mode 100644 index 0000000..63a2d68 --- /dev/null +++ b/doc_source/default-encryption-faq.md @@ -0,0 +1,41 @@ +# Amazon S3 now automatically encrypts all new objects + +Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. SSE\-S3, which uses 256\-bit Advanced Encryption Standard \(AES\-256\), is automatically applied to all new buckets and to any existing S3 bucket that doesn't already have default encryption configured\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface \(AWS CLI\) and the AWS SDKs\. This process is currently occurring across all AWS Regions\. When this update is complete in all Regions, we will update the documentation\. + +The following sections answer questions about this update\. + +**How can I view the automatic encryption status today?** +Starting January 5, 2023, the automatic encryption status for S3 bucket default encryption configuration and all new object uploads is visible in AWS CloudTrail logs across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. Over the next few weeks, we will roll out this automatic encryption status to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and Amazon S3 API responses in the AWS CLI and AWS SDKs in all Regions\. + +**How will I know that this update is available in the Amazon S3 console, S3 Inventory, S3 Storage Lens, and Amazon S3 API responses in the AWS CLI and AWS SDKs?** +After this update is fully rolled out, we will update the Amazon S3 documentation to indicate that this change is now available in the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS CLI and the AWS SDKs\. + +**Will Amazon S3 change the default encryption settings for my existing buckets that already have default encryption configured?** +No\. 
There will be no changes to the default encryption configuration for an existing bucket that already has SSE\-S3 or server\-side encryption with AWS Key Management Service \(AWS KMS\) keys \(SSE\-KMS\) configured\. For more information about how to set the default encryption behavior for buckets, see [Setting default server\-side encryption behavior for Amazon S3 buckets](bucket-encryption.md)\. For more information about SSE\-S3 and SSE\-KMS encryption settings, see [Protecting data using server\-side encryption](serv-side-encryption.md)\. + +**Is Amazon S3 encrypting my existing buckets that are unencrypted?** +Yes\. Amazon S3 now configures default encryption on all existing unencrypted buckets to apply server\-side encryption with S3 managed keys \(SSE\-S3\) as the base level of encryption\. This change in encryption status will be rolled out over the next few weeks\. + +**How can I view the default encryption status of new object uploads?** +Currently, you can view the default encryption status of new object uploads in AWS CloudTrail logs\. To view your CloudTrail events, see [Viewing CloudTrail events in the CloudTrail console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html) in the *AWS CloudTrail User Guide*\. CloudTrail logs provide API tracking for `PUT` and `POST` requests to Amazon S3\. When default encryption is being used to encrypt objects in your buckets, the CloudTrail logs for `PUT` and `POST` API requests will include the following field as the name\-value pair: `"SSEApplied":"Default_SSE_S3"`\. Over the next few weeks, the automatic encryption status will also be rolled out to the bucket level encryption status in the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS CLI and the AWS SDKs\. + +**What do I have to do to take advantage of this change?** +You are not required to make any changes to your existing applications\. Because default encryption is enabled for all of your buckets, all new objects uploaded to Amazon S3 are automatically encrypted\. + +**Can I disable encryption for the new objects being written to my bucket?** +No\. SSE\-S3 is the new base level of encryption that's applied to all the new objects being uploaded to your bucket\. You can no longer disable encryption for new object uploads\. + +**Will my charges be affected?** +No\. Default encryption with SSE\-S3 is available at no additional cost\. You will be billed for storage, requests, and other S3 features, as usual\. For pricing, see [Amazon S3 pricing](http://aws.amazon.com/s3/pricing/)\. + +**Will Amazon S3 encrypt my existing objects that are unencrypted?** +No\. Beginning on January 5, 2023, Amazon S3 only automatically encrypts new object uploads\. To encrypt existing objects, you can use S3 Batch Operations to create encrypted copies of your objects\. These encrypted copies will retain the existing object data and name and will be encrypted by using the encryption keys that you specify\. For more details, see [Encrypting objects with Amazon S3 Batch Operations](http://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/) in the *AWS Storage Blog*\. + +**I did not enable encryption for my buckets before this release\. Do I need to change the way that I access objects?** +No\. Default encryption with SSE\-S3 automatically encrypts your data as it's written to Amazon S3 and decrypts it for you when you access it\. 
There is no change in the way that you access objects that are automatically encrypted\. + +**Do I need to change the way that I access my client\-side encrypted objects?** +No\. All client\-side encrypted objects that are encrypted before being uploaded into Amazon S3 arrive as encrypted ciphertext objects within Amazon S3\. These objects will now have an additional layer of SSE\-S3 encryption\. Your workloads that use client\-side encrypted objects will not require any changes to your client services or authorization settings\. + +**Note** +HashiCorp Terraform users who aren't using an updated version of the AWS Provider might see an unexpected drift after creating new S3 buckets with no customer\-defined encryption configuration\. To avoid this drift with the new automatic default encryption for your existing unencrypted and new S3 buckets, update your Terraform AWS Provider version to one of the following versions: 4\.49\.0, 3\.76\.1, or 2\.70\.2\. \ No newline at end of file diff --git a/doc_source/example-bucket-policies.md b/doc_source/example-bucket-policies.md index 1741e6a..cae6041 100644 --- a/doc_source/example-bucket-policies.md +++ b/doc_source/example-bucket-policies.md @@ -1,107 +1,337 @@ # Bucket policy examples -This section presents a few examples of typical use cases for bucket policies\. The policies use *DOC\-EXAMPLE\-BUCKET* strings in the resource value\. To test these policies, replace these strings with your bucket name\. For information about bucket policies, see [Using bucket policies](bucket-policies.md)\. For more information about policy language, see [Policies and Permissions in Amazon S3](access-policy-language-overview.md)\. +With Amazon S3 bucket policies, you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them\. You can even prevent authenticated users without the appropriate permissions from accessing your Amazon S3 resources\. -A bucket policy is a resource\-based policy that you can use to grant access permissions to your bucket and the objects in it\. Only the bucket owner can associate a policy with a bucket\. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner\. These permissions do not apply to objects owned by other AWS accounts\. +This section presents examples of typical use cases for bucket policies\. These sample policies use `DOC-EXAMPLE-BUCKET` as the resource value\. To test these policies, replace the `user input placeholders` with your own information \(such as your bucket name\)\. -By default, when another AWS account uploads an object to your S3 bucket, that account \(the object writer\) owns the object, has access to it, and can grant other users access to it through access control lists \(ACLs\)\. You can use Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner, automatically own every object in your bucket\. As a result, access control for your data is based on policies, such as IAM policies, S3 bucket policies, virtual private cloud \(VPC\) endpoint policies, and AWS Organizations service control policies \(SCPs\)\. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md)\. +To grant or deny permissions to a set of objects, you can use wildcard characters \(`*`\) in Amazon Resource Names \(ARNs\) and other values\.
For example, you can control access to groups of objects that begin with a common [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) or end with a given extension, such as `.html`\. -For more information about bucket policies, see [Using bucket policies](bucket-policies.md)\. +For information about bucket policies, see [Using bucket policies](bucket-policies.md)\. For more information about AWS Identity and Access Management \(IAM\) policy language, see [Policies and Permissions in Amazon S3](access-policy-language-overview.md)\. **Note** -Bucket policies are limited to 20 KB in size\. +When testing permissions by using the Amazon S3 console, you must grant additional permissions that the console requires—`s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket`\. For an example walkthrough that grants permissions to users and tests those permissions by using the console, see [Controlling access to a bucket with user policies](walkthrough1.md)\. -You can use the [AWS Policy Generator](http://aws.amazon.com/blogs/aws/aws-policy-generator/) to create a bucket policy for your Amazon S3 bucket\. You can then use the generated document to set your bucket policy by using the [Amazon S3 console](https://console.aws.amazon.com/s3/home), through several third\-party tools, or through your application\. +**Topics** ++ [Requiring encryption](#example-bucket-policies-encryption) ++ [Managing buckets using canned ACLs](#example-bucket-policies-public-access) ++ [Managing object access with object tagging](#example-bucket-policies-object-tags) ++ [Managing object access by using global condition keys](#example-bucket-policies-global-condition-keys) ++ [Managing access based on specific IP addresses](#example-bucket-policies-IP) ++ [Managing access based on HTTP or HTTPS requests](#example-bucket-policies-HTTP-HTTPS) ++ [Managing user access to specific folders](#example-bucket-policies-folders) ++ [Managing access for access logs](#example-bucket-policies-access-logs) ++ [Managing access to an Amazon CloudFront OAI](#example-bucket-policies-cloudfront) ++ [Managing access for Amazon S3 Storage Lens](#example-bucket-policies-lens) ++ [Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports](#example-bucket-policies-s3-inventory) ++ [Requiring MFA](#example-bucket-policies-MFA) -**Important** -When testing permissions by using the Amazon S3 console, you must grant additional permissions that the console requires—`s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket`\. For an example walkthrough that grants permissions to users and tests them by using the console, see [Controlling access to a bucket with user policies](walkthrough1.md)\. 
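+As a quick illustration of the wildcard syntax described earlier \(this sketch is not one of the examples listed above, and the `docs/` prefix, the principal, and the `Sid` are placeholders\), the following statement grants a user read access only to objects under the `docs/` prefix that end with the `.html` extension: + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AllowReadHtmlUnderDocsPrefix", + "Effect": "Allow", + "Principal": {"AWS": "arn:aws:iam::111122223333:user/JohnDoe"}, + "Action": "s3:GetObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/docs/*.html" + } + ] +} +``` +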
+## Requiring encryption -**Topics** -+ [Granting permissions to multiple accounts with added conditions](#example-bucket-policies-use-case-1) -+ [Granting read\-only permission to an anonymous user](#example-bucket-policies-use-case-2) -+ [Limiting access to specific IP addresses](#example-bucket-policies-use-case-3) -+ [Restricting access to a specific HTTP referer](#example-bucket-policies-use-case-4) -+ [Granting permission to an Amazon CloudFront OAI](#example-bucket-policies-cloudfront) -+ [Adding a bucket policy to require MFA](#example-bucket-policies-use-case-7) -+ [Granting cross\-account permissions to upload objects while ensuring the bucket owner has full control](#example-bucket-policies-use-case-8) -+ [Granting permissions for Amazon S3 Inventory and Amazon S3 analytics](#example-bucket-policies-use-case-9) -+ [Restricting access to an Amazon S3 Inventory report](#example-bucket-policies-use-case-10) -+ [Granting permissions for Amazon S3 Storage Lens](#example-bucket-policies-lens) +### Require SSE\-KMS for all objects written to a bucket -## Granting permissions to multiple accounts with added conditions +The following example policy requires every object that is written to the bucket to be encrypted with server\-side encryption using AWS Key Management Service \(AWS KMS\) keys \(SSE\-KMS\)\. If the object isn't encrypted with SSE\-KMS, the request will be denied\. -The following example policy grants the `s3:PutObject` and `s3:PutObjectAcl` permissions to multiple AWS accounts and requires that any request for these operations include the `public-read` canned access control list \(ACL\)\. For more information, see [Amazon S3 actions](using-with-s3-actions.md) and [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\. +``` +{ +"Version": "2012-10-17", +"Id": "PutObjPolicy", +"Statement": [{ + "Sid": "DenyObjectsThatAreNotSSEKMS", + "Principal": "*", + "Effect": "Deny", + "Action": "s3:PutObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", + "Condition": { + "Null": { + "s3:x-amz-server-side-encryption-aws-kms-key-id": "true" + } + } +}] +} +``` -**Warning** -Use caution when granting anonymous access to your Amazon S3 bucket or disabling block public access settings\. When you grant anonymous access, anyone in the world can access your bucket\. We recommend that you never grant anonymous access to your Amazon S3 bucket unless you specifically need to, such as with [static website hosting](WebsiteHosting.md)\. +### Require SSE\-KMS with a specific AWS KMS key for all objects written to a bucket + +The following example policy denies any objects from being written to the bucket if they aren’t encrypted with SSE\-KMS by using a specific KMS key ID\. Even if the objects are encrypted with SSE\-KMS by using a per\-request header or bucket default encryption, the objects cannot be written to the bucket if they haven't been encrypted with the specified KMS key\. Make sure to replace the KMS key ARN that's used in this example with your own KMS key ARN\. ``` - 1. { - 2. "Version": "2012-10-17", - 3. "Statement": [ - 4. { - 5. "Sid": "AddCannedAcl", - 6. "Effect": "Allow", - 7. "Principal": { - 8. "AWS": [ - 9. "arn:aws:iam::111122223333:root", -10. "arn:aws:iam::444455556666:root" -11. ] -12. }, -13. "Action": [ -14. "s3:PutObject", -15. "s3:PutObjectAcl" -16. ], -17. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", -18. "Condition": { -19. "StringEquals": { -20. "s3:x-amz-acl": [ -21. "public-read" -22. ] -23. } -24. } -25. } -26. ] -27. 
} -``` - -## Granting read\-only permission to an anonymous user - -The following example policy grants the `s3:GetObject` permission to any public anonymous users\. \(For a list of permissions and the operations that they allow, see [Amazon S3 actions](using-with-s3-actions.md)\.\) This permission allows anyone to read the object data, which is useful if you configure your bucket as a website and want everyone to be able to read objects in the bucket\. Before you use a bucket policy to grant read\-only permission to an anonymous user, you must disable block public access settings for your bucket\. For more information, see [Setting permissions for website access](WebsiteAccessPermissionsReqd.md)\. +{ +"Version": "2012-10-17", +"Id": "PutObjPolicy", +"Statement": [{ + "Sid": "DenyObjectsThatAreNotSSEKMSWithSpecificKey", + "Principal": "*", + "Effect": "Deny", + "Action": "s3:PutObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", + "Condition": { + "ArnNotEqualsIfExists": { + "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-2:111122223333:key/01234567-89ab-cdef-0123-456789abcdef" + } + } +}] +} +``` + +## Managing buckets using canned ACLs + +### Granting permissions to multiple accounts to upload objects or set object ACLs for public access + +The following example policy grants the `s3:PutObject` and `s3:PutObjectAcl` permissions to multiple AWS accounts and requires that any requests for these operations must include the `public-read` canned access control list \(ACL\)\. For more information, see [Amazon S3 actions](using-with-s3-actions.md) and [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\. **Warning** -Use caution when granting anonymous access to your Amazon S3 bucket or disabling block public access settings\. When you grant anonymous access, anyone in the world can access your bucket\. We recommend that you never grant anonymous access to your Amazon S3 bucket unless you specifically need to, such as with [static website hosting](WebsiteHosting.md)\. +The `public-read` canned ACL allows anyone in the world to view the objects in your bucket\. Use caution when granting anonymous access to your Amazon S3 bucket or disabling block public access settings\. When you grant anonymous access, anyone in the world can access your bucket\. We recommend that you never grant anonymous access to your Amazon S3 bucket unless you specifically need to, such as with [static website hosting](WebsiteHosting.md)\. If you want to enable block public access settings for static website hosting, see [Tutorial: Configuring a static website on Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html)\. + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AddPublicReadCannedAcl", + "Effect": "Allow", + "Principal": { + "AWS": [ + "arn:aws:iam::111122223333:root", + "arn:aws:iam::444455556666:root" + ] + }, + "Action": [ + "s3:PutObject", + "s3:PutObjectAcl" + ], + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", + "Condition": { + "StringEquals": { + "s3:x-amz-acl": [ + "public-read" + ] + } + } + } + ] +} +``` + +### Grant cross\-account permissions to upload objects while ensuring that the bucket owner has full control + +The following example shows how to allow another AWS account to upload objects to your bucket while ensuring that you have full control of the uploaded objects\. 
This policy grants a specific AWS account \(*`111122223333`*\) the ability to upload objects only if that account includes the `bucket-owner-full-control` canned ACL on upload\. The `StringEquals` condition in the policy specifies the `s3:x-amz-acl` condition key to express the canned ACL requirement\. For more information, see [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\. ``` 1. { - 2. "Version": "2012-10-17", - 3. "Statement": [ - 4. { - 5. "Sid": "PublicRead", - 6. "Effect": "Allow", - 7. "Principal": "*", - 8. "Action": [ - 9. "s3:GetObject", -10. "s3:GetObjectVersion" -11. ], -12. "Resource": [ -13. "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" -14. ] -15. } -16. ] -17. } + 2. "Version":"2012-10-17", + 3. "Statement":[ + 4. { + 5. "Sid":"PolicyForAllowUploadWithACL", + 6. "Effect":"Allow", + 7. "Principal":{"AWS":"111122223333"}, + 8. "Action":"s3:PutObject", + 9. "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", +10. "Condition": { +11. "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"} +12. } +13. } +14. ] +15. } ``` -## Limiting access to specific IP addresses +## Managing object access with object tagging -The following example denies permissions to any user to perform any Amazon S3 operations on objects in the specified S3 bucket unless the request originates from the range of IP addresses that are specified in the condition\. +### Allow a user to read only objects that have a specific tag key and value -This statement identifies *`54.240.143.0/24`* as the range of allowed Internet Protocol version 4 \(IPv4\) IP addresses\. +The following permissions policy limits a user to only reading objects that have the `environment: production` tag key and value\. This policy uses the `s3:ExistingObjectTag` condition key to specify the tag key and value\. -The `Condition` block uses the `NotIpAddress` condition and the `aws:SourceIp` condition key, which is an AWS wide condition key\. For more information about these condition keys, see [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\. The `aws:SourceIp` IPv4 values use standard CIDR notation\. For more information, see [IAM JSON Policy Elements Reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Conditions_IPAddress) in the *IAM User Guide*\. +``` +{ + "Version": "2012-10-17", + "Statement": [ + {"Principal":{"AWS":[ + "arn:aws:iam::111122223333:role/JohnDoe" + ] + }, + "Effect": "Allow", + "Action": ["s3:GetObject","s3:GetObjectVersion"], + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", + "Condition": { "StringEquals": {"s3:ExistingObjectTag/environment": "production" } } + } + ] +} +``` + +### Restrict which object tag keys users can add + +The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object\. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the allowed tag keys, such as `Owner` or `CreationDate`\. For more information, see [Creating a condition that tests multiple key values](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*\. + +With the `ForAnyValue` qualifier in the condition, the request is allowed if at least one of its tag keys matches one of the authorized keys\. To require instead that every tag key in the request is an authorized key, use the `ForAllValues` qualifier\.
+ +``` +{ + "Version": "2012-10-17", + "Statement": [ + {"Principal":{"AWS":[ + "arn:aws:iam::111122223333:role/JohnDoe" + ] + }, + "Effect": "Allow", + "Action": [ + "s3:PutObjectTagging" + ], + "Resource": [ + "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" + ], + "Condition": {"ForAnyValue:StringEquals": {"s3:RequestObjectTagKeys": [ + "Owner", + "CreationDate" + ] + } + } + } + ] +} +``` + +### Require a specific tag key and value when allowing users to add object tags + +The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object\. The condition requires the user to include a specific tag key \(such as `Project`\) with the value set to `X`\. + +``` +{ + "Version": "2012-10-17", + "Statement": [ + {"Principal":{"AWS":[ + "arn:aws:iam::111122223333:user/JohnDoe" + ] + }, + "Effect": "Allow", + "Action": [ + "s3:PutObjectTagging" + ], + "Resource": [ + "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" + ], + "Condition": {"StringEquals": {"s3:RequestObjectTag/Project": "X" + } + } + } + ] +} +``` + +### Allow a user to only add objects with a specific object tag key and value + +The following example policy grants a user permission to perform the `s3:PutObject` action so that they can add objects to a bucket\. However, the `Condition` statement restricts the tag keys and values that are allowed on the uploaded objects\. In this example, the user can only add objects that have the specific tag key \(`Department`\) with the value set to `Finance` to the bucket\. + +``` +{ + "Version": "2012-10-17", + "Statement": [{ + "Principal":{ + "AWS":[ + "arn:aws:iam::111122223333:user/JohnDoe" + ] + }, + "Effect": "Allow", + "Action": [ + "s3:PutObject" + ], + "Resource": [ + "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" + ], + "Condition": { + "StringEquals": { + "s3:RequestObjectTag/Department": "Finance" + } + } + }] +} +``` + +## Managing object access by using global condition keys + +[Global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) are condition context keys with an `aws` prefix\. AWS services can support global condition keys or service\-specific keys that include the service prefix\. You can use the `Condition` element of a JSON policy to compare the keys in a request with the key values that you specify in your policy\. + +### Restrict access to only Amazon S3 server access log deliveries + +In the following example bucket policy, the [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) global condition key is used to compare the [Amazon Resource Name \(ARN\)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) of the resource making a service\-to\-service request with the ARN that is specified in the policy\. The `aws:SourceArn` global condition key is used to prevent the Amazon S3 service from being used as a [confused deputy](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) during transactions between services\. Only the Amazon S3 service is allowed to add objects to the Amazon S3 bucket\. + +This example bucket policy grants `s3:PutObject` permissions to only the logging service principal \(`logging.s3.amazonaws.com`\)\.
+ +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AllowPutObjectS3ServerAccessLogsPolicy", + "Principal": { + "Service": "logging.s3.amazonaws.com" + }, + "Effect": "Allow", + "Action": "s3:PutObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET-logs/*", + "Condition": { + "StringEquals": { + "aws:SourceAccount": "111111111111" + }, + "ArnLike": { + "aws:SourceArn": "arn:aws:s3:::EXAMPLE-SOURCE-BUCKET" + } + } + }, + { + "Sid": "RestrictToS3ServerAccessLogs", + "Effect": "Deny", + "Principal": "*", + "Action": "s3:PutObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET-logs/*", + "Condition": { + "ForAllValues:StringNotEquals": { + "aws:PrincipalServiceNamesList": "logging.s3.amazonaws.com" + } + } + } + ] +} +``` + +### Allow access to only your organization + +If you want to require all [IAM principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/intro-structure.html#intro-structure-principal) accessing a resource to be from an AWS account in your organization \(including the AWS Organizations management account\), you can use the `aws:PrincipalOrgID` global condition key\. + +To grant or restrict this type of access, define the `aws:PrincipalOrgID` condition and set the value to your [organization ID](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_details.html) in the bucket policy\. The organization ID is used to control access to the bucket\. When you use the `aws:PrincipalOrgID` condition, the permissions from the bucket policy are also applied to all new accounts that are added to the organization\. + +Here’s an example of a resource\-based bucket policy that you can use to grant specific IAM principals in your organization direct access to your bucket\. By adding the `aws:PrincipalOrgID` global condition key to your bucket policy, the principal account is now required to be in your organization to obtain access to the resource\. Even if you accidentally specify an incorrect account when granting access, the [aws:PrincipalOrgID global condition key](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgid) acts as an additional safeguard\. When this global key is used in a policy, it prevents all principals from outside of the specified organization from accessing the S3 bucket\. Only principals from accounts in the listed organization are able to obtain access to the resource\. + +``` +{ + "Version": "2012-10-17", + "Statement": [{ + "Sid": "AllowGetObject", + "Principal": { + "AWS": "*" + }, + "Effect": "Allow", + "Action": "s3:GetObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", + "Condition": { + "StringEquals": { + "aws:PrincipalOrgID": ["o-aa111bb222"] + } + } + }] +} +``` + +## Managing access based on specific IP addresses + +### Restrict access to specific IP addresses + +The following example denies all users permission to perform any Amazon S3 operations on objects in the specified buckets unless the request originates from the specified range of IP addresses\. + +This policy's `Condition` statement identifies *`192.0.2.0/24`* as the range of allowed Internet Protocol version 4 \(IPv4\) IP addresses\. + +The `Condition` block uses the `NotIpAddress` condition and the `aws:SourceIp` condition key, which is an AWS\-wide condition key\. The `aws:SourceIp` condition key can only be used for public IP address ranges\. For more information about these condition keys, see [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\.
The `aws:SourceIp` IPv4 values use standard CIDR notation\. For more information, see [IAM JSON Policy Elements Reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Conditions_IPAddress) in the *IAM User Guide*\. **Warning** -Before using this policy, replace the *`54.240.143.0/24`* IP address range in this example with an appropriate value for your use case\. Otherwise, you will lose the ability to access your bucket\. +Before using this policy, replace the *`192.0.2.0/24`* IP address range in this example with an appropriate value for your use case\. Otherwise, you will lose the ability to access your bucket\. ``` 1. { @@ -115,11 +345,11 @@ Before using this policy, replace the *`54.240.143.0/24`* IP address range in th 9. "Action": "s3:*", 10. "Resource": [ 11. "arn:aws:s3:::DOC-EXAMPLE-BUCKET", 12. "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" 13. ], 14. "Condition": { 15. "NotIpAddress": { -16. "aws:SourceIp": "54.240.143.0/24" +16. "aws:SourceIp": "192.0.2.0/24" 17. } 18. } 19. } @@ -127,13 +357,13 @@ Before using this policy, replace the *`54.240.143.0/24`* IP address range in th 21. } ``` -### Allowing IPv4 and IPv6 addresses +### Allow both IPv4 and IPv6 addresses -When you start using IPv6 addresses, we recommend that you update all of your organization's policies with your IPv6 address ranges in addition to your existing IPv4 ranges to ensure that the policies continue to work as you make the transition to IPv6\. +When you start using IPv6 addresses, we recommend that you update all of your organization's policies with your IPv6 address ranges in addition to your existing IPv4 ranges\. Doing this will help ensure that the policies continue to work as you make the transition to IPv6\. -The following example bucket policy shows how to mix IPv4 and IPv6 address ranges to cover all of your organization's valid IP addresses\. The example policy allows access to the example IP addresses *`54.240.143.1`* and *`2001:DB8:1234:5678::1`* and denies access to the addresses *`54.240.143.129`* and *`2001:DB8:1234:5678:ABCD::1`*\. +The following example bucket policy shows how to mix IPv4 and IPv6 address ranges to cover all of your organization's valid IP addresses\. The example policy allows access to the example IP addresses *`192.0.2.1`* and *`2001:DB8:1234:5678::1`* and denies access to the addresses *`203.0.113.1`* and *`2001:DB8:1234:5678:ABCD::1`*\. -The IPv6 values for `aws:SourceIp` must be in standard CIDR format\. For IPv6, we support using `::` to represent a range of 0s \(for example, `2032001:DB8:1234:5678::/64`\)\. For more information, see [ IP Address Condition Operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_IPAddress) in the *IAM User Guide*\. +The `aws:SourceIp` condition key can only be used for public IP address ranges\. The IPv6 values for `aws:SourceIp` must be in standard CIDR format\. For IPv6, we support using `::` to represent a range of 0s \(for example, `2001:DB8:1234:5678::/64`\)\. For more information, see [ IP Address Condition Operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_IPAddress) in the *IAM User Guide*\. **Warning** Replace the IP address ranges in this example with appropriate values for your use case before using this policy\. Otherwise, you might lose the ability to access your bucket\.
@@ -150,18 +380,18 @@ Replace the IP address ranges in this example with appropriate values for your u 9. "Action": "s3:*", 10. "Resource": [ 11. "arn:aws:s3:::DOC-EXAMPLE-BUCKET", 12. "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" 13. ], 14. "Condition": { 15. "IpAddress": { 16. "aws:SourceIp": [ -17. "54.240.143.0/24", +17. "192.0.2.0/24", 18. "2001:DB8:1234:5678::/64" 19. ] 20. }, 21. "NotIpAddress": { 22. "aws:SourceIp": [ -23. "54.240.143.128/30", +23. "203.0.113.0/24", 24. "2001:DB8:1234:5678:ABCD::/80" 25. ] 26. } @@ -171,9 +401,42 @@ Replace the IP address ranges in this example with appropriate values for your u 30. } ``` -## Restricting access to a specific HTTP referer +## Managing access based on HTTP or HTTPS requests + +### Restrict access to only HTTPS requests + +If you want to prevent potential attackers from manipulating network traffic, you can use HTTPS \(TLS\) to only allow encrypted connections while restricting HTTP requests from accessing your bucket\. To determine whether the request is HTTP or HTTPS, use the [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-securetransport](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-securetransport) global condition key in your S3 bucket policy\. The `aws:SecureTransport` condition key checks whether a request was sent by using HTTPS\. -Suppose that you have a website with a domain name \(*`www.example.com`* or *`example.com`*\) with links to photos and videos stored in your Amazon S3 bucket, `DOC-EXAMPLE-BUCKET`\. By default, all Amazon S3 resources are private, so only the AWS account that created the resources can access them\. To allow read access to these objects from your website, you can add a bucket policy that allows the `s3:GetObject` permission with a condition, using the `aws:Referer` key, that the `GET` request must originate from specific webpages\. The following policy specifies the `StringLike` condition with the `aws:Referer` condition key\. +If the value of the `aws:SecureTransport` key is `true`, then the request was sent through HTTPS\. If the value is `false`, then the request was sent through HTTP\. You can then allow or deny access to your bucket based on the desired request scheme\. + +In the following example, the bucket policy explicitly denies access to HTTP requests\. + +``` +{ + "Version": "2012-10-17", + "Statement": [{ + "Sid": "RestrictToTLSRequestsOnly", + "Action": "s3:*", + "Effect": "Deny", + "Resource": [ + "arn:aws:s3:::DOC-EXAMPLE-BUCKET", + "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" + ], + "Condition": { + "Bool": { + "aws:SecureTransport": "false" + } + }, + "Principal": "*" + }] +} +``` + +### Restrict access to a specific HTTP referer + +Suppose that you have a website with the domain name *`www.example.com`* or *`example.com`* with links to photos and videos stored in your bucket named `DOC-EXAMPLE-BUCKET`\. By default, all Amazon S3 resources are private, so only the AWS account that created the resources can access them\. + +To allow read access to these objects from your website, you can add a bucket policy that allows the `s3:GetObject` permission with a condition that the `GET` request must originate from specific webpages\. The following policy restricts requests by using the `StringLike` condition with the `aws:Referer` condition key\. ``` 1. { @@ -181,7 +444,7 @@ Suppose that you have a website with a domain name \(*`www.example.com`* or *`ex 3.
"Id":"HTTP referer policy example", 4. "Statement":[ 5. { - 6. "Sid":"Allow GET requests originating from www.example.com and example.com.", + 6. "Sid":"Allow only GET requests originating from www.example.com and example.com.", 7. "Effect":"Allow", 8. "Principal":"*", 9. "Action":["s3:GetObject","s3:GetObjectVersion"], @@ -200,15 +463,129 @@ Make sure that the browsers that you use include the HTTP `referer` header in th We recommend that you use caution when using the `aws:Referer` condition key\. It is dangerous to include a publicly known HTTP referer header value\. Unauthorized parties can use modified or custom browsers to provide any `aws:Referer` value that they choose\. Therefore, do not use `aws:Referer` to prevent unauthorized parties from making direct AWS requests\. The `aws:Referer` condition key is offered only to allow customers to protect their digital content, such as content stored in Amazon S3, from being referenced on unauthorized third\-party sites\. For more information, see [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-referer](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-referer) in the *IAM User Guide*\. -## Granting permission to an Amazon CloudFront OAI +## Managing user access to specific folders + +### Grant users access to specific folders + +Suppose that you're trying to grant users access to a specific folder\. If the IAM user and the S3 bucket belong to the same AWS account, then you can use an IAM policy to grant the user access to a specific bucket folder\. With this approach, you don't need to update your bucket policy to grant access\. You can add the IAM policy to individual IAM users, or you can attach the IAM policy to an IAM role that multiple users can switch to\. + +If the IAM identity and the S3 bucket belong to different AWS accounts, then you must grant cross\-account access in both the IAM policy and the bucket policy\. For more information about granting cross\-account access, see [Bucket owner granting cross\-account bucket permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html)\. -The following example bucket policy grants a CloudFront origin access identity \(OAI\) permission to get \(read\) all objects in your Amazon S3 bucket\. You can use a CloudFront OAI to allow users to access objects in your bucket through CloudFront but not directly through Amazon S3\. For more information, see [Restricting Access to Amazon S3 Content by Using an Origin Access Identity](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*\. +The following example bucket policy grants `JohnDoe` full console access to only his folder \(`home/JohnDoe/`\)\. By creating a `home` folder and granting the appropriate permissions to your users, you can have multiple users share a single bucket\. This policy consists of three `Allow` statements: ++ `AllowRootAndHomeListingOfCompanyBucket`: Allows the user \(`JohnDoe`\) to list objects at the root level of the `DOC-EXAMPLE-BUCKET` bucket and in the `home` folder\. This statement also allows the user to search on the prefix `home/` by using the console\. ++ `AllowListingOfUserFolder`: Allows the user \(`JohnDoe`\) to list all objects in the `home/JohnDoe/` folder and any subfolders\. 
++ `AllowAllS3ActionsInUserFolder`: Allows the user to perform all Amazon S3 actions by granting `Read`, `Write`, and `Delete` permissions\. Permissions are limited to the user's folder \(`home/JohnDoe/`\) in the bucket\. -The following policy uses the OAI’s ID as the policy’s `Principal`\. For more information about using S3 bucket policies to grant access to a CloudFront OAI, see [Using Amazon S3 Bucket Policies](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-updating-s3-bucket-policies) in the *Amazon CloudFront Developer Guide*\. +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AllowRootAndHomeListingOfCompanyBucket", + "Principal": { + "AWS": [ + "arn:aws:iam::111122223333:user/JohnDoe" + ] + }, + "Effect": "Allow", + "Action": ["s3:ListBucket"], + "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"], + "Condition": { + "StringEquals": { + "s3:prefix": ["", "home/", "home/JohnDoe"], + "s3:delimiter": ["/"] + } + } + }, + { + "Sid": "AllowListingOfUserFolder", + "Principal": { + "AWS": [ + "arn:aws:iam::111122223333:user/JohnDoe" + ] + }, + "Action": ["s3:ListBucket"], + "Effect": "Allow", + "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"], + "Condition": { + "StringLike": { + "s3:prefix": ["home/JohnDoe/*"] + } + } + }, + { + "Sid": "AllowAllS3ActionsInUserFolder", + "Effect": "Allow", + "Principal": { + "AWS": [ + "arn:aws:iam::111122223333:user/JohnDoe" + ] + }, + "Action": ["s3:*"], + "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/home/JohnDoe/*"] + } + ] +} +``` + +## Managing access for access logs + +### Grant access to Application Load Balancer for enabling access logs + +When you enable access logs for Application Load Balancer, you must specify the name of the S3 bucket where the load balancer will [store the logs](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/enable-access-logging.html#access-log-create-bucket)\. The bucket must have an [attached policy](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/enable-access-logging.html#attach-bucket-policy) that grants Elastic Load Balancing permission to write to the bucket\. + +In the following example, the bucket policy grants Elastic Load Balancing \(ELB\) permission to write the access logs to the bucket: + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Principal": { + "AWS": "arn:aws:iam::elb-account-id:root" + }, + "Effect": "Allow", + "Action": "s3:PutObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/prefix/AWSLogs/111122223333/*" + } + ] +} +``` + +**Note** +Make sure to replace `elb-account-id` with the AWS account ID for Elastic Load Balancing for your AWS Region\. For the list of Elastic Load Balancing Regions, see [Attach a policy to your Amazon S3 bucket](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy) in the *Elastic Load Balancing User Guide*\. + +If your AWS Region does not appear in the supported Elastic Load Balancing Regions list, use the following policy, which grants permissions to the specified log delivery service\.
+ +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Principal": { + "Service": "logdelivery.elasticloadbalancing.amazonaws.com" + }, + "Effect": "Allow", + "Action": "s3:PutObject", + "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/prefix/AWSLogs/111122223333/*" + } + ] +} +``` + +Then, make sure to enable your [Elastic Load Balancing access logs](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/enable-access-logging.html#enable-access-logs)\. You can [verify your bucket permissions](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/enable-access-logging.html#verify-bucket-permissions) by creating a test file\. + +## Managing access to an Amazon CloudFront OAI + +### Grant permission to an Amazon CloudFront OAI + +The following example bucket policy grants a CloudFront origin access identity \(OAI\) permission to get \(read\) all objects in your S3 bucket\. You can use a CloudFront OAI to allow users to access objects in your bucket through CloudFront but not directly through Amazon S3\. For more information, see [Restricting access to Amazon S3 content by using an Origin Access Identity](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*\. + +The following policy uses the OAI's ID as the policy's `Principal`\. For more information about using S3 bucket policies to grant access to a CloudFront OAI, see [Migrating from origin access identity \(OAI\) to origin access control \(OAC\)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#migrate-from-oai-to-oac) in the *Amazon CloudFront Developer Guide*\. To use this example: -+ Replace `EH1HDMB1FH2TC` with the OAI’s ID\. To find the OAI’s ID, see the [Origin Access Identity page](https://console.aws.amazon.com/cloudfront/home?region=us-east-1#oai:) on the CloudFront console, or use [https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListCloudFrontOriginAccessIdentities.html](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListCloudFrontOriginAccessIdentities.html) in the CloudFront API\. -+ Replace `DOC-EXAMPLE-BUCKET` with the name of your Amazon S3 bucket\. ++ Replace `EH1HDMB1FH2TC` with the OAI's ID\. To find the OAI's ID, see the [Origin Access Identity page](https://console.aws.amazon.com/cloudfront/home?region=us-east-1#oai:) on the CloudFront console, or use [https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListCloudFrontOriginAccessIdentities.html](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListCloudFrontOriginAccessIdentities.html) in the CloudFront API\. ++ Replace `DOC-EXAMPLE-BUCKET` with the name of your bucket\. ``` 1. { @@ -227,120 +604,55 @@ To use this example: 14. } ``` -## Adding a bucket policy to require MFA +## Managing access for Amazon S3 Storage Lens -Amazon S3 supports MFA\-protected API access, a feature that can enforce multi\-factor authentication \(MFA\) for access to your Amazon S3 resources\. Multi\-factor authentication provides an extra level of security that you can apply to your AWS environment\. MFA is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code\. For more information, see [AWS Multi\-Factor Authentication](https://aws.amazon.com/mfa/)\. You can require MFA for any requests to access your Amazon S3 resources\.
- -To enforce the MFA requirement, use the `aws:MultiFactorAuthAge` key in a bucket policy\. AWS Identity and Access Management \(IAM\) users can access Amazon S3 resources by using temporary credentials issued by the AWS Security Token Service \(AWS STS\)\. You provide the MFA code at the time of the AWS STS request\. - -When Amazon S3 receives a request with multi\-factor authentication, the `aws:MultiFactorAuthAge` key provides a numeric value that indicates how long ago \(in seconds\) the temporary credential was created\. If the temporary credential provided in the request was not created by using an MFA device, this key value is null \(absent\)\. In a bucket policy, you can add a condition to check this value, as shown in the following example\. This example policy denies any Amazon S3 operation on the *`/taxdocuments`* folder in the `DOC-EXAMPLE-BUCKET` bucket if the request is not authenticated by using MFA\. To learn more about MFA, see [Using Multi\-Factor Authentication \(MFA\) in AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html) in the *IAM User Guide*\. +### Grant permissions for Amazon S3 Storage Lens -``` - 1. { - 2. "Version": "2012-10-17", - 3. "Id": "123", - 4. "Statement": [ - 5. { - 6. "Sid": "", - 7. "Effect": "Deny", - 8. "Principal": "*", - 9. "Action": "s3:*", -10. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", -11. "Condition": { "Null": { "aws:MultiFactorAuthAge": true }} -12. } -13. ] -14. } -``` +S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page\. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data\-protection best practices\. Your dashboard has drill\-down options to generate insights at the organization, account, bucket, object, or prefix level\. You can also send a once\-daily metrics export in CSV or Parquet format to an S3 bucket\. -The `Null` condition in the `Condition` block evaluates to `true` if the `aws:MultiFactorAuthAge` key value is null, indicating that the temporary security credentials in the request were created without an MFA device\. +S3 Storage Lens can export your aggregated storage usage metrics to an Amazon S3 bucket for further analysis\. The bucket where S3 Storage Lens places its metrics exports is known as the *destination bucket*\. When setting up your S3 Storage Lens metrics export, you must have a bucket policy for the destination bucket\. For more information, see [Assessing your storage activity and usage with Amazon S3 Storage Lens](storage_lens.md)\. -The following bucket policy is an extension of the preceding bucket policy\. It includes two policy statements\. One statement allows the `s3:GetObject` permission on a bucket \(`DOC-EXAMPLE-BUCKET`\) to everyone\. Another statement further restricts access to the `DOC-EXAMPLE-BUCKET/taxdocuments` folder in the bucket by requiring MFA\. +The following example bucket policy grants Amazon S3 permission to write objects \(`PUT` requests\) to a destination bucket\. You use a bucket policy like this on the destination bucket when setting up an S3 Storage Lens metrics export\. ``` 1. { 2. "Version": "2012-10-17", - 3. "Id": "123", - 4. "Statement": [ - 5. { - 6. "Sid": "", - 7. "Effect": "Deny", - 8. "Principal": "*", - 9. "Action": "s3:*", -10. 
"Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", -11. "Condition": { "Null": { "aws:MultiFactorAuthAge": true } } -12. }, -13. { -14. "Sid": "", -15. "Effect": "Allow", -16. "Principal": "*", -17. "Action": ["s3:GetObject"], -18. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" -19. } -20. ] -21. } + 3. "Statement": [ + 4. { + 5. "Sid": "S3StorageLensExamplePolicy", + 6. "Effect": "Allow", + 7. "Principal": { + 8. "Service": "storage-lens.s3.amazonaws.com" + 9. }, +10. "Action": "s3:PutObject", +11. "Resource": [ +12. "arn:aws:s3:::destination-bucket/destination-prefix/StorageLens/111122223333/*" +13. ], +14. "Condition": { +15. "StringEquals": { +16. "s3:x-amz-acl": "bucket-owner-full-control", +17. "aws:SourceAccount": "111122223333", +18. "aws:SourceArn": "arn:aws:s3:region-code:111122223333:storage-lens/storage-lens-dashboard-configuration-id" +19. } +20. } +21. } +22. ] +23. } ``` -You can optionally use a numeric condition to limit the duration for which the `aws:MultiFactorAuthAge` key is valid, independent of the lifetime of the temporary security credential that's used in authenticating the request\. For example, the following bucket policy, in addition to requiring MFA authentication, also checks how long ago the temporary session was created\. The policy denies any operation if the `aws:MultiFactorAuthAge` key value indicates that the temporary session was created more than an hour ago \(3,600 seconds\)\. +When you're setting up an S3 Storage Lens organization\-level metrics export, use the following modification to the previous bucket policy's `Resource` statement\. ``` - 1. { - 2. "Version": "2012-10-17", - 3. "Id": "123", - 4. "Statement": [ - 5. { - 6. "Sid": "", - 7. "Effect": "Deny", - 8. "Principal": "*", - 9. "Action": "s3:*", -10. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", -11. "Condition": {"Null": {"aws:MultiFactorAuthAge": true }} -12. }, -13. { -14. "Sid": "", -15. "Effect": "Deny", -16. "Principal": "*", -17. "Action": "s3:*", -18. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", -19. "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 3600 }} -20. }, -21. { -22. "Sid": "", -23. "Effect": "Allow", -24. "Principal": "*", -25. "Action": ["s3:GetObject"], -26. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" -27. } -28. ] -29. } +1. "Resource": "arn:aws:s3:::destination-bucket/destination-prefix/StorageLens/your-organization-id/*", ``` -## Granting cross\-account permissions to upload objects while ensuring the bucket owner has full control +## Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports -The following example shows how to allow another AWS account to upload objects to your bucket while ensuring that you have full control of the uploaded objects\. This policy grants a specific AWS account \(*`111122223333`*\) the ability to upload objects only if that account includes the `bucket-owner-full-control` canned ACL on upload\. The `StringEquals` condition in the policy specifies the `s3:x-amz-acl condition` key to express the requirement \(see [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\)\. +### Grant permissions for S3 Inventory and S3 analytics -``` - 1. { - 2. "Version":"2012-10-17", - 3. "Statement":[ - 4. { - 5. "Sid":"PolicyForAllowUploadWithACL", - 6. "Effect":"Allow", - 7. "Principal":{"AWS":"111122223333"}, - 8. "Action":"s3:PutObject", - 9. "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", -10. "Condition": { -11. 
"StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"} -12. } -13. } -14. ] -15. } -``` - -## Granting permissions for Amazon S3 Inventory and Amazon S3 analytics - -Amazon S3 Inventory creates lists of the objects in an Amazon S3 bucket, and Amazon S3 analytics export creates output files of the data used in the analysis\. The bucket that the inventory lists the objects for is called the *source bucket*\. The bucket where the inventory file or the analytics export file is written to is called a *destination bucket*\. When setting up an inventory or an analytics export, you must create a bucket policy for the destination bucket\. For more information, see [Amazon S3 Inventory](storage-inventory.md) and [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md)\. +S3 Inventory creates lists of the objects in a bucket, and S3 analytics Storage Class Analysis export creates output files of the data used in the analysis\. The bucket that the inventory lists the objects for is called the *source bucket*\. The bucket where the inventory file or the analytics export file is written to is called a *destination bucket*\. When setting up an inventory or an analytics export, you must create a bucket policy for the destination bucket\. For more information, see [Amazon S3 Inventory](storage-inventory.md) and [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md)\. -The following example bucket policy grants Amazon S3 permission to write objects \(`PUT` requests\) from the account for the source bucket to the destination bucket\. You use a bucket policy like this on the destination bucket when setting up Amazon S3 Inventory and Amazon S3 analytics export\. +The following example bucket policy grants Amazon S3 permission to write objects \(`PUT` requests\) from the account for the source bucket to the destination bucket\. You use a bucket policy like this on the destination bucket when setting up S3 Inventory and S3 analytics export\. ``` 1. { @@ -354,11 +666,11 @@ The following example bucket policy grants Amazon S3 permission to write objects 9. }, 10. "Action": "s3:PutObject", 11. "Resource": [ -12. "arn:aws:s3:::destinationbucket/*" +12. "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET/*" 13. ], 14. "Condition": { 15. "ArnLike": { -16. "aws:SourceArn": "arn:aws:s3:::sourcebucket" +16. "aws:SourceArn": "arn:aws:s3:::DOC-EXAMPLE-SOURCE-BUCKET" 17. }, 18. "StringEquals": { 19. "aws:SourceAccount": "111122223333", @@ -370,13 +682,13 @@ The following example bucket policy grants Amazon S3 permission to write objects 25. } ``` -## Restricting access to an Amazon S3 Inventory report +### Restrict access to an S3 Inventory report [Amazon S3 Inventory](storage-inventory.md) creates lists of the objects in an S3 bucket and the metadata for each object\. The `s3:PutInventoryConfiguration` permission allows a user to create an inventory report that includes all object metadata fields that are available and to specify the destination bucket to store the inventory\. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory report\. For more information about the metadata fields that are available in S3 Inventory, see [Amazon S3 Inventory list](storage-inventory.md#storage-inventory-contents)\. To restrict a user from configuring an S3 Inventory report of all object metadata available, remove the `s3:PutInventoryConfiguration` permission from the user\. 
-To restrict a user from accessing your S3 Inventory report in a destination bucket, create a bucket policy like the following example on the destination bucket\. This example bucket policy denies all the principals except the user *Ana* from accessing the inventory report *DOC\-EXAMPLE\-DESTINATION\-BUCKET\-INVENTORY* in the destination bucket *DOC\-EXAMPLE\-DESTINATION\-BUCKET*\. +To restrict a user from accessing your S3 Inventory report in a destination bucket, add a bucket policy like the following example to the destination bucket\. This example bucket policy denies all the principals except the user `Ana` from accessing the inventory report `DOC-EXAMPLE-DESTINATION-BUCKET-INVENTORY` in the destination bucket `DOC-EXAMPLE-DESTINATION-BUCKET`\. ``` 1. { @@ -431,42 +743,93 @@ To restrict a user from accessing your S3 Inventory report in a destination buck 50. } ``` -## Granting permissions for Amazon S3 Storage Lens +## Requiring MFA -S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page\. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data\-protection best practices\. Your dashboard has drill\-down options to generate insights at the organization, account, bucket, object, or prefix level\. You can also send a once\-daily metrics export in CSV or Parquet format to an S3 bucket\. +Amazon S3 supports MFA\-protected API access, a feature that can enforce multi\-factor authentication \(MFA\) for access to your Amazon S3 resources\. Multi\-factor authentication provides an extra level of security that you can apply to your AWS environment\. MFA is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code\. For more information, see [AWS Multi\-Factor Authentication](https://aws.amazon.com/mfa/)\. You can require MFA for any requests to access your Amazon S3 resources\. -S3 Storage Lens can aggregate your storage usage to metrics exports in an Amazon S3 bucket for further analysis\. The bucket where S3 Storage Lens places its metrics exports is known as the *destination bucket*\. When setting up your S3 Storage Lens metrics export, you must have a bucket policy for the destination bucket\. For more information, see [Assessing your storage activity and usage with Amazon S3 Storage Lens](storage_lens.md)\. +To enforce the MFA requirement, use the `aws:MultiFactorAuthAge` condition key in a bucket policy\. IAM users can access Amazon S3 resources by using temporary credentials issued by the AWS Security Token Service \(AWS STS\)\. You provide the MFA code at the time of the AWS STS request\. -The following example bucket policy grants Amazon S3 permission to write objects \(`PUT` requests\) to a destination bucket\. You use a bucket policy like this on the destination bucket when setting up an S3 Storage Lens metrics export\. +When Amazon S3 receives a request with multi\-factor authentication, the `aws:MultiFactorAuthAge` condition key provides a numeric value that indicates how long ago \(in seconds\) the temporary credential was created\. If the temporary credential provided in the request was not created by using an MFA device, this key value is null \(absent\)\. In a bucket policy, you can add a condition to check this value, as shown in the following example\. 
+ +This example policy denies any Amazon S3 operation on the *`/taxdocuments`* folder in the `DOC-EXAMPLE-BUCKET` bucket if the request is not authenticated by using MFA\. To learn more about MFA, see [Using Multi\-Factor Authentication \(MFA\) in AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html) in the *IAM User Guide*\. ``` 1. { 2. "Version": "2012-10-17", - 3. "Statement": [ - 4. { - 5. "Sid": "S3StorageLensExamplePolicy", - 6. "Effect": "Allow", - 7. "Principal": { - 8. "Service": "storage-lens.s3.amazonaws.com" - 9. }, -10. "Action": "s3:PutObject", -11. "Resource": [ -12. "arn:aws:s3:::destination-bucket/destination-prefix/StorageLens/111122223333/*" -13. ], -14. "Condition": { -15. "StringEquals": { -16. "s3:x-amz-acl": "bucket-owner-full-control", -17. "aws:SourceAccount": "111122223333", -18. "aws:SourceArn": "arn:aws:s3:AWS Region:111122223333:storage-lens/storage-lens-dashboard-configuration-id" -19. } -20. } -21. } -22. ] -23. } + 3. "Id": "123", + 4. "Statement": [ + 5. { + 6. "Sid": "", + 7. "Effect": "Deny", + 8. "Principal": "*", + 9. "Action": "s3:*", +10. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", +11. "Condition": { "Null": { "aws:MultiFactorAuthAge": true }} +12. } +13. ] +14. } ``` -When setting up an S3 Storage Lens organization\-level metrics export, use the following modification to the previous bucket policy's `Resource` statement\. +The `Null` condition in the `Condition` block evaluates to `true` if the `aws:MultiFactorAuthAge` condition key value is null, indicating that the temporary security credentials in the request were created without an MFA device\. + +The following bucket policy is an extension of the preceding bucket policy\. It includes two policy statements\. One statement allows the `s3:GetObject` permission on a bucket \(`DOC-EXAMPLE-BUCKET`\) to everyone\. Another statement further restricts access to the `DOC-EXAMPLE-BUCKET/taxdocuments` folder in the bucket by requiring MFA\. ``` -1. "Resource": "arn:aws:s3:::destination-bucket/destination-prefix/StorageLens/your-organization-id/*", + 1. { + 2. "Version": "2012-10-17", + 3. "Id": "123", + 4. "Statement": [ + 5. { + 6. "Sid": "", + 7. "Effect": "Deny", + 8. "Principal": "*", + 9. "Action": "s3:*", +10. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", +11. "Condition": { "Null": { "aws:MultiFactorAuthAge": true } } +12. }, +13. { +14. "Sid": "", +15. "Effect": "Allow", +16. "Principal": "*", +17. "Action": ["s3:GetObject"], +18. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" +19. } +20. ] +21. } +``` + +You can optionally use a numeric condition to limit the duration for which the `aws:MultiFactorAuthAge` key is valid\. The duration that you specify with the `aws:MultiFactorAuthAge` key is independent of the lifetime of the temporary security credential that's used in authenticating the request\. + +For example, the following bucket policy, in addition to requiring MFA authentication, also checks how long ago the temporary session was created\. The policy denies any operation if the `aws:MultiFactorAuthAge` key value indicates that the temporary session was created more than an hour ago \(3,600 seconds\)\. + +``` + 1. { + 2. "Version": "2012-10-17", + 3. "Id": "123", + 4. "Statement": [ + 5. { + 6. "Sid": "", + 7. "Effect": "Deny", + 8. "Principal": "*", + 9. "Action": "s3:*", +10. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", +11. "Condition": {"Null": {"aws:MultiFactorAuthAge": true }} +12. }, +13. { +14. 
"Sid": "", +15. "Effect": "Deny", +16. "Principal": "*", +17. "Action": "s3:*", +18. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/taxdocuments/*", +19. "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 3600 }} +20. }, +21. { +22. "Sid": "", +23. "Effect": "Allow", +24. "Principal": "*", +25. "Action": ["s3:GetObject"], +26. "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" +27. } +28. ] +29. } ``` \ No newline at end of file diff --git a/doc_source/example-walkthroughs-managing-access-example2.md b/doc_source/example-walkthroughs-managing-access-example2.md index 0c85aa5..6db71bb 100644 --- a/doc_source/example-walkthroughs-managing-access-example2.md +++ b/doc_source/example-walkthroughs-managing-access-example2.md @@ -109,7 +109,7 @@ The bucket policy grants the `s3:GetLifecycleConfiguration` and `s3:ListBucket` 1. Attach the following bucket policy to `DOC-EXAMPLE-BUCKET`\. The policy grants Account B permission for the `s3:GetLifecycleConfiguration` and `s3:ListBucket` actions\. - For instructions, see [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)\. + For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)\. ``` { diff --git a/doc_source/example-walkthroughs-managing-access-example3.md b/doc_source/example-walkthroughs-managing-access-example3.md index 0e620c8..de711a3 100644 --- a/doc_source/example-walkthroughs-managing-access-example3.md +++ b/doc_source/example-walkthroughs-managing-access-example3.md @@ -84,7 +84,7 @@ Using the IAM user sign\-in URL for Account A, sign in to the AWS Management Con 1. Note down the Dave credentials\. -1. In the Amazon S3 console, attach the following bucket policy to `DOC-EXAMPLE-BUCKET1` bucket\. For instructions, see [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)\. Follow the steps to add a bucket policy\. For information about how to find account IDs, see [Finding your AWS account ID](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html#FindingYourAccountIdentifiers)\. +1. In the Amazon S3 console, attach the following bucket policy to `DOC-EXAMPLE-BUCKET1` bucket\. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)\. Follow the steps to add a bucket policy\. For information about how to find account IDs, see [Finding your AWS account ID](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html#FindingYourAccountIdentifiers)\. The policy grants Account B the `s3:PutObject` and `s3:ListBucket` permissions\. The policy also grants user Dave the `s3:GetObject` permission\. diff --git a/doc_source/example_s3_CopyObject_section.md b/doc_source/example_s3_CopyObject_section.md index 04dcfaa..b818f8c 100644 --- a/doc_source/example_s3_CopyObject_section.md +++ b/doc_source/example_s3_CopyObject_section.md @@ -165,7 +165,7 @@ func (basics BucketBasics) CopyToFolder(bucketName string, objectKey string, fol **SDK for Java 2\.x** There's more on GitHub\. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3#readme)\. - +Copy an object using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\. 
``` public static String copyBucketObject (S3Client s3, String fromBucket, String objectKey, String toBucket) { @@ -195,6 +195,39 @@ func (basics BucketBasics) CopyToFolder(bucketName string, objectKey string, fol return ""; } ``` +Use an [S3TransferManager](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html) to [copy an object](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#copy(software.amazon.awssdk.transfer.s3.CopyRequest)) from one bucket to another\. View the [complete file](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/ObjectCopy.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/test/java/TransferManagerTest.java)\. + +``` +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.core.sync.RequestBody; +import software.amazon.awssdk.services.s3.model.CopyObjectRequest; +import software.amazon.awssdk.transfer.s3.S3TransferManager; +import software.amazon.awssdk.transfer.s3.model.CompletedCopy; +import software.amazon.awssdk.transfer.s3.model.Copy; +import software.amazon.awssdk.transfer.s3.model.CopyRequest; + +import java.util.UUID; + + public String copyObject(S3TransferManager transferManager, String bucketName, + String key, String destinationBucket, String destinationKey){ + CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder() + .sourceBucket(bucketName) + .sourceKey(key) + .destinationBucket(destinationBucket) + .destinationKey(destinationKey) + .build(); + + CopyRequest copyRequest = CopyRequest.builder() + .copyObjectRequest(copyObjectRequest) + .build(); + + Copy copy = transferManager.copy(copyRequest); + + CompletedCopy completedCopy = copy.completionFuture().join(); + return completedCopy.response().copyObjectResult().eTag(); + } +``` + For API details, see [CopyObject](https://docs.aws.amazon.com/goto/SdkForJavaV2/s3-2006-03-01/CopyObject) in *AWS SDK for Java 2\.x API Reference*\. ------ diff --git a/doc_source/example_s3_DeleteObjects_section.md b/doc_source/example_s3_DeleteObjects_section.md index c7a69e1..171e2d4 100644 --- a/doc_source/example_s3_DeleteObjects_section.md +++ b/doc_source/example_s3_DeleteObjects_section.md @@ -848,7 +848,20 @@ This is prerelease documentation for an SDK in preview release\. It is subject t ) do { - _ = try await client.deleteObjects(input: input) + let output = try await client.deleteObjects(input: input) + + // As of the last update to this example, any errors are returned + // in the `output` object's `errors` property. If there are any + // errors in this array, throw an exception. Once the error + // handling is finalized in later updates to the AWS SDK for + // Swift, this example will be updated to handle errors better. + + guard let errors = output.errors else { + return // No errors. 
+            }
+            if errors.count != 0 {
+                throw ServiceHandlerError.deleteObjectsError
+            }
         } catch {
             throw error
         }
diff --git a/doc_source/example_s3_DownloadBucketToDirectory_section.md b/doc_source/example_s3_DownloadBucketToDirectory_section.md
new file mode 100644
index 0000000..681eb3b
--- /dev/null
+++ b/doc_source/example_s3_DownloadBucketToDirectory_section.md
@@ -0,0 +1,51 @@
+# Download all objects in an Amazon Simple Storage Service \(Amazon S3\) bucket to a local directory
+
+The following code example shows how to download all objects in an Amazon Simple Storage Service \(Amazon S3\) bucket to a local directory\.
+
+**Note**
+The source code for these examples is in the [AWS Code Examples GitHub repository](https://github.com/awsdocs/aws-doc-sdk-examples)\. Have feedback on a code example? [Create an Issue](https://github.com/awsdocs/aws-doc-sdk-examples/issues/new/choose) in the code examples repo\.
+
+------
+#### [ Java ]
+
+**SDK for Java 2\.x**
+ There's more on GitHub\. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3#readme)\.
+Use an [S3TransferManager](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html) to [download all of the objects](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#downloadDirectory(software.amazon.awssdk.transfer.s3.DownloadDirectoryRequest)) in an S3 bucket\. View the [complete file](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/DownloadToDirectory.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/test/java/TransferManagerTest.java)\.
+
+```
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.core.sync.RequestBody;
+import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
+import software.amazon.awssdk.transfer.s3.S3TransferManager;
+import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryDownload;
+import software.amazon.awssdk.transfer.s3.model.DirectoryDownload;
+import software.amazon.awssdk.transfer.s3.model.DownloadDirectoryRequest;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+
+    public Integer downloadObjectsToDirectory(S3TransferManager transferManager,
+                                              String destinationPath, String bucketName) {
+        DirectoryDownload directoryDownload =
+            transferManager.downloadDirectory(DownloadDirectoryRequest.builder()
+                .destination(Paths.get(destinationPath))
+                .bucket(bucketName)
+                .build());
+        CompletedDirectoryDownload completedDirectoryDownload = directoryDownload.completionFuture().join();
+
+        completedDirectoryDownload.failedTransfers().forEach(fail ->
+            logger.warn("Object [{}] failed to transfer", fail.toString()));
+        return completedDirectoryDownload.failedTransfers().size();
+    }
+```
++ For API details, see [DownloadDirectory](https://docs.aws.amazon.com/goto/SdkForJavaV2/s3-2006-03-01/DownloadDirectory) in *AWS SDK for Java 2\.x API Reference*\.
+
+------
+
+For a complete list of AWS SDK developer guides and code examples, see [Using this service with an AWS SDK](UsingAWSSDK.md#sdk-general-information-section)\.
This topic also includes information about getting started and details about previous SDK versions\. \ No newline at end of file diff --git a/doc_source/example_s3_GetObject_IfModifiedSince_section.md b/doc_source/example_s3_GetObject_IfModifiedSince_section.md index ae89e13..fbe4918 100644 --- a/doc_source/example_s3_GetObject_IfModifiedSince_section.md +++ b/doc_source/example_s3_GetObject_IfModifiedSince_section.md @@ -93,7 +93,7 @@ async fn main() -> Result<(), Error> { ) = match head_object { Ok(output) => (Ok(output.last_modified().cloned()), output.e_tag), Err(err) => match err { - aws_sdk_s3::types::SdkError::ServiceError(err) => { + SdkError::ServiceError(err) => { let http = err.raw().http(); match http.status() { StatusCode::NOT_MODIFIED => ( @@ -111,7 +111,7 @@ async fn main() -> Result<(), Error> { .get("etag") .map(|t| t.to_str().unwrap().into()), ), - _ => (Err(aws_sdk_s3::types::SdkError::ServiceError(err)), None), + _ => (Err(SdkError::ServiceError(err)), None), } } _ => (Err(err), None), diff --git a/doc_source/example_s3_GetObject_section.md b/doc_source/example_s3_GetObject_section.md index 96acb17..f1c70b8 100644 --- a/doc_source/example_s3_GetObject_section.md +++ b/doc_source/example_s3_GetObject_section.md @@ -141,7 +141,7 @@ func (basics BucketBasics) DownloadFile(bucketName string, objectKey string, fil **SDK for Java 2\.x** There's more on GitHub\. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3#readme)\. -Read data as a byte array\. +Read data as a byte array using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\. ``` public static void getObjectBytes (S3Client s3, String bucketName, String keyName, String path) { @@ -171,7 +171,41 @@ Read data as a byte array\. } } ``` -Read tags that belong to an object\. +Use an [S3TransferManager](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html) to [download an object](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#downloadFile(software.amazon.awssdk.transfer.s3.DownloadFileRequest)) in an S3 bucket to a local file\. View the [complete file](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/DownloadFile.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/test/java/TransferManagerTest.java)\. 
+
+```
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.core.sync.RequestBody;
+import software.amazon.awssdk.transfer.s3.S3TransferManager;
+import software.amazon.awssdk.transfer.s3.model.CompletedFileDownload;
+import software.amazon.awssdk.transfer.s3.model.DownloadFileRequest;
+import software.amazon.awssdk.transfer.s3.model.FileDownload;
+import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;
+
+import java.io.IOException;
+import java.net.URL;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.UUID;
+
+    public Long downloadFile(S3TransferManager transferManager, String bucketName,
+                             String key, String downloadedFileWithPath) {
+        DownloadFileRequest downloadFileRequest =
+            DownloadFileRequest.builder()
+                .getObjectRequest(b -> b.bucket(bucketName).key(key))
+                .addTransferListener(LoggingTransferListener.create())
+                .destination(Paths.get(downloadedFileWithPath))
+                .build();
+
+        FileDownload downloadFile = transferManager.downloadFile(downloadFileRequest);
+
+        CompletedFileDownload downloadResult = downloadFile.completionFuture().join();
+        logger.info("Content length [{}]", downloadResult.response().contentLength());
+        return downloadResult.response().contentLength();
+    }
+```
+Read tags that belong to an object using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\.

```
public static void listTags(S3Client s3, String bucketName, String keyName) {
@@ -196,7 +230,7 @@ Read tags that belong to an object\.
        }
    }
```
-Get a URL for an object\.
+Get a URL for an object using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\.

```
public static void getURL(S3Client s3, String bucketName, String keyName ) {
@@ -216,7 +250,7 @@ Get a URL for an object\.
        }
    }
```
-Get an object by using the S3Presigner client object\.
+Get an object by using the [S3Presigner](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/presigner/S3Presigner.html) client object\.

```
public static void getPresignedUrl(S3Presigner presigner, String bucketName, String keyName ) {
diff --git a/doc_source/example_s3_PutObject_section.md b/doc_source/example_s3_PutObject_section.md
index 345a5e3..1d3daf1 100644
--- a/doc_source/example_s3_PutObject_section.md
+++ b/doc_source/example_s3_PutObject_section.md
@@ -211,7 +211,7 @@ func (basics BucketBasics) UploadFile(bucketName string, objectKey string, fileN
**SDK for Java 2\.x**
 There's more on GitHub\. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3#readme)\.
-Upload an object to a bucket\.
+Upload a file to a bucket using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\.

```
public static String putS3Object(S3Client s3, String bucketName, String objectKey, String objectPath) {
@@ -263,7 +263,37 @@ Upload an object to a bucket\.
        return bytesArray;
    }
```
-Upload an object to a bucket and set tags\.
+Use an [S3TransferManager](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html) to [upload a file](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#uploadFile(software.amazon.awssdk.transfer.s3.UploadFileRequest)) to a bucket\.
View the [complete file](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/UploadFile.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/test/java/TransferManagerTest.java)\. + +``` +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.transfer.s3.S3TransferManager; +import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload; +import software.amazon.awssdk.transfer.s3.model.FileUpload; +import software.amazon.awssdk.transfer.s3.model.UploadFileRequest; +import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener; + +import java.net.URL; +import java.nio.file.Paths; +import java.util.UUID; + + public String uploadFile(S3TransferManager transferManager, String bucketName, + String key, String filePath) { + UploadFileRequest uploadFileRequest = + UploadFileRequest.builder() + .putObjectRequest(b -> b.bucket(bucketName).key(key)) + .addTransferListener(LoggingTransferListener.create()) + .source(Paths.get(filePath)) + .build(); + + FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest); + + CompletedFileUpload uploadResult = fileUpload.completionFuture().join(); + return uploadResult.response().eTag(); + } +``` +Upload an object to a bucket and set tags using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\. ``` public static void putS3ObjectTags(S3Client s3, String bucketName, String objectKey, String objectPath) { @@ -356,7 +386,7 @@ Upload an object to a bucket and set tags\. } } ``` -Upload an object to a bucket and set metadata\. +Upload an object to a bucket and set metadata using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\. ``` public static String putS3Object(S3Client s3, String bucketName, String objectKey, String objectPath) { @@ -410,7 +440,7 @@ Upload an object to a bucket and set metadata\. return bytesArray; } ``` -Upload an object to a bucket and set an object retention value\. +Upload an object to a bucket and set an object retention value using an [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)\. ``` public static void setRentionPeriod(S3Client s3, String key, String bucket) { diff --git a/doc_source/example_s3_Scenario_GettingStarted_section.md b/doc_source/example_s3_Scenario_GettingStarted_section.md index 9ec43fe..4f37871 100644 --- a/doc_source/example_s3_Scenario_GettingStarted_section.md +++ b/doc_source/example_s3_Scenario_GettingStarted_section.md @@ -884,7 +884,7 @@ public class S3Scenario { String bucketName = args[0]; String key = args[1]; String objectPath = args[2]; - String savePath = args[3]; + String savePath = args[3]; String toBucket = args[4] ; ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create(); diff --git a/doc_source/example_s3_UploadDirectoryToBucket_section.md b/doc_source/example_s3_UploadDirectoryToBucket_section.md new file mode 100644 index 0000000..4c002e0 --- /dev/null +++ b/doc_source/example_s3_UploadDirectoryToBucket_section.md @@ -0,0 +1,46 @@ +# Recursively upload a local directory to an Amazon Simple Storage Service \(Amazon S3\) bucket + +The following code example shows how to upload a local directory recursively to an Amazon Simple Storage Service \(Amazon S3\) bucket\. 
+ +**Note** +The source code for these examples is in the [AWS Code Examples GitHub repository](https://github.com/awsdocs/aws-doc-sdk-examples)\. Have feedback on a code example? [Create an Issue](https://github.com/awsdocs/aws-doc-sdk-examples/issues/new/choose) in the code examples repo\. + +------ +#### [ Java ] + +**SDK for Java 2\.x** + There's more on GitHub\. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3#readme)\. +Use an [S3TransferManager](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html) to [upload a local directory](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#uploadDirectory(software.amazon.awssdk.transfer.s3.UploadDirectoryRequest))\. View the [complete file](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/UploadADirectory.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/test/java/TransferManagerTest.java)\. + +``` +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.services.s3.model.ObjectIdentifier; +import software.amazon.awssdk.transfer.s3.S3TransferManager; +import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryUpload; +import software.amazon.awssdk.transfer.s3.model.DirectoryUpload; +import software.amazon.awssdk.transfer.s3.model.UploadDirectoryRequest; + +import java.net.URL; +import java.nio.file.Paths; +import java.util.UUID; + + public Integer uploadDirectory(S3TransferManager transferManager, + String sourceDirectory, String bucketName){ + DirectoryUpload directoryUpload = + transferManager.uploadDirectory(UploadDirectoryRequest.builder() + .source(Paths.get(sourceDirectory)) + .bucket(bucketName) + .build()); + + CompletedDirectoryUpload completedDirectoryUpload = directoryUpload.completionFuture().join(); + completedDirectoryUpload.failedTransfers().forEach(fail -> + logger.warn("Object [{}] failed to transfer", fail.toString())); + return completedDirectoryUpload.failedTransfers().size(); + } +``` ++ For API details, see [UploadDirectory](https://docs.aws.amazon.com/goto/SdkForJavaV2/s3-2006-03-01/UploadDirectory) in *AWS SDK for Java 2\.x API Reference*\. + +------ + +For a complete list of AWS SDK developer guides and code examples, see [Using this service with an AWS SDK](UsingAWSSDK.md#sdk-general-information-section)\. This topic also includes information about getting started and details about previous SDK versions\. \ No newline at end of file diff --git a/doc_source/index.md b/doc_source/index.md index d4902f0..e488150 100644 --- a/doc_source/index.md +++ b/doc_source/index.md @@ -130,14 +130,15 @@ sponsored by Amazon. 
+ [Data protection in Amazon S3](DataDurability.md) + [Protecting data using encryption](UsingEncryption.md) + [Protecting data using server-side encryption](serv-side-encryption.md) + + [Amazon S3 now automatically encrypts all new objects](default-encryption-faq.md) + + [Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3)](UsingServerSideEncryption.md) + + [Specifying Amazon S3 encryption](specifying-s3-encryption.md) + [Using server-side encryption with AWS Key Management Service (SSE-KMS)](UsingKMSEncryption.md) + [Specifying server-side encryption with AWS KMS (SSE-KMS)](specifying-kms-encryption.md) + [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md) + [Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects](configuring-bucket-key.md) + [Configuring an S3 Bucket Key at the object level using Batch Operations, REST API, AWS SDKs, or AWS CLI](configuring-bucket-key-object.md) + [Viewing settings for an S3 Bucket Key](viewing-bucket-key-settings.md) - + [Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3)](UsingServerSideEncryption.md) - + [Specifying Amazon S3 encryption](specifying-s3-encryption.md) + [Using server-side encryption with customer-provided keys (SSE-C)](ServerSideEncryptionCustomerKeys.md) + [Protecting data using client-side encryption](UsingClientSideEncryption.md) + [Internetwork traffic privacy](inter-network-traffic-privacy.md) @@ -157,7 +158,7 @@ sponsored by Amazon. + [Amazon S3 condition key examples](amazon-s3-policy-keys.md) + [Actions, resources, and condition keys for Amazon S3](list_amazons3.md) + [Using bucket policies](bucket-policies.md) - + [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md) + + [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md) + [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md) + [Bucket policy examples](example-bucket-policies.md) + [Using IAM user policies](user-policies.md) @@ -528,6 +529,7 @@ sponsored by Amazon. + [Delete the website configuration from an Amazon S3 bucket using an AWS SDK](example_s3_DeleteBucketWebsite_section.md) + [Determine the existence and content type of an object in an Amazon S3 bucket using an AWS SDK](example_s3_HeadObject_section.md) + [Determine the existence of an Amazon S3 bucket using an AWS SDK](example_s3_HeadBucket_section.md) + + [Download all objects in an Amazon Simple Storage Service (Amazon S3) bucket to a local directory](example_s3_DownloadBucketToDirectory_section.md) + [Enable logging on an Amazon S3 bucket using an AWS SDK](example_s3_ServiceAccessLogging_section.md) + [Enable notifications for an Amazon S3 bucket using an AWS SDK](example_s3_PutBucketNotification_section.md) + [Enable transfer acceleration for an Amazon S3 bucket using an AWS SDK](example_s3_TransferAcceleration_section.md) @@ -550,6 +552,7 @@ sponsored by Amazon. 
+ [Set the website configuration for an Amazon S3 bucket using an AWS SDK](example_s3_PutBucketWebsite_section.md)
+ [Upload a single part of a multipart upload using an AWS SDK](example_s3_UploadPart_section.md)
+ [Upload an object to an Amazon S3 bucket using an AWS SDK](example_s3_PutObject_section.md)
+ + [Recursively upload a local directory to an Amazon Simple Storage Service (Amazon S3) bucket](example_s3_UploadDirectoryToBucket_section.md)
+ [Scenarios for Amazon S3 using AWS SDKs](service_code_examples_scenarios.md)
+ [Create a presigned URL for Amazon S3 using an AWS SDK](example_s3_Scenario_PresignedUrl_section.md)
+ [Get started with Amazon S3 buckets and objects using an AWS SDK](example_s3_Scenario_GettingStarted_section.md)
diff --git a/doc_source/ipv6-access.md b/doc_source/ipv6-access.md
index e4e588c..f3225b3 100644
--- a/doc_source/ipv6-access.md
+++ b/doc_source/ipv6-access.md
@@ -76,9 +76,9 @@ You can modify the bucket policy's `Condition` element to allow both IPv4 \(`54.
 8. }
```

-Before using IPv6 you must update all relevant IAM user and bucket policies that use IP address filtering to allow IPv6 address ranges\. We recommend that you update your IAM policies with your organization's IPv6 address ranges in addition to your existing IPv4 address ranges\. For an example of a bucket policy that allows access over both IPv6 and IPv4, see [Limiting access to specific IP addresses](example-bucket-policies.md#example-bucket-policies-use-case-3)\.
+Before using IPv6, you must update all relevant IAM user and bucket policies that use IP address filtering to allow IPv6 address ranges\. We recommend that you update your IAM policies with your organization's IPv6 address ranges in addition to your existing IPv4 address ranges\. For an example of a bucket policy that allows access over both IPv6 and IPv4, see [Restrict access to specific IP addresses](example-bucket-policies.md#example-bucket-policies-IP-1)\.

-You can review your IAM user policies using the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\. For more information about IAM, see the [IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/)\. For information about editing S3 bucket policies, see [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)\.
+You can review your IAM user policies using the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\. For more information about IAM, see the [IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/)\. For information about editing S3 bucket policies, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)\.

## Testing IP address compatibility

diff --git a/doc_source/metrics-dimensions.md b/doc_source/metrics-dimensions.md
index 137f538..6567338 100644
--- a/doc_source/metrics-dimensions.md
+++ b/doc_source/metrics-dimensions.md
@@ -95,19 +95,7 @@ S3 Object Lambda includes the following request metrics\.

## Amazon S3 on Outposts metrics in CloudWatch

-The `S3Outposts` namespace includes the following metrics for Amazon S3 on Outposts buckets\. You can monitor the total number of S3 on Outposts bytes provisioned, the total free bytes available for objects, and the total size of all objects for a given bucket\.
-
-**Note**
-S3 on Outposts supports only the following metrics, and no other Amazon S3 metrics\.
-Because S3 on Outposts has fixed capacity, you can create CloudWatch alerts that alert you when your storage utilization exceeds a certain threshold\. - - -| Metric | Description | -| --- | --- | -| OutpostTotalBytes | The total provisioned capacity in bytes for an Outpost\. Units: Bytes Period: 5 minutes | -| OutpostFreeBytes | The count of free bytes available on an Outpost to store customer data\. Units: Bytes Period: 5 minutes | -| BucketUsedBytes | The total size of all objects for the given bucket\. Units: Counts Period: 5 minutes | -| AccountUsedBytes | The total size of all objects for the specified Outposts account\. Units: Bytes Period: 5 minutes | +For a list of metrics in CloudWatch that are used for S3 on Outposts buckets, see [CloudWatch metrics](S3OutpostsCapacity.md#S3OutpostsCloudWatchMetrics)\. ## Amazon S3 dimensions in CloudWatch diff --git a/doc_source/object-ownership-new-bucket.md b/doc_source/object-ownership-new-bucket.md index 4874a69..1c16693 100644 --- a/doc_source/object-ownership-new-bucket.md +++ b/doc_source/object-ownership-new-bucket.md @@ -13,7 +13,10 @@ Object Ownership has three settings that you can use to control ownership of obj + **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL\. + **Object writer \(default\)** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs\. -**Permissions**: To create a bucket and select a setting for Object Ownership, you must have both the `s3:CreateBucket` and `s3:PutBucketOwnershipControls` permissions\. For more information about Amazon S3 permissions, see [Actions, resources, and condition keys for Amazon S3](list_amazons3.md)\. +**Permissions**: To apply the **Bucket owner enforced** setting or the **Bucket owner preferred** setting, you must have the following permissions: `s3:CreateBucket` and `s3:PutBucketOwnershipControls`\. No additional permissions are needed when creating a bucket with the **Object writer** setting applied\. For more information about Amazon S3 permissions, see [Actions, resources, and condition keys for Amazon S3](list_amazons3.md)\. + +**Important** +A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that you disable ACLs except in unusual circumstances where you need to control access for each object individually\. With Object Ownership, you can disable ACLs and rely on policies for access control\. When you disable ACLs, you can easily maintain a bucket with objects uploaded by different AWS accounts\. You, as the bucket owner, own all the objects in the bucket and can manage access to them using policies\. ## Using the S3 console @@ -94,6 +97,9 @@ This example applies the bucket owner enforced setting for a new bucket using th aws s3api create-bucket --bucket DOC-EXAMPLE-BUCKET --region us-east-1 --object-ownership BucketOwnerEnforced ``` +**Important** +If you don’t set Object Ownership when you create a bucket using the CLI, the default setting will be ObjectWriter \(ACLs enabled\)\. 
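+
+To check which Object Ownership setting a bucket currently uses, you can retrieve its ownership controls with the AWS CLI, as in the following minimal sketch \(the bucket name is a placeholder\)\. Note that for a bucket that has never had an Object Ownership setting applied, this call might return an error instead of a configuration\.
+
+```
+aws s3api get-bucket-ownership-controls --bucket DOC-EXAMPLE-BUCKET
+```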
+

## Using the AWS SDK for Java

This example sets the bucket owner enforced setting for a new bucket using the AWS SDK for Java:
diff --git a/doc_source/replication-add-config.md b/doc_source/replication-add-config.md
index 6cb674e..26ee966 100644
--- a/doc_source/replication-add-config.md
+++ b/doc_source/replication-add-config.md
@@ -540,6 +540,6 @@ For backward compatibility, Amazon S3 continues to support the XML V1 replicatio
 For backward compatibility, Amazon S3 continues to support the V1 configuration\.
+ When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker\. If you use V1 of the replication configuration XML, Amazon S3 replicates delete markers that result from user actions\. In other words, Amazon S3 replicates the delete marker only if a user deletes an object\. If an expired object is removed by Amazon S3 \(as part of a lifecycle action\), Amazon S3 does not replicate the delete marker\.
-  In V2 replication configurations, you can enable delete marker replication for tag\-based rules\. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md)\.
+  In V2 replication configurations, you can enable delete marker replication for non\-tag\-based rules\. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md)\.
\ No newline at end of file
diff --git a/doc_source/replication-config-for-kms-objects.md b/doc_source/replication-config-for-kms-objects.md
index 888f91e..015d207 100644
--- a/doc_source/replication-config-for-kms-objects.md
+++ b/doc_source/replication-config-for-kms-objects.md
@@ -1,5 +1,18 @@
# Replicating objects created with server\-side encryption \(SSE\-C, SSE\-S3, SSE\-KMS\)

+**Important**
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+
+By default, Amazon S3 doesn't replicate objects that are stored at rest using server\-side encryption with customer managed keys stored in AWS KMS\. This section explains additional configuration that you add to direct Amazon S3 to replicate these objects\.
+
+**Note**
+You can use multi\-Region AWS KMS keys in Amazon S3\. However, Amazon S3 currently treats multi\-Region keys as though they were single\-Region keys, and does not use the multi\-Region features of the key\. For more information, see [Using multi\-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*\.
+
+For an example with step\-by\-step instructions, see [Replicating encrypted objects](replication-walkthrough-4.md)\.
For information about creating a replication configuration, see [Replicating objects](replication.md)\. + +**Important** +Replication of encrypted data is a server\-side process that occurs entirely within Amazon S3\. Objects created with server\-side encryption using customer\-provided \(SSE\-C\) encryption keys are not replicated\. + **Topics** + [Replicating encrypted objects \(SSE\-C\)](#replicationSSEC) + [Replicating encrypted objects \(SSE\-S3, SSE\-KMS\)](#replications) diff --git a/doc_source/replication-troubleshoot.md b/doc_source/replication-troubleshoot.md index 39ef0b2..e64e657 100644 --- a/doc_source/replication-troubleshoot.md +++ b/doc_source/replication-troubleshoot.md @@ -13,7 +13,7 @@ If object replicas don't appear in the destination bucket after you configure re + If the destination bucket is owned by another AWS account, verify that the bucket owner has a bucket policy on the destination bucket that allows the source bucket owner to replicate objects\. For an example, see [Configuring replication when source and destination buckets are owned by different accounts](replication-walkthrough-2.md)\. + If an object replica doesn't appear in the destination bucket, the following might have prevented replication: + Amazon S3 doesn't replicate an object in a source bucket that is a replica created by another replication configuration\. For example, if you set a replication configuration from bucket A to bucket B to bucket C, Amazon S3 doesn't replicate object replicas in bucket B to bucket C\. - + A source bucket owner can grant other AWS accounts permission to upload objects\. By default, the source bucket owner doesn't have permissions for the objects created by other accounts\. The replication configuration replicates only the objects for which the source bucket owner has access permissions\. The source bucket owner can grant other AWS accounts permissions to create objects conditionally, requiring explicit access permissions on those objects\. For an example policy, see [Granting cross\-account permissions to upload objects while ensuring the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-use-case-8)\. + + A source bucket owner can grant other AWS accounts permission to upload objects\. By default, the source bucket owner doesn't have permissions for the objects created by other accounts\. The replication configuration replicates only the objects for which the source bucket owner has access permissions\. The source bucket owner can grant other AWS accounts permissions to create objects conditionally, requiring explicit access permissions on those objects\. For an example policy, see [Grant cross\-account permissions to upload objects while ensuring that the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-acl-2)\. + Suppose that in the replication configuration, you add a rule to replicate a subset of objects having a specific tag\. In this case, you must assign the specific tag key and value at the time of creating the object for Amazon S3 to replicate the object\. If you first create an object and then add the tag to the existing object, Amazon S3 does not replicate the object\. 
+ Replication fails if the bucket policy denies access to the replication role for any of the following actions: diff --git a/doc_source/replication-walkthrough-2.md b/doc_source/replication-walkthrough-2.md index cf87ba7..e518713 100644 --- a/doc_source/replication-walkthrough-2.md +++ b/doc_source/replication-walkthrough-2.md @@ -41,6 +41,6 @@ For more information about configuring replication using server\-side encryption } ``` -Choose the bucket and add the bucket policy\. For instructions, see [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)\. +Choose the bucket and add the bucket policy\. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)\. In replication, the owner of the source object owns the replica by default\. When source and destination buckets are owned by different AWS accounts, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets\. This includes granting the `ObjectOwnerOverrideToBucketOwner` permission\. For more information, see [Changing the replica owner](replication-change-owner.md)\. \ No newline at end of file diff --git a/doc_source/replication-what-is-isnot-replicated.md b/doc_source/replication-what-is-isnot-replicated.md index 2840802..fed641e 100644 --- a/doc_source/replication-what-is-isnot-replicated.md +++ b/doc_source/replication-what-is-isnot-replicated.md @@ -52,7 +52,7 @@ By default, Amazon S3 doesn't replicate the following: To learn more about the Amazon S3 Glacier service, see the [Amazon S3 Glacier Developer Guide](https://docs.aws.amazon.com/amazonglacier/latest/dev/)\. + Objects in the source bucket that the bucket owner doesn't have sufficient permissions to replicate\. - For information about how an object owner can grant permissions to a bucket owner, see [Granting cross\-account permissions to upload objects while ensuring the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-use-case-8)\. + For information about how an object owner can grant permissions to a bucket owner, see [Grant cross\-account permissions to upload objects while ensuring that the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-acl-2)\. + Updates to bucket\-level subresources\. For example, if you change the lifecycle configuration or add a notification configuration to your source bucket, these changes are not applied to the destination bucket\. This feature makes it possible to have different configurations on source and destination buckets\. diff --git a/doc_source/s3-arn-format.md b/doc_source/s3-arn-format.md index 75f2d03..2f3e22d 100644 --- a/doc_source/s3-arn-format.md +++ b/doc_source/s3-arn-format.md @@ -24,7 +24,7 @@ The ARN format for Amazon S3 resources reduces to the following: For a complete list of Amazon S3 resources, see [Actions, resources, and condition keys for Amazon S3](list_amazons3.md)\. To find the ARN for an S3 bucket, you can look at the Amazon S3 console **Bucket Policy** or **CORS configuration** permissions pages\. 
For more information, see the following topics:
-+ [Adding a bucket policy using the Amazon S3 console](add-bucket-policy.md)
++ [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)
+ [CORS configuration](ManageCorsUsing.md)

## Amazon S3 ARN examples
diff --git a/doc_source/security-best-practices.md b/doc_source/security-best-practices.md
index 656ad5d..b3504a0 100644
--- a/doc_source/security-best-practices.md
+++ b/doc_source/security-best-practices.md
@@ -3,10 +3,10 @@
 Amazon S3 provides a number of security features to consider as you develop and implement your own security policies\. The following best practices are general guidelines and don’t represent a complete security solution\. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions\.

**Topics**
-+ [Amazon S3 preventative security best Practices](#security-best-practices-prevent)
++ [Amazon S3 security best practices](#security-best-practices-prevent)
+ [Amazon S3 Monitoring and auditing best practices](#security-best-practices-detect)

-## Amazon S3 preventative security best Practices
+## Amazon S3 security best practices

The following best practices for Amazon S3 can help prevent security incidents\.

@@ -83,6 +83,13 @@ VPC endpoints for Amazon S3 provide multiple ways to control access to your Amaz
+ You can help prevent data exfiltration by using a VPC that does not have an internet gateway\. For more information, see [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md)\.

+**Use managed AWS services to receive actionable findings in your AWS accounts**
+Managed AWS services can be enabled and integrated across multiple accounts\. After you receive your findings, take action according to your incident response policy\. For each finding, determine what your required response actions will be\.
+To receive actionable findings in your AWS accounts, you can enable one of these managed AWS services:
++ AWS Security Hub, which gives you aggregated visibility into your security and compliance status across multiple AWS accounts\. With Security Hub, you can perform security best practice checks, aggregate alerts, and automate remediation\. You can create custom actions, which allow you to manually invoke a specific response or remediation action on a specific finding\. You can then send custom actions to Amazon CloudWatch Events as a specific event pattern, allowing you to create a CloudWatch Events rule that listens for these actions and sends them to a target service, such as a Lambda function or an Amazon SQS queue\. You can then export findings to an Amazon S3 bucket, and share them in a standardized format for multiple use cases across AWS services\.
++ Amazon GuardDuty, which monitors object\-level API operations to identify potential security risks for data within your S3 buckets\. GuardDuty monitors threats against your Amazon S3 resources by analyzing CloudTrail management events and CloudTrail Amazon S3 data events\. These data sources monitor different kinds of activity\. For example, data events for Amazon S3 include object\-level API operations, such as `GetObject`, `ListObjects`, and `PutObject`\. For more information, see [Amazon S3 protection in Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html) in the *Amazon GuardDuty User Guide*\.
++ AWS Identity and Access Management Access Analyzer, which reviews access to your internal AWS resources and validates where you've shared access outside of your AWS accounts\. With Access Analyzer for Amazon S3, you're alerted when your buckets are configured to allow access to anyone on the internet or other AWS accounts, including accounts outside of your organization\. For each public or shared bucket, you'll receive findings about the source and level of public or shared access\. For example, Access Analyzer for Amazon S3 might show that a bucket has read or write access provided through a bucket access control list \(ACL\), a bucket policy, a Multi\-Region Access Point policy, or an access point policy\. With these findings, you can take immediate and precise corrective action to restore your bucket access to what you intended\. For more information, see [Reviewing bucket access using Access Analyzer for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-analyzer.html)\.
+
## Amazon S3 Monitoring and auditing best practices

The following best practices for Amazon S3 can help detect potential security weaknesses and incidents\.
diff --git a/doc_source/serv-side-encryption.md b/doc_source/serv-side-encryption.md
index 920f68d..8905d13 100644
--- a/doc_source/serv-side-encryption.md
+++ b/doc_source/serv-side-encryption.md
@@ -1,5 +1,8 @@
# Protecting data using server\-side encryption

+**Important**
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+
 Server\-side encryption is the encryption of data at its destination by the application or service that receives it\. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it\. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects\. For example, if you share your objects using a presigned URL, that URL works the same way for both encrypted and unencrypted objects\. Additionally, when you list objects in your bucket, the list API returns a list of all objects, regardless of whether they are encrypted\.
 **Note**
diff --git a/doc_source/service_code_examples.md b/doc_source/service_code_examples.md
index 1ce3da3..11bb4ae 100644
--- a/doc_source/service_code_examples.md
+++ b/doc_source/service_code_examples.md
@@ -115,6 +115,7 @@ if __name__ == '__main__':
  + [Delete the website configuration from a bucket](example_s3_DeleteBucketWebsite_section.md)
  + [Determine the existence and content type of an object](example_s3_HeadObject_section.md)
  + [Determine the existence of a bucket](example_s3_HeadBucket_section.md)
+ + [Download objects to a local directory](example_s3_DownloadBucketToDirectory_section.md)
  + [Enable logging](example_s3_ServiceAccessLogging_section.md)
  + [Enable notifications](example_s3_PutBucketNotification_section.md)
  + [Enable transfer acceleration](example_s3_TransferAcceleration_section.md)
@@ -137,6 +138,7 @@ if __name__ == '__main__':
  + [Set the website configuration for a bucket](example_s3_PutBucketWebsite_section.md)
  + [Upload a single part of a multipart upload](example_s3_UploadPart_section.md)
  + [Upload an object to a bucket](example_s3_PutObject_section.md)
+ + [Upload a directory to a bucket](example_s3_UploadDirectoryToBucket_section.md)
  + [Scenarios](service_code_examples_scenarios.md)
  + [Create a presigned URL](example_s3_Scenario_PresignedUrl_section.md)
  + [Get started with buckets and objects](example_s3_Scenario_GettingStarted_section.md)
diff --git a/doc_source/service_code_examples_actions.md b/doc_source/service_code_examples_actions.md
index 30ec3ae..7435ff7 100644
--- a/doc_source/service_code_examples_actions.md
+++ b/doc_source/service_code_examples_actions.md
@@ -22,6 +22,7 @@ The following code examples demonstrate how to perform individual Amazon S3 acti
 + [Delete the website configuration from a bucket](example_s3_DeleteBucketWebsite_section.md)
 + [Determine the existence and content type of an object](example_s3_HeadObject_section.md)
 + [Determine the existence of a bucket](example_s3_HeadBucket_section.md)
++ [Download objects to a local directory](example_s3_DownloadBucketToDirectory_section.md)
 + [Enable logging](example_s3_ServiceAccessLogging_section.md)
 + [Enable notifications](example_s3_PutBucketNotification_section.md)
 + [Enable transfer acceleration](example_s3_TransferAcceleration_section.md)
@@ -43,4 +44,5 @@ The following code examples demonstrate how to perform individual Amazon S3 acti
 + [Set the ACL of an object](example_s3_PutObjectAcl_section.md)
 + [Set the website configuration for a bucket](example_s3_PutBucketWebsite_section.md)
 + [Upload a single part of a multipart upload](example_s3_UploadPart_section.md)
-+ [Upload an object to a bucket](example_s3_PutObject_section.md)
\ No newline at end of file
++ [Upload an object to a bucket](example_s3_PutObject_section.md)
++ [Upload a directory to a bucket](example_s3_UploadDirectoryToBucket_section.md)
\ No newline at end of file
diff --git a/doc_source/specifying-kms-encryption.md b/doc_source/specifying-kms-encryption.md
index a71b7c7..417820f 100644
--- a/doc_source/specifying-kms-encryption.md
+++ b/doc_source/specifying-kms-encryption.md
@@ -1,5 +1,8 @@
 # Specifying server\-side encryption with AWS KMS \(SSE\-KMS\)
 
+**Important**
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, and S3 Storage Lens, and will be returned as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+
 When you create an object, you can specify the use of server\-side encryption with AWS Key Management Service \(AWS KMS\) keys to encrypt your data\. This encryption is known as SSE\-KMS\. You can apply encryption when you are either uploading a new object or copying an existing object\.
 
 You can specify SSE\-KMS using the Amazon S3 console, REST API operations, AWS SDKs, and AWS Command Line Interface \(AWS CLI\)\. For more information, see the following topics\.
@@ -40,7 +43,7 @@ If you use the AWS KMS option for your default encryption configuration, you are
 + **Enter KMS master key ARN**, and enter your AWS KMS key Amazon Resource Name \(ARN\)\.
 **Important**
 You can use only AWS KMS keys that are enabled in the same AWS Region as the bucket\. When you choose **Choose from your AWS KMS keys**, the S3 console lists only 100 KMS keys per Region\. If you have more than 100 KMS keys in the same Region, you can see only the first 100 KMS keys in the S3 console\. To use a KMS key that is not listed in the console, choose **Custom KMS ARN**, and enter the KMS key ARN\.
-When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a KMS key that is enabled in the same Region as your bucket\. Additionally, Amazon S3 supports only symmetric encryption KMS keys or HMAC keys\. For more information about these keys, see [Special\-purpose keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-types.html) in the *AWS Key Management Service Developer Guide*\.
+When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a KMS key that is enabled in the same Region as your bucket\. Additionally, Amazon S3 supports only symmetric encryption KMS keys\. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*\.
 
 For more information about creating an AWS KMS key, see [Creating Keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*\. For more information about using AWS KMS with Amazon S3, see [Using server\-side encryption with AWS Key Management Service \(SSE\-KMS\)](UsingKMSEncryption.md)\.
@@ -102,7 +105,7 @@ For information about the encryption context in Amazon S3, see [Encryption conte
 You can use the `x-amz-server-side-encryption-aws-kms-key-id` header to specify the ID of the customer managed key used to protect the data\. If you specify `x-amz-server-side-encryption:aws:kms`, but don't provide `x-amz-server-side-encryption-aws-kms-key-id`, Amazon S3 uses the AWS managed key to protect the data\. If you want to use a customer managed key, you must provide the `x-amz-server-side-encryption-aws-kms-key-id` header of the customer managed key\.
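As an illustration of how these headers map to a request, here is a minimal AWS CLI sketch\. The bucket name, object key, file name, and key ID are placeholders; the `--server-side-encryption` and `--ssekms-key-id` parameters correspond to the `x-amz-server-side-encryption` and `x-amz-server-side-encryption-aws-kms-key-id` headers\.

```
# Upload an object with SSE-KMS, specifying a customer managed key.
# If --ssekms-key-id is omitted, Amazon S3 uses the AWS managed key (aws/s3).
aws s3api put-object \
    --bucket DOC-EXAMPLE-BUCKET \
    --key example-object-key \
    --body ./example-file.txt \
    --server-side-encryption aws:kms \
    --ssekms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```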
 **Important**
-When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 supports only symmetric encryption KMS keys or HMAC keys\. For more information about these keys, see [Special\-purpose keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-types.html) in the *AWS Key Management Service Developer Guide*\.
+When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 supports only symmetric encryption KMS keys\. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*\.
 
 ### S3 Bucket Keys \(x\-amz\-server\-side\-encryption\-aws\-bucket\-key\-enabled\)
@@ -135,7 +138,7 @@ aws s3api copy-object --bucket DOC-EXAMPLE-BUCKET --key example-object-key --bod
 When using AWS SDKs, you can request Amazon S3 to use AWS KMS keys\. This section provides examples of using the AWS SDKs for Java and \.NET\. For information about other SDKs, go to [Sample Code and Libraries](https://aws.amazon.com/code)\.
 
 **Important**
-When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 supports only symmetric encryption KMS keys or HMAC keys\. For more information about these keys, see [Special\-purpose keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-types.html) in the *AWS Key Management Service Developer Guide*\.
+When you use an AWS KMS key for server\-side encryption in Amazon S3, you must choose a symmetric encryption KMS key\. Amazon S3 supports only symmetric encryption KMS keys\. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*\.
 
 ### Copy operation
diff --git a/doc_source/specifying-s3-encryption.md b/doc_source/specifying-s3-encryption.md
index e4954b3..2afb001 100644
--- a/doc_source/specifying-s3-encryption.md
+++ b/doc_source/specifying-s3-encryption.md
@@ -1,5 +1,8 @@
 # Specifying Amazon S3 encryption
 
+**Important**
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, and S3 Storage Lens, and will be returned as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+
 When you create an object, you can specify the use of server\-side encryption with Amazon S3\-managed encryption keys to encrypt your data\. This is true when you are either uploading a new object or copying an existing object\. This encryption is known as SSE\-S3\.
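For a quick illustration, the following AWS CLI sketch uploads an object with SSE\-S3 by setting the `x-amz-server-side-encryption` header to `AES256`\. The bucket name, object key, and file name are placeholders\.

```
# Upload an object encrypted with Amazon S3 managed keys (SSE-S3).
aws s3api put-object \
    --bucket DOC-EXAMPLE-BUCKET \
    --key example-object-key \
    --body ./example-file.txt \
    --server-side-encryption AES256
```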
 
 You can specify SSE\-S3 using the S3 console, REST APIs, AWS SDKs, and AWS CLI\. For more information, see the topics below\.
diff --git a/doc_source/storage-inventory.md b/doc_source/storage-inventory.md
index 6167765..ae585cb 100644
--- a/doc_source/storage-inventory.md
+++ b/doc_source/storage-inventory.md
@@ -58,7 +58,7 @@ The inventory list contains a list of the objects in an S3 bucket and the following
 We recommend that you create a lifecycle policy that deletes old inventory lists\. For more information, see [Managing your storage lifecycle](object-lifecycle-mgmt.md)\.
 
-The `s3:PutInventoryConfiguration` permission allows a user to both select all the metadata fields that are listed earlier for each object when configuring an inventory list and to specify the destination bucket to store the inventory\. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory list\. To restrict access to an inventory report, see [Restricting access to an Amazon S3 Inventory report](example-bucket-policies.md#example-bucket-policies-use-case-10)\.
+The `s3:PutInventoryConfiguration` permission allows a user both to select all the metadata fields that are listed earlier for each object when configuring an inventory list and to specify the destination bucket to store the inventory\. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory list\. To restrict access to an inventory report, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1)\.
 
 ### Inventory consistency
diff --git a/doc_source/tagging-and-policies.md b/doc_source/tagging-and-policies.md
index 1bd361b..c9ab607 100644
--- a/doc_source/tagging-and-policies.md
+++ b/doc_source/tagging-and-policies.md
@@ -13,129 +13,79 @@ When granting permissions for the `PUT Object` and `DELETE Object` operations, t
 For a complete list of Amazon S3 service\-specific condition keys, see [Amazon S3 condition key examples](amazon-s3-policy-keys.md)\. The following permissions policies illustrate how object tagging enables fine grained access permissions management\.
 
-**Example 1: Allow a user to read only the objects that have a specific tag**
-The following permissions policy grants a user permission to read objects, but the condition limits the read permission to only objects that have the following specific tag key and value\.
+**Example 1: Allow a user to read only the objects that have a specific tag key and value**
+The following permissions policy limits a user to only reading objects that have the `environment: production` tag key and value\. This policy uses the `s3:ExistingObjectTag` condition key to specify the tag key and value\.
 
 ```
-security : public
+{
+   "Version": "2012-10-17",
+   "Statement": [
+      {
+         "Principal": {
+            "AWS": [
+               "arn:aws:iam::111122223333:role/JohnDoe"
+            ]
+         },
+         "Effect": "Allow",
+         "Action": ["s3:GetObject", "s3:GetObjectVersion"],
+         "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
+         "Condition": {
+            "StringEquals": {"s3:ExistingObjectTag/environment": "production"}
+         }
+      }
+   ]
+}
 ```
-Note that the policy uses the Amazon S3 condition key, `s3:ExistingObjectTag/` to specify the key and value\.
-```
- 1. {
- 2.    "Version": "2012-10-17",
- 3.    "Statement": [
- 4.      {
- 5.        "Effect": "Allow",
- 6.        "Action": "s3:GetObject",
- 7.        "Resource": "arn:aws:s3:::awsexamplebucket1/*",
- 8.        "Principal": "*",
- 9.        "Condition": { "StringEquals": {"s3:ExistingObjectTag/security": "public" } }
-10.      }
-11.    ]
-12. }
-```
-
-**Example 2: Allow a user to add object tags with restrictions on the allowed tag keys**
-The following permissions policy grants a user permissions to perform the `s3:PutObjectTagging` action, which allows user to add tags to an existing object\. The condition limits the tag keys that the user is allowed to use\. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the set of tag keys\.
+**Example 2: Restrict which object tag keys users can add**
+The following permissions policy grants a user permissions to perform the `s3:PutObjectTagging` action, which allows the user to add tags to an existing object\. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the allowed tag keys, such as `Owner` or `CreationDate`\. For more information, see [Creating a condition that tests multiple key values](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*\.
+The `ForAnyValue` qualifier in the condition ensures that at least one of the specified tag keys must be present in the request\.
 
 ```
- 1. {
- 2.   "Version": "2012-10-17",
- 3.   "Statement": [
- 4.     {
- 5.       "Effect": "Allow",
- 6.       "Action": [
- 7.         "s3:PutObjectTagging"
- 8.       ],
- 9.       "Resource": [
-10.         "arn:aws:s3:::awsexamplebucket1/*"
-11.       ],
-12.       "Principal":{
-13.         "CanonicalUser":[
-14.           "64-digit-alphanumeric-value"
-15.         ]
-16.       },
-17.       "Condition": {
-18.         "ForAllValues:StringLike": {
-19.           "s3:RequestObjectTagKeys": [
-20.             "Owner",
-21.             "CreationDate"
-22.           ]
-23.         }
-24.       }
-25.     }
-26.   ]
-27. }
+{
+   "Version": "2012-10-17",
+   "Statement": [
+      {
+         "Principal": {
+            "AWS": [
+               "arn:aws:iam::111122223333:role/JohnDoe"
+            ]
+         },
+         "Effect": "Allow",
+         "Action": [
+            "s3:PutObjectTagging"
+         ],
+         "Resource": [
+            "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
+         ],
+         "Condition": {
+            "ForAnyValue:StringEquals": {
+               "s3:RequestObjectTagKeys": [
+                  "Owner",
+                  "CreationDate"
+               ]
+            }
+         }
+      }
+   ]
+}
 ```
-The policy ensures that the tag set, if specified in the request, has the specified keys\. A user might send an empty tag set in `PutObjectTagging`, which is allowed by this policy \(an empty tag set in the request removes any existing tags on the object\)\. If you want to prevent a user from removing the tag set, you can add another condition to ensure that the user provides at least one value\. The `ForAnyValue` in the condition ensures at least one of the specified values must be present in the request\.
-```
- 1. {
- 2. 
- 3.   "Version": "2012-10-17",
- 4.   "Statement": [
- 5.     {
- 6.       "Effect": "Allow",
- 7.       "Action": [
- 8.         "s3:PutObjectTagging"
- 9.       ],
-10.       "Resource": [
-11.         "arn:aws:s3:::awsexamplebucket1/*"
-12.       ],
-13.       "Principal":{
-14.         "AWS":[
-15.           "arn:aws:iam::account-number-without-hyphens:user/username"
-16.         ]
-17.       },
-18.       "Condition": {
-19.         "ForAllValues:StringLike": {
-20.           "s3:RequestObjectTagKeys": [
-21.             "Owner",
-22.             "CreationDate"
-23.           ]
-24.         },
-25.         "ForAnyValue:StringLike": {
-26.           "s3:RequestObjectTagKeys": [
-27.             "Owner",
-28.             "CreationDate"
-29.           ]
-30.         }
-31.       }
-32.     }
-33.   ]
-34. }
-```
-For more information, see [Creating a Condition That Tests Multiple Key Values \(Set Operations\)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*\.
-
-**Example 3: Allow a user to add object tags that include a specific tag key and value**
-The following user policy grants a user permissions to perform the `s3:PutObjectTagging` action, which allows user to add tags on an existing object\. The condition requires the user to include a specific tag \(`Project`\) with value set to `X`\.
+**Example 3: Require a specific tag key and value when allowing users to add object tags**
+The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object\. The condition requires the user to include a specific tag key \(such as `Project`\) with the value set to `X`\.
 
 ```
- 1. {
- 2.   "Version": "2012-10-17",
- 3.   "Statement": [
- 4.     {
- 5.       "Effect": "Allow",
- 6.       "Action": [
- 7.         "s3:PutObjectTagging"
- 8.       ],
- 9.       "Resource": [
-10.         "arn:aws:s3:::awsexamplebucket1/*"
-11.       ],
-12.       "Principal":{
-13.         "AWS":[
-14.           "arn:aws:iam::account-number-without-hyphens:user/username"
-15.         ]
-16.       },
-17.       "Condition": {
-18.         "StringEquals": {
-19.           "s3:RequestObjectTag/Project": "X"
-20.         }
-21.       }
-22.     }
-23.   ]
-24. }
+{
+   "Version": "2012-10-17",
+   "Statement": [
+      {
+         "Principal": {
+            "AWS": [
+               "arn:aws:iam::111122223333:user/JohnDoe"
+            ]
+         },
+         "Effect": "Allow",
+         "Action": [
+            "s3:PutObjectTagging"
+         ],
+         "Resource": [
+            "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
+         ],
+         "Condition": {
+            "StringEquals": {
+               "s3:RequestObjectTag/Project": "X"
+            }
+         }
+      }
+   ]
+}
 ```
-
diff --git a/doc_source/using-s3-access-logs-to-identify-requests.md b/doc_source/using-s3-access-logs-to-identify-requests.md
index 7b8fc65..481044f 100644
--- a/doc_source/using-s3-access-logs-to-identify-requests.md
+++ b/doc_source/using-s3-access-logs-to-identify-requests.md
@@ -72,8 +72,6 @@ It's a best practice to create the database in the same AWS Region as your S3 bu
 LOCATION 's3://DOC-EXAMPLE-BUCKET1-logs/prefix/'
 ```
 
-**Important**
-During the next few weeks, we are adding a new field, `aclrequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclrequired` field, the rollout hasn't been completed in your Region\.
-
 1. In the navigation pane, under **Database**, choose your database\.
@@ -196,9 +194,6 @@ BETWEEN parse_datetime('2022-05-10:00:00:00','yyyy-MM-dd:HH:mm:ss')
 AND parse_datetime('2022-08-10:00:00:00','yyyy-MM-dd:HH:mm:ss')
 ```
 
-**Important**
-During the next few weeks, we are adding a new field, `aclrequired`, to Amazon S3 server access logs and AWS CloudTrail logs\. This field will indicate if your Amazon S3 requests required an access control list \(ACL\) for authorization\. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs\. This process is currently occurring across all AWS Regions, including the AWS GovCloud \(US\) Regions and the AWS China Regions\. If you don't see the `aclrequired` field, the rollout hasn't been completed in your Region\.
-
 **Note**
 You can modify the date range as needed to suit your needs\.
 
 These query examples might also be useful for security monitoring\. You can review the results for `PutObject` or `GetObject` calls from unexpected or unauthorized IP addresses/requesters and for identifying any anonymous requests to your buckets\.
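For example, the following AWS CLI sketch starts an Athena query that surfaces anonymous requests\. The table name, column names, and output location are assumptions based on the `CREATE TABLE` statement referenced earlier in this topic, so adjust them to match your own schema\. Depending on how your table parses the `-` placeholder, anonymous requesters can appear as `NULL` or as a literal `-`, and the `WHERE` clause below accounts for both\.

```
# Start an Athena query that lists requests made without a requester ARN.
aws athena start-query-execution \
    --query-string "SELECT requestdatetime, remoteip, operation, key, httpstatus \
                    FROM s3_access_logs_db.mybucket_logs \
                    WHERE requester IS NULL OR requester = '-'" \
    --query-execution-context Database=s3_access_logs_db \
    --result-configuration OutputLocation=s3://DOC-EXAMPLE-BUCKET1-logs/athena-results/
```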
diff --git a/doc_source/website-hosting-cloudfront-walkthrough.md b/doc_source/website-hosting-cloudfront-walkthrough.md
index 3944847..1bb4d09 100644
--- a/doc_source/website-hosting-cloudfront-walkthrough.md
+++ b/doc_source/website-hosting-cloudfront-walkthrough.md
@@ -22,8 +22,6 @@ First, you create a CloudFront distribution\. This makes your website available
 1. Choose **Create Distribution**\.
 
-1. On the **Select a delivery method for your content** page, under **Web**, choose **Get Started**\.
-
 1. On the **Create Distribution** page, in the **Origin Settings** section, for **Origin Domain Name**, enter the Amazon S3 website endpoint for your bucket—for example, **example\.com\.s3\-website\.us\-west\-1\.amazonaws\.com**\. CloudFront fills in the **Origin ID** for you\.
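If you prefer to script this step instead of using the console, the AWS CLI provides a simplified form of `create-distribution`\. The following sketch \(not part of the original walkthrough\) assumes the example website endpoint shown above and an index document named `index.html`\.

```
# Create a CloudFront distribution that uses the S3 website endpoint
# as its origin and serves index.html as the default root object.
aws cloudfront create-distribution \
    --origin-domain-name example.com.s3-website.us-west-1.amazonaws.com \
    --default-root-object index.html
```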