AWS Certified Solutions Architect Associate (SAA-C03) S3 cheatsheet and questions

  • S3 provides unlimited storage capacity with industry-leading scalability, availability, security, and performance.
  • S3 supports static website hosting and automatically scales to meet demand.
  • S3 Standard: For general-purpose storage of frequently accessed data.
  • S3 Intelligent-Tiering: Automatically moves objects between frequent and infrequent access tiers (with optional archive tiers) as access patterns change.
  • S3 Standard-IA (Infrequent Access): For data that is accessed less frequently but requires rapid access when needed.
  • S3 One Zone-IA: For infrequently accessed data stored in a single availability zone.
  • S3 Glacier: For archival storage with retrieval times ranging from minutes to hours.
  • S3 Glacier Deep Archive: For long-term data archiving with retrieval times within 12 hours.
  • S3 supports lifecycle policies to automatically transition data to different storage classes and expire data when it is no longer needed.
  • S3 supports versioning, which enables the preservation and restoration of every version of every object stored in an S3 bucket.
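
As a rough illustration of the last two bullets, the boto3 sketch below enables versioning and attaches a lifecycle rule to a bucket; the bucket name and the day thresholds are placeholder values, not recommendations.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-logs-bucket"  # placeholder bucket name

# Preserve every version of every object in the bucket.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: transition objects to Standard-IA after 30 days,
# to Glacier after 90 days, and expire them after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```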

Encryption:

  • S3 supports data encryption, including client-side encryption and server-side encryption managed by AWS, to ensure data privacy and security.
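
A minimal boto3 sketch of server-side encryption, setting SSE-KMS as the bucket default; the bucket name and KMS key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Default server-side encryption with SSE-KMS for every new object.
s3.put_bucket_encryption(
    Bucket="example-media-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
                },
                "BucketKeyEnabled": True,  # S3 Bucket Keys reduce KMS request costs
            }
        ]
    },
)
```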

Object Lock:

  • S3 supports object lock to prevent data from being deleted or overwritten for a fixed amount of time or indefinitely.
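
A minimal sketch of enabling Object Lock with boto3, assuming a new bucket (Object Lock can only be turned on at bucket creation); the bucket name and the 30-day window are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket="example-worm-bucket", ObjectLockEnabledForBucket=True)

# Default rule: every new object version is protected in compliance mode
# for 30 days and cannot be overwritten or deleted during that window.
s3.put_object_lock_configuration(
    Bucket="example-worm-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```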

Retention Periods:

  • S3 supports setting retention periods to ensure that data is not deleted before a specified time.
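
A retention period can also be applied to an individual object; the sketch below (bucket, key, and date are placeholders) uses governance mode, which privileged users can bypass, unlike compliance mode.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Keep this object version until the given date; governance mode can be
# bypassed with s3:BypassGovernanceRetention, compliance mode cannot.
s3.put_object_retention(
    Bucket="example-worm-bucket",
    Key="reports/2024-q1.pdf",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)
```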

Cross-Region Replication (CRR):

  • Automatically replicates objects across different AWS regions.
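
A minimal CRR sketch with boto3; the bucket names and replication role ARN are placeholders, and both buckets must already have versioning enabled.

```python
import boto3

s3 = boto3.client("s3")

# Source and destination buckets sit in different Regions with versioning on;
# the role ARN (a placeholder) must allow S3 to replicate on your behalf.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter = replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
            }
        ],
    },
)
```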

Access Points:

  • Simplifies managing data access at scale for shared datasets.
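
Access points are typically created per application or team, each with its own policy; a small boto3 sketch follows (the account ID, access point name, and bucket name are placeholders).

```python
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"  # placeholder account ID

# One access point per consumer of the shared dataset; each access point
# gets its own policy (via put_access_point_policy) instead of piling every
# rule into a single bucket policy.
s3control.create_access_point(
    AccountId=account_id,
    Name="analytics-readonly",
    Bucket="example-shared-dataset",
)
```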

Multi-Part Upload:

  • Multi-Part Upload allows you to upload large objects in parts, making it easier to manage uploads, especially with unstable network connections.
  • It divides a large file into smaller parts and uploads them concurrently, which can significantly speed up the upload process.
  • If any part fails to upload, you can retry uploading just that part without affecting the overall upload.
  • Once all parts are uploaded, S3 assembles them into a single object.
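
The flow described above looks roughly like the boto3 sketch below (file name, bucket, key, and part size are placeholders); in practice the higher-level `upload_file` call does the splitting, concurrency, and retries for you.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-media-bucket", "videos/raw-footage.mp4"
part_size = 100 * 1024 * 1024  # 100 MB parts (the minimum part size is 5 MB)

# 1. Start the multipart upload and keep its UploadId.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

parts = []
with open("raw-footage.mp4", "rb") as f:
    part_number = 1
    while chunk := f.read(part_size):
        # 2. Upload each part; a failed part can be retried on its own.
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

# 3. Ask S3 to assemble the uploaded parts into a single object.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```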

S3 Transfer Acceleration:

  • Accelerated Transfers: S3 Transfer Acceleration uses Amazon CloudFront’s globally distributed edge locations to accelerate uploads to S3, reducing the time it takes to upload files over long distances.
  • Easy to Enable: You can enable S3 Transfer Acceleration on an S3 bucket without making any changes to your applications. Just specify the bucket name with the s3-accelerate endpoint.
  • Cost-Effective: There are no upfront costs or minimum fees for using S3 Transfer Acceleration; you pay only for the data transfer accelerated.
  • Ideal Use Cases: This feature is particularly useful for applications that involve uploading large files from geographically dispersed users or transferring gigabytes to terabytes of data regularly.
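
As a sketch (bucket and file names are placeholders): acceleration is switched on once per bucket, and clients opt in to the accelerate endpoint.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then route through <bucket>.s3-accelerate.amazonaws.com.
accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated.upload_file("site-data.csv", "example-global-uploads", "uploads/site-data.csv")
```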

S3 VPC Gateway Endpoint:

  • Private Connectivity: Provides a secure and private connection to S3 from your VPC.
  • No Additional Cost: There is no additional charge for using gateway endpoints.
  • Region-Specific: The gateway endpoint must be created in the same region as your S3 buckets.
  • Routing: Automatically adds routes to your VPC route tables to direct traffic to the endpoint.
  • Security: You can control access using IAM roles, bucket policies, and endpoint policies.
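
A minimal sketch of creating the gateway endpoint with boto3 (the Region, VPC ID, and route table ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3 in the same Region as the buckets; traffic from
# the listed route tables then reaches S3 without a NAT or internet gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234567890def",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```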

The following questions test your knowledge of S3; hopefully they help with your revision.

1. A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity. Which solution meets these requirements?

            A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.

            B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.

            C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.

            D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

            Answer: A

            S3 Transfer Acceleration combined with multipart uploads meets the "minimize operational complexity" requirement.

            Keywords: high-speed Internet connection, minimize operational complexity, 500 GB, single Amazon S3 bucket

2. A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files. Which storage option meets these requirements?

            A. S3 Standard

            B. S3 Intelligent-Tiering

            C. S3 Standard-Infrequent Access (S3 Standard-IA)

            D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

            Answer: B

            The access pattern is unpredictable, so S3 Intelligent-Tiering is the cost-effective choice (see the sketch below).

            Keywords: unpredictable pattern
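
For context, objects can be written directly into Intelligent-Tiering; a small boto3 sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Store the object in Intelligent-Tiering; S3 then moves it between access
# tiers automatically as its access pattern changes.
with open("clip-001.mp4", "rb") as f:
    s3.put_object(
        Bucket="example-media-bucket",
        Key="media/clip-001.mp4",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )
```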

3. A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?

            A. Containerize the website and host it in AWS Fargate.

            B. Create an Amazon S3 bucket and host the website there.

            C. Deploy a web server on an Amazon EC2 instance to host the website.

            D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

            Answer: B

            The cheapest way to host a static website is to serve it from an S3 bucket (see the sketch below).

            Keywords: cost-effective, host a website
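
A minimal sketch of turning a bucket into a static website host with boto3 (bucket and document names are placeholders; the bucket also needs a policy that allows public reads):

```python
import boto3

s3 = boto3.client("s3")

# Serve the uploaded HTML/CSS/JS/images from the bucket's website endpoint.
s3.put_bucket_website(
    Bucket="example-team-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```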

4. A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs.
How can the solutions architect meet this requirement?

            A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.

            B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.

            C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.

            D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.

            Answer: D

            The VPC can reach S3 through a gateway endpoint, which carries no additional charge, so the traffic no longer needs a NAT gateway or internet path.

            Keywords: S3 buckets, reduce costs