SAA-C03
Exam Name: AWS Certified Solutions Architect - Associate
Full version: 597 Q&As
Full version of SAA-C03 Dumps
Share some SAA-C03 exam dumps below.
1. A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
Answer: D
Explanation:
https://aws.amazon.com/fsx/windows/
Amazon FSx has native support for Windows file system features and for the industry-standard
Server Message Block (SMB) protocol to access file storage over a network.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
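As an illustration of answer D (not part of the original explanation), a minimal boto3 sketch that provisions an SMB-capable FSx for Windows File Server file system; the subnet, security group, and Active Directory IDs are placeholders:

import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server file system that SMB clients can mount.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                                 # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],    # placeholder subnets
    SecurityGroupIds=["sg-0ccc3333"],                    # placeholder security group
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-0aaa1111",
        "ThroughputCapacity": 32,                        # MB/s
        "ActiveDirectoryId": "d-0123456789",             # placeholder AWS Managed Microsoft AD
    },
)
print(response["FileSystem"]["DNSName"])                 # SMB clients connect to this DNS name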
2. A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises
file data to FSx for Windows File Server. Reconfigure the workloads to use FSx for Windows
File Server on AWS.
B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-premises workloads and the cloud workloads to use the S3 File Gateway.
C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway, depending on each workload's location.
D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.
Answer: D
Explanation:
https://docs.aws.amazon.com/filegateway/latest/filefsxw/what-is-file-fsxw.html
To meet the requirements of the company to have access to both AWS and on-premises file
storage with minimum latency, a hybrid cloud architecture can be used. One solution is to
deploy and configure Amazon FSx for Windows File Server on AWS, which provides fully
managed Windows file servers. The on-premises file data can be moved to the FSx File
Gateway, which can act as a bridge between on-premises and AWS file storage. The cloud
workloads can be configured to use FSx for Windows File Server on AWS, while the on-
premises workloads can be configured to use the FSx File Gateway. This solution minimizes
operational overhead and requires no significant changes to the existing file access patterns.
The connectivity between on-premises and AWS can be established using an AWS Site-to-Site
VPN connection.
Reference:
AWS FSx for Windows File Server: https://aws.amazon.com/fsx/windows/
AWS FSx File Gateway: https://aws.amazon.com/fsx/file-gateway/
AWS Site-to-Site VPN: https://aws.amazon.com/vpn/site-to-site-vpn/
3. A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then
relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the
launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for
the ECS cluster while logged in as this account.
Answer: B
Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
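To illustrate answer B (a sketch, not taken from the exam material), a boto3 task-definition registration that sets taskRoleArn to an IAM role with S3 permissions; the family name, role ARN, and image URI are placeholders:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-resizer",                                          # placeholder family name
    taskRoleArn="arn:aws:iam::123456789012:role/EcsTaskS3Access",    # role granting S3 read/write
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "essential": True,
        }
    ],
)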
4. A company has a serverless application on AWS that uses Amazon RDS as a backend
database. The application sometimes experiences a sudden unpredictable increase in traffic.
During traffic increases, the application frequently opens and closes connections to the
database, which causes the application to receive errors from the database or run out of
connections. The company needs to ensure that the application is always scalable and highly
available.
Which solution will meet these requirements WITHOUT any code changes to the application?
A. Increase the maximum number of connections in the option group of the RDS database of
the serverless application.
B. Increase the instance size of the RDS DB instance to meet the peak load traffic.
C. Deploy Amazon RDS Proxy between the serverless application and Amazon RDS.
D. Purchase Reserved Instances for Amazon RDS to ensure that the database is highly
available during peak load traffic.
Answer: C
Explanation:
Amazon RDS Proxy is a fully managed database proxy that makes applications more scalable,
more resilient to database failures, and more secure. RDS Proxy sits between your application
and your relational database to pool and share established database connections, improving
database efficiency and application scalability. RDS Proxy also reduces the load on the
database by handling connection management and query retries for transient errors. By
deploying RDS Proxy between your serverless application and Amazon RDS, you can avoid
opening and closing connections to the database frequently, which can cause errors or run out
of connections. This solution will also reduce operational costs and improve availability of your
application.
Reference: https://aws.amazon.com/rds/proxy/
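A minimal boto3 sketch of answer C, assuming the database credentials are already stored in Secrets Manager; all names and ARNs are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",                        # or "POSTGRESQL", to match the RDS engine
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secret-access",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    RequireTLS=True,
)

# Point the proxy at the existing DB instance; the application then connects to the proxy endpoint
# instead of the database endpoint, so no code changes are needed beyond the connection string.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBInstanceIdentifiers=["app-db-instance"],
)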
5. An online learning company is migrating to the AWS Cloud. The company maintains its
student records in a PostgreSQL database. The company needs a solution in which its data is
available and online across multiple AWS Regions at all times.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Migrate the PostgreSQL database to a PostgreSQL cluster on Amazon EC2 instances.
B. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the
Multi-AZ feature turned on.
C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a
read replica in another Region.
D. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots to be copied to another Region.
Answer: C
Explanation:
" online across multiple AWS Regions at all times". Currently only Read Replica supports cross-
regions, Multi-AZ does not support cross-region (it works only in same region) https://aws.amaz
on.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-support-multi-az-
deployments/
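For illustration only, a boto3 sketch of answer C that creates a cross-Region read replica; the identifiers and Regions are placeholders:

import boto3

# Call the API in the destination Region; SourceRegion identifies where the primary lives.
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="student-records-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:student-records",
    SourceRegion="us-east-1",
    DBInstanceClass="db.r6g.large",
)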
6. A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in its corporate data center. The company has a hybrid environment with a 10 Gbps AWS Direct Connect connection.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The
company needs to move the data efficiently and without disruption. The company still needs to
be able to access and update the data during the transfer window.
Which solution will meet these requirements?
A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task.
Start the transfer to an Amazon S3 bucket.
B. Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an
AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
C. Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket
over the Direct Connect connection.
D. Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon
S3 bucket on the on-premises file system.
Answer: A
Explanation:
This answer is correct because it meets the requirements of moving the data efficiently and
without disruption, and still being able to access and update the data during the transfer
window. AWS DataSync is an online data movement and discovery service that simplifies and
accelerates data migrations to AWS and helps you move data quickly and securely between on-
premises storage, edge locations, other clouds, and AWS Storage. You can create an AWS
DataSync agent in the corporate data center to connect your NAS system to AWS over the
Direct Connect connection. You can create a data transfer task to specify the source location,
destination location, and options for transferring the data. You can start the transfer to an
Amazon S3 bucket and monitor the progress of the task. DataSync automatically encrypts data
in transit and verifies data integrity during transfer. DataSync also supports incremental
transfers, which means that only files that have changed since the last transfer are copied. This
way, you can ensure that your data is synchronized between your NAS system and S3 bucket,
and you can access and update the data during the transfer window.
Reference:
https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html
https://docs.aws.amazon.com/datasync/latest/userguide/how-datasync-works.html
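A rough boto3 sketch of answer A, assuming a DataSync agent has already been activated in the data center and can reach AWS over the Direct Connect link; hostnames, ARNs, and bucket names are placeholders:

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NAS, reached through the activated agent.
src = datasync.create_location_nfs(
    ServerHostname="nas.example.internal",
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc1234"]},
)

# Destination: the target S3 bucket, accessed through an IAM role.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::migration-target-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-access"},
)

# Incremental task: only changed files are copied on each run, so users can keep
# updating the data during the 90-day transfer window.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Options={"TransferMode": "CHANGED", "VerifyMode": "ONLY_FILES_TRANSFERRED"},
)
datasync.start_task_execution(TaskArn=task["TaskArn"])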
7. A company runs applications on AWS that connect to the company's Amazon RDS database.
The applications scale on weekends and at peak times of the year. The company wants to scale
the database more effectively for its applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the
database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to
use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change
the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group
configuration for the database. Change the applications to use the Lambda function.
Answer: B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. RDS Proxy also reduces failover times for Aurora and RDS databases by up to 66% and enables IAM authentication and Secrets Manager integration for database access. RDS Proxy can be enabled for most applications with no code changes.
8. A company's facility has badge readers at every entrance throughout the building. When
badges are scanned, the readers send a message over HTTPS to indicate who attempted to
access that particular entrance.
A solutions architect must design a system to process these messages from the sensors. The
solution must be highly available, and the results must be made available for the company's
security team to analyze.
Which system architecture should the solutions architect recommend?
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket.
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to
invoke an AWS Lambda function to process the messages and save the results to an Amazon
DynamoDB table.
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function.
Configure the Lambda function to process the messages and save the results to an Amazon
DynamoDB table.
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection
from the facility network to the VPC so that sensor data can be written directly to an S3 bucket
by way of the VPC endpoint.
Answer: B
Explanation:
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save
the messages to an Amazon DynamoDB table. This option provides a highly available and
scalable solution that can easily handle large amounts of data. It also integrates with other AWS
services, making it easier to analyze and visualize the data for the security team.
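To make answer B concrete, a minimal Lambda handler (not from the original material) behind an API Gateway proxy integration that writes each badge-scan message to DynamoDB; the table name and payload fields are assumptions:

import json
import os

import boto3

table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "BadgeEvents"))  # placeholder table

def handler(event, context):
    """Store one badge-scan message delivered by API Gateway over HTTPS."""
    body = json.loads(event["body"])
    table.put_item(Item={
        "badge_id": body["badge_id"],       # assumed payload fields
        "entrance": body["entrance"],
        "timestamp": body["timestamp"],
    })
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}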
9. A company is launching a new application and will display application metrics on an Amazon
CloudWatch dashboard. The company’s product manager needs to access this dashboard
periodically. The product manager does not have an AWS account. A solution architect must
provide access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?
A. Share the dashboard from the CloudWatch console. Enter the product manager’s email
address, and complete the sharing steps. Provide a shareable link for the dashboard to the
product manager.
B. Create an IAM user specifically for the product manager. Attach the CloudWatch Read Only
Access managed policy to the user. Share the new login credential with the product manager.
Share the browser URL of the correct dashboard with the product manager.
C. Create an IAM user for the company’s employees. Attach the View Only Access AWS
managed policy to the IAM user. Share the new login credentials with the product manager. Ask
the product manager to navigate to the CloudWatch console and locate the dashboard by name
in the Dashboards section.
D. Deploy a bastion server in a public subnet. When the product manager requires access to
the dashboard, start the server and share the RDP credentials. On the bastion server, ensure
that the browser is configured to open the dashboard URL with cached AWS credentials that
have appropriate permissions to view the dashboard.
Answer: B
Explanation:
To provide the product manager access to the Amazon CloudWatch dashboard while following
the principle of least privilege, a solution architect should create an IAM user specifically for the
product manager and attach the CloudWatch Read Only Access managed policy to the user.
This policy allows the user to view the dashboard without being able to make any changes to it.
The solution architect should then share the new login credential with the product manager and
provide them with the browser URL of the correct dashboard.
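A sketch of answer B in boto3; the user name and temporary password are placeholders, and CloudWatchReadOnlyAccess is the AWS managed policy referenced in the answer:

import boto3

iam = boto3.client("iam")

iam.create_user(UserName="product-manager")
iam.create_login_profile(
    UserName="product-manager",
    Password="TempPassw0rd!",            # placeholder; reset on first sign-in
    PasswordResetRequired=True,
)

# Least privilege: read-only access to CloudWatch, including dashboards.
iam.attach_user_policy(
    UserName="product-manager",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
)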
10. A company is running a multi-tier web application on premises. The web application is
containerized
and runs on a number of Linux hosts connected to a PostgreSQL database that contains user
records. The operational overhead of maintaining the infrastructure and capacity planning is
limiting the company's growth. A solutions architect must improve the application's
infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose
two.)
A. Migrate the PostgreSQL database to Amazon Aurora
B. Migrate the web application to be hosted on Amazon EC2 instances.
C. Set up an Amazon CloudFront distribution for the web application content.
D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container
Service (Amazon ECS).
Answer: AE
Explanation:
Amazon Aurora is a fully managed, scalable, and highly available relational database service
that is compatible with PostgreSQL. Migrating the database to Amazon Aurora would reduce
the operational overhead of maintaining the database infrastructure and allow the company to
focus on building and scaling the application. AWS Fargate is a fully managed container
orchestration service that enables users to run containers without the need to manage the
underlying EC2 instances. By using AWS Fargate with Amazon Elastic Container Service
(Amazon ECS), the solutions architect can improve the scalability and efficiency of the web
application and reduce the operational overhead of maintaining the underlying infrastructure.
11. A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.
What should a solutions architect propose to improve the performance of the workload?
A. Choose a cluster placement group while launching Amazon EC2 instances.
B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
D. Choose the required capacity reservation while launching Amazon EC2 instances.
Answer: A
Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-placementgroup.html
"A cluster placement group is a logical grouping of instances within a single Availability Zone that benefit from low network latency, high network throughput."
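A brief boto3 sketch of answer A; the AMI ID and instance type are placeholders:

import boto3

ec2 = boto3.client("ec2")

# The cluster strategy packs instances close together in one AZ for low latency and high throughput.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="c5n.18xlarge",           # placeholder HPC-oriented instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)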
12. A company provides a Voice over Internet Protocol (VoIP) service that uses UDP
connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group.
The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also
needs automated failover between Regions.
Which solution will meet these requirements?
A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target
group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in
each Region.
B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the
target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint
in each Region.
C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target
group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to
aliases for
each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin.
D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the
target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that
points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the
weighted record as an origin.
Answer: D
Explanation:
https://aws.amazon.com/global-accelerator/faqs/
HTTP/HTTPS: ALB. TCP and UDP: NLB. Lowest-latency routing, higher throughput, automated failover, and Anycast IP addressing: AWS Global Accelerator. Caching at edge locations: CloudFront.
AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts instantaneously to route your users to the next available endpoint.
13. A company hosts a three-tier web application on Amazon EC2 instances in a single Availability Zone. The web application uses a self-managed MySQL database that is hosted on an EC2 instance and stores data on an Amazon Elastic Block Store (Amazon EBS) volume. The MySQL database currently uses a 1 TB Provisioned IOPS SSD (io2) EBS volume. The company expects traffic of 1,000 IOPS for both reads and writes at peak traffic.
The company wants to minimize any disruptions, stabilize performance, and reduce costs while retaining the capacity for double the IOPS. The company wants to move the database tier to a fully managed solution that is highly available and fault tolerant.
Which solution will meet these requirements MOST cost-effectively?
A. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block
Express EBS volume.
B. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General
Purpose SSD (gp2) EBS volume.
C. Use Amazon S3 Intelligent-Tiering access tiers.
D. Use two large EC2 instances to host the database in active-passive mode.
Answer: B
Explanation:
RDS supported storage: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
gp2 maximum IOPS: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose.html#gp2-performance
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard). They differ in performance characteristics and price, which means that you can tailor your storage performance and cost to the needs of your database workload. You can create MySQL, MariaDB, Oracle, and PostgreSQL RDS DB instances with up to 64 tebibytes (TiB) of storage. You can create SQL Server RDS DB instances with up to 16 TiB of storage. For this amount of storage, use the Provisioned IOPS SSD and General Purpose SSD storage types.
A 1 TB gp2 volume has a baseline of 3 IOPS per GiB, or roughly 3,000 IOPS, which covers double the expected 1,000 IOPS at a lower cost than io2, and a Multi-AZ RDS for MySQL deployment provides the required high availability and fault tolerance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
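For illustration (not part of the original explanation), a boto3 sketch of answer B; the instance class, identifiers, and credentials are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="webapp-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",        # placeholder instance class
    AllocatedStorage=1024,                 # 1 TiB of gp2 ~ 3,000 baseline IOPS
    StorageType="gp2",
    MultiAZ=True,                          # synchronous standby in a second AZ
    MasterUsername="admin",
    MasterUserPassword="change-me-please", # placeholder credentials
)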
14. A company is running an online transaction processing (OLTP) workload on AWS. This
workload uses an unencrypted Amazon RDS DB instance in a Multi-AZ deployment. Daily
database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always
encrypted moving forward?
A. Encrypt a copy of the latest DB snapshot. Replace the existing DB instance by restoring the encrypted snapshot.
B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance.
C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB instance.
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).
Answer: A
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html#USER_RestoreFromSnapshot.CON
Under "Encrypt unencrypted resources":
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
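A minimal boto3 sketch of answer A; the snapshot identifiers are placeholders, and the copy must finish before the restore is started:

import boto3

rds = boto3.client("rds")

# 1. Copy the latest snapshot and encrypt the copy with a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:oltp-db-2024-01-01-06-00",     # placeholder snapshot name
    TargetDBSnapshotIdentifier="oltp-db-encrypted-copy",
    KmsKeyId="alias/aws/rds",
)

# 2. Once the copy is available, restore a new encrypted DB instance from it,
#    then repoint the application and retire the unencrypted instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="oltp-db-encrypted",
    DBSnapshotIdentifier="oltp-db-encrypted-copy",
    MultiAZ=True,
)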
15. A company is developing a mobile gaming app in a single AWS Region. The app runs on
multiple Amazon EC2 instances in an Auto Scaling group. The company stores the app data in
Amazon DynamoDB. The app communicates by using TCP traffic and UDP traffic between the
users and the servers. The application will be used globally. The company wants to ensure the lowest possible latency for all users.
Which solution will meet these requirements?
A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer
(ALB) behind an accelerator endpoint that uses Global Accelerator integration and listening on
the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer
(NLB) behind an accelerator endpoint that uses Global Accelerator integration and listening on
the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB
C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network
Load
Balancer (NLB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto
Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the
origin.
D. Create an Amazon Cloudfront content delivery network (CDN) endpoint. Create an
Application Load Balancer (ALB) behind the endpoint and listening on the TCP and UDP ports.
Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the
ALB as the origin
Answer: B
Explanation:
AWS Global Accelerator is a networking service that improves the performance and availability
of applications for global users. It uses the AWS global network to route user traffic to the
optimal endpoint based on performance and health. It also provides static IP addresses that act
as a fixed entry point to the applications and support both TCP and UDP protocols. By using
AWS Global Accelerator, the solution can ensure the lowest possible latency for all users.
A) Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer
(ALB)
behind an accelerator endpoint that uses Global Accelerator integration and listening on the
TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. This
solution will not work, as an ALB does not support the UDP protocol.
C) Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network
Load Balancer (NLB) behind the endpoint and listening on the TCP and UDP ports. Update the
Auto Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the
origin. This solution will not work, as CloudFront does not support the UDP protocol.
D) Create an Amazon Cloudfront content delivery network (CDN) endpoint. Create an
Application Load Balancer (ALB) behind the endpoint and listening on the TCP and UDP ports.
Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the
ALB as the origin. This solution will not work, as CloudFront and ALBs do not support the UDP protocol.
Reference URL: https://aws.amazon.com/global-accelerator/
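A sketch of answer B in boto3 (identifiers, ports, and the NLB ARN are placeholders); Global Accelerator's API is served from the us-west-2 endpoint:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)

# One UDP listener is shown; a second listener can be created the same way for TCP.
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 3000, "ToPort": 3000}],        # placeholder game port
)

# Attach the Regional NLB that fronts the Auto Scaling group.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/0123abcd",
        "Weight": 128,
    }],
)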
16. A financial company hosts a web application on AWS. The application uses an Amazon API
Gateway Regional API endpoint to give users the ability to retrieve current stock prices. The
company's security team has noticed an increase in the number of API requests. The security
team is concerned that HTTP flood attacks might take the application offline.
A solutions architect must design a solution to protect the application from this type of attack.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint
with a maximum TTL of 24 hours
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with
the API Gateway stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team
when the predefined rate is reached
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway
Regional API endpoint Create an AWS Lambda function to block requests from IP addresses
that exceed the predefined rate.
Answer: B
Explanation:
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html
A rate-based rule in AWS WAF allows the security team to configure thresholds that trigger rate-
based rules, which enable AWS WAF to track the rate of requests for a specified time period
and then block them automatically when the threshold is exceeded. This provides the ability to
prevent HTTP flood attacks with minimal operational overhead.
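To make answer B concrete, a boto3 sketch that creates a Regional web ACL with a rate-based rule and associates it with an API Gateway stage; the names, rate limit, and stage ARN are placeholders:

import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="api-flood-protection",
    Scope="REGIONAL",                           # Regional scope for an API Gateway REST API
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "apiFloodProtection",
    },
    Rules=[{
        "Name": "rate-limit",
        "Priority": 0,
        "Action": {"Block": {}},
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimit",
        },
    }],
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",   # placeholder stage ARN
)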
17. A company has an application that ingests incoming messages. These messages are then
quickly consumed by dozens of other applications and microservices.
The number of messages varies drastically and sometimes spikes as high as 100,000 each
second.
The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and
process the messages.
B. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the
number of EC2 instances based on CPU metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will
read from the stream and process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with
one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then
process the messages from the queues.
Answer: D
Explanation:
https://aws.amazon.com/sqs/features/
Publishing the incoming messages to an Amazon SNS topic with one or more Amazon SQS queue subscriptions creates a fan-out pattern that decouples ingestion from processing: every subscribed queue receives a copy of each message, and each consuming application or microservice reads from its own queue at its own pace. SQS buffers sudden spikes, such as 100,000 messages each second, so the solution scales without losing messages.
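A short boto3 sketch of the fan-out in answer D; topic and queue names are placeholders, and the SQS queue policy that allows SNS to deliver messages is omitted for brevity:

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="incoming-messages")["TopicArn"]

# One queue per consuming application; every subscribed queue receives a copy of each message.
queue_url = sqs.create_queue(QueueName="billing-consumer")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)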
18. A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic
File System (Amazon EFS) file system and another S3 bucket. The files must be copied
continuously. New files are added to the original S3 bucket consistently. The copied files should
be overwritten only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file
system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer
mode to transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event
notification to invoke the function when files are created and changed in Amazon S3. Configure
the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file
system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer
mode to transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system.
Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the
destination S3 bucket and the mounted file system.
Answer: A
Explanation:
AWS DataSync is a service that makes it easy to move large amounts of data between AWS
storage services and on-premises storage systems. AWS DataSync can copy files from an S3
bucket to an EFS file system and another S3 bucket continuously, as well as overwrite only the
files that have changed in the source. This solution will meet the requirements with the least
operational overhead, as it does not require any code development or manual intervention.
Reference:
[4] explains how to create AWS DataSync locations for different storage services.
[5] describes how to create and configure AWS DataSync tasks for data transfer.
[6] discusses the different transfer modes that AWS DataSync supports.
19. A company runs a website that uses a content management system (CMS) on Amazon
EC2. The CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB
instance for the data tier. Website images are stored on an Amazon Elastic Block Store
(Amazon EBS) volume that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance and
resilience of the website? (Select TWO.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance.
B. Share the website images by using an NFS share from the primary EC2 instance. Mount this
share on the other EC2 instances.
C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system
that is mounted on every EC2 instance.
D. Create an Amazon Machine Image (AMI) from the existing EC2 instance Use the AMI to
provision new instances behind an Application Load Balancer as part of an Auto Scaling group.
Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
accelerator in AWS Global Accelerator for the website.
E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to
provision new instances behind an Application Load Balancer as part of an Auto Scaling group.
Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
Amazon CloudFront distribution for the website.
Answer: CE
Explanation:
Option C provides moving the website images onto an Amazon EFS file system that is mounted
on every EC2 instance. Amazon EFS provides a scalable and fully managed file storage
solution that can be accessed concurrently from multiple EC2 instances. This ensures that the
website images can be accessed efficiently and consistently by all instances, improving
performance. In Option E, the Auto Scaling group maintains a minimum of two instances,
ensuring resilience by automatically replacing any unhealthy instances. Additionally, configuring
an Amazon CloudFront distribution for the website further improves performance by caching
content at edge locations closer to the end users, reducing latency and improving content delivery. Combining these actions improves the website's performance and resilience through efficient image storage and content delivery.
20. An image-processing company has a web application that users use to upload images. The
application uploads the images into an Amazon S3 bucket. The company has set up S3 event
notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon
SQS) standard queue. The SQS queue serves as the event source for an AWS Lambda
function that processes the images and sends the results to users through email.
Users report that they are receiving multiple email messages for every uploaded image. A
solutions architect determines that SQS messages are invoking the Lambda function more than
once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational
overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30
seconds.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID
to discard duplicate messages.
C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the
function timeout and the batch window timeout.
D. Modify the Lambda function to delete each message from the SQS queue immediately after
the message is read before processing.
Answer: C
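The answer relies on the SQS visibility timeout being longer than the time a message can spend inside one Lambda invocation (function timeout plus batch window), so the same message is not handed to a second invocation while the first is still working on it. A minimal sketch, assuming a 120-second function timeout and a 30-second batch window (both placeholders):

import boto3

sqs = boto3.client("sqs")

queue_url = sqs.get_queue_url(QueueName="image-events")["QueueUrl"]   # placeholder queue name

# Visibility timeout > function timeout (120 s) + batch window (30 s), with some headroom.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "180"},
)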
21. A solutions architect needs to help a company optimize the cost of running an application on
AWS. The application will use Amazon EC2 instances, AWS Fargate, and AWS Lambda for
compute within the architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be
sporadic and unpredictable. Workloads that run on EC2 instances can be interrupted at any
time. The application front end will run on Fargate, and Lambda will serve the API layer. The
front-end utilization and API layer utilization will be predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for
hosting this application? (Choose two.)
A. Use Spot Instances for the data ingestion layer
B. Use On-Demand Instances for the data ingestion layer
C. Purchase a 1-year Compute Savings Plan for the front end and API layer.
D. Purchase 1-year All Upfront Reserved instances for the data ingestion layer.
E. Purchase a 1-year EC2 instance Savings Plan for the front end and API layer.
Answer: AC
Explanation:
An EC2 Instance Savings Plan saves up to 72%, while a Compute Savings Plan saves up to 66%. However, per AWS: "Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66%. These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, region, OS or tenancy, and also apply to Fargate and Lambda usage." EC2 Instance Savings Plans do not apply to Fargate or Lambda.
22. A company runs a website that stores images of historical events. Website users need the
ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.
Which solution will meet these requirements MOST cost-effectively?
A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
Answer: C
Explanation:
it allows the company to store and deliver images to users in a highly available and cost-
effective way. By storing images in Amazon S3 Standard, the company can use a durable,
scalable, and secure object storage service that offers high availability and performance. By
using S3 Standard to directly deliver images by using a static website, the company can avoid
running web servers and reduce operational overhead. S3 Standard also offers low storage
pricing and free data transfer within AWS Regions.
Reference: Amazon S3 Storage Classes
Hosting a Static Website on Amazon S3
23. A company runs a real-time data ingestion solution on AWS. The solution consists of the
most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The
solution is deployed in a VPC in private subnets across three Availability Zones.
A solutions architect needs to redesign the data ingestion solution to be publicly available over
the internet. The data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets.
Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets.
Update the MSK cluster security settings to enable mutual TLS authentication.
C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB
security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS
protocol.
D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation:
The solution that meets the requirements with the most operational efficiency is to configure
public subnets in the existing VPC and deploy an MSK cluster in the public subnets. This
solution allows the data ingestion solution to be publicly available over the internet without
creating a new VPC or deploying a load balancer. The solution also ensures that the data in
transit is encrypted by enabling mutual TLS authentication, which requires both the client and
the server to present certificates for verification. This solution leverages the public access
feature of Amazon MSK, which is available for clusters running Apache Kafka 2.6.0 or later
versions1.
The other solutions are not as efficient as the first one because they either create unnecessary
resources or do not encrypt the data in transit. Creating a new VPC with public subnets would
incur additional costs and complexity for managing network resources and routing. Deploying an
ALB or an NLB would also add more costs and latency for the data ingestion solution.
Moreover, an ALB or an NLB would not encrypt the data in transit by itself, unless they are
configured with HTTPS listeners and certificates, which would require additional steps and
maintenance. Therefore, these solutions are not optimal for the given requirements.
Reference: Public access - Amazon Managed Streaming for Apache Kafka
24. A company is building an ecommerce application and needs to store sensitive customer
information. The company needs to give customers the ability to complete purchase
transactions on the website. The company also needs to ensure that sensitive customer data is
protected, even from database administrators.
Which solution meets these requirements?
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS
encryption to encrypt the data. Use an IAM instance role to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS
KMS) client-side encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-
side encryption to encrypt the data. Use S3 bucket policies to restrict access.
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application
servers. Use Windows file permissions to restrict access.
Answer: B
Explanation:
it allows the company to store sensitive customer information in a managed AWS service and
give customers the ability to complete purchase transactions on the website. By using AWS Key
Management Service (AWS KMS) client-side encryption, the company can encrypt the data
before sending it to Amazon RDS for MySQL. This ensures that sensitive customer data is
protected, even from database administrators, as only the application has access to the
encryption keys.
Reference: Using Encryption with Amazon RDS for MySQL
Encrypting Amazon RDS Resources
25. A company is developing a two-tier web application on AWS. The company's developers
have deployed the application on an Amazon EC2 instance that connects directly to a backend
Amazon RDS database. The company must not hardcode database credentials in the
application. The company must also implement a solution to automatically rotate the database
credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon
CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS
credentials and instance metadata at the same time.
B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use
Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda
function that updates the RDS credentials and the credentials in the configuration file at the
same time. Use S3 Versioning to ensure the ability to fall back to previous values.
C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic
rotation for the secret. Attach the required permission to the EC2 role to grant access to the
secret.
D. Store the database credentials as encrypted parameters in AWS Systems Manager
Parameter Store. Turn on automatic rotation for the encrypted parameters. Attach the required
permission to the EC2 role to grant access to the encrypted parameters.
Answer: C
Explanation:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
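A boto3 sketch of answer C; the secret name, credentials, and rotation Lambda ARN are placeholders:

import boto3

sm = boto3.client("secretsmanager")

secret = sm.create_secret(
    Name="prod/webapp/db-credentials",                                   # placeholder name
    SecretString='{"username": "appuser", "password": "initial-pass"}',  # placeholder credentials
)

# Automatic rotation every 30 days via a rotation Lambda function.
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-secret-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)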
26. A company wants to manage Amazon Machine Images (AMIs). The company currently
copies AMIs to the same AWS Region where the AMIs were created. The company needs to
design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company's account.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
B. Configure AWS CloudTrail with an Amazon Simple Notification Service {Amazon SNS)
notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to
create a new table and to query on CreateImage when an API call is detected.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for
AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple
Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Answer: C
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
For example, you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send an email notification to you.
Creating an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API
call and configuring the target as an Amazon Simple Notification Service (Amazon SNS) topic to
send an alert when a CreateImage API call is detected will meet the requirements with the least
operational overhead. Amazon EventBridge is a serverless event bus that makes it easy to
connect applications together using data from your own applications, integrated Software as a
Service (SaaS) applications, and AWS services. By creating an EventBridge rule for the
CreateImage API call, the company can set up alerts whenever this operation is called within
their account. The alert can be sent to an SNS topic, which can then be configured to send
notifications to the company's email or other desired destination.
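To illustrate answer C, a boto3 sketch that matches the CreateImage API call (recorded by CloudTrail) and targets an SNS topic; the rule name and topic ARN are placeholders:

import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="alert-on-createimage",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventSource": ["ec2.amazonaws.com"], "eventName": ["CreateImage"]},
    }),
)

events.put_targets(
    Rule="alert-on-createimage",
    Targets=[{"Id": "sns-alert", "Arn": "arn:aws:sns:us-east-1:123456789012:ami-creation-alerts"}],
)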
27. A company has a Windows-based application that must be migrated to AWS. The
application requires the use of a shared Windows file system attached to multiple Amazon EC2
Windows instances that are deployed across multiple Availability Zones.
What should a solutions architect do to meet this requirement?
A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each
Windows instance.
B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each
Windows instance.
C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS
file system to each Windows instance.
D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.
Answer: B
Explanation:
This solution meets the requirement of migrating a Windows-based application that requires the
use of a shared Windows file system attached to multiple Amazon EC2 Windows instances that
are deployed across multiple Availability Zones. Amazon FSx for Windows File Server provides
fully managed shared storage built on Windows Server, and delivers a wide range of data
access, data management, and administrative capabilities. It supports the Server Message
Block (SMB) protocol and can be mounted to EC2 Windows instances across multiple
Availability Zones.
Option A is incorrect because AWS Storage Gateway in volume gateway mode provides cloud-
backed
storage volumes that can be mounted as iSCSI devices from on-premises application servers,
but it does not support SMB protocol or EC2 Windows instances. Option C is incorrect because
Amazon Elastic File System (Amazon EFS) provides a scalable and elastic NFS file system for
Linux-based workloads, but it does not support SMB protocol or EC2 Windows instances.
Option D is incorrect because Amazon Elastic Block Store (Amazon EBS) provides persistent
block storage volumes for use with EC2 instances, but it does not support SMB protocol or
attaching multiple instances to the same volume.
Reference:
https://aws.amazon.com/fsx/windows/
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-file-shares.html
28. A company sells ringtones created from clips of popular songs. The files containing the
ringtones are stored in Amazon S3 Standard and are at least 128 KB in size. The company has
millions of files, but downloads are infrequent for ringtones older than 90 days. The company
needs to save money on storage while keeping the most accessed files readily available for its
users.
Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier
of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive
storage tier after 90 days.
C. Configure S3 Inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
Answer: D
Explanation:
This solution meets the requirements of saving money on storage while keeping the most
accessed files readily available for the users. S3 Lifecycle policy can automatically move objects
from one storage class to another based on predefined rules. S3 Standard-IA is a lower-cost
storage class for data that is accessed less frequently, but requires rapid access when needed.
It is suitable for ringtones older than 90 days that are downloaded infrequently.
Option A is incorrect because configuring S3 Standard-IA for the initial storage tier of the
objects can incur higher costs for frequent access and retrieval fees. Option B is incorrect
because moving the files to S3 Intelligent-Tiering can incur additional monitoring and
automation fees that may not be necessary for ringtones older than 90 days. Option C is
incorrect because using S3 inventory to manage objects and move them to S3 Standard-IA can
be complex and time-consuming, and it does not provide automatic cost savings.
Reference:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/cloud-storage-cost-optimization-ebook/
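A boto3 sketch of answer D; the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="ringtone-files",                                  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "standard-to-ia-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                         # apply to every object
            "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
        }]
    },
)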
29. A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object
storage. The chief information security officer has directed that no application traffic between
the two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?
A. AWS Key Management Service (AWS KMS)
B. VPC endpoint
C. Private subnet
D. Virtual private gateway
Answer: B
Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
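For illustration, a boto3 sketch of answer B that adds a gateway endpoint for S3 to the VPC's route tables; the VPC, route table, and Region values are placeholders:

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",        # S3 service name in the VPC's Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],         # route tables used by the EC2 subnets
)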
30. A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data
sources. The data is in JSON format, and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in flight is lost. The company's data science team wants to query ingested data in near-real time.
Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
Answer: A
Explanation:
https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html
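A minimal producer-side sketch for answer A; the stream name and record payload are placeholders (the ingested data can then be queried with Kinesis Data Analytics):

import json

import boto3

kinesis = boto3.client("kinesis")

kinesis.put_record(
    StreamName="ingest-stream",                                        # placeholder stream name
    Data=json.dumps({"source": "on-prem-01", "value": 42}).encode("utf-8"),
    PartitionKey="on-prem-01",
)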
31. "need a shared block device volume"==>EBS Multi-attach is Block Storage whereas EFS is
File Storage==> B out
32. An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB instance. The operations team has
isolated an application performance slowdown and wants to separate read traffic from write
traffic. A solutions architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?
A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the
primary Availability Zone.
B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the
secondary Availability Zone.
C. Create read replicas for the database. Configure the read replicas with half of the compute
and storage resources as the source database.
D. Create read replicas for the database. Configure the read replicas with the same compute
and storage resources as the source database.
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
33. A company runs a three-tier web application in the AWS Cloud that operates across three
Availability Zones. The application architecture has an Application Load Balancer, an Amazon
EC2 web server that hosts user session states, and a MySQL database that runs on an EC2
instance. The company expects sudden increases in application traffic. The company wants to
be able to scale to meet future application capacity demands and to ensure high availability
across all three Availability Zones.
Which solution will meet these requirements?
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster
deployment.
Use Amazon ElastiCache for Redis with high availability to store session data and to cache
reads.
Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster
deployment. Use Amazon ElastiCache for Memcached with high availability to store session
data and to cache reads. Migrate the web server to an Auto Scaling group that is in three
Availability Zones.
C. Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to
cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling
group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use
Amazon ElastiCache for Redis with high availability to store session data and to cache reads.
Migrate the web server to an Auto Scaling group that is in three Availability Zones.
Answer: A
Explanation:
This answer is correct because it meets the requirements of scaling to meet future application
capacity demands and ensuring high availability across all three Availability Zones. By migrating
the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment, the
company can benefit from automatic failover, backup, and patching of the database across
multiple Availability Zones. By using Amazon ElastiCache for Redis with high availability, the
company can store session data and cache reads in a fast, in-memory data store that can also
fail over across Availability Zones. By migrating the web server to an Auto Scaling group that is
in three Availability Zones, the company can automatically scale the web server capacity based
on the demand and traffic patterns.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
34. A company wants to build a web application on AWS. Client access requests to the website are not predictable and can be idle for a long time. Only customers who have paid a subscription fee can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Select
THREE.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB.
Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the
Lambda function.
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application
Load Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway
endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
C. Create an Amazon Cognito user pool to authenticate users.
D. Create an Amazon Cognito identity pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an
integrated Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to
serve the frontend web content.
Answer: ACE
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
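A minimal sketch of the authentication piece (option C), assuming hypothetical pool and client names: the Python (boto3) calls below create a Cognito user pool and an app client that the Amplify-hosted frontend could use to sign in paying subscribers.

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# User pool that holds the paying subscribers.
pool = cognito.create_user_pool(
    PoolName="paid-subscribers",        # hypothetical name
    AutoVerifiedAttributes=["email"],
)

# App client that the frontend uses for sign-in (no client secret for a browser app).
client = cognito.create_user_pool_client(
    UserPoolId=pool["UserPool"]["Id"],
    ClientName="web-app-client",
    GenerateSecret=False,
)
print(pool["UserPool"]["Id"], client["UserPoolClient"]["ClientId"])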
35. An ecommerce company hosts its analytics application in the AWS Cloud. The application
generates about 300 MB of data each month. The data is stored in JSON format. The company
is evaluating a disaster recovery solution to back up the data. The data must be accessible in
milliseconds if it is needed, and the data must be kept for 30 days.
Which solution meets these requirements MOST cost-effectively?
A. Amazon OpenSearch Service (Amazon Elasticsearch Service)
B. Amazon S3 Glacier
C. Amazon S3 Standard
D. Amazon RDS for PostgreSQL
Answer: C
Explanation:
This solution meets the requirements of a disaster recovery solution to back up the data that is
generated by an analytics application, stored in JSON format, and must be accessible in
milliseconds if it is needed. Amazon S3 Standard is a durable and scalable storage class for
frequently accessed data. It can store any amount of data and provide high availability and
performance. It can also support millisecond access time for data retrieval.
Option A is incorrect because Amazon OpenSearch Service (Amazon Elasticsearch Service) is
a search and analytics service that can index and query data, but it is not a backup solution for
data stored in JSON format. Option B is incorrect because Amazon S3 Glacier is a low-cost
storage class for data archiving and long-term backup, but it does not support millisecond
access time for data retrieval. Option D is incorrect because Amazon RDS for PostgreSQL is a
relational database service that can
store and query structured data, but it is not a backup solution for data stored in JSON format.
Reference:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/faqs/#Durability_and_data_protection
36. A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda.
The hospital uses API Gateway and Lambda to upload reports that are in PDF format and
JPEG format. The hospital needs to modify the Lambda code to identify protected health
information (PHI) in the reports.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use existing Python libraries to extract the text from the reports and to identify the PHI from
the extracted text.
B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify
the PHI from the extracted text.
C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical
to identify the PHI from the extracted text.
D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend
Medical to identify the PHI from the extracted text.
Answer: C
Explanation:
Amazon Textract is a fully managed service that automatically extracts text from scanned
documents and images, including PDF and JPEG reports. Amazon Comprehend Medical is a
fully managed natural language processing service whose DetectPHI operation identifies
protected health information (PHI) in text. Because both services are called directly from the
Lambda code with simple API requests, this combination identifies PHI with the least
operational overhead. Maintaining custom Python libraries (option A) or building and training a
SageMaker model (option B) adds significant operational work, and Amazon Rekognition
(option D) is designed for image and video analysis rather than document text extraction.
Reference:
https://aws.amazon.com/textract/
https://aws.amazon.com/comprehend/medical/
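A minimal sketch of the Lambda logic for option C, assuming the report arrives as single-page image bytes (multi-page PDFs would use the asynchronous Textract API instead): Textract extracts the text, and Comprehend Medical's DetectPHI operation flags the protected health information.

import boto3

textract = boto3.client("textract")
comprehend_medical = boto3.client("comprehendmedical")

def find_phi(document_bytes):
    # Extract the text lines from the uploaded report image.
    extraction = textract.detect_document_text(Document={"Bytes": document_bytes})
    text = "\n".join(
        block["Text"] for block in extraction["Blocks"] if block["BlockType"] == "LINE"
    )

    # Flag protected health information in the extracted text.
    result = comprehend_medical.detect_phi(Text=text)
    return [(entity["Type"], entity["Text"]) for entity in result["Entities"]]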
37. A company is building a containerized application on premises and decides to move the
application to AWS. The application will have thousands of users soon after it is deployed. The
company is unsure how to manage the deployment of containers at scale. The company needs
to deploy the containerized application in a highly available architecture that minimizes
operational overhead.
Which solution will meet these requirements?
A. Store container images In an Amazon Elastic Container Registry (Amazon ECR) repository.
Use an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch
type to run the containers. Use target tracking to scale automatically based on demand.
B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository.
Use an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch
type to run the containers. Use target tracking to scale automatically based on demand.
C. Store container images in a repository that runs on an Amazon EC2 instance. Run the
containers on EC2 instances that are spread across multiple Availability Zones. Monitor the
average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed
D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image
Launch EC2 Instances in an Auto Scaling group across multiple Availability Zones. Use an
Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization
threshold is breached.
Answer: A
Explanation:
AWS Fargate is a serverless compute engine for containers that lets users concentrate on
building applications instead of configuring and managing servers. Fargate also automates
resource management, allowing users to easily scale their applications in response to demand.
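As a rough illustration of the target tracking part of option A, the Python (boto3) sketch below registers a Fargate-backed ECS service as a scalable target and attaches a CPU-based target tracking policy. The cluster and service names are hypothetical.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired task count as the scalable dimension.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",    # hypothetical cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Keep average CPU utilization around 70% by adding or removing Fargate tasks.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)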
38. A company has several web servers that need to frequently access a common Amazon
RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to
connect to the database while meeting a security requirement to rotate user credentials
frequently.
Which solution meets these requirements?
A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM
permissions to allow the web servers to access AWS Secrets Manager.
B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the
necessary IAM permissions to allow the web servers to access OpsCenter.
C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM
permissions to allow the web servers to retrieve credentials and access the database.
D. Store the database user credentials in files encrypted with AWS Key Management Service
(AWS KMS) on the web server file system. The web server should be able to decrypt the files
and access the database
Answer: A
Explanation:
AWS Secrets Manager helps you protect secrets needed to access your applications, services,
and IT resources. The service enables you to easily rotate, manage, and retrieve database
credentials, API keys, and other secrets throughout their lifecycle.
https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
Secrets Manager enables you to replace hardcoded credentials in your code, including
passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This
helps ensure the secret can't be compromised by someone examining your code, because the
secret no longer exists in the code. Also, you can configure Secrets Manager to automatically
rotate the secret for you according to a specified schedule. This enables you to replace long-
term secrets with short-term ones, significantly reducing the risk of compromise.
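A minimal sketch of how a web server would fetch the rotated credentials at connection time, assuming a hypothetical secret name and the usual username/password JSON layout that Secrets Manager uses for RDS secrets:

import boto3
import json

secrets = boto3.client("secretsmanager")

# Retrieve the current credentials instead of hardcoding them; Secrets Manager
# rotates the secret on its own schedule, so every connection gets fresh values.
secret_value = secrets.get_secret_value(SecretId="prod/web/mysql")  # hypothetical secret name
credentials = json.loads(secret_value["SecretString"])
username = credentials["username"]
password = credentials["password"]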
39. A company wants to use high-performance computing and artificial intelligence to improve
its fraud prevention and detection technology. The company requires distributed processing to
complete a single workload as quickly as possible.
Which solution will meet these requirements?
A. Use Amazon Elastic Kubernetes Service (Amazon EKS) and multiple containers.
B. Use AWS ParallelCluster and the Message Passing Interface (MPI) libraries.
C. Use an Application Load Balancer and Amazon EC2 instances.
D. Use AWS Lambda functions.
Answer: B
Explanation:
AWS ParallelCluster is a service that allows you to create and manage high-performance
computing (HPC) clusters on AWS. It supports multiple schedulers, including AWS Batch, which
can run distributed workloads across multiple EC2 instances1.
MPI is a standard for message passing between processes in parallel computing. It provides
functions for sending and receiving data, synchronizing processes, and managing
communication groups2.
By using AWS ParallelCluster and MPI libraries, you can take advantage of the following
benefits:
You can easily create and configure HPC clusters that meet your specific requirements, such as
instance type, number of nodes, network configuration, and storage options1.
You can leverage the scalability and elasticity of AWS to run large-scale parallel workloads
without worrying about provisioning or managing servers1.
You can use MPI libraries to optimize the performance and efficiency of your parallel
applications by enabling inter-process communication and data exchange2.
You can choose from a variety of MPI implementations that are compatible with AWS
ParallelCluster, such as Open MPI, Intel MPI, and MPICH3.
40. A global company hosts its web application on Amazon EC2 instances behind an
Application Load Balancer (ALB). The web application has static data and dynamic data. The
company stores its static data in an Amazon S3 bucket. The company wants to improve
performance and reduce latency for the static data and dynamic data. The company is using its
own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins.
Configure Route 53 to route traffic to the CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS
Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure
Route 53 to route traffic to the CloudFront distribution.
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an
AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution
as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the
custom domain name as an endpoint for the web application.
D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS
Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two
domain names. Point one domain name to the CloudFront DNS name for dynamic content.
Point the other domain name to the accelerator DNS name for static content. Use the domain
names as endpoints for the web application.
Answer: C
Explanation:
Static content can be cached at CloudFront edge locations from the S3 bucket, while dynamic
content is served by the EC2 instances behind the ALB. Performance for both is improved by
AWS Global Accelerator, which has the ALB and the CloudFront distribution as its endpoints. A
custom domain name (for example, a Route 53 alias record) points to the accelerator DNS name
and serves as the endpoint for the web application.
https://aws.amazon.com/blogs/networking-and-content-delivery/improving-availability-and-performance-for-application-load-balancers-using-one-click-integration-with-aws-global-accelerator/
41. A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and
Amazon RDS DB instances to build and run a payment processing application. The company
will run the application in its on-premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect
is working with the company's operational team to build the application.
Which activities are the responsibility of the company's operational team? (Select THREE.)
A. Providing resilient power and network connectivity to the Outposts racks
B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on
Outposts
C. Physical security and access controls of the data center environment
D. Availability of the Outposts infrastructure including the power supplies, servers, and
networking equipment within the Outposts racks
E. Physical maintenance of Outposts components
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and
maintenance events
Answer: ACF
Explanation:
These answers are correct because they reflect the customer’s responsibilities for using AWS
Outposts as part of the solution. According to the AWS shared responsibility model, the
customer is responsible for providing resilient power and network connectivity to the Outposts
racks, ensuring physical security and access controls of the data center environment, and
providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance
events. AWS is responsible for managing the virtualization hypervisor, storage systems, and the
AWS services that run on Outposts, as well as the availability of the Outposts infrastructure
including the power supplies, servers, and networking equipment within the Outposts racks, and
the physical maintenance of Outposts components.
Reference:
https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html
https://www.contino.io/insights/the-sandwich-responsibility-model-aws-outposts/
42. A company's ecommerce website has unpredictabletraffic and uses AWS Lambda functions
to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to
maintain predictable database performance and ensure that the Lambda invocations do not
overload the database with too many connections.
What should a solutions architect do to meet these requirements?
A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a
VPC.
B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a
VPC.
D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a
VPC.
Answer: B
Explanation:
To maintain predictable database performance and ensure that the Lambda invocations do not
overload the database with too many connections, a solutions architect should point the client
driver at an RDS proxy endpoint and deploy the Lambda functions inside a VPC. An RDS proxy
is a fully managed database proxy that allows applications to share connections to a database,
improving database availability and scalability. By using an RDS proxy, the Lambda functions
can reuse existing connections, rather than creating new ones for every invocation, reducing the
connection overhead and latency. Deploying the Lambda functions inside a VPC allows them to
access the private RDS DB instance securely and efficiently, without exposing it to the public
internet.
Reference: Using Amazon RDS Proxy with AWS Lambda
Configuring a Lambda function to access resources in a VPC
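A rough sketch of creating the proxy with Python (boto3); the proxy name, secret ARN, IAM role, and subnet IDs are hypothetical placeholders. The Lambda functions then connect to the proxy's endpoint rather than to the DB instance directly.

import boto3

rds = boto3.client("rds")

# Create a proxy in front of the PostgreSQL instance so Lambda invocations
# share pooled connections instead of opening new ones.
proxy = rds.create_db_proxy(
    DBProxyName="app-postgres-proxy",                            # hypothetical name
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",  # hypothetical
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",     # hypothetical role
    VpcSubnetIds=["subnet-0abc", "subnet-0def"],                 # hypothetical subnets
    RequireTLS=True,
)
print(proxy["DBProxy"]["Endpoint"])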
43. A company is migrating an old application to AWS. The application runs a batch job every
hour and is CPU intensive. The batch job takes 15 minutes on average with an on-premises
server. The server has 64 virtual CPUs (vCPUs) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
A. Use AWS Lambda with functional scaling
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate
C. Use Amazon Lightsail with AWS Auto Scaling
D. Use AWS Batch on Amazon EC2
Answer: D
Explanation:
Use AWS Batch on Amazon EC2. AWS Batch is a fully managed batch processing service that
can be used to easily run batch jobs on Amazon EC2 instances. It can scale the number of
instances to match the workload, allowing the batch job to be completed in the desired time
frame with minimal operational overhead.
Using AWS Lambda with Amazon API Gateway - AWS Lambda
https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
AWS Lambda FAQs
https://aws.amazon.com/lambda/faqs/
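A minimal sketch of submitting the hourly job with Python (boto3), assuming a compute environment, job queue, and job definition have already been created with enough vCPUs to finish within 15 minutes; all names are hypothetical.

import boto3

batch = boto3.client("batch")

# Submit the CPU-intensive batch job; AWS Batch provisions and scales the
# underlying EC2 capacity to match the job's resource requirements.
job = batch.submit_job(
    jobName="hourly-cpu-job",          # hypothetical names
    jobQueue="high-cpu-queue",
    jobDefinition="batch-job-def:1",
)
print(job["jobId"])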
44. A company is launching an application on AWS. The application uses an Application Load
Balancer (ALB) to direct traffic to at least two Amazon EC2 instances in a single target group.
The instances are in an Auto Scaling group for each environment. The company requires a
development and a production environment. The production environment will have periods of
high traffic.
Which solution will configure the development environment MOST cost-effectively?
A. Reconfigure the target group in the development environment to have one EC2 instance as a
target.
B. Change the ALB balancing algorithm to least outstanding requests.
C. Reduce the size of the EC2 instances in both environments.
D. Reduce the maximum number of EC2 instances in the development environment’s Auto
Scaling group
Answer: D
Explanation:
This option will configure the development environment in the most cost-effective way as it
reduces the number of instances running in the development environment and therefore
reduces the cost of running the application. The development environment typically requires
less resources than the production environment, and it is unlikely that the development
environment will have periods of high traffic that would require a large number of instances. By
reducing the maximum number of instances in the development environment's Auto Scaling
group, the company can save on costs while still maintaining a functional development
environment.
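As a small illustration of option D, the Python (boto3) call below caps the development environment's Auto Scaling group; the group name and sizes are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling")

# Keep the development Auto Scaling group small; production keeps its own limits.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="dev-web-asg",   # hypothetical group name
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)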
45. A company has an application that generates a large number of files, each approximately 5
MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for
4 years before they can be deleted. Immediate accessibility is always required as the files
contain critical business data that is not easy to reproduce. The files are frequently accessed in
the first 30 days of the object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days
from object creation. Delete the files 4 years after object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-
Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after
object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent
Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-
Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4
years after object creation.
Answer: B
Explanation:
https://aws.amazon.com/s3/storage-classes/
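A minimal sketch of the lifecycle rule described in option B, using Python (boto3) with a hypothetical bucket name: objects move to S3 One Zone-IA 30 days after creation and are deleted after roughly 4 years.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="app-generated-files",        # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "onezone-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
                "Expiration": {"Days": 1460},   # approximately 4 years
            }
        ]
    },
)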
46. A company wants to use high performance computing (HPC) infrastructure on AWS for
financial risk modeling. The company's HPC workloads run on Linux. Each HPC workflow runs
on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output
files that are ultimately stored in persistent storage for analytics and long-term future use.
The company seeks a cloud storage solution that permits the copying of on-premises data to
long-term persistent storage to make data available for processing by all EC2 instances. The
solution should also be a high performance file system that is integrated with persistent storage
to read and write datasets and output files.
Which combination of AWS services meets these requirements?
A. Amazon FSx for Lustre integrated with Amazon S3
B. Amazon FSx for Windows File Server integrated with Amazon S3
C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)
D. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store
(Amazon EBS) General Purpose SSD (gp2) volume
Answer: A
Explanation:
https://aws.amazon.com/fsx/lustre/
Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-
performance, scalable storage for compute workloads. Many workloads such as machine
learning, high performance computing (HPC), video rendering, and financial simulations depend
on compute instances accessing the same set of data through high-performance shared
storage.
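A rough sketch of option A in Python (boto3): an FSx for Lustre file system linked to an S3 bucket so the Spot Instances can read datasets and export output files back to S3. The storage capacity, subnet, and bucket paths are hypothetical.

import boto3

fsx = boto3.client("fsx")

fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB, hypothetical size
    SubnetIds=["subnet-0abc"],                   # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-datasets",       # hypothetical bucket
        "ExportPath": "s3://hpc-datasets/results",
    },
)
print(fs["FileSystem"]["DNSName"])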
47. A company stores call transcript files on a monthly basis. Users access the files randomly
within 1 year of the call, but users access the files infrequently after 1 year. The company wants
to optimize its solution by giving users the ability to query and retrieve files that are less than
1-year-old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?
A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to
retrievethe files from S3 Glacier Instant Retrieval.
B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the
files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon
S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3
Glacier Select.
C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for
each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3
Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata
from Amazon S3.
D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the
files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the
files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
Answer: B
Explanation:
"For archive data that needs immediate access, such as medical images, news media assets,
or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage
class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does
not require immediate access but needs the flexibility to retrieve large sets of data at no cost,
such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly
S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours."
https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-s3-glacier-instant-retrieval-storage-class/
48. A company wants to reduce the cost of its existing three-tier web architecture. The web,
application, and database servers are running on Amazon EC2 instances for the development,
test, and production environments. The EC2 instances average 30% CPU utilization during
peak hours and 10% CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances
run for at least 8 hours each day. The company plans to implement automation to stop the
development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-
effectively?
A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the
development and test EC2 instances.
B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for
the development and test EC2 instances.
C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the
development and test EC2 instances.
D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the
development and test EC2 instances.
Answer: B
49. A company has a three-tier web application that is deployed on AWS. The web servers are
deployed in a public subnet in a VPC. The application servers and database servers are
deployed in private subnets in the same VPC. The company has deployed a third-party virtual
firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured
with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all
traffic to the application before the traffic reaches the web server.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic
to the appliance for packet inspection.
B. Create an Application Load Balancer in the public subnet of the application's VPC to route
the traffic to the appliance for packet inspection.
C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming
packets through the transit gateway.
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer
endpoint to receive the incoming packets and forward the packets to the appliance.
Answer: D
Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-balancer/
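A rough sketch of option D in Python (boto3). The Gateway Load Balancer is created in the inspection VPC, and a Gateway Load Balancer endpoint is created in the application VPC; the subnet IDs, VPC ID, and endpoint service name are hypothetical (in practice the service name comes from an endpoint service configuration that fronts the GWLB).

import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Gateway Load Balancer in the inspection VPC, in front of the firewall appliance.
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",                  # hypothetical name
    Type="gateway",
    Subnets=["subnet-inspection-a"],         # hypothetical subnet
)

# Endpoint in the application VPC; route tables send ingress traffic here
# so packets are forwarded to the appliance for inspection.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-application",                                                 # hypothetical VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",   # hypothetical service
    SubnetIds=["subnet-app-public"],
)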
50. A development team has launched a new application that is hosted on Amazon EC2
instances inside a development VPC. A solutions architect needs to create a new VPC in the
same account. The new VPC will be peered with the development VPC. The VPC CIDR block
for the development VPC is 192.
51. A social media company runs its application on Amazon EC2 instances behind an
Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution.
The application has more than a billion images stored in an Amazon S3 bucket and processes
thousands of images each second. The company wants to resize the images dynamically and
serve appropriate formats to clients.
Which solution will meet these requirements with the LEAST operational overhead?
A. Install an external image management library on an EC2 instance. Use the image
management library to process the images.
B. Create a CloudFront origin request policy. Use the policy to automatically resize images and
to serve the appropriate format based on the User-Agent HTTP header in the request.
C. Use a Lambda@Edge function with an external image management library. Associate the
Lambda@Edge function with the CloudFront behaviors that serve the images.
D. Create a CloudFront response headers policy. Use the policy to automatically resize images
and to serve the appropriate format based on the User-Agent HTTP header in the request.
Answer: C
Explanation:
To resize images dynamically and serve appropriate formats to clients, a Lambda@Edge
function with an external image management library can be used. Lambda@Edge allows
running custom code at the edge locations of CloudFront, which can process the images on the
fly and optimize them for different devices and browsers. An external image management library
can provide various image manipulation and optimization features.
Reference: Lambda@Edge
Resizing Images with Amazon CloudFront & Lambda@Edge
52. A social media company allows users to upload images to its website. The website runs on
Amazon EC2 instances. During upload requests, the website resizes the images to a standard
size and stores the resized images in Amazon S3. Users are experiencing slow upload requests
to the website.
The company needs to reduce coupling within the application and improve website
performance. A solutions architect must design the most operationally efficient process for
image uploads.
Which combination of actions should the solutions architect take to meet these requirements?
(Choose two.)
A. Configure the application to upload images to S3 Glacier.
B. Configure the web server to upload the original images to Amazon S3.
C. Configure the application to upload images directly from each user's browser to Amazon S3
through the use of a presigned URL.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is
uploaded. Use the function to resize the image
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS
Lambda function on a schedule to resize uploaded images.
Answer: CD
Explanation:
Amazon S3 is a highly scalable and durable object storage service that can store and retrieve
any amount of data from anywhere on the web1. Users can configure the application to upload
images directly from each user’s browser to Amazon S3 through the use of a presigned URL. A
presigned URL is a URL that gives access to an object in an S3 bucket for a limited time and
with a specific action, such as uploading an object2. Users can generate a presigned URL
programmatically using the AWS SDKs or AWS CLI. By using a presigned URL, users can
reduce coupling within the application and improvewebsite performance, as they do not need to
send the images to the web server first.
AWS Lambda is a serverless compute service that runs code in response to events and
automatically manages the underlying compute resources3. Users can configure S3 Event
Notifications to invoke an AWS Lambda function when an image is uploaded. S3 Event
Notifications is a feature that allows users to receive notifications when certain events happen in
an S3 bucket, such as object creation or deletion. Users can configure S3 Event Notifications to
invoke a Lambda function that resizes the image and stores it back in the same or a different S3
bucket. This way, users can offload the image resizing task from the web server to Lambda.
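A minimal sketch of the presigned URL step (option C) in Python (boto3), with a hypothetical bucket and key; the browser then performs an HTTP PUT directly against the returned URL.

import boto3

s3 = boto3.client("s3")

# Short-lived URL the browser uses to upload the original image straight to S3,
# bypassing the web server entirely.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "original-images", "Key": "uploads/photo-123.jpg"},  # hypothetical values
    ExpiresIn=300,   # valid for 5 minutes
)
print(upload_url)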
53. A company hosts a serverless application on AWS. The application uses Amazon API
Gateway, AWS Lambda, and an Amazon RDS for PostgreSQL database. The company notices
an increase in application errors that result from database connection timeouts during times Of
peak traffic or unpredictable traffic. The company needs a solution that reduces the application
failures with the least amount of change to the code.
What should a solutions architect do to meet these requirements?
A. Reduce the Lambda concurrency rate.
B. Enable RDS Proxy on the RDS DB instance.
C. Resize the RDS DB instance class to accept more connections.
D. Migrate the database to Amazon DynamoDB with on-demand scaling.
Answer: B
Explanation:
Using RDS Proxy, you can handle unpredictable surges in database traffic. Otherwise, these
surges might cause issues due to oversubscribing connections or creating new connections at a
fast rate. RDS Proxy establishes a database connection pool and reuses connections in this
pool. This approach avoids the memory and CPU overhead of opening a new database
connection each time. To protect the database against oversubscription, you can control the
number of database connections that are created.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
54. A company uses a 100 GB Amazon RDS for Microsoft SQL Server Single-AZ DB instance
in the us-east-1 Region to store customer transactions. The company needs high availability
and automated recovery for the DB instance.
The company must also run reports on the RDS database several times a year. The report
process causes transactions to take longer than usual to post to the customers' accounts.
Which combination of steps will meet these requirements? (Select TWO.)
A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment.
B. Take a snapshot of the current DB instance. Restore the snapshot to a new RDS deployment
in another Availability Zone.
C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for
reports to the read replica.
D. Migrate the database to RDS Custom.
E. Use RDS Proxy to limit reporting requests to the maintenance window.
Answer: AC
Explanation:
https://medium.com/awesome-cloud/aws-difference-between-multi-az-and-read-replicas-in-amazon-rds-60fe848ef53a
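A rough sketch of both steps in Python (boto3), using hypothetical instance identifiers: the first call converts the instance to Multi-AZ for automatic failover, and the second creates a read replica in another Availability Zone for the reporting workload.

import boto3

rds = boto3.client("rds")

# Enable Multi-AZ so RDS fails over automatically if the primary becomes unavailable.
rds.modify_db_instance(
    DBInstanceIdentifier="customer-transactions-db",      # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)

# Read replica that the periodic reports query, keeping load off the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="customer-transactions-reports",
    SourceDBInstanceIdentifier="customer-transactions-db",
    AvailabilityZone="us-east-1b",                         # hypothetical AZ
)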
55. A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The
applications must initiate communications with other external applications using the internet.
However, the company’s security policy states that any external service cannot initiate a
connection to the EC2 instances.
What should a solutions architect recommend to resolve this issue?
A. Create a NAT gateway and make it the destination of the subnet's route table.
B. Create an internet gateway and make it the destination of the subnet's route table
C. Create a virtual private gateway and make it the destination of the subnet's route table.
D. Create an egress-only internet gateway and make it the destination of the subnet's route
table.
Answer: D
Explanation:
An egress-only internet gateway is a VPC component that allows outbound communication over
IPv6 from instances in your VPC to the internet, and prevents the internet from initiating an IPv6
connection with your instances. This meets the company’s security policy and requirements. To
use an egress-only internet gateway, you need to add a route in the subnet’s route table that
routes IPv6 internet traffic (::/0) to the egress-only internet gateway.
Reference URLs:
1 https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
2 https://dev.to/aws-builders/what-is-an-egress-only-internet-gateways-in-aws-7gp
3 https://docs.aws.amazon.com/vpc/latest/userguide/route-table-options.html
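A minimal sketch of option D in Python (boto3), with hypothetical VPC and route table IDs: create the egress-only internet gateway and route all outbound IPv6 traffic through it.

import boto3

ec2 = boto3.client("ec2")

# Outbound-only IPv6 gateway; external services cannot initiate connections through it.
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0abc123")   # hypothetical VPC

# Send all outbound IPv6 traffic (::/0) from the subnet through the gateway.
ec2.create_route(
    RouteTableId="rtb-0abc123",                                        # hypothetical route table
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"],
)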
56. An ecommerce company is building a distributed application that involves several serverless
functions and AWS services to complete order-processing tasks. These tasks require manual
approvals as part of the workflow. A solutions architect needs to design an architecture for the
order-processing application. The solution must be able to combine multiple AWS Lambda
functions into responsive serverless applications. The solution also must orchestrate data and
services that run on Amazon EC2 instances, containers, or on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Step Functions to build the application.
B. Integrate all the application components in an AWS Glue job
C. Use Amazon Simple Queue Service (Amazon SQS) to build the application
D. Use AWS Lambda functions and Amazon EventBridge (Amazon CloudWatch Events) events
to build the application
Answer: A
Explanation:
AWS Step Functions is a fully managed service that makes it easy to build applications by
coordinating the components of distributed applications and microservices using visual
workflows. With Step Functions, you can combine multiple AWS Lambda functions into
responsive serverless applications and orchestrate data and services that run on Amazon EC2
instances, containers, or on-premises servers. Step Functions also allows for manual approvals
as part of the workflow. This solution meets all the requirements with the least operational
overhead.
https://aws.amazon.com/step-functions/
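A rough sketch of a Step Functions state machine for this workflow in Python (boto3). The Lambda ARNs and IAM role are hypothetical; the manual approval step uses the task token integration pattern, which pauses the execution until an approver returns the token.

import boto3
import json

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:validate-order",  # hypothetical
            "Next": "WaitForApproval",
        },
        "WaitForApproval": {
            # Pauses until SendTaskSuccess is called with the task token (manual approval).
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",          # hypothetical function
                "Payload": {"token.$": "$$.Task.Token"},
            },
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:charge-customer",  # hypothetical
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-role",   # hypothetical role
)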
57. A company is preparing a new data platform that will ingest real-time streaming data from
multiple sources. The company needs to transform the data before writing the data to Amazon
S3. The company needs the ability to use SQL to query the transformed data.
Which solutions will meet these requirements? (Choose two.)
A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics
to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use
Amazon Athena to query the transformed data from Amazon S3.
B. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use
AWS Glue to transform the data and to write the data to Amazon S3. Use Amazon Athena to
query the transformed data from Amazon S3.
C. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to
transform the data and to write the data to Amazon S3. Use Amazon Athena to query the
transformed data from Amazon S3.
D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use
Amazon Kinesis Data Analytics to transform the data and to write the data to Amazon S3. Use
the Amazon RDS query editor to query the transformed data from Amazon S3.
E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data.
Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use the Amazon RDS
query editor to query the transformed data from Amazon S3.
Answer: AB
Explanation:
To ingest, transform, and query real-time streaming data from multiple sources, Amazon Kinesis
and Amazon MSK are suitable solutions. Amazon Kinesis Data Streamscan stream the data
from various sources and integrate with other AWS services. Amazon Kinesis Data Analytics
can transform the data using SQL or Apache Flink. Amazon Kinesis Data Firehose can write the
data to Amazon S3 or other destinations. Amazon Athena can query the transformed data from
Amazon S3 using standard SQL. Amazon MSK can stream the data using Apache Kafka, which
is a popular open-source platform for streaming data. AWS Glue can transform the data using
Apache Spark or Python scripts and write the data to Amazon S3 or other destinations. Amazon
Athena can also query the transformed data from Amazon S3 using standard SQL.
Reference:
What Is Amazon Kinesis Data Streams?
What Is Amazon Kinesis Data Analytics?
What Is Amazon Kinesis Data Firehose?
What Is Amazon Athena?
What Is Amazon MSK?
What Is AWS Glue?
58. A company uses multiple vendors to distribute digital assets that are stored in Amazon S3
buckets. The company wants to ensure that its vendor AWS accounts have the minimum access
that is needed to download objects in these S3 buckets.
Which solution will meet these requirements with the LEAST operational overhead?
A. Design a bucket policy that has anonymous read permissions and permissions to list all
buckets.
B. Design a bucket policy that gives read-only access to users. Specify IAM entities as
principals.
C. Create a cross-account IAM role that has a read-only access policy specified for the IAM
role.
D. Create a user policy and vendor user groups that give read-only access to vendor users.
Answer: C
Explanation:
A cross-account IAM role is a way to grant users from one AWS account access to resources in
another AWS account. The cross-account IAM role can have a read-only access policy attached
to it, which allows the users to download objects from the S3 buckets without modifying or
deleting them. The cross-account IAM role also reduces the operational overhead of managing
multiple IAM users and policies in each account. The cross-account IAM role meets all the
requirements of the question, while the other options do not.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html
https://aws.amazon.com/blogs/storage/setting-up-cross-account-amazon-s3-access-with-s3-access-points/
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
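A minimal sketch of how a vendor account would use the cross-account role in Python (boto3); the role ARN, bucket, and object key are hypothetical.

import boto3

sts = boto3.client("sts")

# Assume the read-only role that the asset owner created for this vendor.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/vendor-readonly",   # hypothetical role
    RoleSessionName="vendor-download",
)

creds = assumed["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Download an asset with the temporary, read-only credentials.
s3.download_file("digital-assets", "videos/launch.mp4", "/tmp/launch.mp4")   # hypothetical bucket/key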
59. Amazon EFS automatically replicates data within and across three Availability Zones,
whereas a cluster placement group keeps all EC2 instances within a single Availability Zone.
60. A company has a financial application that produces reports. The reports average 50 KB in
size and are stored in Amazon S3. The reports are frequently accessed during the first week
after production and must be stored for several years. The reports must be retrievable within 6
hours.
Which solution meets these requirements MOST cost-effectively?
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7
days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent
Access (S3 Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3
Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep
Archive after 7 days.
Answer: A
Explanation:
To store and retrieve reports that are frequently accessed during the first week and must be
stored for several years, S3 Standard and S3 Glacier are suitable solutions. S3 Standard offers
high durability, availability, and performance for frequently accessed data. S3 Glacier offers
secure and durable storage for long-term data archiving at a low cost. S3 Lifecycle rules can be
used to transition the reports from S3 Standard to S3 Glacier after 7 days, which can reduce
storage costs. S3 Glacier also supports retrieval within 6 hours.
Reference: Storage Classes
Object Lifecycle Management
Retrieving Archived Objects from Amazon S3 Glacier
61. A company runs an application that uses Amazon RDS for PostgreSQL. The application
receives traffic only on weekdays during business hours. The company wants to optimize costs
and reduce operational overhead based on this usage.
Which solution will meet these requirements?
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum
CPU utilization.
D. Purchase All Upfront reserved DB instances.
Answer: A
Explanation:
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/?nc1=h_ls
The Instance Scheduler on AWS solution automates the starting and stopping of Amazon
Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon
RDS) instances. This solution helps reduce operational costs by stopping resources that are not
in use and starting them when they are needed1. The solution allows you to define custom
schedules and periods using a command line interface (CLI) or an SSM maintenance window1.
You can also choose between different payment options for the reserved DB instances, such as
No Upfront, Partial Upfront, or All Upfront2.
62. A company recently deployed a new auditing system to centralize information about
operating system versions, patching, and installed software for Amazon EC2 instances. A
solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups
successfully send reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to
send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system
when instances are launched and terminated
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to
send data to the audit system when instances are launched and terminated
D. Run a custom script on the instance operating system to send data to the audit system
Configure the script to be invoked by the EC2 Auto Scaling group when the instance starts and
is terminated
Answer: B
Explanation:
Amazon EC2 Auto Scaling offers the ability to add lifecycle hooks to your Auto Scaling groups.
These hooks let you create solutions that are aware of events in the Auto Scaling instance
lifecycle, and then perform a custom action on instances when the corresponding lifecycle event
occurs. (https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html)
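A minimal sketch of adding the two lifecycle hooks in Python (boto3); the hook names and Auto Scaling group name are hypothetical, and the custom script that sends the report would run while the instance is held in the wait state.

import boto3

autoscaling = boto3.client("autoscaling")

# Hold newly launched instances briefly so the launch report can be sent.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="report-on-launch",      # hypothetical names
    AutoScalingGroupName="app-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

# Do the same on termination so the audit system learns about removed instances.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="report-on-terminate",
    AutoScalingGroupName="app-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)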
63. A company used an Amazon RDS for MySQL DB instance during application testing. Before
terminating the DB instance at the end of the test cycle, a solutions architect created two
backups. The solutions architect created the first backup by using the mysqldump utility to
create a database dump. The solutions architect created the second backup by enabling the
final DB snapshot option on RDS termination.
The company is now planning for a new test cycle and wants to create a new DB instance from
the most recent backup. The company has chosen a MySQL-compatible edition of Amazon
Aurora to host the DB instance.
Which solutions will create the new DB instance? (Select TWO.)
A. Import the RDS snapshot directly into Aurora.
B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS
DMS) to import thedatabase dump into Aurora.
Answer: AC
Explanation:
These answers are correct because they meet the requirements of creating a new DB instance
from the most recent backup and using a MySQL-compatible edition of Amazon Aurora to host
the DB instance. You can import the RDS snapshot directly into Aurora if the MySQL DB
instance and the Aurora DB cluster are running the same version of MySQL. For example, you
can restore a MySQL version 5.6 snapshot directly to Aurora MySQL version 5.6, but you can’t
restore a MySQL version 5.6 snapshot directly to Aurora MySQL version 5.7. This method is
simple and requires the fewest number of steps. You can upload the database dump to Amazon
S3 and then import the database dump into Aurora if the MySQL DB instance and the Aurora
DB cluster are running different versions of MySQL. For example, you can import a MySQL
version 5.6 database dump into Aurora MySQL version 5.7, but you can’t restore a MySQL
version 5.6 snapshot directly to Aurora MySQL version 5.7. This method is more flexible and
allows you to migrate across different versions of MySQL.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Import.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Dump.html
64. A company has a dynamic web application hosted on two Amazon EC2 instances. The
company has
its own SSL certificate, which is on each instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL
encryption and decryption is causing the compute capacity of the web servers to reach their
maximum limit.
What should a solutions architect do to increase the application's performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM
certificate on each instance.
B. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2
instances to reference the bucket for SSL termination.
C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new
instance and configure it to direct connections to the existing EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load
Balancer with an HTTPS listener that uses the SSL certificate from ACM.
Answer: D
Explanation:
https://aws.amazon.com/certificate-manager/:
"With AWS Certificate Manager, you can quickly request a certificate, deploy it on ACM-
integrated AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions,
and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also
enables you to create private certificates for your internal resources and manage the certificate
lifecycle centrally."
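A rough sketch of option D in Python (boto3): import the existing certificate into ACM and attach it to an HTTPS listener on the ALB so TLS is terminated at the load balancer instead of on the instances. The file names and ARNs are hypothetical.

import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Import the company's existing certificate and private key into ACM.
with open("cert.pem", "rb") as cert, open("key.pem", "rb") as key:
    imported = acm.import_certificate(Certificate=cert.read(), PrivateKey=key.read())

# HTTPS listener on the ALB terminates TLS, offloading the work from the EC2 instances.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/123",  # hypothetical
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": imported["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/456",  # hypothetical
    }],
)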
65. A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses
bucket policies to restrict access to audit team IAM user credentials according to the principle of
least privilege. Company managers are worried about accidental deletion of documents in the
S3 bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?
A. Enable the versioning and MFA Delete features on the S3 bucket.
B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM
user account.
C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the
s3:DeleteObject action during audit dates.
D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit
team IAM user accounts from accessing the KMS key.
Answer: A
66. A company is implementing new data retention policies for all databases that run on
Amazon RDS DB instances. The company must retain daily backups for a minimum period of 2
years. The backups must be consistent and restorable.
Which solution should a solutions architect recommend to meet these requirements?
A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan
with a daily schedule and an expiration period of 2 years after creation. Assign the RDS DB
instances to the backup plan.
B. Configure a backup window for the RDS DB instances for daily snapshots. Assign a snapshot
retention policy of 2 years to each RDS DB instance. Use Amazon Data Lifecycle Manager
(Amazon DLM) to schedule snapshot deletions.
C. Configure database transaction logs to be automatically backed up to Amazon CloudWatch
Logs with an expiration period of 2 years.
D. Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a
replication instance, and configure a change data capture (CDC) task to stream database
changes to Amazon S3 as the target. Configure S3 Lifecycle policies to delete the snapshots
after 2 years.
Answer: A
Explanation:
AWS Backup is a fully managed service that enables users to centralize and automate the
backup of data across AWS services. It can create and manage backup plans that specify the
frequency and retention period of backups. It can also assign backup resources to backup
vaults, which are containers that store backup data1. By using AWS Backup, the solution can
ensure that the RDS backups are consistent, restorable, and retained for a minimum period of 2
years.
B) Configure a backup window for the RDS DB instances for daily snapshots. Assign a
snapshot retention policy of 2 years to each RDS DB instance. Use Amazon Data Lifecycle
Manager (Amazon DLM) to schedule snapshot deletions. This solution will not meet the
requirement of ensuring that the backups are consistent and restorable, as Amazon DLM is not
compatible with RDS snapshots and cannot be used to schedule snapshot deletions2.
C) Configure database transaction logs to be automatically backed up to Amazon CloudWatch
Logs with an expiration period of 2 years. This solution will not meet the requirement of ensuring
that the backups are consistent and restorable, as database transaction logs are not sufficient
to restore a database to a point in time. They only capture the changes made to the database,
not the full state of the database3.
D) Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a
replication instance, and configure a change data capture (CDC) task to stream database
changes to Amazon S3 as the target. Configure S3 Lifecycle policies to delete the snapshots
after 2 years. This solution will not meet the requirement of ensuring that the backups are
consistent and restorable, as AWS DMS is a service that helps users migrate databases to
AWS, not back up databases. It also requires additional resources and configuration, such as
replication instances and CDC tasks.
Reference URL: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
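A rough sketch of option A in Python (boto3); the vault name, IAM role, and DB instance ARN are hypothetical. The rule keeps daily recovery points for about 730 days (2 years).

import boto3

backup = boto3.client("backup")

# Vault that stores the RDS recovery points.
backup.create_backup_vault(BackupVaultName="rds-backup-vault")   # hypothetical name

# Daily backup rule with a 2-year retention period.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-daily-2yr",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "rds-backup-vault",
                "ScheduleExpression": "cron(0 5 ? * * *)",   # 05:00 UTC every day
                "Lifecycle": {"DeleteAfterDays": 730},
            }
        ],
    }
)

# Assign the RDS DB instances to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "rds-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-role",       # hypothetical role
        "Resources": ["arn:aws:rds:us-east-1:111122223333:db:orders-db"],     # hypothetical DB ARN
    },
)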
67. A company has an on-premises data center that is running out of storage capacity. The
company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs.
The solution must allow for immediate retrieval of data at no additional cost.
How can these requirements be met?
A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval
capacity for the workload.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in
Amazon S3 while retaining copies of frequently accessed data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage
Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS
Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-
in-time snapshots of the data to Amazon S3.
Answer: B
Explanation:The solution that will meet the requirements is to deploy AWS Storage Gateway using cached
volumes and use Storage Gateway to store data in Amazon S3 while retaining copies of
frequently accessed data subsets locally. This solution will allow the company to migrate its
storage infrastructure to AWS while minimizing bandwidth costs, as it will only transfer data that
is not cached locally. The solution will also allow for immediate retrieval of data at no additional
cost, as the cached volumes will provide low-latency access to the most recently used data. The
data stored in Amazon S3 will be durable, scalable, and secure.
The other solutions are not as effective as the first one because they either do not meet the
requirements or introduce additional costs or complexity. Deploying Amazon S3 Glacier Vault
and enabling expedited retrieval will not meet the requirements, as it will incur additional costs
for both storage and retrieval. Amazon S3 Glacier is a low-cost storage service for data
archiving and backup, but it has longer retrieval times than Amazon S3. Expedited retrieval is a
feature that allows faster access to data, but it charges a higher fee per GB retrieved.
Provisioned retrieval capacity is a feature that reserves dedicated capacity for expedited
retrievals, but it also charges a monthly fee per provisioned capacity unit. Deploying AWS
Storage Gateway using stored volumes to store data locally and use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3 will not meet the
requirements, as it will not migrate the storage infrastructure to AWS, but only create backups.
Stored volumes are volumes that store the primary data locally and back up snapshots to
Amazon S3. This solution will not reduce the storage capacity needed on-premises, nor will it
leverage the benefits of cloud storage. Deploying AWS Direct Connect to connect with the on-
premises data center and configuring AWS Storage Gateway to store data locally and use
Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3
will not meet the requirements, as it will also not migrate the storage infrastructure to AWS, but
only create backups. AWS Direct Connect is a service that establishes a dedicated network
connection between the on-premises data center and AWS, which can reduce network costs
and increase bandwidth. However, this solution will also not reduce the storage capacity
needed on-premises, nor will it leverage the benefits of cloud storage.
Reference:
AWS Storage Gateway
Cached volumes - AWS Storage Gateway
Amazon S3 Glacier
Retrieving archives from Amazon S3 Glacier vaults - Amazon Simple Storage Service
Stored volumes - AWS Storage Gateway
AWS Direct Connect
68. A company is designing an application. The application uses an AWS Lambda function to
receive information through Amazon API Gateway and to store the information in an Amazon
Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas
significantly to handle the high volumes of data that the company needs to load into the
database. A solutions architect must recommend a new design to improve scalability and
minimize the configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2
instances.
Connect the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator
(DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX
cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure
the other function to load the information into the database. Integrate the Lambda functions by
using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure
the other function to load the information into the database. Integrate the Lambda functions by
using an Amazon Simple Queue Service (Amazon SQS) queue.
Answer: D
Explanation:
Placing an Amazon SQS queue between the two Lambda functions decouples ingestion from database loading, so bursts of incoming data are buffered and the loading function can write to the Aurora PostgreSQL database at a sustainable rate. Bottlenecks can be avoided with queues (SQS), scalability improves, and the configuration effort is minimal because SQS integrates natively with Lambda.
69. A solutions architect needs to implement a solution to reduce a company's storage costs. All
the company's data is in the Amazon S3 Standard storage class. The company must keep all
data for at least 25 years. Data from the most recent 2 years must be highly available and
immediately retrievable.
Which solution will meet these requirements?
A. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive immediately.
B. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.
C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3
Glacier Deep Archive.
D. Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3
One Zone-IA) immediately and to S3 Glacier Deep Archive after 2 years.
Answer: B
Explanation:
Keeping objects in S3 Standard for the first 2 years satisfies the requirement that recent data be highly available and immediately retrievable. Transitioning objects to S3 Glacier Deep Archive after 2 years then provides the lowest-cost storage for the remainder of the 25-year retention period, where immediate retrieval is no longer required.
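For illustration only, a minimal boto3 sketch of such a lifecycle rule (the bucket name is a placeholder, and 730 days is used to approximate the 2-year boundary):

import boto3

s3 = boto3.client("s3")

# Keep objects in S3 Standard for roughly 2 years, then move them to Glacier Deep Archive.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-retention-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-after-2-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to all objects
                "Transitions": [{"Days": 730, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)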
70. A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
What should a solutions architect do to meet those requirements?
A. Use Amazon EC2 instances, and install Docker on the instances.
B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate
D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
Answer: C
Explanation:
Use Amazon ECS on AWS Fargate, because the requirements call for scalability and availability without having to provision and manage the underlying infrastructure that runs the containerized workload. https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
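As a rough sketch of what answer C looks like in practice (using boto3; the cluster, task definition, subnet, and security group identifiers are hypothetical placeholders):

import boto3

ecs = boto3.client("ecs")

# Run the containerized workload on Fargate; there are no EC2 worker nodes to provision or patch.
ecs.create_service(
    cluster="critical-apps",                 # placeholder cluster name
    serviceName="orders-service",            # placeholder service name
    taskDefinition="orders-task:1",          # placeholder task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],      # placeholder subnet ID
            "securityGroups": ["sg-0abc1234"],   # placeholder security group ID
            "assignPublicIp": "DISABLED",
        }
    },
)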
71. As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.
Which solution meets these requirements?
A. Run a query with Amazon Athena to generate the report.
B. Create a report in Cost Explorer and download the report
C. Access the bill details from the billing dashboard and download the bill.
D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon
SES).
Answer: B
Explanation:
This option is the most efficient because it uses Cost Explorer, which is a tool that allows you to
visualize, understand, and manage your AWS costs and usage over time1. You can create a
report in Cost Explorer that lists AWS billed items by user, using the user name tag as a filter2.
You can then download the report as a CSV file and use it for budget planning. Option A is less
efficient because it uses Amazon Athena, which is a serverless interactive query service that
allows you to analyze data in Amazon S3 using standard SQL3. You would need to set up an
Athena table that points to your AWS Cost and Usage Report data in S3, and then run a query
to generate the report. This would incur additional costs and complexity. Option C is less
efficient because it uses the billing dashboard, which provides a high-level summary of your
AWS costs and usage. You can access the bill details from the billing dashboard and download
them via the bill, but this would not list the billed items by user. You would need to use tags to group
your costs by user name, which would require additional steps. Option D is less efficient
because it uses AWS Budgets, which is a tool that allows you to plan your service usage,
service costs, and instance reservations. You can modify a cost budget in AWS Budgets to alert
with Amazon Simple Email Service (Amazon SES), but this would not generate a report of AWS
billed items by user. This would only notify you when your actual or forecasted costs exceed or
are expected to exceed your budgeted amount.
72. A company is creating an application that runs on containers in a VPC. The application
stores and accesses data in an Amazon S3 bucket. During the development phase, the
application will store and access 1 TB of data in Amazon S3 each day. The company wants to
minimize costs and wants to prevent traffic from traversing the internet whenever possible.
Which solution will meet these requirements?
A. Enable S3 Intelligent-Tiering for the S3 bucket.
B. Enable S3 Transfer Acceleration for the S3 bucket.
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables
in the VPC.
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all
route tables in the VPC.
Answer: C
Explanation:
A gateway VPC endpoint for Amazon S3 enables private connections between the VPC and
Amazon S3 that do not require an internet gateway or NAT device. This minimizes costs and
prevents traffic from traversing the internet. A gateway VPC endpoint uses a prefix list as the
route target in a VPC route table to route traffic privately to Amazon S31. Associating the
endpoint with all route tables in the VPC ensures that all subnets can access Amazon S3
through the endpoint.
Option A is incorrect because S3 Intelligent-Tiering is a storage class that optimizes storage
costs by automatically moving objects between two access tiers based on changing access
patterns. It does not affect the network traffic between the VPC and Amazon S32.
Option B is incorrect because S3 Transfer Acceleration is a feature that enables fast, easy, and
secure transfers of files over long distances between clients and an S3 bucket. It does not
prevent traffic from traversing the internet3.
Option D is incorrect because an interface VPC endpoint for Amazon S3 is powered by AWS
PrivateLink, which requires an elastic network interface (ENI) with a private IP address in each
subnet. This adds complexity and cost to the solution. Moreover, an interface VPC endpoint
does not support cross-Region access to Amazon S3.
Reference URL:
1: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
2: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-
dynamic-data-access
3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html:
https://aws.amazon.com/blogs/architecture/choosing-your-vpc-endpoint-strategy-for-amazon-s3/
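A minimal boto3 sketch of answer C, assuming the us-east-1 Region and placeholder VPC and route table IDs:

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: traffic to S3 stays on the AWS network, and the endpoint itself is free.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",          # assumes the us-east-1 Region
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],    # associate every route table in the VPC
)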
73. A company runs an application that receives data from thousands of geographically
dispersed remote devices that use UDP. The application processes the data immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to another AWS Region.
Which solution will meet these requirements?
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
Answer: B
Explanation:
To meet the requirements of minimizing latency for data transmission from the devices and
providing rapid failover to another AWS Region, the best solution would be to use AWS Global
Accelerator in combination with a Network Load Balancer (NLB) and Amazon Elastic Container
Service (Amazon ECS). AWS Global Accelerator is a service that improves the availability and
performance of applications by using static IP addresses (Anycast) to route traffic to optimal
AWS endpoints. With Global Accelerator, you can direct traffic to multiple Regions and
endpoints, and provide automatic failover to another AWS Region.
74. A company's website uses an Amazon EC2 instance store for its catalog of items. The
company wants to make sure that the catalog is highly available and that the catalog is stored in
a durable location.
What should a solutions architect do to meet these requirements?
A. Move the catalog to Amazon ElastiCache for Redis.
B. Deploy a larger EC2 instance with a larger instance store.
C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
Answer: D
Explanation:
Moving the catalog to an Amazon Elastic File System (Amazon EFS) file system provides both
high availability and durability. Amazon EFS is a fully-managed, highly-available, and durable
file system that is built to scale on demand. With Amazon EFS, the catalog data can be stored
and accessed from multiple EC2 instances in different availability zones, ensuring high
availability. Also, Amazon EFS automatically stores files redundantly within and across multiple
availability zones, making it a durable storage option.
75. A company wants to implement a disaster recovery plan for its primary on-premises file
storage volume. The file storage volume is mounted from an Internet Small Computer Systems
Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of
terabytes (TB) of data.
The company wants to ensure that end users retain immediate access to all file types from the
on-premises systems without experiencing latency.
Which solution will meet these requirements with the LEAST amount of change to the
company's existing infrastructure?
A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises.
Set the local cache to 10 TB. Modify existing applications to access the files through the NFS
protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3
bucket that contains the files.
B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all
existing data to a virtual tape library. Configure the data backup solution to run nightly after the
initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and
restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in
the virtual tape library.
C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to
10 TB. Mount the Volume Gateway cached volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume.
To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS)
volume and attach the EBS volume to an Amazon EC2 instance.
D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount
of disk space as the existing file storage volume. Mount the Volume Gateway stored volume to
the existing file server by using iSCSI, and copy all files to the storage volume. Configure
scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to
an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an
Amazon EC2 instance.
Answer: D
Explanation:
"The company wants to ensure that end users retain immediate access to all file types from the
on-premises systems " - Cached volumes: low latency access to most recent data - Stored
volumes: entire dataset is on premise, scheduled backups to S3 Hence Volume Gateway stored
volume is the apt choice.
76. An IoT company is releasing a mattress that has sensors to collect data about a user's sleep. The sensors will send data to an Amazon S3 bucket. The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each mattress. The results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Glue with a Scala job.
B. Use Amazon EMR with an Apache Spark script.
C. Use AWS Lambda with a Python script.
D. Use AWS Glue with a PySpark job.
Answer: C
Explanation:
AWS Lambda charges you based on the number of invocations and the execution time of your
function. Since the data processing job is relatively small (2 MB of data), Lambda is a cost-
effective choice. You only pay for the actual usage without the need to provision and maintain
infrastructure.
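As a hedged sketch of answer C, a Lambda handler in Python could be triggered by S3 event notifications when each sensor file arrives; the JSON payload format and the output prefix below are assumptions for illustration only:

import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Triggered by an S3 event notification when a sensor uploads its nightly ~2 MB file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        readings = json.loads(body)  # assumes the sensor writes JSON

        # Placeholder summary; the real processing fits within 1 GB of memory and 30 seconds.
        summary = {"mattress_id": key.split("/")[0], "samples": len(readings)}
        s3.put_object(
            Bucket=bucket,
            Key=f"summaries/{summary['mattress_id']}.json",  # hypothetical output prefix
            Body=json.dumps(summary).encode("utf-8"),
        )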
77. A solutions architect is designing a highly available Amazon ElastiCache for Redis based
solution. The solutions architect needs to ensure that failures do not result in performance
degradation or loss of data locally and within an AWS Region. The solution needs to provide
high availability at the node level and at the Region level.
Which solution will meet these requirements?
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.
Answer: A
Explanation:
This answer is correct because it provides high availability at the node level and at the Region
level for the ElastiCache for Redis solution. A Multi-AZ Redis replication group consists of a
primary cluster and up to five read replica clusters, each in a different Availability Zone. If the
primary cluster fails, one of the read replicas is automatically promoted to be the new primary
cluster. A Redis replication group with shards enables partitioning of the data across multiple
nodes, which increases the scalability and performance of the solution. Each shard can have
one or more replicas to provide redundancy and read scaling.
Reference:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Shards.html
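A boto3 sketch of answer A, with placeholder names and an example node type; two shards with two replicas each give node-level redundancy, and Multi-AZ with automatic failover gives Availability Zone-level resilience:

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="app-cache",                    # placeholder ID
    ReplicationGroupDescription="HA Redis with shards",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                   # example node type
    NumNodeGroups=2,                                   # shards (cluster mode enabled)
    ReplicasPerNodeGroup=2,                            # replicas in each shard
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)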
78. A company is building a new dynamic ordering website. The company wants to minimize
server maintenance and patching. The website must be highly available and must scale read
and write capacity as quickly as possible to meet changes in user demand.
Which solution will meet these requirements?
A. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon DynamoDB with on-demand capacity for the database. Configure Amazon CloudFront to deliver the website content.
B. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon Aurora with Aurora Auto Scaling for the database. Configure Amazon CloudFront to deliver the website content.
C. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load Balancer to distribute traffic. Use Amazon DynamoDB with provisioned write capacity for the database.
D. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load Balancer to distribute traffic. Use Amazon Aurora with Aurora Auto Scaling for the database.
Answer: A
Explanation:
The key phrase in the question is "must scale read and write capacity." Aurora Auto Scaling adjusts only the number of Aurora Replicas, so it scales read capacity but not write capacity. Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables: on-demand and provisioned (default, free-tier eligible). On-demand mode scales both read and write throughput automatically.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWrite
CapacityMode.html
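For illustration, creating a DynamoDB table in on-demand mode with boto3 (the table and attribute names are placeholders); PAY_PER_REQUEST scales read and write throughput automatically with no capacity planning:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",  # placeholder table name
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
)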
79. A company wants to implement a backup strategy for Amazon EC2 data and multiple
Amazon S3 buckets. Because of regulatory requirements, the company must retain backup files
for a specific time period. The company must not alter the files for the duration of the retention
period.
Which solution will meet these requirements?
A. Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create
the required backup plan.
B. Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C. Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle
management.
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create
the required backup plan.
Answer: D
Explanation:
AWS Backup is a fully managed service that allows you to centralize and automate data
protection of AWS services across compute, storage, and database. AWS Backup Vault Lock is
an optional feature of a backup vault that can help you enhance the security and control over
your backup vaults. When a lock is active in Compliance mode and the grace time is over, the
vault configuration cannot be altered or deleted by a customer, account/data owner, or AWS.
This ensures that your backups are available for you until they reach the expiration of their
retention periods and meet the regulatory requirements.
Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
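A boto3 sketch of answer D; the vault name and retention values are illustrative, and supplying ChangeableForDays is what places the lock in compliance mode:

import boto3

backup = boto3.client("backup")

backup.create_backup_vault(BackupVaultName="regulated-backups")  # placeholder vault name

# Supplying ChangeableForDays creates the lock in compliance mode: once the grace period
# ends, the lock and its retention settings can no longer be changed or removed.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="regulated-backups",
    MinRetentionDays=365,     # illustrative retention floor
    MaxRetentionDays=3650,    # illustrative retention ceiling
    ChangeableForDays=3,      # grace period before the lock becomes immutable
)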
80. A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for
data storage
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for
data storage
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global
tables for
data storage
Answer: B
Explanation:
A Network Load Balancer is a type of load balancer that operates at the connection level (Layer
4) and can load balance both TCP and UDP traffic1. A Network Load Balancer is suitable for
scenarios where high performance and low latency are required, such as real-time multiplayer
games1. A Network Load Balancer can also handle sudden and volatile traffic patterns while
using a single static IPaddress per Availability Zone1.
To meet the requirements of the scenario, the solutions architect should use a Network Load
Balancer for traffic distribution between the EC2 instances in the Auto Scaling group. The
Network Load Balancer can route UDP traffic from the client to the servers on the appropriate
port2. The Network Load Balancer can also support TLS offloading for secure communications
between the client and servers1.
Amazon DynamoDB is a fully managed NoSQL database service that can store and retrieve
any amount of data with consistent performance and low latency3. Amazon DynamoDB on-
demand is a flexible billing option that requires no capacity planning and charges only for the
read and write requests that are performed on the tables3. Amazon DynamoDB on-demand is
ideal for scenarios where the application traffic is unpredictable or sporadic, such as gaming
applications3.
To meet the requirements of the scenario, the solutions architect should use Amazon
DynamoDB on-demand for data storage. Amazon DynamoDB on-demand can store gamer
scores and other non-relational data without intervention from the developers. Amazon
DynamoDB on-demand can also scale automatically to handle any level of request traffic
without affecting performance or availability3.
81. A company has an ecommerce checkout workflow that writes an order to a database and
calls a service to process the payment. Users are experiencing timeouts during the checkout
process. When users resubmit the checkout form, multiple unique orders are created for the
same desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple
orders?
A. Configure the web application to send an order message to Amazon Kinesis Data Firehose.
Set the payment service to retrieve the message from Kinesis Data Firehose and process the
order.
B. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged
application path request. Use Lambda to query the database, call the payment service, and pass
in
the order information.
C. Store the order in the database. Send a message that includes the order number to Amazon
Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS,
retrieve the message, and process the order.
D. Store the order in the database. Send a message that includes the order number to an
Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the payment service to retrieve
the message and process the order. Delete the message from the queue.
Answer: D
Explanation:
This approach ensures that the order creation and payment processing steps are separate and
atomic. By sending the order information to an SQS FIFO queue, the payment service can
process the order one at a time and in the order they were received. If the payment service is
unable to process an order, it can be retried later, preventing the creation of multiple orders.
The deletion of the message from the queue after it is processed will prevent the same
message from being processed multiple times.
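As a sketch of answer D with boto3 (the queue name and payload are placeholders), the FIFO queue's deduplication ID keeps a resubmitted checkout form from producing a second order:

import boto3

sqs = boto3.client("sqs")

# FIFO queue: duplicate sends within the 5-minute deduplication window are discarded.
queue_url = sqs.create_queue(
    QueueName="checkout-orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# The web tier stores the order in the database, then enqueues only the order number.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderNumber": "12345"}',  # placeholder payload
    MessageGroupId="12345",                  # messages for the same order stay in order
    MessageDeduplicationId="12345",          # a resubmitted form reuses the same ID and is dropped
)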
82. An online retail company has more than 50 million active customers and receives more than
25,000 orders each day. The company collects purchase data for customers and stores this
data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can
perform analytics. The solution must provide the ability to manage fine-grained permissions for
the data and must minimize operational overhead.
Which solution will meet these requirements?
A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit
access.
B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon
S3.
Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit
access.
C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to
Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls
to limit access.
D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy
data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access
controls to limit access.
Answer: C
Explanation:
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-
formation/
83. A company needs to create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster
to host a digital media streaming application. The EKS cluster will use a managed node group
that is backed by Amazon Elastic Block Store (Amazon EBS) volumes for storage. The
company must encrypt all data at rest by using a customer managed key that is stored in AWS
Key Management Service (AWS KMS)
Which combination of actions will meet this requirement with the LEAST operational overhead?
(Select TWO.)
A. Use a Kubernetes plugin that uses the customer managed key to perform data encryption.
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the
customer managed key.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created.
Select the customer managed key as the default key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the customer managed key. Associate the role with the EKS cluster.
E. Store the customer managed key as a Kubernetes secret in the EKS cluster. Use the
customer managed key to encrypt the EBS volumes.
Answer: C, D
Explanation:
EBS encryption by default is a feature that enables encryption for all new EBS volumes and
snapshots created in a Region1. EBS encryption by default uses a service managed key or a
customer managed key that is stored in AWS KMS1. EBS encryption by default is suitable for
scenarios where data at rest must be encrypted by using a customer managed key, such as the
digital media streaming application in the scenario1.
To meet the requirements of the scenario, the solutions architect should enable EBS encryption
by default in the AWS Region where the EKS cluster will be created. The solutions architect
should select the customer managed key as the default key for encryption1. This way, all new
EBS volumes and snapshots created in that Region will be encrypted by using the customer
managed key.
EKS encryption provider support is a feature that enables envelope encryption of Kubernetes
secrets in EKS with a customer managed key that is stored in AWS KMS2. Envelope encryption
means that data is encrypted by data encryption keys (DEKs) using AES-GCM; DEKs are
encrypted by key encryption keys (KEKs) according to configuration in AWS KMS3. EKS
encryption provider support is suitable for scenarios where secrets must be encrypted by using
a customer managed key, such as the digital media streaming application in the scenario2.
To meet the requirements of the scenario, the solutions architect should create the EKS cluster
and create an IAM role that has a policy that grants permission to the customer managed key.
The solutions architect should associate the role with the EKS cluster2. This way, the EKS
cluster can use envelope encryption of Kubernetes secrets with the customer managed key.
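A short boto3 sketch of answer C (the KMS key ARN is a placeholder); the calls are Region-scoped, so they must be run in the Region where the EKS cluster will be created:

import boto3

ec2 = boto3.client("ec2")

# Turn on EBS encryption by default for the Region, then make the customer managed
# KMS key the default key, so every new node-group volume is encrypted with it.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder key ARN
)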
84. A global marketing company has applications that run in the ap-southeast-2 Region and the
eu-west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate securely
with databases that run in a VPC in ap-southeast-2.
Which network design will meet these requirements?
A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC.
Create an inbound rule in the eu-west-1 application security group that allows traffic from the
database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1
VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database
security group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1
VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database
security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-
southeast-2 VPC. After the transit gateways are properly peered and routing is configured,
create an inbound rule in the database security group that references the security group ID of
the application servers in eu-west-1.
Answer: C
Explanation:
"You cannot reference the security group of a peer VPC that's in a different Region. Instead,
use the CIDR block of the peer VPC." https://docs.aws.amazon.com/vpc/latest/peering/vpc-
peering-security-groups.html
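Because a security group in another Region cannot be referenced, the inbound rule uses an IP range from the peer VPC instead. A boto3 sketch, assuming a MySQL-compatible database on port 3306 and placeholder IDs and CIDR ranges:

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000example",                  # placeholder database security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,                        # assumes a MySQL-compatible database
            "ToPort": 3306,
            "IpRanges": [
                {
                    "CidrIp": "10.10.0.0/16",        # placeholder range covering the eu-west-1 app servers
                    "Description": "App servers in eu-west-1 via VPC peering",
                }
            ],
        }
    ],
)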
85. A company wants to migrate its three-tier application from on premises to AWS. The web
tier and the application tier are running on third-party virtual machines (VMs). The database tier
is running on MySQL.
The company needs to migrate the application by making the fewest possible changes to the
architecture. The company also needs a database solution that can restore data to a specific
point in time.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the web tier and the application tier to Amazon EC2 instances in private subnets.
Migrate the database tier to Amazon RDS for MySQL in private subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier
to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in
private subnets.
C. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier
to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in
private subnets.
D. Migrate the web tier and the application tier to Amazon EC2 instances in public subnets.
Migrate the database tier to Amazon Aurora MySQL in public subnets.
Answer: C
Explanation:
The solution that meets the requirements with the least operational overhead is to migrate the
web tier to Amazon EC2 instances in public subnets, migrate the application tier to EC2
instances in private subnets, and migrate the database tier to Amazon RDS for MySQL in
private subnets. This solution allows the company to migrate its three-tier application to AWS by
making minimal changes to the architecture, as it preserves the same web, application, and
database tiers and uses the same MySQL database engine. The solution also provides a
database solution that can restore data to a specific point in time, as Amazon RDS for MySQL
supports automated backups and point-in-time recovery. This solution also reduces the
operational overhead by using managed services such as Amazon EC2 and Amazon RDS,
which handle tasks such as provisioning, patching, scaling, and monitoring.
The other solutions do not meet the requirements as well as the first one because they either
involve more changes to the architecture, do not provide point-in-time recovery, or do not follow
best practices for security and availability. Migrating the database tier to Amazon Aurora MySQL
would require changing the database engine and potentially modifying the application code to
ensure compatibility. Migrating the web tier and the application tier to public subnets would
expose them to more security risks and reduce their availability in case of a subnet failure.
Migrating the database tier to public subnets would also compromise its security and
performance.
Reference:
Migrate Your Application Database to Amazon RDS
Amazon RDS for MySQL
Amazon Aurora MySQL
Amazon VPC
86. A company's application runs on Amazon EC2 instances behind an Application Load
Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple
Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch runs. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the
workload and avoid downtime?
A. Configure an Amazon CloudFront distribution in front of the ALB.
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
Answer: C
Explanation:
Configuring an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule is the best option because it proactively scales the EC2 instances before the month-end batch run begins, so the application can handle the increased workload without downtime. The scheduled scaling policy can increase the number of instances in the Auto Scaling group a few hours before the batch run and decrease it again after the batch run is complete, ensuring that resources are available when they are needed and not wasted when they are not.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-
scaling.html
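A boto3 sketch of answer C with illustrative values. Standard cron cannot express "the last day of the month," so the scale-out recurrence below fires at 22:00 UTC on days 28-31; the repeated firings are harmless because the action only (re)sets the group's capacity ahead of the midnight batch:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the month-end batch run.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",  # placeholder Auto Scaling group name
    ScheduledActionName="scale-out-before-month-end-batch",
    Recurrence="0 22 28-31 * *",     # illustrative schedule (UTC)
    MinSize=4,
    DesiredCapacity=8,               # illustrative capacity values
)

# Scale back in after the batch completes on the 1st of the month.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-in-after-month-end-batch",
    Recurrence="0 4 1 * *",
    MinSize=2,
    DesiredCapacity=2,
)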
87. A company has an AWS account used for software engineering. The AWS account has
access to the company's on-premises data center through a pair of AWS Direct Connect
connections. All non-VPC traffic routes to the virtual private gateway.
A development team recently created an AWS Lambda function through the console. The
development team needs to allow the function to access a database that runs in a private
subnet in the company's data center.
Which solution will meet these requirements?
A. Configure the Lambda function to run in the VPC with the appropriate security group.
B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda
function through the VPN.
C. Update the route tables in the VPC to allow the Lambda function to access the on-premises
data center through Direct Connect.
D. Create an Elastic IP address. Configure the Lambda function to send traffic through the
Elastic IP address without an elastic network interface.
Answer: A
Explanation:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-managing-eni
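A minimal boto3 sketch of answer A (the function name, subnet IDs, and security group ID are placeholders); the chosen subnets must have routes to the virtual private gateway so traffic can reach the data center over Direct Connect:

import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="db-sync-function",  # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # placeholder private subnets
        "SecurityGroupIds": ["sg-0ccc3333"],                  # must allow outbound traffic to the database
    },
)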
88. A company uses Amazon API Gateway to run a private gateway with two REST APIs in the
same VPC.
The BuyStock RESTful web service calls the CheckFunds RESTful
web service to ensure that enough funds are available before a stock can be purchased. The
company has noticed in the VPC flow logs that the BuyStock RESTful web service calls the
CheckFunds RESTful web service over the internet instead of through the VPC. A solutions
architect must implement a solution so that the APIs communicate through the VPC.
Which solution will meet these requirements with the FEWEST changes to the code?
A. Add an X-API-Key header in the HTTP header for authorization.
B. Use an interface endpoint.
C. Use a gateway endpoint.
D. Add an Amazon Simple Queue Service (Amazon SQS) queuebetween the two REST APIs.
Answer: B
Explanation:
Using an interface endpoint will allow the BuyStock RESTful web service and the CheckFunds RESTful web service to communicate through the VPC without any changes to the code. An interface endpoint creates an elastic network interface (ENI) with a private IP address in the VPC, and private DNS resolves the API Gateway (execute-api) domain names to that ENI. As a result, calls between the two REST APIs stay inside the VPC instead of traversing the internet.
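A boto3 sketch of answer B, assuming the us-east-1 Region and placeholder VPC, subnet, and security group IDs; with private DNS enabled, the existing execute-api hostnames resolve to the endpoint, so the calling code does not change:

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",                                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.execute-api",    # assumes the us-east-1 Region
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],     # placeholder subnets
    SecurityGroupIds=["sg-0ccc3333"],                     # must allow HTTPS (443) from the callers
    PrivateDnsEnabled=True,
)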
89. A company has one million users that use its mobile app. The company must analyze the
data usage in near-real time. The company also must encrypt the data in near-real time and
must store the data in a centralized location in Apache Parquet format for further processing.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon
Kinesis Data Analytics application to analyze the data. Invoke an AWS Lambda function to send
the data to the Kinesis Data Analytics application.
B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon
EMR cluster to analyze the data. Invoke an AWS Lambda function to send the data to the EMR
cluster.
C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3.
Create an Amazon EMR cluster to analyze the data.
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3.
Create an Amazon Kinesis Data Analytics application to analyze the data
Answer: D
Explanation:
This solution will meet the requirements with the least operational overhead as it uses Amazon
Kinesis Data Firehose, which is a fully managed service that can automatically handle the data
collection, data transformation, encryption, and data storage in near-real time. Kinesis Data
Firehose can automatically store the data in Amazon S3 in Apache Parquet format for further
processing. Additionally, it allows you to create an Amazon Kinesis Data Analytics application to
analyze the data in near real-time, with no need to manage any infrastructure or invoke any
Lambda function. This way you can process a large amount of data with the least operational
overhead.
90. An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not
have outbound internet access, but the EC2 instance needs the ability to download monthly
security updates from an outside vendor.
What should a solutions architect do to meet these requirements?
A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table
to use the internet gateway as the default route.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route
table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located.
Configure the private subnet route table to use the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in
the same subnet where the EC2 instance is located. Configure the private subnet route table to
use the internet gateway as the default route.
Answer: B
Explanation:
This approach will allow the EC2 instance to access the internet and download the monthly
security updates while still being located in a private subnet. By creating a NAT gateway and
placing it in a public subnet, it will allow the instances in the private subnet to access the
internet through the NAT gateway. And then, configure the private subnet route table to use the
NAT gateway as the default route. This will ensure that all outbound traffic is directed through
the NAT gateway, allowing the EC2 instance to access the internet while still maintaining the
security of the private subnet.
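For illustration, a boto3 sketch of answer B with placeholder subnet and route table IDs:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP address and create the NAT gateway in a public subnet.
allocation = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub11111",                # placeholder public subnet
    AllocationId=allocation["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
)

# Point the private subnet's default route at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0priv22222",              # placeholder private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)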
91. A retail company uses a regional Amazon API Gateway API for its public REST APIs. The
API Gateway endpoint is a custom domain name that points to an Amazon Route 53 alias
record. A solutions architect needs to create a solution that has minimal effects on customers
and minimal data loss to release the new version of APIs.
Which solution will meet these requirements?
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version.
Point an appropriate percentage of traffic to the canary stage. After API verification, promote the
canary stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file
format. Use the import-to-update operation in merge mode into the API in API Gateway. Deploy
the new version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file
format. Use the import-to-update operation in overwrite mode into the API in API Gateway.
Deploy the new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a
custom domain name for the new API Gateway API. Point the Route 53 alias record to the new
API Gateway API custom domain name.
Answer: A
Explanation:
This answer is correct because it meets the requirements of releasing the new version of APIs
with minimal effects on customers and minimal data loss. A canary release deployment is a
software development strategy in which a new version of an API is deployed for testing
purposes, and the base version remains deployed as a production release for normal
operations on the same stage. In a canary release deployment, total API traffic is separated at
random into a production release and a canary release with a pre-configured ratio. Typically, the
canary release receives a small percentage of API traffic and the production release takes up
the rest. The updated API features are only visible to API traffic through the canary. You can
adjust the canary traffic percentage to optimize test coverage or performance. By keeping
canary traffic small and the selection random, most users are not adversely affected at any time
by potential bugs in the new version, and no single user is adversely affected all the time. After
the test metrics pass your requirements, you can promote the canary release to the production
release and disable the canary from the deployment. This makes the new features available in
the production stage.
Reference: https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
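As a hedged boto3 sketch of answer A (the REST API ID and stage name are placeholders), the new version is deployed as a canary that receives 10% of traffic on the existing production stage; after verification it can be promoted and the canary settings removed:

import boto3

apigateway = boto3.client("apigateway")

apigateway.create_deployment(
    restApiId="a1b2c3d4e5",      # placeholder REST API ID
    stageName="prod",            # existing production stage
    description="v2 canary",
    canarySettings={
        "percentTraffic": 10.0,  # send 10% of traffic to the new version
        "useStageCache": False,
    },
)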
92. A company has an on-premises server that uses an Oracle database to process and store
customer information. The company wants to use an AWS database service to achieve higher
availability and to improve application performance. The company also wants to offload
reporting from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in
multiple AWS Regions. Point the reporting functions toward a separate DB instance from the
primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora
database. Direct the reporting functions to the reader instances.
Answer: D
Explanation:
Amazon Aurora is a fully managed relational database that is compatible with MySQL and
PostgreSQL. It provides up to five times better performance than MySQL and up to three times
better performance than PostgreSQL. It also provides high availability and durability by
replicating data across multiple Availability Zones and continuously backing up data to Amazon
S31. By using Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon
Aurora database, the solution can achieve higher availability and improve application
performance.
Amazon Aurora supports read replicas, which are separate instances that share the same
underlying storage as the primary instance. Read replicas can be used to offload read-only
queries from the primary instance and improve performance. Read replicas can also be used for
reporting functions2. By directing the reporting functions to the reader instances, the solution
can offload reporting from its primary database system.
A) Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance. This solution will not meet the requirement of using an AWS database
service, as AWS DMS is a service that helps users migrate databases to AWS, not a database
service itself. It also involves creating multiple DB instances in different Regions, which may
increase complexity and cost.
B) Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read
replica in the same zone as the primary DB instance. Direct the reporting functions to the read
replica. This solution will not meet the requirement of achieving higher availability, as a Single-
AZ deployment does not provide failover protection in case of an Availability Zone outage. It
also involves using Oracle as the database engine, which may not provide better performance
than Aurora.
C) Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database.
Direct the reporting functions to use the reader instance in the cluster deployment. This solution
will not meet the requirement of improving application performance, as Oracle may not provide
better performance than Aurora. It also involves using a cluster deployment, which is only
supported for Aurora, not for Oracle.
Reference URL: https://aws.amazon.com/rds/aurora/
93. A solutions architect needs to design a highly available application consisting of web,
application, and database tiers. HTTPS content delivery should be as close to the edge as
possible, with the least delivery time.
Which solution meets these requirements and is MOST secure?
A. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2
instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the
public ALB as the origin.
B. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances
in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2
instances as the origin.
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2
instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the
public ALB as the origin.
D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances
in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2
instances as the origin.
Answer: C
Explanation:
This solution meets the requirements for a highly available application with web, application, and database tiers, as well as edge-based content delivery through Amazon CloudFront. It is also the most secure option: the EC2 instances sit in private subnets, which prevents direct access to the web servers from the internet, while the public ALB still serves traffic to CloudFront. Keeping the web servers out of public subnets reduces the attack surface and provides a secure way to access the application.
 