AWS Solutions Architect Exam Simulator
Despite the title of this article, this isn’t an AWS Solutions Architect Associate braindump in the traditional sense.
I don’t believe in cheating.
Traditionally, the word “braindump” referred to someone taking an exam, memorizing the questions, and posting them online for others to use. That’s unethical and a direct violation of the AWS certification agreement. There’s no integrity or personal growth in that.
Better than AWS certification exam dumps
This is not an AWS braindump.
All of these questions come from my AWS Solutions Architect Associate Udemy course and from the certificationexams.pro website, which hosts hundreds of free AWS practice questions.
Each question has been carefully written to align with the official AWS Certified Solutions Architect Associate exam domains. They are designed to mirror the tone, logic, and technical depth of real exam scenarios, but they are not copied from the test. They’re built to help you learn the right way. Passing the AWS certification exams with integrity matters.
If you can answer these questions and understand why the incorrect options are wrong, you won’t just pass the AWS Solutions Architect Associate exam; you’ll truly understand how to design secure, resilient, high-performing, and cost-optimized architectures on AWS.
So, if you want to call this your AWS Solutions Architect Associate braindump, go ahead—but know that every question here is built to teach, not to cheat.
Each question below includes a detailed explanation, along with key tips and strategies to help you think like an AWS Solutions Architect on exam day.
Study hard, learn deeply, and best of luck on your exam.
AWS Solutions Architect Associate Exam Questions
Question 1
A retail analytics startup, Scrumtuous Market Insights, stores quarterly sales KPIs in an Amazon DynamoDB table named SalesMetrics. The team is building a lightweight web dashboard to present this data and wants to use fully managed components with the least possible operational overhead. Which architectures would meet these goals while minimizing operational effort? (Choose 2)
-
❏ A. An Application Load Balancer sends traffic to a target group that lists the DynamoDB table as the backend target
-
❏ B. An Amazon API Gateway REST API integrates directly with the DynamoDB table to read the sales metrics
-
❏ C. An Application Load Balancer routes traffic to a target group of Amazon EC2 instances that query the DynamoDB table
-
❏ D. An Amazon API Gateway REST API triggers an AWS Lambda function that retrieves items from the DynamoDB table
-
❏ E. An Amazon Route 53 hosted zone routes requests directly to an AWS Lambda endpoint to run code that reads the DynamoDB table
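For context, the serverless read path this scenario describes can be as small as one Lambda handler behind an API Gateway REST API. The sketch below assumes the SalesMetrics table uses a partition key named quarter, which is not stated in the question.

```python
# Minimal Lambda handler behind an API Gateway REST API that reads KPIs from
# the SalesMetrics DynamoDB table. The "quarter" partition key is an assumption.
import json

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SalesMetrics")


def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    quarter = params.get("quarter", "2025-Q1")
    result = table.query(KeyConditionExpression=Key("quarter").eq(quarter))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Items"], default=str),
    }
```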
Question 2
Which AWS service provides low-latency on-premises NFS and SMB file access to objects in Amazon S3 with local caching and minimal cost?
-
❏ A. Amazon FSx for Lustre
-
❏ B. AWS Storage Gateway – S3 File Gateway
-
❏ C. Amazon EFS
-
❏ D. Mountpoint for Amazon S3
Question 3
A digital media startup runs its subscription billing platform on AWS. The application uses an Amazon RDS for MySQL Multi-AZ DB cluster as the database tier. For regulatory reasons, the team must keep database backups for 45 days. Engineers take both automated RDS backups and occasional manual snapshots for point-in-time needs. The company wants to enforce a 45-day retention policy for all backups while preserving any automated and manual backups that were created within the last 45 days. The approach should minimize cost and operational work. Which solution meets these goals most cost effectively?
-
❏ A. Disable RDS automated backups and use AWS Backup daily backup plans with a 45-day retention policy
-
❏ B. Set the RDS automated backup retention to 45 days and schedule a simple script to delete manual snapshots older than 45 days
-
❏ C. Export RDS snapshots to Amazon S3 and rely on S3 Lifecycle rules to delete objects after 45 days
-
❏ D. Use AWS Backup to enforce a 45-day policy on automated backups and invoke AWS Lambda to remove manual snapshots older than 45 days
Question 4
How should a company migrate approximately 250 TB of on-premises files to Amazon S3 over an existing 20 Gbps Direct Connect connection while keeping the traffic private, automating recurring synchronizations, and using an accelerated managed service?
-
❏ A. AWS Storage Gateway file gateway
-
❏ B. AWS DataSync with public endpoint
-
❏ C. AWS Snowball Edge
-
❏ D. AWS DataSync via VPC interface endpoint over Direct Connect
Question 5
Orion Couriers runs a legacy report collector on a single Amazon EC2 instance in a public subnet. The application gathers scanned PDF delivery slips, writes them to an attached Amazon EBS volume, and at 01:00 UTC each night pushes the accumulated files to an Amazon S3 archive bucket. A solutions architect observes that the instance is using the public S3 endpoint over the internet for uploads. The company wants the data transfers to stay on the AWS private network and avoid the public endpoint entirely. What should the architect implement?
-
❏ A. Create an S3 access point in the same Region, grant the instance role access, and update the application to use the access point alias
-
❏ B. Enable S3 Transfer Acceleration on the bucket and update the application to use the acceleration endpoint
-
❏ C. Deploy a gateway VPC endpoint for Amazon S3 and update the subnet route table to use it; restrict access with a bucket or IAM policy tied to the endpoint
-
❏ D. Order an AWS Direct Connect dedicated connection and route VPC traffic to Amazon S3 over it
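As a point of reference, a gateway endpoint for S3 amounts to one API call plus a route table association. A minimal boto3 sketch, with hypothetical VPC and route table IDs:

```python
# Create a gateway VPC endpoint for Amazon S3 and associate it with the
# subnet's route table. IDs and the Region are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # the S3 prefix list route is added automatically
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```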
Question 6
Which scaling approach most effectively reduces cold-start latency during the morning traffic surge for an EC2 Auto Scaling group while keeping costs low?
-
❏ A. Schedule desired capacity to 28 just before business hours
-
❏ B. Enable a warm pool with running instances for the morning surge
-
❏ C. Use target tracking with a lower CPU target and shorter cooldown
-
❏ D. Use step scaling with reduced CPU thresholds and a short cooldown
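To make one of the listed approaches concrete, the sketch below expresses a scheduled scaling action in boto3; the group name, capacity values, and recurrence expression are assumptions for illustration.

```python
# Schedule an EC2 Auto Scaling capacity increase shortly before business hours.
# Group name, sizes, and the cron recurrence are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="pre-warm-morning",
    Recurrence="45 7 * * 1-5",  # cron in UTC: 07:45 on weekdays
    MinSize=4,
    MaxSize=40,
    DesiredCapacity=28,
)
```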
Question 7
A retail analytics startup moved several cron-style workloads to Amazon EC2 instances running Amazon Linux. Each job runs for about 80 minutes, and different teams wrote them in various programming languages. All jobs currently execute on a single server, which creates throughput bottlenecks and limited scalability. The team wants to run these tasks in parallel across instances while keeping operations simple and avoiding significant rework. What approach will meet these needs with the least operational overhead?
-
❏ A. Run the tasks as jobs in AWS Batch and trigger them on a schedule with Amazon EventBridge
-
❏ B. Create an Amazon Machine Image from the existing EC2 host and use an Auto Scaling group to launch multiple identical instances concurrently
-
❏ C. Rewrite each task as an AWS Lambda function and schedule invocations with Amazon EventBridge
-
❏ D. Containerize the workloads and use Amazon ECS on AWS Fargate with EventBridge scheduled tasks
Question 8
An EC2-hosted microservice behind an Application Load Balancer has one API route that takes about four minutes to complete, while other routes finish in about 200 milliseconds. What should be implemented to decouple the long-running work and avoid blocking requests?
-
❏ A. AWS Step Functions
-
❏ B. Increase ALB idle timeout
-
❏ C. Amazon SQS with asynchronous processing
-
❏ D. Amazon SNS
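For reference, the queue-based decoupling pattern looks roughly like the sketch below, with the API enqueueing work and a separate worker fleet polling at its own pace; the queue URL and message fields are hypothetical.

```python
# Producer: the API enqueues the long-running job and returns immediately.
# Worker: a separate fleet polls the queue and processes asynchronously.
# The queue URL and message fields are illustrative assumptions.
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/long-jobs"


def enqueue_job(job_payload: dict) -> str:
    response = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(job_payload))
    return response["MessageId"]  # clients can poll for status with this ID


def worker_loop(process):
    while True:
        messages = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for message in messages.get("Messages", []):
            process(json.loads(message["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```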
Question 9
A sports analytics firm maintains an AWS Direct Connect link to AWS and has moved its enterprise data warehouse into AWS. Data analysts use a business intelligence dashboard to run queries. The average result set returned per query is 80 megabytes, and the dashboard does not cache responses. Each rendered dashboard page is approximately 350 kilobytes. Which approach will deliver the lowest data transfer egress cost for the company?
-
❏ A. Host the BI tool on-premises and fetch results from the AWS data warehouse over the public internet in the same AWS Region
-
❏ B. Host the BI tool in the same AWS Region as the data warehouse and let users access it through the existing Direct Connect from the corporate network
-
❏ C. Host the BI tool on-premises and fetch results from the AWS data warehouse across the Direct Connect link in the same AWS Region
-
❏ D. Host the BI tool in the same AWS Region as the data warehouse and let users access it via an AWS Site-to-Site VPN
Question 10
Which AWS relational database option provides cross Region disaster recovery with approximately a three second recovery point objective and a thirty second recovery time objective?
-
❏ A. Amazon RDS Multi-AZ
-
❏ B. Aurora Global Database
-
❏ C. AWS Elastic Disaster Recovery
-
❏ D. Amazon RDS cross-Region read replica
Question 11
Orion Retail processes shopper photos uploaded to an Amazon S3 bucket and writes summarized demographic attributes as CSV files to a second bucket every 30 minutes. Security requires encryption of all files at rest, and analysts need to run standard SQL against the dataset without managing servers. What should the solutions architect implement to meet these requirements?
-
❏ A. Encrypt the S3 buckets with AWS KMS keys and use Amazon Managed Service for Apache Flink to analyze the files
-
❏ B. Configure S3 SSE-KMS and run SQL queries with Amazon Athena
-
❏ C. Enable S3 server-side encryption and query the data with Amazon Redshift Spectrum
-
❏ D. Use S3 server-side encryption and load the CSVs into Amazon Aurora Serverless for SQL queries
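As an illustration of serverless SQL over S3, the following sketch submits a query with Athena; the database, table, and result bucket names are assumptions.

```python
# Submit a serverless SQL query with Amazon Athena against CSV files in S3.
# Database, table, and output bucket names are illustrative assumptions.
import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT age_band, COUNT(*) FROM demographics GROUP BY age_band",
    QueryExecutionContext={"Database": "retail_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(query["QueryExecutionId"])
```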
Question 12
Which AWS service uses global anycast, supports UDP traffic, enables rapid cross-Region failover, and permits the continued use of an external DNS provider?
-
❏ A. Amazon CloudFront
-
❏ B. Amazon Route 53 Application Recovery Controller
-
❏ C. AWS Global Accelerator
-
❏ D. Amazon Route 53
Question 13
A sports analytics startup anticipates a massive traffic spike for the kickoff of an interactive live match tracker. Their stack runs on AWS with application servers on Amazon EC2 and a transactional backend on Amazon RDS. The operations team needs proactive visibility into performance during the event with metric updates at intervals of 90 seconds or less, and they prefer something that can be enabled quickly with minimal maintenance. What should they implement?
-
❏ A. Stream EC2 operating system logs to Amazon OpenSearch Service and visualize CPU and memory in OpenSearch Dashboards
-
❏ B. Capture EC2 state change events with Amazon EventBridge, forward to Amazon SNS, and have a dashboard subscribe to view metrics
-
❏ C. Enable EC2 Detailed Monitoring and use Amazon CloudWatch to view 1-minute instance metrics during the launch window
-
❏ D. Install the CloudWatch agent on all instances to publish high-resolution custom metrics and analyze them from CloudWatch Logs with Amazon Athena
Question 14
Which AWS database option provides MySQL compatibility along with automatic compute scaling, built-in high availability, and minimal operational effort?
-
❏ A. Amazon RDS for MySQL Multi-AZ with read replicas
-
❏ B. Amazon Aurora MySQL provisioned with read replica Auto Scaling
-
❏ C. Amazon Aurora MySQL Serverless v2
-
❏ D. Single larger MySQL on EC2
Question 15
An international film distribution firm is moving its core workloads to AWS. The company has built an Amazon S3 data lake to receive and analyze content from external partners. Many partners can upload using S3 APIs, but several operate legacy tools that only support SFTP and refuse to change their process. The firm needs a fully managed AWS solution that gives partners an SFTP endpoint which writes directly to S3 and supports identity federation so internal teams can map each partner to specific S3 buckets or prefixes. Which combination of actions will best meet these needs with minimal ongoing maintenance? (Choose 2)
-
❏ A. Use Amazon AppFlow to ingest files from legacy SFTP systems into S3 on an hourly schedule
-
❏ B. Provision an AWS Transfer Family server with SFTP enabled that stores uploads in an S3 bucket and map each partner user to a dedicated IAM role scoped to that bucket or prefix
-
❏ C. Run a custom OpenSSH-based SFTP server on Amazon EC2 and use cron to copy received files into S3, with CloudWatch for monitoring
-
❏ D. Apply S3 bucket policies that grant IAM role–based, per-partner access and integrate AWS Transfer Family with Amazon Cognito or an external identity provider for federation
-
❏ E. Set up AWS DataSync with an SFTP location to replicate partner files into S3
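For orientation, provisioning a managed SFTP endpoint that writes to S3 and mapping a partner user can be sketched as below; all names, ARNs, and the home prefix are hypothetical.

```python
# Provision a managed SFTP endpoint that stores uploads in S3, then map a
# partner user to an IAM role and an S3 home prefix. Names and ARNs are
# illustrative assumptions.
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
)

transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-acme",
    Role="arn:aws:iam::123456789012:role/transfer-partner-acme",
    HomeDirectory="/example-data-lake/partners/acme",
)
```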
Question 16
Which AWS architecture provides a scalable and highly available HTTPS endpoint to receive JSON events from devices, processes those events using serverless services, and stores the results durably?
-
❏ A. Amazon EC2 single instance writing to Amazon S3
-
❏ B. Amazon Route 53 to AWS Lambda to Amazon DynamoDB
-
❏ C. Amazon EventBridge with rule to Amazon DynamoDB
-
❏ D. Amazon API Gateway to AWS Lambda to Amazon DynamoDB
Question 17
An online education startup is trialing a Linux-based Python application on a single Amazon EC2 instance. The instance uses one 2 TB Amazon EBS General Purpose SSD (gp3) volume to store customer data. The team plans to scale out the application to several EC2 instances in an Auto Scaling group, and every instance must read and write the same dataset that currently resides on the EBS volume. They want a highly available and cost-conscious approach that requires minimal changes to the application. What should the team implement?
-
❏ A. Set up Amazon FSx for Lustre, link it to an Amazon S3 bucket, and mount the file system on each EC2 instance for shared access
-
❏ B. Use Amazon EBS Multi-Attach with an io2 volume and attach it to all instances in the Auto Scaling group
-
❏ C. Create an Amazon Elastic File System in General Purpose performance mode and mount it across all EC2 instances
-
❏ D. Run a single EC2 instance as an NFS server, attach the existing EBS volume, and export the share to the Auto Scaling group instances
Question 18
Which S3 option automatically reduces storage costs across 500 buckets while requiring minimal ongoing administration and no lifecycle rule management?
-
❏ A. S3 Storage Lens
-
❏ B. S3 Glacier Deep Archive
-
❏ C. S3 Intelligent-Tiering
-
❏ D. S3 One Zone-IA
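As a small illustration, objects can be written straight into the Intelligent-Tiering storage class at upload time so tiering happens automatically; the bucket, key, and payload below are assumptions.

```python
# Upload directly into the S3 Intelligent-Tiering storage class so cost
# optimization happens automatically with no lifecycle rules to manage.
# Bucket, key, and body are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-reports-bucket",
    Key="exports/2025/01/usage.csv",
    Body=b"account_id,gb_stored\n1234,512\n",
    StorageClass="INTELLIGENT_TIERING",
)
```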
Question 19
BrightPlay Analytics runs a live leaderboard web app behind an Application Load Balancer and a fleet of Amazon EC2 instances. The service stores game results in Amazon RDS for MySQL. During weekend tournaments, read requests surge to roughly 120,000 per minute, and users see delays and occasional timeouts that trace back to slow database reads. The company needs to improve responsiveness while making the fewest possible changes to the existing architecture. What should the solutions architect recommend?
-
❏ A. Connect the application to the database using Amazon RDS Proxy
-
❏ B. Use Amazon ElastiCache to cache frequently accessed reads in front of the database
-
❏ C. Create Amazon RDS for MySQL read replicas and route read traffic to them
-
❏ D. Migrate the data layer to Amazon DynamoDB
Question 20
How can you replicate S3 objects encrypted with SSE-KMS to another Region while ensuring the same KMS key material and key ID are used in both Regions?
-
❏ A. Use identical KMS key alias names in both Regions and enable S3 replication
-
❏ B. Create a new source bucket using SSE-KMS with a KMS multi-Region key, replicate to a bucket with the replica key, and migrate existing data
-
❏ C. Convert the existing single-Region KMS key to a multi-Region key and use S3 Batch Replication
-
❏ D. Enable S3 replication and share the current KMS key across Regions
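For context, multi-Region KMS keys are what keep the key ID and key material identical across Regions. A minimal sketch of creating a primary key and replicating it, with assumed Regions:

```python
# Create a multi-Region KMS primary key, then replicate it so both Regions
# share the same key material and key ID. Regions are illustrative assumptions.
import boto3

kms_primary = boto3.client("kms", region_name="us-east-1")

key = kms_primary.create_key(
    Description="SSE-KMS key for replicated S3 buckets",
    MultiRegion=True,
)
key_id = key["KeyMetadata"]["KeyId"]

# Called in the primary key's Region; creates the replica in eu-west-1
kms_primary.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")
```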
Question 21
A regional media startup named Ardent Stream runs an on-premises analytics application that updates and adds files many times per hour. A new compliance rule requires a complete audit trail of storage activity that includes object-level API actions and configuration changes retained for at least 180 days. Local NAS capacity is almost exhausted, and the team wants to offload part of the dataset to AWS without interrupting ongoing writes. Which approach best satisfies the auditing requirement while easing on-premises storage pressure?
-
❏ A. Move existing data to Amazon S3 using AWS DataSync and enable AWS CloudTrail management events
-
❏ B. Use AWS Storage Gateway to back data with Amazon S3 and enable AWS CloudTrail data events for S3
-
❏ C. Enable Amazon S3 Transfer Acceleration for uploads and turn on AWS CloudTrail data events
-
❏ D. Ship the data with AWS Snowball Edge and log AWS CloudTrail management events
Question 22
Which AWS services let you run Kubernetes pods without managing the underlying nodes and provide a managed AMQP-compatible message broker with minimal code changes? (Choose 2)
-
❏ A. Amazon SQS
-
❏ B. Amazon EKS on Fargate
-
❏ C. Amazon MSK
-
❏ D. Amazon MQ
-
❏ E. Amazon EKS on EC2 with Karpenter
Question 23
An edtech startup is launching a platform to store learner progress, test submissions, and user preferences. The database must use a relational schema with ACID transactions across related records. Usage surges unpredictably during scheduled practice exams, so capacity should adjust automatically with little administration. The team also needs automated backups while keeping operations minimal. Which solution is the most cost-effective?
-
❏ A. Use Amazon DynamoDB with on-demand capacity and enable Point-in-Time Recovery
-
❏ B. Launch Amazon RDS for MySQL in Multi-AZ with provisioned IOPS and retain automated backups in Amazon S3 Glacier Deep Archive
-
❏ C. Run an open-source relational database on Amazon EC2 Spot Instances in an Auto Scaling group with nightly snapshots to Amazon S3 Standard-Infrequent Access
-
❏ D. Use Amazon Aurora Serverless v2 with automatic scaling and configure automated backups to Amazon S3 with a 10-day retention
Question 24
A company needs to securely store application secrets and have them automatically rotated about every 90 days with minimal operational overhead. Which AWS service should they use?
-
❏ A. AWS Systems Manager Parameter Store
-
❏ B. Amazon DynamoDB
-
❏ C. AWS Key Management Service
-
❏ D. AWS Secrets Manager with automatic rotation
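To illustrate the rotation workflow, the sketch below stores a secret and enables rotation on a roughly 90-day schedule; the secret name and rotation function ARN are hypothetical.

```python
# Store a secret and turn on automatic rotation about every 90 days using a
# rotation Lambda function. Names and ARNs are illustrative assumptions.
import json

import boto3

secrets = boto3.client("secretsmanager")

secret = secrets.create_secret(
    Name="prod/billing/db-credentials",
    SecretString=json.dumps({"username": "app", "password": "change-me"}),
)

secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 90},
)
```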
Question 25
A travel booking startup needs to programmatically expand or shrink the geographic area that directs users to a specific application endpoint as demand fluctuates. Which Amazon Route 53 feature provides this capability?
-
❏ A. Weighted routing
-
❏ B. Latency-based routing
-
❏ C. Geoproximity routing
-
❏ D. Geolocation routing
Question 26
Which architecture preserves all tasks, decouples a fast intake stage from a slower processing stage, and lets each stage scale independently based on its backlog?
-
❏ A. Use a single Amazon SQS queue for both stages and scale on message count
-
❏ B. Create two Amazon SQS queues; each worker fleet polls its own queue and scales on its queue length
-
❏ C. Use an Amazon SNS topic to fan out tasks to all workers
-
❏ D. Create two Amazon SQS queues and have Auto Scaling react to queue notifications
Question 27
At LumenWave Retail, analysts load data into an Amazon Redshift warehouse to join and aggregate files stored in Amazon S3. Access patterns show that roughly 60 days after ingestion, these datasets are seldom queried and no longer considered hot. The team must continue to use standard SQL with queries starting immediately while minimizing ongoing Redshift costs as much as possible. What approach should they take? (Choose 2)
-
❏ A. Launch a smaller Amazon Redshift cluster to hold and query the cold data
-
❏ B. Transition the data to Amazon S3 Standard-IA after 60 days
-
❏ C. Use Amazon Redshift Spectrum to query the S3 data while keeping a minimal Redshift cluster
-
❏ D. Move the data to Amazon S3 Glacier Deep Archive after 60 days
-
❏ E. Use Amazon Athena to run SQL directly on the S3 data
Question 28
How can you expose an HTTP service hosted on EC2 instances in private subnets to the internet while keeping the instances private and minimizing operational overhead?
-
❏ A. Amazon API Gateway with VPC Link to a private NLB
-
❏ B. Internet-facing Application Load Balancer in public subnets with private instances as targets
-
❏ C. NAT gateway in a public subnet with routes from private subnets
-
❏ D. Amazon CloudFront
Question 29
A media technology startup is launching an AI-powered image tagging platform made up of small independent services, each handling a distinct processing step. When a service starts, its model loads roughly 700 MB of parameters from Amazon S3 into memory. Customers submit single images or large batches through a REST API, and traffic can spike sharply during seasonal promotions while dropping to near zero overnight. The team needs a design that scales efficiently and remains cost-effective for this bursty workload. What should the solutions architect recommend?
-
❏ A. Expose the API through an Application Load Balancer and implement the ML workers with AWS Lambda using provisioned concurrency to minimize cold starts
-
❏ B. Buffer incoming requests in Amazon SQS and run the ML processors as Amazon ECS services that poll the queue with scaling tied to queue depth
-
❏ C. Front the service with a Network Load Balancer and run the ML services on Amazon EKS with node-based CPU autoscaling
-
❏ D. Publish API events to Amazon EventBridge and have AWS Lambda targets process them with memory and concurrency increased dynamically per payload size
Question 30
Which configuration statements correctly distinguish NAT instances from NAT gateways? (Choose 3)
-
❏ A. A NAT gateway supports port forwarding
-
❏ B. A NAT instance can be used as a bastion
-
❏ C. A NAT instance supports security groups
-
❏ D. Security groups can be attached to a NAT gateway
-
❏ E. A NAT instance can forward specific ports
-
❏ F. A NAT gateway performs TLS termination
Question 31
At Kestrel Dynamics, about 18 engineers need to quickly try AWS managed policies by temporarily attaching them to their own IAM users for short experiments, but you must ensure they cannot elevate privileges by giving themselves the AdministratorAccess policy. What should you implement to meet these requirements?
-
❏ A. Create a Service Control Policy in AWS Organizations that blocks attaching AdministratorAccess to any identity in the account
-
❏ B. Configure an IAM permissions boundary on every engineer’s IAM user to restrict which managed policies they can self-attach
-
❏ C. AWS Control Tower
-
❏ D. Attach an identity-based IAM policy to each engineer that denies attaching the AdministratorAccess policy to their own user
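One way to express this kind of guardrail is a deny statement conditioned on the ARN of the policy being attached. The document below is a sketch of the deny logic itself, independent of how it is attached; the ARN shown is the AWS managed AdministratorAccess policy.

```python
# Sketch of a policy document that denies attaching the AdministratorAccess
# managed policy to any user, group, or role.
import json

deny_admin_attach = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAttachingAdministratorAccess",
            "Effect": "Deny",
            "Action": [
                "iam:AttachUserPolicy",
                "iam:AttachGroupPolicy",
                "iam:AttachRolePolicy",
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
                }
            },
        }
    ],
}
print(json.dumps(deny_admin_attach, indent=2))
```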
Question 32
How can teams located in different AWS Regions access the same Amazon EFS file system for shared editing while minimizing operational effort?
-
❏ A. Move the file to Amazon S3 with versioning
-
❏ B. Use inter-Region VPC peering to reach EFS mount targets and mount the same file system
-
❏ C. Enable EFS replication to file systems in each Region
-
❏ D. Put EFS behind an NLB and use AWS Global Accelerator
Question 33
A regional transportation company operates a costly proprietary relational database in its on-premises facility. The team plans to move to an open-source engine on AWS to reduce licensing spend while preserving advanced features such as secondary indexes, foreign keys, triggers, and stored procedures. Which pair of AWS services should be used together to run the migration and handle the required schema and code conversion? (Choose 2)
-
❏ A. AWS DataSync
-
❏ B. AWS Database Migration Service (AWS DMS)
-
❏ C. AWS Schema Conversion Tool (AWS SCT)
-
❏ D. AWS Snowball Edge
-
❏ E. Basic Schema Copy in AWS DMS
Question 34
Which AWS design enables near-real-time fan-out of about 1.5 million streaming events per hour to multiple consumers and redacts sensitive fields before storing the sanitized items in a document database for fast reads?
-
❏ A. Amazon EventBridge with Lambda redaction to Amazon DynamoDB; services subscribe to the bus
-
❏ B. Amazon Kinesis Data Firehose with Lambda transform to Amazon DynamoDB; internal services read from Firehose
-
❏ C. Amazon Kinesis Data Streams with AWS Lambda redaction writing to Amazon DynamoDB; attach additional Kinesis consumers
-
❏ D. Write to Amazon DynamoDB, auto-scrub new items, and use DynamoDB Streams for fan-out
Question 35
A regional architecture firm is retiring its on-premises Windows file server clusters and wants to centralize storage on AWS. The team needs highly durable, fully managed file storage that Windows clients in eight branch locations can access natively using the SMB protocol. Which AWS services satisfy these requirements? (Choose 2)
-
❏ A. Amazon Simple Storage Service (Amazon S3)
-
❏ B. Amazon FSx for Windows File Server
-
❏ C. Amazon Elastic Block Store (Amazon EBS)
-
❏ D. AWS Storage Gateway File Gateway
-
❏ E. Amazon Elastic File System (Amazon EFS)
Question 36
Which VPC attributes must be enabled for EC2 instances to resolve Route 53 private hosted zone records using the Amazon-provided DNS?
-
❏ A. Create Route 53 Resolver inbound and outbound endpoints
-
❏ B. Turn on VPC DNS resolution and DNS hostnames
-
❏ C. Remove namespace overlap with a public hosted zone
-
❏ D. Set a DHCP options set with custom DNS servers
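For reference, the two VPC DNS attributes in question are toggled like this in boto3; the VPC ID is a placeholder.

```python
# Enable the VPC attributes that the Amazon-provided DNS needs to resolve
# private hosted zone records. The VPC ID is an illustrative assumption.
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"

# modify_vpc_attribute accepts one attribute per call
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})
```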
Question 37
A post-production house named Northlight Works ingests raw 8K footage ranging from 3 to 6 TB per file and applies noise reduction and color matching before delivery. Each file requires up to 35 minutes of compute. The team needs a solution that elastically scales for spikes while remaining cost efficient. Finished videos must stay quickly accessible for at least 120 days. Which approach best meets these requirements?
-
❏ A. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer, use Amazon SQS for job queuing, store metadata in Amazon RDS, and place completed outputs in Amazon S3 Glacier Flexible Retrieval
-
❏ B. Run an on-premises render farm integrated with AWS Storage Gateway for S3 access, keep metadata in Amazon RDS, and depend on gateway caching for frequently used assets
-
❏ C. Use AWS Batch to orchestrate editing jobs on Spot Instances, store working metadata in Amazon ElastiCache for Redis, and place outputs in Amazon S3 Intelligent-Tiering
-
❏ D. Run containerized workers on Amazon ECS with AWS Fargate, keep job metadata in Amazon DynamoDB, and write completed files to Amazon S3 Standard-IA
Question 38
A single-instance EC2 application must remain available after an Availability Zone failure while keeping costs minimal. Which actions enable automatic cross-Availability-Zone recovery? (Choose 3)
-
❏ A. Use an Application Load Balancer in front of the instance
-
❏ B. Allocate an Elastic IP and associate it at boot using user data
-
❏ C. Create an Auto Scaling group across two AZs with min=1, max=1, desired=1
-
❏ D. Enable EC2 Auto Recovery with a CloudWatch alarm
-
❏ E. Attach an instance role permitting AssociateAddress and DescribeAddresses so user data manages the EIP
-
❏ F. AWS Global Accelerator
Question 39
A solutions architect at Northstar Outfitters is deploying an application on Amazon EC2 inside a VPC. The application saves product images in Amazon S3 and stores customer profiles in a DynamoDB table named CustomerProfiles. The security team requires that connectivity from the EC2 subnets to these AWS services stays on the AWS network and does not traverse the public internet. What should the architect implement to meet this requirement?
-
❏ A. Configure interface VPC endpoints for Amazon S3 and Amazon DynamoDB
-
❏ B. Deploy a NAT gateway in a public subnet and update private route tables
-
❏ C. Set up gateway VPC endpoints for Amazon S3 and Amazon DynamoDB
-
❏ D. AWS Direct Connect
Question 40
Static assets are stored in an S3 bucket and served through CloudFront, and they must be accessible only from specified corporate IP ranges. Which actions will enforce the IP allow list and prevent direct access to the S3 bucket? (Choose 2)
-
❏ A. S3 bucket policy with aws:SourceIp allowing corporate CIDRs
-
❏ B. CloudFront origin access identity and S3 bucket policy limited to that OAI
-
❏ C. Apply AWS WAF to the S3 bucket
-
❏ D. AWS WAF web ACL with IP allow list on CloudFront
-
❏ E. CloudFront signed URLs for users
Question 41
Riverton Robotics is evaluating how to initialize Amazon EC2 instances for a pilot rollout and wants to test the instance user data capability to bootstrap software and configuration. Which statements accurately describe the default behavior and mutability of EC2 user data? (Choose 2)
-
❏ A. You can edit user data from inside the instance using the Instance Metadata Service
-
❏ B. User data runs automatically only on the first boot after the instance is launched
-
❏ C. You can change an instance’s user data while it is running if you use root credentials
-
❏ D. User data scripts execute with root privileges by default
-
❏ E. User data is processed on every reboot of an EC2 instance by default
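As a quick illustration of user data at launch, the sketch below passes a bootstrap script to run_instances; the AMI ID, instance type, and script contents are assumptions.

```python
# Launch an instance with a user data script; by default it runs as root on
# the first boot only. AMI ID, instance type, and script are hypothetical.
import boto3

ec2 = boto3.client("ec2")

bootstrap = """#!/bin/bash
yum install -y nginx
systemctl enable --now nginx
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=bootstrap,  # boto3 base64-encodes this string automatically
)
```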
Question 42
Which design ensures highly available internet egress from private subnets in two Availability Zones?
-
❏ A. Create one NAT gateway in a public subnet
-
❏ B. Two NAT gateways in public subnets, one per AZ
-
❏ C. Two NAT gateways placed in private subnets
-
❏ D. Gateway VPC endpoint
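To visualize the per-AZ layout, the sketch below creates one NAT gateway in each of two public subnets and points the matching private route table at it; all IDs are placeholders.

```python
# One NAT gateway per Availability Zone, each in a public subnet, with the
# matching private route table sending 0.0.0.0/0 to its local NAT gateway.
# All IDs below are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2")

az_layout = {
    "subnet-0aaa1111public": ("eipalloc-0aaa1111", "rtb-0aaa1111private"),  # AZ a
    "subnet-0bbb2222public": ("eipalloc-0bbb2222", "rtb-0bbb2222private"),  # AZ b
}

for public_subnet_id, (allocation_id, private_rtb_id) in az_layout.items():
    natgw_id = ec2.create_nat_gateway(
        SubnetId=public_subnet_id, AllocationId=allocation_id
    )["NatGateway"]["NatGatewayId"]

    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

    ec2.create_route(
        RouteTableId=private_rtb_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw_id,
    )
```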
Question 43
SkyTrail Logistics runs multiple Amazon EC2 instances in private subnets across three Availability Zones within a single VPC, and these instances must call Amazon DynamoDB APIs without sending traffic over the public internet. What should the solutions architect do to keep the traffic on the AWS network path? (Choose 2)
-
❏ A. Set up VPC peering from the VPC to the DynamoDB service
-
❏ B. Configure a DynamoDB gateway VPC endpoint in the VPC
-
❏ C. Send the traffic through a NAT gateway in a public subnet
-
❏ D. Add routes in the private subnet route tables that point to the DynamoDB endpoint
-
❏ E. Create interface VPC endpoints for DynamoDB in each private subnet using AWS PrivateLink
Question 44
How should a stateless web tier running behind an Application Load Balancer with Auto Scaling across three Availability Zones be configured to remain highly available and minimize steady-state cost while handling daily traffic spikes? (Choose 2)
-
❏ A. Use On-Demand Instances only
-
❏ B. Set Auto Scaling minimum to 2 instances
-
❏ C. Buy Reserved Instances for the steady baseline
-
❏ D. Set Auto Scaling minimum to 4 instances
-
❏ E. Use Spot Instances for baseline capacity
Question 45
An online video-sharing startup needs to answer relationship-heavy questions such as “How many likes are on clips uploaded by the friends of user Mia over the last 72 hours?” The data model includes users, friendships, videos, and reactions, and the team expects frequent multi-hop traversals with low-latency aggregations across connected entities. Which AWS database service is the best fit for this requirement?
-
❏ A. Amazon Redshift
-
❏ B. Amazon Neptune
-
❏ C. Amazon OpenSearch Service
-
❏ D. Amazon Aurora
Question 46
Which actions allow authenticated Amazon Cognito users to upload directly to Amazon S3 using temporary credentials while ensuring the traffic remains on the AWS network? (Choose 2)
-
❏ A. Route S3 traffic through a NAT gateway
-
❏ B. Configure a Cognito identity pool to exchange user pool logins for temporary IAM credentials to S3
-
❏ C. Create a VPC endpoint for Amazon S3
-
❏ D. Require Cognito user pool tokens in the S3 bucket policy
-
❏ E. Call STS AssumeRoleWithWebIdentity directly with Cognito user pool tokens
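For context, the token-for-credentials exchange through an identity pool looks roughly like the sketch below; the identity pool ID, user pool issuer, token, and bucket name are hypothetical.

```python
# Exchange a Cognito user pool ID token for temporary AWS credentials via an
# identity pool, then use them for a direct S3 upload. IDs, the token, and the
# bucket name are illustrative assumptions.
import boto3

USER_POOL_ISSUER = "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"
ID_TOKEN = "eyJ..."  # placeholder ID token returned by the user pool after sign-in

identity = boto3.client("cognito-identity", region_name="us-east-1")

identity_id = identity.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins={USER_POOL_ISSUER: ID_TOKEN},
)["IdentityId"]

creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={USER_POOL_ISSUER: ID_TOKEN},
)["Credentials"]

# Use the temporary credentials for the upload
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket="example-user-uploads", Key="users/mia/avatar.png", Body=b"...")
```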
Question 47
A technical publisher stores about 18 TB of training videos and PDFs in a single Amazon S3 bucket in one AWS Region. A partner company in a different Region has cross-account read access to pull the content into its own platform. The publisher wants to keep its own data transfer charges as low as possible when the partner downloads the objects. What should a solutions architect recommend?
-
❏ A. Set up S3 Cross-Region Replication to the partner’s S3 bucket
-
❏ B. Enable Requester Pays on the publisher’s S3 bucket
-
❏ C. Turn on S3 Transfer Acceleration for the bucket
-
❏ D. Serve the files through Amazon CloudFront with the S3 bucket as the origin
Question 48
Which architecture provides low-cost, highly available, on-demand image transformations for objects stored in Amazon S3 that are requested through API Gateway and delivered to internet clients?
-
❏ A. EC2 Auto Scaling with an Application Load Balancer, S3 for originals and derivatives, CloudFront over S3
-
❏ B. API Gateway and Lambda returning images directly to clients without S3 or CloudFront
-
❏ C. API Gateway + AWS Lambda for transforms, store originals and outputs in S3, deliver via CloudFront with S3 origin
-
❏ D. EC2 for processing, S3 for sources, DynamoDB for transformed images, CloudFront on S3
Question 49
A ride-sharing platform runs an Auto Scaling group of Amazon EC2 instances across two Availability Zones in eu-west-2, with an Application Load Balancer distributing all traffic to the group. During a staging exercise, the team manually terminated three instances in eu-west-2a, leaving capacity uneven across zones. Later, the load balancer health check flagged an instance in eu-west-2b as unhealthy. What outcomes should you expect from Amazon EC2 Auto Scaling in response to these events? (Choose 2)
-
❏ A. When the ALB reports an instance as unhealthy, Amazon EC2 Auto Scaling first launches a replacement instance and then later terminates the unhealthy one
-
❏ B. For Availability Zone imbalance, Amazon EC2 Auto Scaling rebalances by launching instances in the under-provisioned zone first and only then terminates excess capacity
-
❏ C. Instance Refresh
-
❏ D. When an instance is marked unhealthy by the load balancer, Amazon EC2 Auto Scaling records a scaling activity to terminate it and after termination starts a new instance to maintain desired capacity
-
❏ E. For Availability Zone rebalancing, Amazon EC2 Auto Scaling terminates old instances before launching new ones so that no extra instances are created
Question 50
How can you ensure that objects in an Amazon S3 bucket are accessible only through a CloudFront distribution and cannot be retrieved directly via S3 URLs?
-
❏ A. Enable S3 Block Public Access only
-
❏ B. Attach an IAM role to CloudFront and allow it in the S3 bucket policy
-
❏ C. Use a CloudFront origin access identity with an S3 bucket policy allowing it
-
❏ D. Keep S3 public and use CloudFront signed URLs
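As an illustration, a bucket policy scoped to a CloudFront origin access identity looks like the sketch below; the bucket name and OAI ID are placeholders.

```python
# Bucket policy that allows reads only from a specific CloudFront origin access
# identity, so direct S3 URLs are denied. Bucket name and OAI ID are hypothetical.
import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE123"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-static-assets/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="example-static-assets", Policy=json.dumps(policy))
```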
Question 51
At LumaRide, a fleet telemetry processor runs on Amazon EC2 Linux instances in multiple Availability Zones. The application writes log objects using standard HTTP API calls and must keep these logs for at least 10 years while supporting concurrent access by many instances. Which AWS storage option most cost-effectively meets these requirements?
-
❏ A. Amazon EBS
-
❏ B. Amazon EFS
-
❏ C. Amazon S3
-
❏ D. Amazon EC2 instance store
Question 52
Which AWS feature provides a centralized, repeatable way to deploy standardized infrastructure across multiple AWS accounts and Regions?
-
❏ A. AWS Organizations SCPs
-
❏ B. CloudFormation StackSets
-
❏ C. AWS Resource Access Manager
-
❏ D. AWS CloudFormation stacks
Question 53
BluePeak Institute runs a nightly Python job that typically completes in about 45 minutes. The task is stateless and safe to retry, so if it gets interrupted the team simply restarts it from the beginning. It currently executes in a colocation data center, and they want to move it to AWS while minimizing compute spend. What is the most cost-effective way to run this workload?
-
❏ A. AWS Lambda
-
❏ B. Amazon EMR
-
❏ C. EC2 Spot Instance with a persistent request
-
❏ D. Application Load Balancer
Question 54
How can an EC2-hosted service be privately accessed by other VPCs and AWS accounts in the same Region without exposing any other resources in the hosting VPC and while requiring minimal management? (Choose 2)
-
❏ A. VPC peering
-
❏ B. AWS PrivateLink service
-
❏ C. Network Load Balancer in service VPC
-
❏ D. AWS Transit Gateway
-
❏ E. AWS Global Accelerator
Question 55
A Canadian startup operates an online design portfolio platform hosted on multiple Amazon EC2 instances behind an Application Load Balancer. The site currently has users in four countries, but a new compliance mandate requires the application to be reachable only from Canada and to deny requests from all other countries. What should the team configure to meet this requirement?
-
❏ A. Update the security group associated with the Application Load Balancer to allow only the approved country
-
❏ B. Attach an AWS WAF web ACL with a geo match rule to the Application Load Balancer to permit only Canada
-
❏ C. Use Amazon Route 53 geolocation routing to return responses only to Canadian users
-
❏ D. Enable Amazon CloudFront geo restriction on a distribution in an Amazon VPC
Question 56
In an Elastic Beanstalk environment, software installation takes over 45 minutes, yet new instances must be ready in under 60 seconds. The environment has static components that are identical across instances, while dynamic assets are unique to each instance. Which combination of actions will satisfy these constraints? (Choose 2)
-
❏ A. AWS CodeDeploy
-
❏ B. Run dynamic setup in EC2 user data at first boot
-
❏ C. Store installers in Amazon S3
-
❏ D. Enable Elastic Beanstalk rolling updates
-
❏ E. Prebake static components into a custom AMI
Question 57
An oceanography institute runs an image-processing workflow on AWS. Field researchers upload raw photos for processing, which are staged on an Amazon EBS volume attached to an Amazon EC2 instance. Each night at 01:00 UTC, the job writes the processed images to an Amazon S3 bucket for archival. The architect has determined that the S3 uploads are traversing the public internet. The institute requires that all traffic from the EC2 instance to Amazon S3 stay on the AWS private network and not use the public internet. What should the architect do?
-
❏ A. Deploy a NAT gateway and update the private subnet route so the instance egresses through the NAT gateway to S3
-
❏ B. Create a gateway VPC endpoint for Amazon S3 and add the S3 prefix list route to the instance subnet route table
-
❏ C. Configure an S3 Access Point and have the application upload via the access point alias
-
❏ D. Set up VPC peering to Amazon S3 and update routes to use the peering connection
Question 58
In Amazon EKS, how can you assign pod IP addresses from four specific private subnets spanning two Availability Zones while ensuring pods retain private connectivity to VPC resources?
-
❏ A. Kubernetes network policies
-
❏ B. AWS PrivateLink
-
❏ C. Security groups for pods
-
❏ D. Amazon VPC CNI with custom pod networking
Question 59
A nonprofit research lab runs several Amazon EC2 instances, a couple of Amazon RDS databases, and stores data in Amazon S3. After 18 months of operations, their monthly AWS spend is higher than expected for their workloads. Which approach would most appropriately