AWS Professional Solutions Architect Exam Topics
The AWS Certified Solutions Architect Professional exam tests your ability to design distributed systems across multiple AWS accounts and Regions while maintaining scalability, reliability, and cost efficiency. You can start by studying with AWS Solutions Architect Professional Practice Questions and AWS Professional Solutions Architect Sample Questions.
These questions reflect the complexity of real AWS environments and challenge you to make architecture decisions under constraints involving performance, budget, and compliance. They prepare you to think like an experienced architect rather than rely on pattern memorization.
For comprehensive preparation, try the AWS Solutions Architect Professional Exam Simulator. It replicates the pace and format of the official exam so you can practice managing your time while solving deep technical problems.
AWS Architect Exam Simulators
Supplement your learning with the Professional Solutions Architect Braindump and Professional Solutions Architect Exam Dump. These study sets are organized by domain, including Design for Organizational Complexity, Cost Control, Migration Planning, and Continuous Improvement for Existing Solutions.
By practicing with the AWS Solutions Architect Professional Questions and Answers, you will strengthen your ability to apply architectural best practices to real-world enterprise challenges. Each exercise encourages reasoning about multi-account strategies, hybrid networking, and advanced IAM configurations.
Real AWS Architect Exam Questions
Work through these materials to develop the analytical and problem-solving mindset needed to succeed as an AWS Solutions Architect Professional. Consistent practice and structured study will build the confidence and competence required to design resilient, optimized, and secure cloud architectures.
AWS Solutions Architect Professional Sample Questions
Question 1
The platform team at NovaPlay Labs is preparing disaster recovery for a containerized web API that runs on Amazon ECS with AWS Fargate and uses Amazon RDS for MySQL, and they require the service to come back online in a different AWS Region with the least possible downtime if the primary Region fails. Which approach best enables rapid failover to the secondary Region with minimal interruption?
-
❏ A. Use an AWS Lambda function to create the second ECS cluster and Fargate service only when a failure is detected, then snapshot and copy the RDS instance to the other Region, restore it there, and change Route 53 records, with EventBridge triggering the workflow
-
❏ B. Enable Amazon RDS Multi AZ for the MySQL instance in the primary Region and place AWS Global Accelerator in front of the application to reroute traffic during issues
-
❏ C. Maintain a second ECS cluster and Fargate service in the backup Region and configure an Amazon RDS cross Region read replica there, then use a Lambda function to promote the replica to primary and update Route 53 during failover, with EventBridge invoking the promotion
-
❏ D. Precreate an ECS cluster and Fargate service in the recovery Region, then use an AWS Lambda function that regularly snapshots the RDS instance, copies the snapshot to the recovery Region, restores a new RDS instance from that snapshot, and updates Amazon Route 53 to point traffic to the standby service, with an Amazon EventBridge schedule invoking the function
Question 2
BlueRiver Tickets runs its customer-facing services on Amazon EC2 behind a single Application Load Balancer and hosts the public zone example.com in Amazon Route 53. The company will expose several hostnames such as m.example.com, www.example.com, and api.example.com, and they also want the apex example.com name to reach the web tier. You need to design scalable ALB listener rules so that each hostname forwards to the correct target group without adding more load balancers. Which configuration should you implement? (Choose 2)
-
❏ A. Use Path conditions in the ALB listener to route *.example.com to appropriate target groups
-
❏ B. Use Host conditions in the ALB listener to route example.com to the correct target group
-
❏ C. Google Cloud Load Balancing
-
❏ D. Use Host conditions in the ALB listener to route *.example.com to the right target groups
-
❏ E. Use the Path component of Redirect actions in the ALB listener to route example.com to target groups
Question 3
Orchid Dynamics wants to cut data transfer and compute spending across 18 developer AWS accounts while allowing engineers to quickly pull data from Amazon S3 and still move fast when launching EC2 instances and building VPCs. Which approach will minimize cost without reducing developer agility?
-
❏ A. Create SCPs to block unapproved EC2 types and distribute a CloudFormation template that builds a standard VPC with S3 interface endpoints and restrict IAM so developers create VPC resources only through CloudFormation
-
❏ B. Adopt Google Cloud Deployment Manager and VPC Service Controls to govern egress and access to Cloud Storage and connect AWS workloads through hybrid networking
-
❏ C. Publish an AWS Service Catalog portfolio that provisions a vetted VPC with S3 gateway endpoints and approved EC2 options and share it to the engineer accounts with a launch constraint role then scope developer IAM to use Service Catalog
-
❏ D. Set daily cost budgets with AWS Budgets for EC2 and S3 data transfer and send alerts at 70 percent of forecast then trigger actions to stop EC2 and remove VPCs at 95 percent of actual
Question 4
A ticketing startup that operates example.com runs its storefront on Amazon EC2 behind an Auto Scaling group and uses Amazon RDS for PostgreSQL in one Region. The team needs a budget friendly disaster recovery plan that can achieve a recovery point objective of 45 seconds and a recovery time objective of 12 minutes. Which approach best meets these requirements while keeping costs under control?
-
❏ A. Use infrastructure as code to stand up the DR environment in a second Region and create a cross Region read replica for the RDS database and configure AWS Backup to produce cross Region backups for the EC2 instances and the database every 45 seconds and restore instances from the newest backup during an incident and use an Amazon Route 53 geolocation policy so traffic shifts to the DR Region after a disaster
-
❏ B. Set up AWS Backup to create cross Region backups for the EC2 fleet and the database on a 45 second schedule and use infrastructure as code to create the DR networking and subnets and restore the backups onto new instances when needed and use an Amazon Route 53 simple policy to move users to the DR Region
-
❏ C. Build the DR stack with infrastructure as code and create a cross Region read replica for the RDS database and set up AWS Elastic Disaster Recovery to stream changes for the EC2 instances into the DR Region and keep a minimal number of instances running in the DR Region and use an Amazon Route 53 failover policy to switch during an outage then scale out the Auto Scaling group
-
❏ D. Define the DR environment with infrastructure as code and migrate the database to Amazon Aurora PostgreSQL with an Aurora global database and use AWS Elastic Disaster Recovery for the EC2 instances and run the Auto Scaling group at full capacity in the DR Region and use an Amazon Route 53 failover policy to switch during an event
Question 5
Riverton Digital Press is moving its editorial publishing site to AWS. The organization must allow continuous content edits by multiple contributors and also move 250 TB of archived media from an on premises NAS into Amazon S3. They will continue using their existing Site to Site VPN and will run web servers on Amazon EC2 instances behind an Application Load Balancer. Which combination of actions will fulfill these requirements? (Choose 2)
-
❏ A. Configure an Amazon Elastic Block Store EBS Multi Attach volume that is shared by the EC2 instances for content access then build a script to synchronize that volume with the NAS each night
-
❏ B. Order an AWS Snowball Edge Storage Optimized device and copy the static archives to the device then return it to AWS
-
❏ C. Create an Amazon EventBridge schedule that invokes an AWS Lambda function every hour to push updates from the NAS directly to the EC2 instances
-
❏ D. Mount an Amazon Elastic File System EFS file system from the on premises servers over the VPN and also mount the same file system on the EC2 instances to present the latest content
-
❏ E. Google Transfer Appliance
Question 6
HarborView Logistics needs an internal application where employees submit travel and expense reimbursements, with activity spiking on the fifteenth and again on the last business day of each month. The finance team must be able to produce consistent month end reports from the stored data. The platform must be highly available and scale automatically while keeping operational effort as low as possible. Which combination of solutions will best meet these requirements with the least ongoing management? (Choose 2)
-
❏ A. Store the expense records in Amazon S3 and use Amazon Athena and Amazon QuickSight to query and visualize the data
-
❏ B. Deploy the application on Amazon EC2 behind an Application Load Balancer and use Amazon EC2 Auto Scaling with scheduled scale outs
-
❏ C. Host the web front end in Amazon S3 served by Amazon CloudFront and build the API with Amazon API Gateway using an AWS Lambda proxy integration
-
❏ D. Run the application on Amazon ECS with AWS Fargate behind an Application Load Balancer and use Service Auto Scaling with scheduled capacity adjustments
-
❏ E. Persist the expense data in Amazon EMR and use Amazon QuickSight to report directly from the EMR cluster
Question 7
Luma Media runs a public web API on Amazon EC2 instances in one Availability Zone. Leadership has asked a Solutions Architect to redesign the platform so it is highly available across at least two Availability Zones and enforces strong request filtering. The security team requires that inbound traffic be inspected for common web exploits and that any blocked requests are delivered to an external auditing service at logs.example.com for compliance review. What architecture should be implemented?
-
❏ A. Set up an Application Load Balancer with a target group that includes the existing EC2 instances in the current Availability Zone, create Amazon Kinesis Data Firehose with the logs.example.com service as the destination, attach an AWS WAF web ACL to the ALB, enable WAF logging to the Firehose stream, and subscribe to AWS Managed Rules
-
❏ B. Use Google Cloud Armor in front of a Google Cloud HTTP Load Balancer and export blocked events from Cloud Logging to the logs.example.com auditing service
-
❏ C. Configure a Multi AZ Auto Scaling group from the application AMI with an Application Load Balancer in front, attach an AWS WAF web ACL with AWS Managed Rules, and enable WAF logging to Amazon Kinesis Data Firehose that delivers blocked requests to the logs.example.com auditing endpoint
-
❏ D. Deploy an Application Load Balancer and register the EC2 instances as targets, attach an AWS WAF web ACL, enable logging with Amazon CloudWatch Logs, and use an AWS Lambda function to forward entries to the logs.example.com auditing service
Question 8
BrightPaws is a fast growing startup with a viral mobile photo app where people upload pet pictures and add captions, and active users have climbed past 12 million across several continents. The backend runs on Amazon EC2 with Amazon EFS and the instances are behind an Application Load Balancer. Traffic is unpredictable and spikes cause slow responses during busy evenings. What architectural changes should a Solutions Architect propose to cut costs and improve global performance?
-
❏ A. Create an Amazon CloudFront distribution and place the Application Load Balancer behind it and store user images in Amazon S3 using the S3 Standard Infrequent Access storage class
-
❏ B. Use AWS Global Accelerator in front of the Application Load Balancer and migrate static files to Amazon FSx for Windows File Server while using an AWS Lambda function to compress images during the cutover
-
❏ C. Move the image bucket to Google Cloud Storage and serve content through Cloud CDN while continuing to run the dynamic service behind the existing Application Load Balancer
-
❏ D. Place user photos in Amazon S3 with the Intelligent Tiering storage class and front S3 with Amazon CloudFront while using AWS Lambda to perform image processing
Question 9
BlueRiver Foods operates a primary data center and needs a private connection to AWS that is encrypted and delivers consistent low latency with high throughput near 4 Gbps. The team can budget roughly eight weeks for provisioning and is willing to manage the setup effort. Which approach provides end to end connectivity that satisfies these requirements while requiring the least additional infrastructure?
-
❏ A. Create a Site-to-Site VPN from the data center to an Amazon VPC over the internet
-
❏ B. Provision AWS Direct Connect without adding a VPN
-
❏ C. Provision AWS Direct Connect and layer an AWS Site-to-Site VPN over it to encrypt traffic
-
❏ D. Google Cloud Interconnect
Question 10
NorthPeak Systems has a platform engineering group that manages environments using AWS CloudFormation and they need to ensure that mission critical data in Amazon RDS instances and Amazon EBS volumes is not removed if a production stack is deleted. What should the team implement to prevent accidental data loss when someone deletes the stack?
-
❏ A. Create IAM policies that deny delete operations on RDS and EBS when the “aws:cloudformation:stack-name” tag is present
-
❏ B. Set DeletionPolicy of Retain on the RDS and EBS resources in the CloudFormation templates
-
❏ C. Apply a CloudFormation stack policy that blocks deletion actions for RDS and EBS resources
-
❏ D. Enable termination protection on the production CloudFormation stack
Question 11
SpryCart runs an Amazon RDS database in private subnets within an Amazon VPC where outbound internet access is not allowed. The team stored credentials in AWS Secrets Manager and enabled rotation with an AWS Lambda function placed in the same VPC. Recent rotation attempts fail and Amazon CloudWatch Logs show the Lambda function timing out when it tries to call Secrets Manager APIs. The environment must remain without internet egress. What should the team implement so that secret rotation completes successfully under these constraints?
-
❏ A. Create an interface VPC endpoint for the Lambda service in the VPC to allow the function to run without internet access
-
❏ B. Configure a NAT gateway in the VPC and update private route tables to provide outbound access to AWS endpoints
-
❏ C. Create an interface VPC endpoint for Secrets Manager and ensure the rotation function uses it for API calls
-
❏ D. Recreate the rotation function from the latest Secrets Manager blueprint to ensure SSL and TLS support
Question 12
Northwind Labs uses AWS Organizations and has two organizational units named Analytics and Platform under the root. Due to regulatory policy the company must ensure that all workloads run only in ap-southeast-2, and the Platform OU must be limited to a defined list of Amazon EC2 instance types. A solutions architect needs to implement controls with minimal ongoing administration. Which combination of actions should be implemented to satisfy these requirements? (Choose 2)
-
❏ A. Deploy AWS Config rules in every account to detect noncompliant Regions and EC2 instance types and trigger Systems Manager Automation for remediation
-
❏ B. Create a Service Control Policy that uses the aws:RequestedRegion condition to deny all Regions except ap-southeast-2 and attach it to the organization root
-
❏ C. Create IAM users in all accounts and attach an inline policy to each that uses the aws:RequestedRegion condition to allow only ap-southeast-2
-
❏ D. Attach a Service Control Policy to the Platform OU that uses the ec2:InstanceType condition to allow only the approved instance types
-
❏ E. Create a Service Control Policy that uses the ec2:Region condition and apply it to the root, Analytics, and Platform OUs
Question 13
NorthRiver Media plans to launch a microservices streaming service that is expected to grow from 8 million to 45 million users within nine months. The platform runs on Amazon ECS with AWS Fargate and all requests must use HTTPS. The solutions architect must enable blue or green deployments, send traffic through a load balancer, and adjust running tasks automatically by using Amazon CloudWatch alarms. Which solution should the team implement?
-
❏ A. Configure ECS services for blue or green deployments with a Network Load Balancer and request a higher tasks per service quota
-
❏ B. Configure ECS services for blue or green deployments with an Application Load Balancer and enable ECS Service Auto Scaling per service
-
❏ C. Configure ECS services for blue or green deployments with an Application Load Balancer and attach an Auto Scaling group that is managed by Cluster Autoscaler
-
❏ D. Configure ECS services for rolling update deployments with an Application Load Balancer and use ECS Service Auto Scaling per service
Question 14
The Orion initiative at DataVista Labs is overspending on EC2, and its AWS account is deliberately kept separate from DataVista’s AWS Organization. To control costs, what should a solutions architect put in place to ensure developers in the Orion initiative can launch only t3.micro EC2 instances in the eu-west-1 Region?
-
❏ A. Attach a Service Control Policy that denies all EC2 launches except t3.micro in eu-west-1 to the account
-
❏ B. Apply an IAM policy in the project account that allows only t3.micro launches in eu-west-1 and attach it to developer roles
-
❏ C. Use Google Cloud Organization Policy to restrict machine types to e2-micro and limit the region to europe-west1
-
❏ D. Create a new developer account and move workloads to eu-west-1 then add it to the company AWS Organization and enforce a tag policy for Region placement
Question 15
The research team at Orion Data Labs is rolling out a new machine learning service on six Amazon EC2 instances within a single AWS Region. They require very high throughput and very low network latency between all instances and they are not concerned with fault tolerance or hardware diversity. What should they implement to satisfy these needs?
-
❏ A. Distribute six EC2 instances across a spread placement group and attach an additional elastic network interface to each instance
-
❏ B. Put six EC2 instances in a partition placement group and select instance types with enhanced networking
-
❏ C. Place six EC2 instances in a cluster placement group and choose instance types that support enhanced networking
-
❏ D. Compute Engine
Question 16
Following a shift from manually managed EC2 servers to an Auto Scaling group to handle a surge in users, the platform team at Apex Retail faces a patching issue. Security updates that run every 45 days require a reboot, and while an instance is rebooting, the Auto Scaling group replaces it, which leaves the fleet with fresh but unpatched instances. Which actions should a solutions architect propose to ensure instances are patched without being prematurely terminated? (Choose 2)
-
❏ A. Place an Application Load Balancer in front of the Auto Scaling group and rely on target health checks during replacements
-
❏ B. Automate baking a patched AMI, update the launch template to that AMI, and trigger an Auto Scaling instance refresh
-
❏ C. Enable termination protection on EC2 instances in the Auto Scaling group
-
❏ D. Stand up a parallel Auto Scaling group before the maintenance window and patch and reboot instances in both groups during the window
-
❏ E. Set the Auto Scaling termination policy to prefer instances launched from the oldest launch template or configuration
Question 17
Northpeak Analytics is rolling out a global SaaS platform and wants clients to be routed automatically to the closest AWS Region while security teams require static IP addresses that customers can add to their allow lists. The application runs on Amazon EC2 instances behind a Network Load Balancer and uses an Auto Scaling group that spans four Availability Zones in each Region. What solution will fulfill these needs?
-
❏ A. Create an Amazon CloudFront distribution with an origin group that includes the NLB in each Region and provide customers the IP ranges for CloudFront edge locations
-
❏ B. Create an AWS Global Accelerator standard accelerator and add an endpoint group for the NLB in every active Region then share the accelerator’s static IP addresses with customers
-
❏ C. Configure Amazon Route 53 latency based routing with health checks and assign Elastic IP addresses to NLBs in each Region then distribute those IPs to customers
-
❏ D. Create an AWS Global Accelerator custom routing accelerator and configure a listener and port mappings for the NLBs in each Region then give customers the accelerator IP addresses
Question 18
NovaMetrics Ltd. operates a managed API in its AWS account and a client in a different AWS account needs to invoke that service from automation using the CLI while maintaining least privilege and avoiding long lived credentials. How should NovaMetrics grant the client secure access to the service?
-
❏ A. Publish the service with AWS PrivateLink and require an API key for callers
-
❏ B. Create an IAM user for the client and share its access keys
-
❏ C. Create a cross account IAM role with only the needed permissions and let the client assume it using the role ARN without an external ID
-
❏ D. Create a cross account IAM role with only the needed permissions and require an external ID in the trust policy then have the client assume the role using its ARN and the external ID
Question 19
Rivertown Retail Co. runs a customer portal on EC2 instances behind an Application Load Balancer. Orders are stored in an Amazon RDS for MySQL database and related PDFs are kept in Amazon S3. The finance team’s ad hoc reporting slows the production database. Leadership requires a disaster recovery strategy that can keep the application available during a regional outage and that limits data loss to only a few minutes and that also removes the reporting workload from the primary database. What should the solutions architect build?
-
❏ A. Migrate the transactional database to Amazon DynamoDB with global tables, direct the finance team to query a global table in a secondary Region, use an AWS Lambda function on a schedule to copy objects to an S3 bucket in that Region, and stand up EC2 and a new ALB there while pointing the app to the new bucket
-
❏ B. Create an RDS for MySQL cross-Region read replica and route finance queries to that replica, enable S3 Cross-Region Replication to a bucket in the recovery Region, build AMIs of the web tier and copy them to that Region, and during a disaster promote the replica then launch EC2 from the AMIs behind a new ALB and update the app to use the replicated bucket
-
❏ C. Launch more EC2 instances in a second Region and register them with the existing ALB, create an RDS read replica in the second Region for finance use, enable S3 Cross-Region Replication, and in an outage promote the replica and repoint the application
-
❏ D. Replatform the workload by moving the database to Google Cloud Spanner, export reports to BigQuery, front the app with Google Cloud Load Balancing, and store documents in dual-region Cloud Storage for resilience
Question 20
Northwind Publications stores media in Amazon S3 with objects under the prefixes s3://cdn-media-001/photos and s3://cdn-media-001/thumbs. Newly uploaded photos receive heavy traffic, and after 60 days the photos are rarely read while the thumbnails continue to be accessed. After 210 days the team wants to archive both the photos and the thumbnails. The design must remain highly available across multiple Availability Zones to mitigate the impact of an AZ outage. Which actions will deliver the most cost effective approach while meeting these goals? (Choose 2)
-
❏ A. Create a lifecycle rule that transitions only the photos prefix to S3 One Zone-IA after 60 days
-
❏ B. Create a lifecycle rule that transitions all objects to S3 Glacier Flexible Retrieval after 210 days
-
❏ C. Create a lifecycle rule that transitions all objects to S3 Standard-IA after 60 days
-
❏ D. Create a lifecycle rule that transitions only the photos prefix to S3 Standard-IA after 60 days
-
❏ E. Create a lifecycle rule that transitions only the photos prefix to S3 Glacier after 210 days
AWS Professional Solutions Architect Questions Answered
Question 1
The platform team at NovaPlay Labs is preparing disaster recovery for a containerized web API that runs on Amazon ECS with AWS Fargate and uses Amazon RDS for MySQL, and they require the service to come back online in a different AWS Region with the least possible downtime if the primary Region fails. Which approach best enables rapid failover to the secondary Region with minimal interruption?
-
✓ C. Maintain a second ECS cluster and Fargate service in the backup Region and configure an Amazon RDS cross Region read replica there, then use a Lambda function to promote the replica to primary and update Route 53 during failover, with EventBridge invoking the promotion
The correct approach is Maintain a second ECS cluster and Fargate service in the backup Region and configure an Amazon RDS cross Region read replica there, then use a Lambda function to promote the replica to primary and update Route 53 during failover, with EventBridge invoking the promotion.
With a cross Region read replica and a warm standby ECS service the database is continuously replicated and the compute layer is already provisioned. Promoting the replica and updating Route 53 to direct traffic to the secondary Region is fast, which minimizes recovery time and limits data loss to replication lag. EventBridge and Lambda orchestrate the promotion and DNS change so failover is quick and consistent.
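To make the orchestration concrete, here is a minimal sketch of what the promotion function could look like, assuming hypothetical resource names for the replica identifier, the hosted zone, and the standby load balancer. A production function would also wait for the promotion to finish and add error handling.

```python
import boto3

# Hypothetical identifiers; replace with your own resources.
REPLICA_ID = "app-replica"            # cross-Region read replica in the DR Region
HOSTED_ZONE_ID = "Z0EXAMPLE"          # Route 53 public hosted zone
RECORD_NAME = "api.example.com."
DR_ALB_DNS = "dr-alb-123456.eu-west-1.elb.amazonaws.com"
DR_ALB_ZONE_ID = "ZEXAMPLEALB"        # hosted zone ID of the ALB in the DR Region

rds = boto3.client("rds", region_name="eu-west-1")
route53 = boto3.client("route53")

def handler(event, context):
    # Promote the cross-Region read replica to a standalone primary.
    rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)

    # Point the public record at the warm standby ALB in the DR Region.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": DR_ALB_ZONE_ID,
                        "DNSName": DR_ALB_DNS,
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )
    return {"status": "failover initiated"}
```

An EventBridge rule that detects the failure would invoke this handler, which keeps the replica promotion and the DNS change in a single automated step.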
The option to Use an AWS Lambda function to create the second ECS cluster and Fargate service only when a failure is detected, then snapshot and copy the RDS instance to the other Region, restore it there, and change Route 53 records, with EventBridge triggering the workflow causes long downtime because building infrastructure and restoring from snapshots after an outage is slow and increases the risk of data loss.
The suggestion to Enable Amazon RDS Multi AZ for the MySQL instance in the primary Region and place AWS Global Accelerator in front of the application to reroute traffic during issues does not meet cross Region recovery needs because Multi AZ protects only within a single Region and there is no replicated database or service in a second Region to receive traffic.
The plan to Precreate an ECS cluster and Fargate service in the recovery Region, then use an AWS Lambda function that regularly snapshots the RDS instance, copies the snapshot to the recovery Region, restores a new RDS instance from that snapshot, and updates Amazon Route 53 to point traffic to the standby service, with an Amazon EventBridge schedule invoking the function results in slow recovery during restore and stale data between snapshots, and it adds unnecessary cost and operational complexity compared to continuous replication.
Cameron’s AWS Architect Exam Tip
When a question emphasizes least downtime across Regions, prefer a warm standby with preprovisioned compute and cross Region read replicas, then automate promotion and DNS updates for a fast and reliable failover.
Question 2
BlueRiver Tickets runs its customer-facing services on Amazon EC2 behind a single Application Load Balancer and hosts the public zone example.com in Amazon Route 53. The company will expose several hostnames such as m.example.com, www.example.com, and api.example.com, and they also want the apex example.com name to reach the web tier. You need to design scalable ALB listener rules so that each hostname forwards to the correct target group without adding more load balancers. Which configuration should you implement? (Choose 2)
-
✓ B. Use Host conditions in the ALB listener to route example.com to the correct target group
-
✓ D. Use Host conditions in the ALB listener to route *.example.com to the right target groups
The correct options are Use Host conditions in the ALB listener to route example.com to the correct target group and Use Host conditions in the ALB listener to route *.example.com to the right target groups.
Host header based rules are the intended way to send traffic to different target groups based on the requested hostname. You create listener rules that evaluate the Host header and then use forward actions to the appropriate target groups. This scales cleanly because a single Application Load Balancer can support many host conditions and rules without adding more load balancers.
For the apex name you configure a Route 53 alias A or AAAA record pointing example.com to the ALB so the browser sends the Host header for example.com and the listener rule that matches that host forwards to the correct target group.
For subdomains you can add host rules for specific names such as api.example.com or you can use a wildcard host condition to match a set of subdomains that share the same target group. You can then add additional host rules for any subdomains that must go to different target groups.
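As an illustration of those listener rules, the following boto3 sketch adds host-based forward rules to an existing listener, assuming hypothetical listener and target group ARNs.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs; substitute your own listener and target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/123/456"
API_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api/abc"
WEB_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/def"

# Specific hostname: api.example.com forwards to the API target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "host-header",
                 "HostHeaderConfig": {"Values": ["api.example.com"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": API_TG_ARN}],
)

# The apex name plus a wildcard for the remaining subdomains forward to the web tier.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "host-header",
                 "HostHeaderConfig": {"Values": ["example.com", "*.example.com"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": WEB_TG_ARN}],
)
```

Because rules are evaluated in priority order, the more specific api.example.com rule is placed ahead of the wildcard rule.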
Use Path conditions in the ALB listener to route *.example.com to appropriate target groups is incorrect because path based routing evaluates the URL path after the hostname and it cannot select a target group based on the requested host.
Google Cloud Load Balancing is incorrect because the scenario and services are on AWS and a different cloud provider does not apply.
Use the Path component of Redirect actions in the ALB listener to route example.com to target groups is incorrect because a redirect changes the client URL and does not forward to a target group to serve the request. You should use a forward action with host conditions for this use case.
Cameron’s AWS Architect Exam Tip
When a question mentions multiple hostnames on one load balancer think host based rules on the listener and not path based rules. Remember that the apex name uses a Route 53 alias to the ALB and the listener uses a forward action to target groups.
Question 3
Orchid Dynamics wants to cut data transfer and compute spending across 18 developer AWS accounts while allowing engineers to quickly pull data from Amazon S3 and still move fast when launching EC2 instances and building VPCs. Which approach will minimize cost without reducing developer agility?
-
✓ C. Publish an AWS Service Catalog portfolio that provisions a vetted VPC with S3 gateway endpoints and approved EC2 options and share it to the engineer accounts with a launch constraint role then scope developer IAM to use Service Catalog
The correct option is Publish an AWS Service Catalog portfolio that provisions a vetted VPC with S3 gateway endpoints and approved EC2 options and share it to the engineer accounts with a launch constraint role then scope developer IAM to use Service Catalog.
This approach minimizes cost because S3 gateway endpoints keep VPC to S3 traffic on the AWS network and do not add endpoint hourly or data processing charges. It preserves developer agility by giving engineers a curated and fast self service path to create a standardized VPC and launch approved EC2 options. Launch constraints and scoped IAM provide guardrails while sharing the portfolio to the engineer accounts lets teams move quickly in their own accounts with consistent governance.
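For context, the gateway endpoint that the vetted VPC product would include is essentially a route table entry, and creating one looks roughly like this with boto3, using hypothetical VPC and route table IDs in us-east-1.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs; the Service Catalog product would create these as part of the vetted VPC.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def5678"],  # S3 traffic from these route tables stays on the AWS network
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```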
Create SCPs to block unapproved EC2 types and distribute a CloudFormation template that builds a standard VPC with S3 interface endpoints and restrict IAM so developers create VPC resources only through CloudFormation is less cost efficient and reduces agility. It relies on S3 interface endpoints which add per hour and data processing charges, while S3 gateway endpoints have no additional charge. Forcing all VPC work through a single CloudFormation path can slow developers and increases operational overhead compared to a curated self service catalog.
Adopt Google Cloud Deployment Manager and VPC Service Controls to govern egress and access to Cloud Storage and connect AWS workloads through hybrid networking does not address the AWS requirement. It introduces another cloud and governs Google Cloud Storage instead of Amazon S3, which adds complexity and cost without solving the stated need for the AWS accounts.
Set daily cost budgets with AWS Budgets for EC2 and S3 data transfer and send alerts at 70 percent of forecast then trigger actions to stop EC2 and remove VPCs at 95 percent of actual is reactive and harms developer agility. Budgets can notify and apply some controls but they do not prevent costly patterns up front and they do not safely remove VPCs. This does not provide a governed self service way to create compliant VPCs or reduce S3 transfer costs.
Cameron’s AWS Architect Exam Tip
When cost control and speed are both required, look for governed self service patterns such as AWS Service Catalog and prefer S3 gateway endpoints for Amazon S3 traffic. Be cautious of relying on reactive alerts or broad restrictions without offering an easy approved path.
Question 4
A ticketing startup that operates example.com runs its storefront on Amazon EC2 behind an Auto Scaling group and uses Amazon RDS for PostgreSQL in one Region. The team needs a budget friendly disaster recovery plan that can achieve a recovery point objective of 45 seconds and a recovery time objective of 12 minutes. Which approach best meets these requirements while keeping costs under control?
-
✓ C. Build the DR stack with infrastructure as code and create a cross Region read replica for the RDS database and set up AWS Elastic Disaster Recovery to stream changes for the EC2 instances into the DR Region and keep a minimal number of instances running in the DR Region and use an Amazon Route 53 failover policy to switch during an outage then scale out the Auto Scaling group
The correct option is Build the DR stack with infrastructure as code and create a cross Region read replica for the RDS database and set up AWS Elastic Disaster Recovery to stream changes for the EC2 instances into the DR Region and keep a minimal number of instances running in the DR Region and use an Amazon Route 53 failover policy to switch during an outage then scale out the Auto Scaling group.
This approach meets the 45 second recovery point objective because AWS Elastic Disaster Recovery continuously replicates block level changes from the EC2 instances which delivers an RPO measured in seconds. The cross Region read replica for Amazon RDS PostgreSQL provides asynchronous replication with lag typically in seconds so the database side also aligns with the required point objective. Using a Route 53 failover policy allows health based redirection to the recovery Region and keeping only a minimal footprint warm in that Region reduces ongoing cost while the Auto Scaling group can expand quickly after failover to meet the 12 minute recovery time objective. Defining the stack with infrastructure as code makes rehearsals and clean cutovers consistent and repeatable which further improves time to recover.
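A rough sketch of the health check driven failover routing follows, with hypothetical hosted zone, health check, and load balancer values.

```python
import boto3

route53 = boto3.client("route53")

def failover_record(set_id, failover_role, alb_dns, alb_zone_id, health_check_id=None):
    """Build a PRIMARY or SECONDARY alias record for www.example.com."""
    record = {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": failover_role,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        failover_record("primary", "PRIMARY",
                        "primary-alb.us-east-1.elb.amazonaws.com", "ZALBPRIMARY",
                        health_check_id="11111111-2222-3333-4444-555555555555"),
        failover_record("secondary", "SECONDARY",
                        "dr-alb.us-west-2.elb.amazonaws.com", "ZALBSECONDARY"),
    ]},
)
```

When the primary health check fails, Route 53 answers with the secondary alias, which is what allows the minimal warm footprint in the DR Region to take traffic and then scale out.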
Use infrastructure as code to stand up the DR environment in a second Region and create a cross Region read replica for the RDS database and configure AWS Backup to produce cross Region backups for the EC2 instances and the database every 45 seconds and restore instances from the newest backup during an incident and use an Amazon Route 53 geolocation policy so traffic shifts to the DR Region after a disaster is not viable because AWS Backup and snapshot based restores cannot achieve a 45 second RPO and restore based recovery of EC2 fleets typically cannot meet a strict 12 minute RTO. Geolocation routing sends users based on their location and does not perform health based failover during an outage.
Set up AWS Backup to create cross Region backups for the EC2 fleet and the database on a 45 second schedule and use infrastructure as code to create the DR networking and subnets and restore the backups onto new instances when needed and use an Amazon Route 53 simple policy to move users to the DR Region is incorrect because periodic backups cannot provide sub minute RPO and restoring instances during an event is unlikely to meet the 12 minute RTO. A simple routing policy does not provide automated failover driven by health checks so traffic will not reliably move during an outage.
Define the DR environment with infrastructure as code and migrate the database to Amazon Aurora PostgreSQL with an Aurora global database and use AWS Elastic Disaster Recovery for the EC2 instances and run the Auto Scaling group at full capacity in the DR Region and use an Amazon Route 53 failover policy to switch during an event would meet the objectives but it is not budget friendly. Running full capacity in the recovery Region doubles steady state cost and migrating to Aurora Global adds cost and complexity that is unnecessary when a cross Region read replica for RDS can satisfy the requirements.
Cameron’s AWS Architect Exam Tip
Translate the numbers to mechanisms. Sub minute RPO requires continuous replication rather than periodic backups and a minutes level RTO needs pre wired routing with failover and some warm capacity. Prefer policies that use health checks and prefer designs that scale out after cutover to control cost.
Question 5
Riverton Digital Press is moving its editorial publishing site to AWS. The organization must allow continuous content edits by multiple contributors and also move 250 TB of archived media from an on premises NAS into Amazon S3. They will continue using their existing Site to Site VPN and will run web servers on Amazon EC2 instances behind an Application Load Balancer. Which combination of actions will fulfill these requirements? (Choose 2)
-
✓ B. Order an AWS Snowball Edge Storage Optimized device and copy the static archives to the device then return it to AWS
-
✓ D. Mount an Amazon Elastic File System EFS file system from the on premises servers over the VPN and also mount the same file system on the EC2 instances to present the latest content
The correct options are Order an AWS Snowball Edge Storage Optimized device and copy the static archives to the device then return it to AWS and Mount an Amazon Elastic File System EFS file system from the on premises servers over the VPN and also mount the same file system on the EC2 instances to present the latest content.
The Snowball Edge device is built for bulk offline transfers so moving 250 TB this way will be faster and more predictable than sending the data across a Site to Site VPN. After you ship the device back the service imports the data into Amazon S3 which satisfies the archive migration requirement.
Using Amazon EFS provides a managed NFS file system that supports many concurrent readers and writers which meets the need for continuous edits by multiple contributors. You can mount EFS from on premises over the existing VPN and mount it on all EC2 web servers so every instance serves the same up to date content.
The option Configure an Amazon Elastic Block Store EBS Multi Attach volume that is shared by the EC2 instances for content access then build a script to synchronize that volume with the NAS each night is incorrect because Multi Attach is limited to specific volume types and instances in a single Availability Zone and it is not a general purpose shared file system for many writers. It would also introduce lag and complexity with nightly synchronization and would not support concurrent edits safely without a cluster aware file system.
The option Create an Amazon EventBridge schedule that invokes an AWS Lambda function every hour to push updates from the NAS directly to the EC2 instances is incorrect because copying files into individual instances does not provide a single authoritative store. It risks configuration drift across servers and does not enable simultaneous collaborative editing.
The option Google Transfer Appliance is incorrect because it is a Google Cloud product and cannot be used to ingest data into AWS.
Cameron’s AWS Architect Exam Tip
When a workload needs many writers across multiple servers think about managed shared file systems rather than copying files to instance disks. When data size is hundreds of terabytes and the network is constrained think about offline ingestion to meet timelines reliably.
Question 6
HarborView Logistics needs an internal application where employees submit travel and expense reimbursements, with activity spiking on the fifteenth and again on the last business day of each month. The finance team must be able to produce consistent month end reports from the stored data. The platform must be highly available and scale automatically while keeping operational effort as low as possible. Which combination of solutions will best meet these requirements with the least ongoing management? (Choose 2)
-
✓ A. Store the expense records in Amazon S3 and use Amazon Athena and Amazon QuickSight to query and visualize the data
-
✓ C. Host the web front end in Amazon S3 served by Amazon CloudFront and build the API with Amazon API Gateway using an AWS Lambda proxy integration
The correct options are Store the expense records in Amazon S3 and use Amazon Athena and Amazon QuickSight to query and visualize the data and Host the web front end in Amazon S3 served by Amazon CloudFront and build the API with Amazon API Gateway using an AWS Lambda proxy integration.
The first option uses durable object storage for all expense records and queries them with a serverless SQL service, then presents results through a fully managed visualization service. This design has virtually no infrastructure to manage, scales automatically with usage, and supports consistent month end reporting when you write immutable files and partition or snapshot the data so reports run against a stable view.
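As a simple illustration of the reporting side, a month end query could be submitted to Athena roughly like this, assuming a hypothetical finance database, expense_reports table, and results bucket.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and results location.
query = """
    SELECT cost_center, SUM(amount) AS total_reimbursed
    FROM expense_reports
    WHERE report_month = '2025-06'
    GROUP BY cost_center
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "finance"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/month-end/"},
)
print("Query execution id:", execution["QueryExecutionId"])
```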
The second option delivers a static front end with global caching and a fully managed API layer that invokes serverless compute for business logic. This provides high availability by default and scales seamlessly during the mid month and end of month spikes while keeping operational effort very low because there are no servers or containers to patch or capacity to right size.
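On the API side, a Lambda proxy integration just returns a status code, headers, and body, so a minimal handler could look like the following sketch, with a hypothetical request payload.

```python
import json

def handler(event, context):
    # API Gateway proxy integration passes the raw HTTP request in `event`.
    body = json.loads(event.get("body") or "{}")

    # ... validate and persist the expense record, for example as a JSON object in S3 ...

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "message": "expense submitted",
            "employee": body.get("employee"),
        }),
    }
```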
Deploy the application on Amazon EC2 behind an Application Load Balancer and use Amazon EC2 Auto Scaling with scheduled scale outs is not the best fit because you must manage instances, operating systems, images, patching, and scaling schedules. Scheduled actions may not match real demand and this increases ongoing management compared to a serverless approach.
Run the application on Amazon ECS with AWS Fargate behind an Application Load Balancer and use Service Auto Scaling with scheduled capacity adjustments reduces server maintenance but still requires building and updating container images, task definitions, deployments, and scaling configuration. This is more operational work than a fully serverless web and API design.
Persist the expense data in Amazon EMR and use Amazon QuickSight to report directly from the EMR cluster is unsuitable because that service is intended for big data processing rather than as a primary data store. Keeping a cluster available for reporting increases cost and operational complexity and it does not provide the simplicity and automatic scaling that serverless querying on object storage offers.
Cameron’s AWS Architect Exam Tip
When a question highlights least management and spiky workloads, prefer serverless and fully managed services for storage, compute, and APIs. Pair object storage with serverless query engines for reporting and use managed front ends and APIs to minimize operations.
Question 7
Luma Media runs a public web API on Amazon EC2 instances in one Availability Zone. Leadership has asked a Solutions Architect to redesign the platform so it is highly available across at least two Availability Zones and enforces strong request filtering. The security team requires that inbound traffic be inspected for common web exploits and that any blocked requests are delivered to an external auditing service at logs.example.com for compliance review. What architecture should be implemented?
-
✓ C. Configure a Multi AZ Auto Scaling group from the application AMI with an Application Load Balancer in front, attach an AWS WAF web ACL with AWS Managed Rules, and enable WAF logging to Amazon Kinesis Data Firehose that delivers blocked requests to the logs.example.com auditing endpoint
The correct choice is Configure a Multi AZ Auto Scaling group from the application AMI with an Application Load Balancer in front, attach an AWS WAF web ACL with AWS Managed Rules, and enable WAF logging to Amazon Kinesis Data Firehose that delivers blocked requests to the logs.example.com auditing endpoint. This design achieves high availability by spreading instances across at least two Availability Zones and it inspects inbound requests with AWS WAF using managed rule groups. It also satisfies the requirement to deliver blocked requests to an external auditing service by using WAF logging sent to a Firehose delivery stream that posts to the specified endpoint.
Using a Multi AZ Auto Scaling group with an Application Load Balancer provides resilient capacity across multiple subnets and Availability Zones and it maintains health and replaces failed instances automatically. Attaching a web ACL to the load balancer centralizes inspection of all inbound traffic so the application instances only receive requests that pass the WAF rules.
Enabling AWS WAF logging to a Firehose delivery stream captures detailed information about allowed and blocked requests. Firehose can deliver to an HTTP endpoint destination which allows direct delivery of blocked events to the logs.example.com auditing service and this meets the compliance requirement without custom log parsing.
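A minimal sketch of attaching the logging configuration with boto3 follows, using hypothetical ARNs. Note that WAF expects the Firehose delivery stream name to begin with aws-waf-logs-, and the logging filter shown keeps only blocked requests, which matches the auditing requirement.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Hypothetical ARNs; the delivery stream name must start with "aws-waf-logs-".
WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/api-acl/abcd1234"
FIREHOSE_ARN = "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-audit"

wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": WEB_ACL_ARN,
        "LogDestinationConfigs": [FIREHOSE_ARN],
        # Keep only blocked requests so the auditing endpoint receives just what it needs.
        "LoggingFilter": {
            "DefaultBehavior": "DROP",
            "Filters": [{
                "Behavior": "KEEP",
                "Requirement": "MEETS_ANY",
                "Conditions": [{"ActionCondition": {"Action": "BLOCK"}}],
            }],
        },
    }
)
```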
The option Set up an Application Load Balancer with a target group that includes the existing EC2 instances in the current Availability Zone, create Amazon Kinesis Data Firehose with the logs.example.com service as the destination, attach an AWS WAF web ACL to the ALB, enable WAF logging to the Firehose stream, and subscribe to AWS Managed Rules is not correct because it keeps all instances in a single Availability Zone and therefore does not provide the required high availability across at least two Availability Zones.
The option Use Google Cloud Armor in front of a Google Cloud HTTP Load Balancer and export blocked events from Cloud Logging to the logs.example.com auditing service is not correct because it proposes Google Cloud services while the workload runs on Amazon EC2 and the requirement is to redesign the platform within AWS.
The option Deploy an Application Load