Certified Professional AWS Solutions Architect Exam Braindumps

AWS Professional Solutions Architect Exam Topics

Despite the title, this is not a Professional Solutions Architect Braindump in the traditional sense. I do not believe in cheating. The term “braindump” once referred to someone taking an exam, memorizing the questions, and posting them online for others to use. That approach is unethical, violates the AWS certification agreement, and prevents true learning.

This is not an exam dump or copied content. All of these questions come from my AWS Solutions Architect Professional Udemy course and from AWS Solutions Architect Professional Practice Questions available on certificationexams.pro.

Each question is crafted to align with the official AWS Certified Solutions Architect – Professional exam blueprint. They reflect the tone, logic, and complexity of real AWS scenarios, but none are taken from the actual test. These exercises help you learn and reason through architectural trade-offs, cost decisions, and governance strategies.

AWS Architect Exam Simulators

If you can answer these questions and understand why incorrect options are wrong, you will not only pass the real exam but also gain a deep understanding of how to design complex AWS solutions that balance performance, cost, and reliability.

Each scenario includes detailed explanations and realistic examples to help you think like an AWS architect. Practice using the AWS Solutions Architect Professional Exam Simulator and the Professional Solutions Architect Practice Test to build your timing, reasoning, and analytical skills.

Real AWS Architect Exam Questions

So if you choose to call this your Professional Solutions Architect Exam Dump, remember that every question is built to teach, not to cheat. Approach your preparation with integrity, focus, and curiosity. Success as a certified AWS Solutions Architect Professional comes from understanding architecture, not memorizing answers.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

AWS Professional Solutions Architect Braindump Questions

Question 1

Orion Retail Group operates about 18 AWS accounts for its online storefronts and partner integrations. Each EC2 instance runs the unified CloudWatch agent and publishes logs to CloudWatch Logs. The company must consolidate all security events in a separate AWS account that is dedicated to immutable log retention. The network operations team needs to aggregate and correlate events from every account with near real time latency so analysts can respond within one minute. What is the best way to implement this cross account centralized ingestion?

  • ❏ A. Create an IAM role in every source account and run a Lambda function in the logging account every 45 minutes that assumes the role and exports each CloudWatch Logs group to an S3 bucket in the logging account

  • ❏ B. Create a CloudWatch Logs destination in the logging account that targets a Kinesis Data Firehose delivery stream and configure subscription filters on the security log groups in each source account to stream events to that destination and deliver them to an S3 bucket in the logging account

  • ❏ C. Configure CloudWatch Logs in each source account to forward events directly to CloudWatch Logs in the logging account and then subscribe a Kinesis Data Firehose stream to Amazon EventBridge to store the data in S3

  • ❏ D. Send the security events into Google Pub/Sub and process them with Dataflow to BigQuery and then export the results to Amazon S3

Question 2

Sundial Apps runs a RESTful API on Amazon EC2 instances in an Auto Scaling group across three private subnets. An Application Load Balancer spans two public subnets and is configured as the only origin for an Amazon CloudFront distribution. The team needs to ensure that only CloudFront can reach the ALB and that direct access from the internet is blocked at the origin. What approach should the solutions architect choose to strengthen origin security?

  • ❏ A. Associate an AWS WAF web ACL to the ALB with an IP match that includes the published CloudFront service ranges and migrate the ALB into two private subnets

  • ❏ B. Enable AWS Shield Advanced and attach a security group policy that only permits CloudFront service addresses to the ALB

  • ❏ C. Store a random token in AWS Secrets Manager with automated rotation using AWS Lambda then have CloudFront pass the token in a custom origin header and enforce a header match rule in an AWS WAF web ACL that is associated to the ALB

  • ❏ D. Save a shared key in AWS Systems Manager Parameter Store with rotation configured then configure CloudFront to send the key as a custom header and add custom code on the target instances to check the header and drop requests without the key

Question 3

Riverbend Logistics plans to run its connected van telemetry platform on AWS to collect signals from roughly 9,000 vehicles every three seconds so it can update routes and estimated arrival times in near real time. The engineering team requires a fully serverless design that scales automatically without any shard or instance capacity to size or tune, and the team does not want to intervene during traffic spikes. As the AWS Certified Solutions Architect Professional advising this initiative, which approach should they implement?

  • ❏ A. Stream the data into Amazon Kinesis Data Firehose and configure it to deliver records directly into an Amazon DynamoDB table

  • ❏ B. Publish the telemetry to an Amazon SNS topic that invokes an AWS Lambda function which stores items in an Amazon DynamoDB table

  • ❏ C. Send messages to an Amazon SQS standard queue and have an AWS Lambda function process batches and write them to an auto scaled Amazon DynamoDB table

  • ❏ D. Ingest the telemetry into an Amazon Kinesis Data Stream and use a consumer application on Amazon EC2 to read the stream and store data in an Amazon DynamoDB table

Question 4

EduNova at example.com plans to distribute private downloads and a subscriber only library using Amazon CloudFront. Only newly registered customers who have paid for a 12 month plan should be allowed to fetch the desktop installer, and only active subscribers should be able to view any files in the members area. What should the architect implement to enforce these restrictions while keeping delivery efficient with CloudFront? (Choose 2)

  • ❏ A. Configure CloudFront signed cookies to authorize access to all content in the members area

  • ❏ B. Cloud CDN signed URLs

  • ❏ C. Configure CloudFront signed cookies to authorize access to the single installer object

  • ❏ D. Configure CloudFront signed URLs to protect the installer download

  • ❏ E. Configure CloudFront signed URLs for every object in the members area

Question 5

The platform engineering team at Riverbend Manufacturing plans to move several hundred virtual machines from two colocation facilities into AWS, and they must inventory their workloads, map service dependencies, and produce a consolidated assessment report. They have already initiated a Migration Evaluator engagement and they are allowed to install collection software on all VMs. Which approach will deliver the required insights with the least operational effort?

  • ❏ A. Configure the AWS Application Discovery Service Agentless Collector in the data centers. After a 30 day collection window, use AWS Migration Hub to inspect dependency maps. Export the server inventory and upload it to Migration Evaluator to generate the Quick Insights assessment

  • ❏ B. Install the AWS Application Discovery Agent on every on-premises VM. After a 30 day collection window, use AWS Migration Hub to view application dependencies. Download the Quick Insights assessment report directly from Migration Hub

  • ❏ C. Deploy the Migration Evaluator Collector to all VMs. When the 30 day collection completes, use Migration Evaluator to review discovered servers and dependencies. Export the inventory to Amazon QuickSight and then download the Quick Insights assessment from the generated dashboard

  • ❏ D. Set up the Migration Evaluator Collector in the environment and also install the AWS Application Discovery Agent on each VM. After the 30 day run, use AWS Migration Hub for dependency visualization and retrieve the Quick Insights assessment from Migration Evaluator

Question 6

A fintech startup named LumaPay lets customers submit high resolution receipt photos from a mobile app to validate cashback offers. The app stores images in an Amazon S3 bucket in the us-east-2 Region. The business recently launched across several European countries and those users report long delays when sending images from their phones. What combination of changes should a Solutions Architect implement to speed up the image upload experience for these users? (Choose 2)

  • ❏ A. Create an Amazon CloudFront distribution with the S3 bucket as the origin

  • ❏ B. Update the mobile app to use Amazon S3 multipart upload

  • ❏ C. Change the bucket storage class to S3 Intelligent-Tiering

  • ❏ D. Enable Amazon S3 Transfer Acceleration on the bucket

  • ❏ E. Provision an AWS Direct Connect link from Europe to the us-east-2 Region

Question 7

NorthPeak Lending runs its containerized platform on Amazon ECS with Amazon API Gateway in front and stores relational data in Amazon Aurora and key value data in Amazon DynamoDB. The team provisions with the AWS CDK and releases through AWS CodePipeline. The company requires an RPO of 90 minutes and an RTO of 3 hours for a regional outage while keeping spend as low as possible. Which approach should the architects implement to meet these goals?

  • ❏ A. Use AWS Database Migration Service for Aurora replication and use DynamoDB Streams with Amazon EventBridge and AWS Lambda for DynamoDB replication to a second Region, deploy API Gateway Regional endpoints in both Regions, and configure Amazon Route 53 failover to move traffic during a disaster

  • ❏ B. Create an Aurora global database and enable DynamoDB global tables in a secondary Region, deploy API Gateway Regional endpoints in each Region, and use Amazon Route 53 failover routing to shift clients to the standby Region during an outage

  • ❏ C. Configure AWS Backup to copy Aurora and DynamoDB backups into a secondary Region, deploy API Gateway Regional endpoints in both Regions, and use Amazon Route 53 failover to direct users to the secondary Region when needed

  • ❏ D. Create an Aurora global database and DynamoDB global tables to a second Region, deploy API Gateway Regional endpoints in each Region, and place Amazon CloudFront in front with origin failover to route users to the secondary Region during an event

Question 8

Orbit Finance must decommission its on-premises server room on short notice and urgently move its datasets to AWS. The facility has a 1.5 Gbps internet connection and a 700 Mbps AWS Direct Connect link. The team needs to transfer 28 TB of files into a new Amazon S3 bucket. Which approach will finish the transfer in the shortest time?

  • ❏ A. Use AWS DataSync to move the files into the S3 bucket

  • ❏ B. Load the data onto a 100 TB AWS Snowball and return the device for import

  • ❏ C. Enable Amazon S3 Transfer Acceleration on the bucket and upload over the internet

  • ❏ D. Send the data over the existing AWS Direct Connect link to S3

Question 9

Arcadia Retail Group plans to retire its on-premises hardware so it can shift teams to machine learning initiatives and customer personalization. As part of this modernization the company needs to archive roughly 8.5 PB of data from its primary data center into durable long term storage on AWS with the fastest migration and the most cost effective outcome. As a Solutions Architect Professional what approach should you recommend to move and store this data?

  • ❏ A. Transfer the on-premises data into a Snowmobile and import it into Amazon S3 then apply a lifecycle policy to transition the objects to S3 Glacier

  • ❏ B. Load the on-premises data onto multiple Snowball Edge Storage Optimized devices then copy it into Amazon S3 and use a lifecycle policy to transition the data to S3 Glacier

  • ❏ C. Transfer the on-premises data into a Snowmobile and import it directly into S3 Glacier

  • ❏ D. Load the on-premises data onto multiple Snowball Edge Storage Optimized devices and import it directly into S3 Glacier

Question 10

FerroTech Media runs many AWS accounts under AWS Organizations and wants to keep Amazon EC2 costs from spiking without warning. The cloud finance team needs an automatic alert whenever any account shows EC2 usage or spend that rises beyond a threshold based on recent behavior. If EC2 consumption or charges increase by more than 20% compared to the 60 day rolling average then the team wants a daily notification. The solution must function across all member accounts with minimal ongoing effort. Which approach will best satisfy these needs?

  • ❏ A. Publish EC2 instance hour counts as Amazon CloudWatch custom metrics in every account and build management account alarms for deviations from trend

  • ❏ B. Enable AWS Cost Anomaly Detection for a linked account group that covers the organization and configure a service monitor for Amazon EC2 with daily emails when anomalies exceed 20% of the 60 day average

  • ❏ C. Ingest AWS Cost and Usage Report into Amazon S3 and query with Amazon Athena on a daily schedule to compare EC2 costs to a 60 day average and then publish alerts to Amazon SNS

  • ❏ D. Create AWS Budgets in each member account with fixed EC2 spend limits and send notifications through AWS Budgets alerts

Question 11

Ravenwood Analytics is introducing client-side encryption for files that will be stored in a new Amazon S3 bucket. The engineers created a customer managed key in AWS Key Management Service to support the encryption workflow. They attached the following IAM policy to the role that the uploader uses.

    {
      "Version": "2012-10-17",
      "Id": "key-policy-2",
      "Statement": [
        {
          "Sid": "GetPut",
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject"],
          "Resource": "arn:aws:s3:::ravenwood-uploads-east/*"
        },
        {
          "Sid": "KMS",
          "Effect": "Allow",
          "Action": ["kms:Decrypt", "kms:Encrypt"],
          "Resource": "arn:aws:kms:us-east-2:444455556666:key/keyid-90210"
        }
      ]
    }

Test downloads from the bucket worked, but every attempt to upload a new object failed with an AccessDenied error that reported the action was forbidden. Which additional IAM action must be added to this policy so that client-side encrypted uploads can succeed?

  • ❏ A. kms:GetKeyPolicy

  • ❏ B. Cloud KMS

  • ❏ C. kms:GenerateDataKey

  • ❏ D. kms:GetPublicKey

Question 12

BrightLeaf Media runs an image processing backend on AWS and wants to lower costs and reduce operational effort while keeping the environment secure. The VPC spans two Availability Zones with both public and private subnets. Amazon EC2 instances in the private subnets host the application behind an Application Load Balancer located in the public subnets. The instances currently reach the internet through two NAT gateways, and about 900 GB of new images are stored in Amazon S3 each day. What should a solutions architect do to meet these goals without weakening security?

  • ❏ A. Create Amazon S3 interface VPC endpoints in each Availability Zone and update the route tables for the private subnets to use these endpoints

  • ❏ B. Relocate the EC2 instances into the public subnets and remove the NAT gateways

  • ❏ C. Create an Amazon S3 gateway VPC endpoint in the VPC and apply an endpoint policy that allows only the required S3 actions for the bucket

  • ❏ D. Use Auto Scaling NAT instances in place of the NAT gateways and point the private subnet routes at the instances

Question 13

Cedar Peak Engineering is creating a disaster recovery plan for a mission critical Windows application that runs in its data center. About 250 Windows servers access a shared SMB file repository. The business mandates an RTO of 12 minutes and an RPO of 4 minutes, and operations expects native failover and straightforward failback. Which approach delivers these goals in the most cost effective way?

  • ❏ A. Use AWS Application Migration Service to replicate the on premises servers and place the shared files in Amazon S3 with AWS DataSync behind AWS Storage Gateway File Gateway, then update DNS to route clients to AWS during a disaster and copy data back when returning to on premises

  • ❏ B. Set up AWS Elastic Disaster Recovery for the Windows servers and use AWS DataSync to replicate the SMB data to Amazon FSx for Windows File Server, then fail over the servers to AWS during an event and use Elastic Disaster Recovery to fail back to new or existing on premises hosts

  • ❏ C. Build infrastructure templates with AWS CloudFormation and replicate all file data to Amazon Elastic File System using AWS DataSync, then deploy the stack during an incident with a pipeline and synchronize back afterward

  • ❏ D. Deploy AWS Storage Gateway File Gateway and schedule nightly backups of the Windows servers to Amazon S3, then restore servers from those backups during an outage and run temporary instances on Amazon EC2 during failback

Question 14

HarborPoint Analytics has adopted a hybrid work model and needs to provide employees with secure remote access to internal services that run in five AWS accounts. The VPCs are already connected using existing VPC peering and some corporate resources are reachable through an AWS Site-to-Site VPN. The architect must deploy an AWS Client VPN that scales and keeps ongoing cost low while enabling access across the peered VPCs. What is the most cost-effective approach?

  • ❏ A. Provision a transit gateway for all VPCs and place a Client VPN endpoint in the shared services account that forwards traffic through the transit gateway

  • ❏ B. Integrate a Client VPN endpoint with AWS Cloud WAN and attach all VPCs to the core network

  • ❏ C. Create a Client VPN endpoint in the shared services account and advertise routes over the existing VPC peering to reach applications in other accounts

  • ❏ D. Deploy a Client VPN endpoint in each AWS account and configure routes to the application subnets

Question 15

The platform engineering team at Aurora Digital is building an Amazon EKS cluster to run an event driven thumbnail rendering service. The workload uses ephemeral stateless pods that can surge from about 30 to more than 600 replicas within minutes during traffic spikes. They want a configuration that most improves node resilience and limits the blast radius if an Availability Zone experiences an outage. What should they implement?

  • ❏ A. Consolidate node groups and switch to larger instance sizes to run more pods per node

  • ❏ B. Google Kubernetes Engine

  • ❏ C. Apply Kubernetes topology spread constraints keyed on Availability Zone so replicas are evenly distributed across zones

  • ❏ D. Configure the Kubernetes Cluster Autoscaler to keep capacity slightly underprovisioned during spikes

Question 16

A retail technology firm named Alder Cove Systems is moving its workloads to AWS and needs a multi account plan. There are five product squads and each wants strict isolation from the others. The Finance department needs clear chargeback so that costs and usage are separated by squad. The Security team wants centralized oversight with least privilege access and the ability to set preventive guardrails across environments. What account strategy should the company adopt to meet these requirements?

  • ❏ A. Use AWS Control Tower to set up the landing zone and keep a single shared workload account for all squads while using cost allocation tags for billing and rely on guardrails for governance

  • ❏ B. Use AWS Organizations to establish a management account then provision a dedicated account for each squad and create a separate security tooling account with cross account access and apply service control policies to all workload accounts and have the security team write IAM policies that grant least privilege

  • ❏ C. Create separate AWS accounts for each squad and set the security account as the management account and enable consolidated billing and allow the security team to administer other accounts through a cross account role

  • ❏ D. Create a single AWS account and use Active Directory federation for access and rely on resource tags to split billing by team and manage permissions with IAM policies that grant only the required access

Question 17

The engineering group at BrightWave Logistics is launching a relational backend that must support cross Region disaster recovery. The business requires an RPO below 4 minutes and an RTO below 12 minutes for approximately 12 TB of data while keeping costs as low as possible. Which approach will meet these targets at the lowest cost?

  • ❏ A. Provision Amazon Aurora DB clusters in two Regions and use AWS Database Migration Service to stream ongoing changes into the secondary cluster

  • ❏ B. Deploy Amazon RDS with Multi AZ in one Region and rely on the automatic failover capability during an outage

  • ❏ C. Run Amazon RDS in a primary Region with a cross Region read replica and plan to promote the replica to primary during a Regional disruption

  • ❏ D. Use Amazon Aurora Global Database with a writer in the primary Region and a reader in a secondary Region to enable rapid cross Region recovery

Question 18

HarborTech Labs plans to move its on premises file processing system to AWS. Customers upload files through a web portal at example.com and the files are currently kept on a network file share. A backend worker fleet reads tasks from a queue to process each file and individual jobs can run for up to 50 minutes. Traffic spikes during weekday business hours and is quiet overnight and on weekends. Which migration approach would be the most cost effective while fulfilling these needs?

  • ❏ A. Use Amazon SQS for the queue and have the existing web tier publish messages then trigger AWS Lambda to process each file and store results in Amazon S3

  • ❏ B. Use Amazon MQ for the queue and modify the web tier to publish messages then spin up an Amazon EC2 instance when messages arrive to process files and save results on Amazon EFS and stop the instance when done

  • ❏ C. Use Amazon SQS for the queue and have the web tier publish messages then run Amazon EC2 instances in an Auto Scaling group that scales on SQS queue depth to process files and store results in Amazon S3

  • ❏ D. Use Amazon MQ for the queue and have the web tier publish messages then trigger AWS Lambda to process each file and write outputs to Amazon EFS

Question 19

The operations group at Orion Metrics runs a licensed application on Amazon EC2 that stores shared files on an Amazon EFS file system that is encrypted with AWS KMS. The file system is protected by AWS Backup with the default backup plan. The business now requires a recovery point objective of 90 minutes for these files. What should a solutions architect change to meet this objective while keeping encryption in place?

  • ❏ A. Create a new backup plan and update the KMS key policy to allow the AWSServiceRoleForBackup service role to use the key, then run a backup every 45 minutes by using a custom cron expression

  • ❏ B. Use the existing backup plan, update the KMS key policy to allow the AWSServiceRoleForBackup role to use the key, and enable cross Region replication for the EFS file system

  • ❏ C. Create a dedicated IAM role for backups and a new backup plan, update the KMS key policy to permit that role to use the key, and schedule backups every hour

  • ❏ D. Create a new IAM role, keep the current backup plan, update the KMS key policy to allow the new role to use the key, and enable continuous backups for point in time recovery

Question 20

The platform group at SkyVertex Systems reports a rise in errors on PUT operations against their public REST API. Logs indicate that one client is sending large request bursts that exhaust capacity. They want to protect other users and keep responses user friendly while avoiding changes to backend code. What should the solutions architect recommend?

  • ❏ A. Attach AWS WAF to the API Gateway and create a rate based rule that limits bursts from a single source

  • ❏ B. Configure API Gateway usage plans with per key throttling limits and have the client handle HTTP 429 responses gracefully

  • ❏ C. Set reserved concurrency on the Lambda integration to handle sudden spikes

  • ❏ D. Enable API caching on the production stage and run 20 minute load tests to tune cache capacity

Question 21

PixelForge Studios plans to launch a real time multiplayer quiz application on AWS for internet users. The service will run on one Amazon EC2 instance and clients will connect using UDP. Leadership requires a highly secure architecture while keeping the design simple. As the Solutions Architect, what actions should you implement? (Choose 3)

  • ❏ A. Deploy AWS Global Accelerator with an Elastic Load Balancer as the endpoint

  • ❏ B. Place a Network Load Balancer in front of the EC2 instance and create a Route 53 record game.example.com that resolves to the NLB Elastic IP address

  • ❏ C. Create AWS WAF rules to drop any non UDP traffic and attach them to the load balancer

  • ❏ D. Enable AWS Shield Advanced on all internet facing resources

  • ❏ E. Use an Application Load Balancer in front of the instance and publish a friendly DNS name in Amazon Route 53 that aliases to the ALB public name

  • ❏ F. Configure subnet network ACLs to deny all protocols except UDP and associate them to the subnets that contain the load balancer nodes

Question 22

A regional home goods retailer named BayTrail Living runs its shopping site on three Amazon EC2 instances behind an Application Load Balancer, and the application stores order data in an Amazon DynamoDB table named OrdersProd. Traffic surges during quarterly flash sales and throughput on reads and writes degrades at the busiest moments. What change will provide a scalable architecture that rides through peaks with the least development effort?

  • ❏ A. Add DynamoDB Accelerator DAX and keep the existing EC2 fleet and ALB

  • ❏ B. Replatform the web tier to AWS Lambda and increase provisioned read capacity and write capacity for the DynamoDB table

  • ❏ C. Create Auto Scaling groups for the web tier and enable DynamoDB auto scaling

  • ❏ D. Create Auto Scaling groups for the web tier and add Amazon SQS with a Lambda function to batch writes into DynamoDB

Question 23

HarborPeak Logistics is moving a dual-tier web platform from its on-premises environment into AWS. The team will use Amazon Aurora PostgreSQL-Compatible Edition, EC2 Auto Scaling, and an Elastic Load Balancer to support a rapidly expanding audience. The application is stateful because it keeps session data in memory and users expect consistent interactions during traffic spikes. Which approach will ensure session consistency while allowing both the application tier and the database tier to scale?

  • ❏ A. Enable Aurora Replicas auto scaling and use a Network Load Balancer configured with least outstanding requests and stickiness

  • ❏ B. Enable Aurora Replicas auto scaling and place an Application Load Balancer in front with round robin routing and sticky sessions turned on

  • ❏ C. Turn on auto scaling for Aurora writers and use a Network Load Balancer with least outstanding requests and stickiness

  • ❏ D. Turn on auto scaling for Aurora writers and use an Application Load Balancer with round robin routing and sticky sessions

Question 24

Ridgeview Analytics operates an internal reporting tool that writes CSV exports to an Amazon S3 bucket. The files contain confidential information and are normally accessed only by the company’s IAM users. The team needs to share one specific CSV file with an external auditor for a 36 hour review. A solutions architect used an IAM user to call PutObjectAcl to add a public read ACL to that object, but the request returned “AccessDenied”. What is the most likely reason this operation failed?

  • ❏ A. The bucket has the BlockPublicPolicy setting turned on

  • ❏ B. S3 Object Lock in compliance mode is enabled for the object version

  • ❏ C. The bucket is configured with BlockPublicAcls enabled

  • ❏ D. The IAM user is not listed on the object ACL with write ACL permission

Question 25

RiverStone Capital uses AWS Control Tower and needs to apply cost governance across more than 320 developer accounts inside a Sandbox organizational unit. The company wants to require burstable EC2 and RDS instance classes and to block services that do not apply to their workloads. What should a solutions architect propose?

  • ❏ A. Define a custom detective guardrail in AWS Control Tower that flags non burstable instance launches and disallowed services and apply it to the Sandbox OU

  • ❏ B. Use Google Cloud Organization Policy constraints to restrict machine types and services across development projects

  • ❏ C. Craft a Service Control Policy in AWS Organizations that permits only burstable EC2 and RDS instance families and denies nonessential services and attach it to the Sandbox OU

  • ❏ D. Implement a custom preventive guardrail in AWS Control Tower that enforces only burstable EC2 and RDS instance types and blocks nonapproved services and enable it on the Sandbox OU

AWS Solutions Architect Professional Exam Dump Answers

Question 1

Orion Retail Group operates about 18 AWS accounts for its online storefronts and partner integrations. Each EC2 instance runs the unified CloudWatch agent and publishes logs to CloudWatch Logs. The company must consolidate all security events in a separate AWS account that is dedicated to immutable log retention. The network operations team needs to aggregate and correlate events from every account with near real time latency so analysts can respond within one minute. What is the best way to implement this cross account centralized ingestion?

  • ✓ B. Create a CloudWatch Logs destination in the logging account that targets a Kinesis Data Firehose delivery stream and configure subscription filters on the security log groups in each source account to stream events to that destination and deliver them to an S3 bucket in the logging account

The correct choice is Create a CloudWatch Logs destination in the logging account that targets a Kinesis Data Firehose delivery stream and configure subscription filters on the security log groups in each source account to stream events to that destination and deliver them to an S3 bucket in the logging account.

This design uses CloudWatch Logs subscription filters to stream events continuously from every source account with near real time latency. A destination in the logging account with an appropriate resource policy enables cross account delivery. Kinesis Data Firehose then buffers, compresses and encrypts the data and delivers it to Amazon S3 in the logging account. Firehose buffering can be tuned to small sizes or short intervals which supports the one minute response requirement. S3 provides durable centralized storage and can be configured for immutable retention with Object Lock if needed.
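The following is a minimal boto3 sketch of that wiring, for illustration only. The destination name, delivery stream ARN, role ARNs, account IDs, and log group name are hypothetical placeholders, the IAM roles and the Firehose delivery stream are assumed to already exist, and the two clients stand in for sessions in the logging account and in one source account.

    import json
    import boto3

    logs_central = boto3.client("logs", region_name="us-east-1")  # session in the logging account
    logs_source = boto3.client("logs", region_name="us-east-1")   # session in one source account

    # 1) In the logging account, create a destination that fronts the Firehose delivery stream.
    destination = logs_central.put_destination(
        destinationName="SecurityEventsDestination",
        targetArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/security-events",  # hypothetical
        roleArn="arn:aws:iam::111122223333:role/CWLtoFirehoseRole",  # role CloudWatch Logs assumes
    )["destination"]

    # 2) Allow the source accounts to create subscription filters against the destination.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "444455556666"},  # one of the 18 source accounts
            "Action": "logs:PutSubscriptionFilter",
            "Resource": destination["arn"],
        }],
    }
    logs_central.put_destination_policy(
        destinationName="SecurityEventsDestination",
        accessPolicy=json.dumps(policy),
    )

    # 3) In each source account, stream the security log group to the cross account destination.
    logs_source.put_subscription_filter(
        logGroupName="/ec2/security-events",   # hypothetical log group
        filterName="to-central-logging",
        filterPattern="",                      # empty pattern forwards every event
        destinationArn=destination["arn"],
    )

Repeating the subscription filter call in each of the 18 source accounts, for example through AWS CloudFormation StackSets, completes the fan in.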

Create an IAM role in every source account and run a Lambda function in the logging account every 45 minutes that assumes the role and exports each CloudWatch Logs group to an S3 bucket in the logging account is not suitable because CloudWatch Logs exports are batch operations and are not near real time. They introduce significant delay and operational overhead across many accounts.

Configure CloudWatch Logs in each source account to forward events directly to CloudWatch Logs in the logging account and then subscribe a Kinesis Data Firehose stream to Amazon EventBridge to store the data in S3 is incorrect because CloudWatch Logs does not forward directly to another Logs group. Subscription filters deliver only to Kinesis Data Streams, Kinesis Data Firehose or Lambda. Firehose is not subscribed to EventBridge in this way and this path would not provide the required streaming flow.

Send the security events into Google Pub/Sub and process them with Dataflow to BigQuery and then export the results to Amazon S3 is not appropriate because it introduces a different cloud platform without any benefit for this requirement. It adds latency, cost, and complexity and does not align with native AWS cross account streaming.

Cameron’s AWS Architect Exam Tip

When you see a requirement to centralize CloudWatch Logs across many accounts with near real time delivery to S3, look for subscription filters to a cross account destination that targets Kinesis Data Firehose.

Question 2

Sundial Apps runs a RESTful API on Amazon EC2 instances in an Auto Scaling group across three private subnets. An Application Load Balancer spans two public subnets and is configured as the only origin for an Amazon CloudFront distribution. The team needs to ensure that only CloudFront can reach the ALB and that direct access from the internet is blocked at the origin. What approach should the solutions architect choose to strengthen origin security?

  • ✓ C. Store a random token in AWS Secrets Manager with automated rotation using AWS Lambda then have CloudFront pass the token in a custom origin header and enforce a header match rule in an AWS WAF web ACL that is associated to the ALB

The correct option is Store a random token in AWS Secrets Manager with automated rotation using AWS Lambda then have CloudFront pass the token in a custom origin header and enforce a header match rule in an AWS WAF web ACL that is associated to the ALB.

This approach ensures that only requests carrying a secret header value reach the Application Load Balancer because CloudFront is configured to add the header on every origin request. You attach an AWS WAF web ACL to the ALB and create a rule that allows traffic only when the expected header and value are present, which blocks direct internet requests that do not include the secret. CloudFront natively supports adding custom headers to origin requests and AWS WAF can evaluate HTTP headers at the ALB. Managing the secret in AWS Secrets Manager with rotation through Lambda keeps the token fresh without manual updates which strengthens the security posture while remaining operationally sound.
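A hedged wafv2 sketch of that header check is shown below. The web ACL name, header name, token value, and ALB ARN are placeholders, and in a real deployment the token would be read from Secrets Manager and refreshed by the rotation Lambda rather than hard coded.

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # Default action blocks everything; the single rule allows requests that carry
    # the secret header value CloudFront injects on each origin request.
    acl = wafv2.create_web_acl(
        Name="alb-cloudfront-only",    # hypothetical name
        Scope="REGIONAL",              # REGIONAL scope for an ALB association
        DefaultAction={"Block": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "albCloudFrontOnly",
        },
        Rules=[{
            "Name": "allow-origin-token",
            "Priority": 0,
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allowOriginToken",
            },
            "Statement": {
                "ByteMatchStatement": {
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-token"}},
                    "SearchString": b"rotate-me-with-secrets-manager",  # placeholder token value
                    "PositionalConstraint": "EXACTLY",
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                },
            },
        }],
    )

    # Associate the web ACL with the internet facing ALB.
    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/api-alb/abc123",
    )

Because the default action blocks traffic, any request that arrives without the CloudFront injected header never reaches the targets.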

Associate an AWS WAF web ACL to the ALB with an IP match that includes the published CloudFront service ranges and migrate the ALB into two private subnets is incorrect because placing the ALB in private subnets would prevent CloudFront from reaching it since CloudFront connects to origins over the public internet. Relying on IP match rules for CloudFront address ranges is also brittle and operationally heavy as address ranges change.

Enable AWS Shield Advanced and attach a security group policy that only permits CloudFront service addresses to the ALB is incorrect because Shield Advanced provides DDoS protections and visibility but it does not manage or attach security group policies, and it cannot be used to restrict origin access to only CloudFront.

Save a shared key in AWS Systems Manager Parameter Store with rotation configured then configure CloudFront to send the key as a custom header and add custom code on the target instances to check the header and drop requests without the key is incorrect because Parameter Store does not provide native secret rotation like Secrets Manager and enforcing the check in application code allows unwanted traffic to reach the instances before being rejected which is less secure and less efficient than blocking at the ALB with AWS WAF.

Cameron’s AWS Architect Exam Tip

When CloudFront must be the only client for an ALB, prefer a secret custom header added by CloudFront and enforced by an ALB-attached AWS WAF rule. Verify whether the origin must remain publicly reachable since CloudFront requires a public origin for ALB targets.

Question 3

Riverbend Logistics plans to run its connected van telemetry platform on AWS to collect signals from roughly 9,000 vehicles every three seconds so it can update routes and estimated arrival times in near real time. The engineering team requires a fully serverless design that scales automatically without any shard or instance capacity to size or tune, and the team does not want to intervene during traffic spikes. As the AWS Certified Solutions Architect Professional advising this initiative, which approach should they implement?

  • ✓ C. Send messages to an Amazon SQS standard queue and have an AWS Lambda function process batches and write them to an auto scaled Amazon DynamoDB table

The correct option is Send messages to an Amazon SQS standard queue and have an AWS Lambda function process batches and write them to an auto scaled Amazon DynamoDB table.

This choice is fully serverless and requires no shard or instance capacity to size or tune. SQS absorbs bursty traffic and provides buffering and backpressure so Lambda scales concurrency automatically as queue depth grows and slows when traffic subsides. Batch processing improves throughput and cost efficiency while retries and dead letter queues increase resilience with no operator intervention. Using DynamoDB with on demand capacity or auto scaling removes the need to pre provision write capacity so the database scales automatically with the ingestion rate. This meets the near real time requirement for frequent telemetry updates while keeping operations hands off during spikes.
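A minimal Lambda handler for this pattern might look like the sketch below. The table name, item attributes, and message shape are assumptions, and the SQS event source mapping is assumed to have batching and ReportBatchItemFailures enabled so that only failed messages are retried.

    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("VehicleTelemetry")  # hypothetical on-demand table

    def handler(event, context):
        """Triggered by an SQS event source mapping that delivers telemetry in batches."""
        failures = []
        with table.batch_writer() as batch:
            for record in event["Records"]:
                try:
                    message = json.loads(record["body"])
                    batch.put_item(Item={
                        "vehicle_id": message["vehicle_id"],   # partition key
                        "ts": message["timestamp"],            # assumed ISO 8601 string sort key
                        "lat": str(message["lat"]),            # stored as strings to avoid
                        "lon": str(message["lon"]),            # float to Decimal conversion
                        "speed_kph": str(message["speed_kph"]),
                    })
                except Exception:
                    # Malformed messages are reported individually so SQS retries only those.
                    failures.append({"itemIdentifier": record["messageId"]})
        return {"batchItemFailures": failures}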

Stream the data into Amazon Kinesis Data Firehose and configure it to deliver records directly into an Amazon DynamoDB table is incorrect because Firehose does not support DynamoDB as a delivery destination. Even if it did, the requirement to write straight into DynamoDB would not be met by this service.

Publish the telemetry to an Amazon SNS topic that invokes an AWS Lambda function which stores items in an Amazon DynamoDB table is not the best fit because SNS is push based and lacks buffering and backpressure. During spikes it can drive very high Lambda concurrency and overwhelm DynamoDB which often requires manual controls to avoid throttling.

Ingest the telemetry into an Amazon Kinesis Data Stream and use a consumer application on Amazon EC2 to read the stream and store data in an Amazon DynamoDB table is not fully serverless because it relies on EC2 instances to consume the stream. In addition, Kinesis Data Streams commonly requires shard capacity planning and tuning which conflicts with the requirement to avoid managing shards.

Cameron’s AWS Architect Exam Tip

When a question emphasizes fully serverless with no shards to manage and no operator action during spikes, favor patterns that provide buffering and backpressure such as SQS triggering Lambda and pair them with DynamoDB on demand capacity.

Question 4

EduNova at example.com plans to distribute private downloads and a subscriber only library using Amazon CloudFront. Only newly registered customers who have paid for a 12 month plan should be allowed to fetch the desktop installer, and only active subscribers should be able to view any files in the members area. What should the architect implement to enforce these restrictions while keeping delivery efficient with CloudFront? (Choose 2)

  • ✓ A. Configure CloudFront signed cookies to authorize access to all content in the members area

  • ✓ D. Configure CloudFront signed URLs to protect the installer download

The correct options are Configure CloudFront signed URLs to protect the installer download and Configure CloudFront signed cookies to authorize access to all content in the members area.

Using the first option for the installer lets the application issue a time limited link only after confirming the customer is newly registered and has paid for the 12 month plan. This ensures precise per object control with expirations and optional constraints while still letting CloudFront cache and deliver efficiently.

Using the second option for the members area lets a single authorization grant cover many objects under the protected paths. This avoids creating and managing a separate link for every file and it keeps delivery efficient because the edge can continue to serve cached content to authorized subscribers.
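As an illustration of the installer flow, the backend could mint a short lived signed URL with the CloudFrontSigner helper from botocore once the purchase check passes. The key pair ID, private key location, and download URL below are hypothetical, and the example uses the third party rsa package that is commonly paired with this helper.

    from datetime import datetime, timedelta, timezone

    import rsa
    from botocore.signers import CloudFrontSigner

    KEY_PAIR_ID = "K2JCJMDEHXQW5F"                     # hypothetical CloudFront public key ID
    PRIVATE_KEY_PATH = "/secrets/cloudfront-signer.pem"

    def _rsa_signer(message: bytes) -> bytes:
        with open(PRIVATE_KEY_PATH, "rb") as key_file:
            private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
        return rsa.sign(message, private_key, "SHA-1")  # canned policy signatures use SHA-1 RSA

    def installer_url_for(paid_annual_customer: bool) -> str:
        if not paid_annual_customer:
            raise PermissionError("Only 12 month plan customers may download the installer")
        signer = CloudFrontSigner(KEY_PAIR_ID, _rsa_signer)
        return signer.generate_presigned_url(
            "https://downloads.example.com/installer/desktop-setup.exe",  # hypothetical object
            date_less_than=datetime.now(timezone.utc) + timedelta(hours=2),
        )

Signed cookies for the members area follow the same signing model, except the policy is delivered as a set of cookies that authorize every request under the protected path instead of a single URL.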

Cloud CDN signed URLs is incorrect because it refers to a Google Cloud service while the scenario uses Amazon CloudFront, so it does not apply to this distribution.

Configure CloudFront signed cookies to authorize access to the single installer object is not the best choice because cookies are better for granting access to multiple objects, and per object downloads are more cleanly controlled with a single signed link.

Configure CloudFront signed URLs for every object in the members area would be operationally heavy and error prone because it requires generating and attaching a link for every file and it does not scale as well as a single cookie that authorizes access across the protected paths.

Cameron’s AWS Architect Exam Tip

Match the mechanism to the scope of access. Use signed URLs for a single file or a small set and use signed cookies when you need to authorize many files under common paths. Watch for provider mismatches such as choosing a GCP feature when the scenario specifies AWS.

Question 5

The platform engineering team at Riverbend Manufacturing plans to move several hundred virtual machines from two colocation facilities into AWS, and they must inventory their workloads, map service dependencies, and produce a consolidated assessment report. They have already initiated a Migration Evaluator engagement and they are allowed to install collection software on all VMs. Which approach will deliver the required insights with the least operational effort?

  • ✓ B. Install the AWS Application Discovery Agent on every on-premises VM. After a 30 day collection window, use AWS Migration Hub to view application dependencies. Download the Quick Insights assessment report directly from Migration Hub

The correct option is Install the AWS Application Discovery Agent on every on-premises VM. After a 30 day collection window, use AWS Migration Hub to view application dependencies. Download the Quick Insights assessment report directly from Migration Hub. This path collects the detailed host and network telemetry needed for dependency mapping and exposes the Quick Insights report in Migration Hub, which satisfies the inventory, dependency, and assessment requirements with minimal overhead.

The Application Discovery Agent gathers process and network connection data in addition to system and performance metrics. Migration Hub uses that agent data to visualize server to server dependencies so the team can map services accurately. Because they already have a Migration Evaluator engagement, Migration Hub can surface the Quick Insights assessment directly without extra export or transform steps, which reduces operational effort while producing the consolidated report they need.

Configure the AWS Application Discovery Service Agentless Collector in the data centers. After a 30 day collection window, use AWS Migration Hub to inspect dependency maps. Export the server inventory and upload it to Migration Evaluator to generate the Quick Insights assessment is incorrect because the agentless collector does not capture the process and network flow details required for dependency maps. It also adds unnecessary steps to export and upload data when Quick Insights can be accessed in Migration Hub once agent data is available.

Deploy the Migration Evaluator Collector to all VMs. When the 30 day collection completes, use Migration Evaluator to review discovered servers and dependencies. Export the inventory to Amazon QuickSight and then download the Quick Insights assessment from the generated dashboard is incorrect because Migration Evaluator focuses on cost and right sizing insights and does not build application dependency maps. Exporting to Amazon QuickSight is not required to obtain Quick Insights and would add avoidable work.

Set up the Migration Evaluator Collector in the environment and also install the AWS Application Discovery Agent on each VM. After the 30 day run, use AWS Migration Hub for dependency visualization and retrieve the Quick Insights assessment from Migration Evaluator is incorrect because running two collectors increases operational effort without adding necessary capability. With the agent in place, Migration Hub already provides dependency visualization and can present the Quick Insights assessment directly.

Cameron’s AWS Architect Exam Tip

When a question emphasizes dependency mapping for on premises servers choose the Application Discovery Agent because it collects process and network connection data that the agentless collector does not. For least operational effort avoid stacking multiple collectors when one satisfies all stated requirements.

Question 6

A fintech startup named LumaPay lets customers submit high resolution receipt photos from a mobile app to validate cashback offers. The app stores images in an Amazon S3 bucket in the us-east-2 Region. The business recently launched across several European countries and those users report long delays when sending images from their phones. What combination of changes should a Solutions Architect implement to speed up the image upload experience for these users? (Choose 2)

  • ✓ B. Update the mobile app to use Amazon S3 multipart upload

  • ✓ D. Enable Amazon S3 Transfer Acceleration on the bucket

The correct options are Update the mobile app to use Amazon S3 multipart upload and Enable Amazon S3 Transfer Acceleration on the bucket.

Update the mobile app to use Amazon S3 multipart upload improves performance for large images over long distance and variable mobile networks because the client can upload parts in parallel, retry only failed parts, and maximize throughput. This reduces the impact of high latency and intermittent connectivity that European users experience when sending high resolution photos to a bucket in another continent.

Enable Amazon S3 Transfer Acceleration on the bucket speeds uploads from geographically distant clients by directing them to the nearest acceleration endpoint and then routing traffic over the AWS global network to the bucket. European users connect to nearby edge locations which reduces first mile latency and yields faster and more consistent uploads. The app must use the accelerate endpoint for this benefit.
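Expressed with the Python SDK for brevity, a sketch that combines both changes might look like the following, with the bucket name, Region, and file path as placeholders. Transfer Acceleration must already be enabled on the bucket, and the native mobile SDKs expose equivalent accelerate and multipart settings.

    import boto3
    from boto3.s3.transfer import TransferConfig
    from botocore.config import Config

    # Route requests through the nearest S3 Transfer Acceleration edge endpoint.
    s3 = boto3.client(
        "s3",
        region_name="us-east-2",
        config=Config(s3={"use_accelerate_endpoint": True}),
    )

    # Split large receipt photos into parts that upload in parallel and retry individually.
    multipart = TransferConfig(
        multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MB
        multipart_chunksize=8 * 1024 * 1024,
        max_concurrency=4,
    )

    s3.upload_file(
        Filename="/tmp/receipt-4821.jpg",      # hypothetical local file
        Bucket="lumapay-receipts-use2",        # hypothetical bucket with acceleration enabled
        Key="uploads/receipt-4821.jpg",
        Config=multipart,
    )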

Create an Amazon CloudFront distribution with the S3 bucket as the origin is designed to cache and accelerate content delivery to viewers, primarily for downloads. It does not provide a simple or supported path for end user clients to upload directly to an S3 origin, so it would not meaningfully improve the mobile upload experience.
