Professional AWS Solutions Architect Practice Questions

AWS Solutions Architect Exam Topics

Preparing for the AWS Certified Solutions Architect Professional exam requires both depth and precision. You can begin with AWS Solutions Architect Professional Practice Questions and Real AWS Solutions Architect Professional Exam Questions to understand the style, tone, and logic of the real certification test.

This credential demonstrates your ability to design, deploy, and manage complex AWS solutions that are secure, cost-efficient, and highly available. It validates your expertise in advanced architecture design, hybrid environments, governance, and cost optimization.

AWS Architect Exam Simulators

The AWS Solutions Architect Professional certification confirms your capacity to choose the right AWS services, define migration strategies, and ensure performance and reliability for global-scale applications.

To prepare effectively, use the AWS Solutions Architect Professional Exam Simulator. It mirrors the timing, structure, and challenge of the real AWS certification, helping you assess readiness under realistic conditions.

Real AWS Architect Exam Questions

Each of the AWS Solutions Architect Professional Questions and Answers includes a clear explanation to reinforce key topics like VPC design, hybrid connectivity, high availability, and security best practices. These resources are not about memorization. They help you think critically about trade-offs, scaling patterns, and real-world AWS architecture decisions.

If you seek realistic practice content, explore Professional Solutions Architect Exam Questions and Professional Solutions Architect Practice Test. They are designed to help you master both the theory and application of AWS architecture design.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Professional Solutions Architect Practice Exam

Question 1

Ardent Mutual is moving users from traditional desktops to Amazon WorkSpaces with zero clients so that staff can access claims processing applications, and the security policy mandates that connections must originate only from corporate branch networks while a new branch will go live within 45 days. Which approach will satisfy these controls while providing the highest operational efficiency?

  • ❏ A. Create an AWS Client VPN endpoint and require WorkSpaces users to connect through the VPN from branch networks

  • ❏ B. Publish a custom WorkSpaces image that uses the Windows Firewall to allow only the public IP ranges of the branches

  • ❏ C. Configure a WorkSpaces IP access control group with the public IP addresses of the office sites and attach it to the directory

  • ❏ D. Issue device certificates with AWS Certificate Manager to office endpoints and enable device trust on the WorkSpaces directory

Question 2

LumaPress, a global publishing firm with operations on several continents, runs fifteen separate AWS accounts that are administered by local IT teams. The headquarters governance team must gain centralized visibility and enforce consistent security standards across every member account. The architect has already enabled AWS Organizations and all member accounts have joined. What should be done next to allow the headquarters team to audit and manage the security posture across these accounts?

  • ❏ A. Create an IAM policy named SecurityAudit in each member account and attach it to the regional administrator groups

  • ❏ B. Create a SecurityAudit IAM role in every member account and set a trust policy that allows the management account to assume it

  • ❏ C. Provision a dedicated IAM user for the headquarters team in each member account with AdministratorAccess

  • ❏ D. Google Cloud Organization Policy

Question 3

As part of a staged move out of a corporate data center into AWS, a solutions architect at Northstar Analytics must automatically discover network dependencies over the last 21 days for their on premises Linux virtual machines that run on VMware and produce a diagram that includes host IP addresses, hostnames, and TCP connection details. Which approach should be used to accomplish this?

  • ❏ A. Use the AWS Application Discovery Service Agentless Collector for VMware to gather server inventory and then export topology images from AWS Migration Hub as .png files

  • ❏ B. Deploy the AWS Application Discovery Service agent on the Linux servers after setting an AWS Migration Hub home Region and grant the service permission to publish discovery data so network diagrams can be created

  • ❏ C. Install the AWS Systems Manager Agent on the VMs and enable Inventory and Application Manager to capture software and relationship data across the fleet

  • ❏ D. Install the AWS Application Migration Service replication agent on the VMs and use Workload Discovery on AWS to build network diagrams from data stored in Migration Hub

Question 4

Helios Imaging is launching a mobile app that uploads pictures for processing, and usage jumps to 25 times normal during evening beta runs. The solutions architect needs a design that scales image processing elastically and also notifies users on their devices when their jobs complete. Which actions should the architect implement to meet these requirements? (Choose 3)

  • ❏ A. Publish a message to Amazon SNS to send a mobile push notification when processing is finished

  • ❏ B. Configure Amazon MQ as the target for S3 notifications so that workers can read from the broker

  • ❏ C. Process each image in an AWS Lambda function that is invoked by messages from the SQS queue

  • ❏ D. Use S3 Batch Operations to process objects individually when a message arrives in the queue

  • ❏ E. Have the mobile app upload directly to Amazon S3 and configure an S3 event that posts messages to an Amazon SQS standard queue

  • ❏ F. Send a push notification to the app by using Amazon Simple Email Service when processing is complete

Question 5

Bryer Logistics operates an Amazon FSx for Windows File Server that was deployed as Single-AZ 2 for back-office file shares. A new corporate standard now requires highly available file storage across Availability Zones for all teams. The operations group must also observe storage and performance on the file system and capture detailed end-user access events on the FSx share for audit. Which combination of changes and monitoring should the team implement? (Choose 2)

  • ❏ A. Recreate the file system as Single-AZ 1 and move data with AWS DataSync, then depend on snapshots for continuity

  • ❏ B. Use Amazon CloudWatch to watch capacity and performance metrics for FSx and enable file access auditing to publish user access events to CloudWatch Logs or stream them with Amazon Kinesis Data Firehose

  • ❏ C. Track file system activity with AWS CloudTrail and log end-user actions to CloudWatch Logs

  • ❏ D. Set up a new Amazon FSx for Windows File Server with Multi-AZ, copy data with AWS DataSync, update client mappings, and trigger a controlled failover by changing the file system throughput capacity

  • ❏ E. Test Multi-AZ failover by modifying or deleting the elastic network interfaces that FSx created

Question 6

A regional credit union needs the ability to shift 420 employees to remote work within hours during a crisis. Their environment includes Windows and Linux desktops that run office productivity and messaging applications. The solution must integrate with the company’s existing on premises Active Directory so employees keep their current credentials. It must also enforce multifactor authentication and present a desktop experience that closely resembles what users already use. Which AWS solution best satisfies these needs?

  • ❏ A. Use Amazon AppStream 2.0 to stream applications and customize a desktop style image while connecting to the data center over a site to site VPN and integrating identity with Active Directory Federation Services

  • ❏ B. Deploy Amazon WorkSpaces and link it to the on premises Active Directory through an AD Connector over a site to site VPN and enable MFA by configuring a RADIUS server for WorkSpaces

  • ❏ C. Provision Amazon WorkSpaces and connect it to the on premises network with a VPN and an AD Connector and enable MFA directly in the WorkSpaces console without a RADIUS service

  • ❏ D. Use Amazon WorkSpaces Web with SAML federation to Active Directory Federation Services and require MFA at the identity provider while publishing access over a site to site VPN

Question 7

Evergreen Fabrication needs an inexpensive Amazon S3 based backup for its data center file shares that must present NFS to on-premises servers. The business wants the data to transition to an archive tier after 7 days and it accepts that disaster recovery restores can take several days. Which approach best satisfies these needs at the lowest cost?

  • ❏ A. Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Standard Infrequent Access after 7 days

  • ❏ B. Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Glacier Deep Archive after 7 days

  • ❏ C. Deploy AWS Storage Gateway volume gateway with an S3 bucket and configure a lifecycle rule to move objects to S3 Glacier Deep Archive after 7 days

  • ❏ D. Migrate the file shares to Amazon EFS and enable EFS lifecycle to infrequent access to reduce costs

Question 8

PixelForge Studios runs several cross platform titles that track session state, player profiles, match history, and a global scoreboard, and the company plans to migrate these systems to AWS to handle tens of millions of concurrent players and API requests while keeping latency in the low single digit milliseconds. The engineering group needs an in memory data layer that can power a highly available real time and personalized leaderboard at internet scale. Which approach should they implement? (Choose 2)

  • ❏ A. Build the leaderboard on Amazon Neptune to model player relationships and query results quickly

  • ❏ B. Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale

  • ❏ C. Use Google Cloud Spanner to host score data for globally consistent and highly available storage

  • ❏ D. Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time

  • ❏ E. Store scores directly in Amazon DynamoDB and query the table for ranking

Question 9

The platform engineering team at MeridianLabs needs to manage AWS infrastructure as code to support a rapid scale up. Their current footprint runs in one AWS Region, but their near term plan requires standardized deployments into three AWS Regions and six AWS accounts with centralized governance. What should the solutions architect implement to meet these requirements?

  • ❏ A. Author AWS CloudFormation templates, attach per account IAM policies, and run the templates in each target Region independently

  • ❏ B. Use AWS Organizations with AWS Control Tower, then push CloudFormation stacks from the management account to every account

  • ❏ C. Adopt AWS Organizations with AWS CloudFormation StackSets, assign a delegated administrator, and distribute one template across many accounts and Regions from a single control point

  • ❏ D. Publish standardized products in AWS Service Catalog and share them to all accounts so each team can launch the products in their own Region

Question 10

BrightBazaar, an online marketplace, runs a static site on Amazon S3 served through Amazon CloudFront, with Amazon API Gateway invoking AWS Lambda for cart and checkout, and the functions write to an Amazon RDS for MySQL cluster that currently uses On-Demand instances with steady utilization for the last 18 months. Security scans and logs show repeated SQL injection and other web exploit attempts, customers report slower checkout during traffic spikes, and the team observes Lambda cold starts at peak times. The company wants to keep latency low under bursty load, reduce database spend given the stable usage pattern, and add targeted protection against SQL injection and similar attacks. Which approach best satisfies these goals?

  • ❏ A. Increase the memory of the Lambda functions, migrate the transactional database to Amazon Redshift, and integrate Amazon Inspector with CloudFront

  • ❏ B. Configure Lambda with provisioned concurrency for anticipated surges, purchase RDS Reserved Instances for the MySQL cluster, and attach AWS WAF to the CloudFront distribution with managed rules for SQL injection and common exploits

  • ❏ C. Raise Lambda timeouts during peak hours, switch the database to RDS Reserved Instances, and subscribe to AWS Shield Advanced on CloudFront

  • ❏ D. Migrate the workload to Google Cloud SQL and place the site behind Cloud CDN with security policies enforced by Cloud Armor

Question 11

SummitPay processes purchase events in Amazon DynamoDB tables and the risk operations group needs to detect suspicious activity and requires that every item change be captured and available for analysis within 45 minutes. What should a Solutions Architect implement to satisfy this requirement while enabling near real-time anomaly detection and alerting?

  • ❏ A. Forward data changes to Google Cloud Pub/Sub and run streaming detection in Dataflow with notifications through Cloud Functions

  • ❏ B. Enable DynamoDB Streams on the tables and trigger an AWS Lambda function that writes the change records to Amazon Kinesis Data Streams then analyze anomalies with Amazon Kinesis Data Analytics and send alerts with Amazon SNS

  • ❏ C. Export the DynamoDB tables to Apache Hive on Amazon EMR every hour and run batch queries to flag anomalies then publish Amazon SNS notifications

  • ❏ D. Use AWS CloudTrail to record all DynamoDB write APIs and create Amazon SNS notifications using CloudTrail event filters when behavior looks suspicious

Question 12

North Coast Furnishings plans to move an undocumented set of VMware vSphere virtual machines into AWS for a consolidation effort and the team needs an easy way to automatically discover and inventory the VMs for migration planning with minimal ongoing work. Which approach best meets these needs while keeping operational overhead low?

  • ❏ A. Install the AWS Application Migration Service agent on every VM then aggregate configuration and performance data into Amazon Redshift and build dashboards in Amazon QuickSight

  • ❏ B. Deploy the agentless Migration Evaluator collector to the ESXi hypervisor layer and review the results in Migration Evaluator then exclude idle VMs and send the inventory to AWS Migration Hub

  • ❏ C. Export the vCenter inventory to a .csv file and manually check disk and CPU usage for each server then import the subset into AWS Application Migration Service and migrate any remaining systems with AWS Server Migration Service

  • ❏ D. Run Google Migration Center discovery in the data center and use its right sizing report to plan the move

Question 13

The media platform team at VistaMosaic Media is experiencing failures in its image transformation workflow as uploaded files now average 85 MB. A Python 3.11 Lambda function reacts to Amazon S3 object created events, downloads the file from one bucket, performs the transformation, writes the output to another bucket, and updates a DynamoDB table named FramesCatalog. The Lambda frequently reaches the 900 second maximum timeout and the company wants a redesign that prevents these timeouts while avoiding any server management. Which combination of changes will satisfy these goals? (Choose 2)

  • ❏ A. Set up an AWS Step Functions workflow that invokes the existing Lambda in parallel and raise its provisioned concurrency

  • ❏ B. Migrate images to Amazon EFS and move metadata to Amazon RDS then mount the EFS file system from the Lambda function

  • ❏ C. Create an Amazon ECS task definition for AWS Fargate that uses the container image from Amazon ECR and update the Lambda to start a task when a new object lands in Amazon S3

  • ❏ D. Create an Amazon ECS task definition with the EC2 launch type and have the Lambda trigger those tasks on file arrival

  • ❏ E. Build a container image of the processor and publish it to Amazon ECR

Question 14

BlueOrbit Media runs analytics, recommendations and video processing on AWS and ingests about 25 TB of VPC Flow Logs each day to reveal cross Region traffic and place dependent services together for better performance. The logs are written to Amazon Kinesis Data Streams and that stream is configured as the source for a Kinesis Data Firehose delivery stream that forwards to an S3 bucket. The team installed Kinesis Agent on a new set of network appliances and pointed the agent at the same Firehose delivery stream, but no records appear at the destination. As the Solutions Architect Professional, what is the most likely root cause?

  • ❏ A. Kinesis Agent can only publish to Kinesis Data Streams and cannot send directly to Kinesis Data Firehose

  • ❏ B. Kinesis Data Firehose has reached a scaling limit and requires manual capacity increases

  • ❏ C. A Firehose delivery stream that uses a Kinesis data stream as its source does not accept direct writes from Kinesis Agent

  • ❏ D. The IAM role used by the Kinesis Agent is missing firehose:PutRecord permissions

Question 15

Harvest Logistics operates a three tier workload in a single AWS Region and must implement disaster recovery with a recovery time objective of 45 minutes and a recovery point objective of 4 minutes for the database layer. The web and API tiers run on stateless Amazon EC2 Auto Scaling groups and the data layer is a 24 TB Amazon Aurora MySQL cluster. Which combination of actions will meet these objectives while keeping costs under control? (Choose 2)

  • ❏ A. Use AWS Database Migration Service to continuously replicate from Aurora to an Amazon RDS MySQL instance in another Region

  • ❏ B. Run a minimal hot standby of the web and application layers in another Region and scale out during failover

  • ❏ C. Configure manual snapshots of the Aurora cluster every 4 minutes and copy them to another Region

  • ❏ D. Provision an Aurora MySQL cross Region read replica and plan to promote it during a disaster

  • ❏ E. Enable Multi AZ for the Aurora cluster and rely on automatic backups for recovery

Question 16

Marston Media purchased four boutique agencies and rolled their accounts into a single AWS Organization, yet the accounting team cannot easily produce combined cost views for every subsidiary and they want to feed the results into their own reporting application. What approach will enable consistent cross entity cost reporting for their self managed app?

  • ❏ A. Use the AWS Price List Query API to fetch per account pricing data and create a saved view in AWS Cost Explorer for the finance group

  • ❏ B. Publish an AWS Cost and Usage Report at the organization level and enable cost allocation tags and cost categories and have the accounting team rely on a reusable report in AWS Cost Explorer

  • ❏ C. Create an organization wide AWS Cost and Usage Report and turn on cost allocation tags and cost categories and query the CUR with Amazon Athena using an external table named acct_rollup_ledger and build and share an Amazon QuickSight dataset for the finance team

  • ❏ D. Configure AWS Billing Conductor to define billing groups for each subsidiary and export the pro forma charges for the self managed application to ingest

Question 17

A DevOps team at Riverbend Labs is trying to scale a compute intensive workload by adding more Amazon EC2 instances to an existing cluster placement group in a single Availability Zone, but the launches fail with an insufficient capacity error. What should the solutions architect do to troubleshoot this situation?

  • ❏ A. Use a spread placement group to distribute instances across distinct hardware in multiple Availability Zones

  • ❏ B. Create a new placement group and attempt to merge it with the existing placement group

  • ❏ C. Stop and then start all instances in the placement group and try the launches again

  • ❏ D. Google Compute Engine

Question 18

StreamForge Media is moving its catalog metadata API to a serverless design on AWS. About 30 percent of its older set top boxes fail when certain response headers are present and the on premises load balancer currently strips those headers according to the User Agent. The team must preserve this behavior for at least the next 12 months while directing requests to different Lambda functions based on request type. Which architecture should they implement to satisfy these goals?

  • ❏ A. Build an Amazon API Gateway REST API that integrates with multiple AWS Lambda functions per route and customize gateway responses to remove the offending headers for requests from specific User Agent values

  • ❏ B. Place an Amazon CloudFront distribution in front of an Application Load Balancer and have the ALB invoke the appropriate AWS Lambda function per request type and use a CloudFront Function to strip the problematic headers when the User Agent matches legacy clients

  • ❏ C. Use Google Cloud CDN with Cloud Load Balancing and Cloud Functions to route traffic and remove the headers for legacy user agents

  • ❏ D. Front the service with Amazon CloudFront and an Application Load Balancer and have a Lambda@Edge function delete the problematic headers during viewer responses based on the User Agent

Question 19

RoadGrid is a logistics SaaS that runs a multi-tenant fleet tracking platform using shared Amazon DynamoDB tables with AWS Lambda handling requests. Each carrier includes a unique carrier_id in every API call. The business wants to introduce tiered billing that charges tenants based on actual DynamoDB consumption for both reads and writes. The finance team already ingests hourly AWS Cost and Usage Reports into a centralized payer account and plans to use those reports for monthly chargebacks. They need the most precise and economical approach that requires minimal ongoing maintenance to measure and allocate DynamoDB usage per tenant. Which approach should they choose?

  • ❏ A. Enable DynamoDB Streams and process them with a separate Lambda to extract carrier_id and item size on every write then aggregate writes per tenant and map that to the overall DynamoDB bill

  • ❏ B. Use AWS Application Cost Profiler to ingest per tenant usage records from the app and produce allocated DynamoDB costs by customer

  • ❏ C. Log structured JSON from the Lambda handlers that includes carrier_id plus computed RCUs and WCUs for each request into CloudWatch Logs and run a scheduled Lambda to aggregate by tenant and pull the monthly DynamoDB spend from the Cost Explorer API to allocate costs by proportion

  • ❏ D. Apply a cost allocation tag for carrier_id to the shared DynamoDB table and activate the tag in the billing console then query the CUR by tag to analyze tenant consumption and cost

Question 20

Riverbend Media recently rolled out SAML 2.0 single sign on using its on premises identity provider to grant workforce access to its AWS accounts. The architect validated the setup with a lab account through the federation portal and the console opened successfully. Later three pilot users tried to sign in through the same portal and they could not get access to the AWS environment. What should the architect verify to confirm that the identity federation is configured correctly? (Choose 3)

  • ❏ A. Enable Google Cloud Identity Aware Proxy in front of the federation portal to pass authenticated headers to AWS

  • ❏ B. Configure each IAM role that federated users will assume to trust the SAML provider as the principal

  • ❏ C. Ensure the identity provider issues SAML assertions that map users or groups to specific IAM roles with the required permissions

  • ❏ D. Verify that the federation portal calls the AWS STS AssumeRoleWithSAML API with the SAML provider ARN the target role ARN and the SAML assertion from the identity provider

  • ❏ E. Require IAM users to enter a time based one time password during portal sign in

  • ❏ F. Add the on premises identity provider as an event source in AWS STS to process authentication requests

AWS Solutions Architect Professional Practice Exam Answered

Question 1

Ardent Mutual is moving users from traditional desktops to Amazon WorkSpaces with zero clients so that staff can access claims processing applications, and the security policy mandates that connections must originate only from corporate branch networks while a new branch will go live within 45 days. Which approach will satisfy these controls while providing the highest operational efficiency?

  • ✓ C. Configure a WorkSpaces IP access control group with the public IP addresses of the office sites and attach it to the directory

The correct option is Configure a WorkSpaces IP access control group with the public IP addresses of the office sites and attach it to the directory.

WorkSpaces IP access control groups enforce source IP allow lists at the WorkSpaces service edge, which ensures that streaming sessions can be initiated only from the specified corporate branch public IP ranges. Because the control is applied to the directory, it covers all WorkSpaces registered to that directory and is managed centrally. It is straightforward to add the new branch public IPs when they become available, which aligns with the 45 day timeline, and changes take effect quickly without reimaging WorkSpaces, installing agents, or changing user workflows. This approach also works with zero clients since enforcement occurs in the service rather than on the device.
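For illustration, here is a minimal boto3 sketch of this pattern. The group name, CIDR ranges, and directory ID are placeholders rather than values from the scenario.

```python
import boto3

workspaces = boto3.client("workspaces")

# Create an IP access control group that allows only the branch CIDRs.
response = workspaces.create_ip_group(
    GroupName="branch-offices",
    GroupDesc="Allow WorkSpaces sessions from corporate branches only",
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch A"},
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch B"},
    ],
)

# Attach the group to the directory so it covers every WorkSpace
# registered with it. When the new branch opens, add its range with
# update_rules_of_ip_group instead of touching images or agents.
workspaces.associate_ip_groups(
    DirectoryId="d-1234567890",
    GroupIds=[response["GroupId"]],
)
```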

Create an AWS Client VPN endpoint and require WorkSpaces users to connect through the VPN from branch networks is incorrect because zero clients typically cannot run a VPN client and this adds unnecessary infrastructure and operational overhead. It also does not inherently restrict access to branch origins unless you separately constrain the VPN, which duplicates the control that an IP access control group already provides while introducing additional latency and complexity.

Publish a custom WorkSpaces image that uses the Windows Firewall to allow only the public IP ranges of the branches is incorrect because filtering inside the guest OS does not control session establishment at the WorkSpaces edge. The WorkSpaces instance typically sees connections from the streaming gateways rather than the user’s public IP, so Windows Firewall rules would not reliably enforce branch-only access. This approach also creates image maintenance overhead.

Issue device certificates with AWS Certificate Manager to office endpoints and enable device trust on the WorkSpaces directory is incorrect because device trust verifies managed devices but does not guarantee that connections originate from branch networks. Many zero clients cannot host device certificates, which limits applicability, and standing up certificate issuance and ongoing lifecycle management reduces operational efficiency compared to a centrally managed IP access control group.

Cameron’s AWS Architect Exam Tip

When a question asks to restrict where sessions originate, prefer native network allow lists at the managed service boundary. Look for features that apply centrally at the directory or service level for easier rollout to new sites and avoid solutions that require agents or custom images on each desktop.

Question 2

LumaPress, a global publishing firm with operations on several continents, runs fifteen separate AWS accounts that are administered by local IT teams. The headquarters governance team must gain centralized visibility and enforce consistent security standards across every member account. The architect has already enabled AWS Organizations and all member accounts have joined. What should be done next to allow the headquarters team to audit and manage the security posture across these accounts?

  • ✓ B. Create a SecurityAudit IAM role in every member account and set a trust policy that allows the management account to assume it

The correct option is Create a SecurityAudit IAM role in every member account and set a trust policy that allows the management account to assume it.

This approach establishes a standard cross account access pattern that lets the headquarters governance team assume a role into each member account. By defining a trust relationship that grants the management account permission to assume the SecurityAudit IAM role, the team can use temporary credentials to centrally inspect configurations and security settings without creating long lived users. Attaching the AWS managed SecurityAudit permissions to the SecurityAudit IAM role grants broad read only visibility into security relevant metadata across services while keeping write privileges out of scope for auditing.
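As a sketch, the role and trust policy could be created in each member account with boto3 as shown below. The management account ID is a placeholder.

```python
import json
import boto3

iam = boto3.client("iam")  # run with credentials for a member account

MANAGEMENT_ACCOUNT_ID = "111122223333"  # placeholder account ID

# Trust policy that lets principals in the management account assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="SecurityAudit",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed read-only audit policy rather than write access.
iam.attach_role_policy(
    RoleName="SecurityAudit",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
```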

Create an IAM policy named SecurityAudit in each member account and attach it to the regional administrator groups is incorrect because a policy alone does not enable cross account access and attaching it to local groups does not grant the headquarters team any ability to assume access from the management account.

Provision a dedicated IAM user for the headquarters team in each member account with AdministratorAccess is incorrect because creating users in every account increases operational overhead and credential sprawl and it violates least privilege by granting full administrator rights when only audit level access is required.

Google Cloud Organization Policy is incorrect because it applies to Google Cloud rather than AWS and does not address governance in AWS Organizations.

Cameron’s AWS Architect Exam Tip

When a question asks for centralized visibility or governance across many AWS accounts, look for the cross account assume role pattern that uses a trust policy and attach only the least privilege permissions needed for the task.

Question 3

As part of a staged move out of a corporate data center into AWS, a solutions architect at Northstar Analytics must automatically discover network dependencies over the last 21 days for their on premises Linux virtual machines that run on VMware and produce a diagram that includes host IP addresses, hostnames, and TCP connection details. Which approach should be used to accomplish this?

  • ✓ B. Deploy the AWS Application Discovery Service agent on the Linux servers after setting an AWS Migration Hub home Region and grant the service permission to publish discovery data so network diagrams can be created

The correct option is Deploy the AWS Application Discovery Service agent on the Linux servers after setting an AWS Migration Hub home Region and grant the service permission to publish discovery data so network diagrams can be created.

The Application Discovery Service agent collects in-guest data that includes hostnames, IP addresses, running processes, and detailed TCP connection information over time. This data supports dependency mapping in Migration Hub so you can view communication between servers and create network diagrams that satisfy a 21 day lookback. Setting a Migration Hub home Region and granting permissions to publish discovery data are required steps for the agent to send data that Migration Hub can visualize.
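Setting the home Region is a one-time call per account. A minimal boto3 sketch, assuming us-west-2 as the chosen home Region:

```python
import boto3

# Set the Migration Hub home Region before the Application Discovery
# Service agents begin publishing data.
mh_config = boto3.client("migrationhub-config", region_name="us-west-2")

mh_config.create_home_region_control(
    HomeRegion="us-west-2",       # illustrative home Region choice
    Target={"Type": "ACCOUNT"},   # applies to the calling account
)
```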

Use the AWS Application Discovery Service Agentless Collector for VMware to gather server inventory and then export topology images from AWS Migration Hub as .png files is incorrect because the agentless collector gathers VM inventory from vCenter and basic performance data but it does not collect in-guest process or TCP connection details that are needed for dependency mapping. Migration Hub dependency visualization relies on agent data and it is not designed to export PNG topology images from agentless inventory alone.

Install the AWS Systems Manager Agent on the VMs and enable Inventory and Application Manager to capture software and relationship data across the fleet is incorrect because Systems Manager Inventory focuses on software and configuration metadata and Application Manager organizes resources. These features do not discover or map on-premises TCP connections or server-to-server dependencies.

Install the AWS Application Migration Service replication agent on the VMs and use Workload Discovery on AWS to build network diagrams from data stored in Migration Hub is incorrect because the replication agent is for lift and shift replication and does not collect discovery or network dependency data. Workload Discovery visualizes AWS resources rather than on-premises network dependencies and it does not build diagrams from Migration Hub discovery data.

Cameron’s AWS Architect Exam Tip

When a question emphasizes network dependencies or TCP connection details for on premises servers, prefer the agent based AWS Application Discovery Service approach and remember to set a Migration Hub home Region before collecting data.

Question 4

Helios Imaging is launching a mobile app that uploads pictures for processing, and usage jumps to 25 times normal during evening beta runs. The solutions architect needs a design that scales image processing elastically and also notifies users on their devices when their jobs complete. Which actions should the architect implement to meet these requirements? (Choose 3)

  • ✓ A. Publish a message to Amazon SNS to send a mobile push notification when processing is finished

  • ✓ C. Process each image in an AWS Lambda function that is invoked by messages from the SQS queue

  • ✓ E. Have the mobile app upload directly to Amazon S3 and configure an S3 event that posts messages to an Amazon SQS standard queue

The correct options are Publish a message to Amazon SNS to send a mobile push notification when processing is finished, Process each image in an AWS Lambda function that is invoked by messages from the SQS queue, and Have the mobile app upload directly to Amazon S3 and configure an S3 event that posts messages to an Amazon SQS standard queue.

Having the app upload to Amazon S3 and using an S3 event that posts to an Amazon SQS standard queue creates an asynchronous buffer that absorbs the 25 times spike during evening runs. SQS provides durable queuing and high throughput so producers and consumers are decoupled and the system can smooth bursty traffic.

Processing each message with AWS Lambda that is invoked from SQS enables elastic scaling because Lambda increases consumer concurrency in response to queue depth. This pattern keeps workers stateless and cost efficient and it scales automatically with demand.

Publishing to Amazon SNS mobile push when an image completes allows the service to notify devices through platform push services without managing device tokens yourself. This cleanly completes the workflow by informing users as soon as their jobs finish.
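A minimal sketch of the worker Lambda ties the pieces together. The topic ARN and the transformation step are placeholders, and devices are assumed to be subscribed through SNS platform endpoints.

```python
import json
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:image-complete"  # placeholder

def handler(event, context):
    # Lambda receives a batch of SQS messages; each body contains the
    # S3 event that the bucket notification wrote to the queue.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            obj = s3.get_object(Bucket=bucket, Key=key)
            process_image(obj["Body"].read())  # hypothetical transform

            # Notify the user's device through SNS mobile push when done.
            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=json.dumps({"status": "complete", "key": key}),
            )

def process_image(data: bytes) -> None:
    """Placeholder for the actual image transformation."""
```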

The option Configure Amazon MQ as the target for S3 notifications so that workers can read from the broker is not valid because Amazon MQ is not a supported destination for Amazon S3 event notifications. S3 can notify only Amazon SQS, Amazon SNS, or AWS Lambda.

The option Use S3 Batch Operations to process objects individually when a message arrives in the queue does not meet the need for real time elastic processing because S3 Batch Operations is designed for large batch jobs that are started by a manifest or filter and it does not trigger from a queue message.

The option Send a push notification to the app by using Amazon Simple Email Service when processing is complete is incorrect because Amazon Simple Email Service sends email rather than mobile push notifications. Device notifications for this use case belong with Amazon SNS mobile push.

Cameron’s AWS Architect Exam Tip

Map event sources to their supported destinations. For S3 notifications think of SQS, SNS, and Lambda. For bursty ingestion choose SQS standard to buffer and Lambda to scale consumers, then use SNS mobile push to notify users.

Question 5

Bryer Logistics operates an Amazon FSx for Windows File Server that was deployed as Single-AZ 2 for back-office file shares. A new corporate standard now requires highly available file storage across Availability Zones for all teams. The operations group must also observe storage and performance on the file system and capture detailed end-user access events on the FSx share for audit. Which combination of changes and monitoring should the team implement? (Choose 2)

  • ✓ B. Use Amazon CloudWatch to watch capacity and performance metrics for FSx and enable file access auditing to publish user access events to CloudWatch Logs or stream them with Amazon Kinesis Data Firehose

  • ✓ D. Set up a new Amazon FSx for Windows File Server with Multi-AZ, copy data with AWS DataSync, update client mappings, and trigger a controlled failover by changing the file system throughput capacity

The correct options are Use Amazon CloudWatch to watch capacity and performance metrics for FSx and enable file access auditing to publish user access events to CloudWatch Logs or stream them with Amazon Kinesis Data Firehose and Set up a new Amazon FSx for Windows File Server with Multi-AZ, copy data with AWS DataSync, update client mappings, and trigger a controlled failover by changing the file system throughput capacity.

The first choice meets the observability and audit requirements. FSx for Windows File Server publishes storage and performance metrics that can be monitored with CloudWatch, which lets operations track capacity, throughput, and latency. FSx file access auditing generates Windows Security event logs which can be delivered to CloudWatch Logs for search and retention or streamed through Kinesis Data Firehose for delivery to a destination such as Amazon S3, which satisfies detailed end user access auditing.

The second choice addresses the new availability standard. Multi AZ deployment provides high availability across Availability Zones for FSx for Windows File Server. DataSync is the recommended service to copy file data into the new file system efficiently. After updating client mappings to the new share, you can initiate a controlled failover by changing the throughput capacity, which exercises the Multi AZ failover path without disrupting managed networking components.
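As an illustrative boto3 sketch with placeholder file system and log group identifiers, both the auditing setup and the controlled failover can be driven through update_file_system:

```python
import boto3

fsx = boto3.client("fsx")

FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # placeholder

# Turn on file and file share access auditing and deliver events to a
# CloudWatch Logs log group (ARN below is a placeholder).
fsx.update_file_system(
    FileSystemId=FILE_SYSTEM_ID,
    WindowsConfiguration={
        "AuditLogConfiguration": {
            "FileAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "FileShareAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "AuditLogDestination": (
                "arn:aws:logs:us-east-1:123456789012:log-group:/fsx/audit"
            ),
        }
    },
)

# Modifying throughput capacity is the supported way to exercise a
# controlled Multi-AZ failover. Run this after the previous update
# has completed, since only one update can be in progress at a time.
fsx.update_file_system(
    FileSystemId=FILE_SYSTEM_ID,
    WindowsConfiguration={"ThroughputCapacity": 64},
)
```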

Recreate the file system as Single-AZ 1 and move data with AWS DataSync, then depend on snapshots for continuity is incorrect because Single AZ does not meet the requirement for high availability across Availability Zones and snapshots do not provide automatic cross AZ service continuity for client access.

Track file system activity with AWS CloudTrail and log end-user actions to CloudWatch Logs is incorrect because CloudTrail records control plane API activity and does not capture file level user access events. FSx file access auditing must be used to obtain detailed end user access logs.

Test Multi-AZ failover by modifying or deleting the elastic network interfaces that FSx created is incorrect because the service manages these network interfaces and they must not be altered. You should use supported operations such as modifying throughput capacity to trigger a safe failover.

Cameron’s AWS Architect Exam Tip

Map each requirement to the native feature of the service. For FSx for Windows, think CloudWatch metrics for performance, file access auditing for user events, Multi AZ for cross AZ resilience, and use DataSync for migrations. Avoid actions that tamper with managed infrastructure.

Question 6

A regional credit union needs the ability to shift 420 employees to remote work within hours during a crisis. Their environment includes Windows and Linux desktops that run office productivity and messaging applications. The solution must integrate with the company’s existing on premises Active Directory so employees keep their current credentials. It must also enforce multifactor authentication and present a desktop experience that closely resembles what users already use. Which AWS solution best satisfies these needs?

  • ✓ B. Deploy Amazon WorkSpaces and link it to the on premises Active Directory through an AD Connector over a site to site VPN and enable MFA by configuring a RADIUS server for WorkSpaces

The correct choice is Deploy Amazon WorkSpaces and link it to the on premises Active Directory through an AD Connector over a site to site VPN and enable MFA by configuring a RADIUS server for WorkSpaces.

This meets every requirement because Amazon WorkSpaces provides managed Windows and Linux desktops with a familiar, full desktop experience. It integrates with your on premises Active Directory using an AD Connector over a site to site VPN so employees keep their existing credentials. You can enforce multifactor authentication by configuring a RADIUS server with the directory that registers your WorkSpaces. It also scales quickly which fits the need to move 420 employees to remote work within hours.
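A minimal boto3 sketch of the RADIUS step, with placeholder directory ID, server address, and shared secret:

```python
import boto3

ds = boto3.client("ds")

# Register the on-premises RADIUS server with the AD Connector directory
# so WorkSpaces sign-ins require MFA. All values are placeholders.
ds.enable_radius(
    DirectoryId="d-1234567890",
    RadiusSettings={
        "RadiusServers": ["10.0.1.25"],
        "RadiusPort": 1812,
        "RadiusTimeout": 5,
        "RadiusRetries": 3,
        "SharedSecret": "replace-with-a-shared-secret",
        "AuthenticationProtocol": "PAP",
        "DisplayLabel": "MFA",
        "UseSameUsername": True,
    },
)
```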

Use Amazon AppStream 2.0 to stream applications and customize a desktop style image while connecting to the data center over a site to site VPN and integrating identity with Active Directory Federation Services is not the best fit because AppStream 2.0 streams individual applications rather than provisioning full desktops. This does not closely match the requested traditional desktop experience for office productivity and messaging clients.

Provision Amazon WorkSpaces and connect it to the on premises network with a VPN and an AD Connector and enable MFA directly in the WorkSpaces console without a RADIUS service is incorrect because WorkSpaces does not offer a native MFA switch for AD integrated directories. MFA is enforced by integrating a RADIUS server with the directory, not by enabling it directly in the console.

Use Amazon WorkSpaces Web with SAML federation to Active Directory Federation Services and require MFA at the identity provider while publishing access over a site to site VPN does not meet the requirement because WorkSpaces Web delivers a secure browser for web applications rather than full Windows or Linux desktops. It cannot replicate the full desktop experience that users already have.

Cameron’s AWS Architect Exam Tip

When a scenario requires a full desktop experience with existing AD credentials, map it to Amazon WorkSpaces with AD Connector over private connectivity and remember that MFA is implemented using a RADIUS server. If an option suggests enabling MFA directly in the console or proposes AppStream 2.0 for full desktops, reconsider.

Question 7

Evergreen Fabrication needs an inexpensive Amazon S3 based backup for its data center file shares that must present NFS to on-premises servers. The business wants the data to transition to an archive tier after 7 days and it accepts that disaster recovery restores can take several days. Which approach best satisfies these needs at the lowest cost?

  • ✓ B. Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Glacier Deep Archive after 7 days

The correct option is Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Glacier Deep Archive after 7 days.

This approach meets the NFS requirement because File Gateway presents NFS file shares on premises while storing objects in Amazon S3. It also directly aligns with the desire for an S3 based backup. Using an S3 lifecycle rule to transition data to Glacier Deep Archive after a short warm period delivers the lowest storage cost for long term retention. The organization accepts that disaster recovery restores can take several days, and Glacier Deep Archive retrievals are intentionally slower, which matches that tolerance while minimizing spend.
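The lifecycle piece is a single rule on the bucket behind the file gateway. A sketch with boto3 and a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Glacier Deep Archive after 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="evergreen-backups",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [
                {"Days": 7, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```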

Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Standard Infrequent Access after 7 days does not minimize cost as well as Glacier Deep Archive for archival data. Standard Infrequent Access provides quicker access than needed in this scenario and carries a higher per gigabyte storage price than Glacier Deep Archive.

Deploy AWS Storage Gateway volume gateway with an S3 bucket and configure a lifecycle rule to move objects to S3 Glacier Deep Archive after 7 days fails the NFS requirement because Volume Gateway exposes iSCSI block storage rather than NFS file shares. In addition, lifecycle rules target S3 objects and do not apply to the block storage volumes and their snapshots in the same way, so this design does not meet the stated needs.

Migrate the file shares to Amazon EFS and enable EFS lifecycle to infrequent access to reduce costs does not satisfy the request for an S3 based backup and would generally cost more for archival use than Glacier Deep Archive. While EFS offers NFS, it is a different managed file system service and is not the low cost archival pattern described in the question.

Cameron’s AWS Architect Exam Tip

Match the required on premises NFS protocol to the right gateway and then map restore tolerance to the S3 storage class. If slow recovery is acceptable, prefer S3 Glacier Deep Archive with lifecycle transitions for the lowest cost.

Question 8

PixelForge Studios runs several cross platform titles that track session state, player profiles, match history, and a global scoreboard, and the company plans to migrate these systems to AWS to handle tens of millions of concurrent players and API requests while keeping latency in the low single digit milliseconds. The engineering group needs an in memory data layer that can power a highly available real time and personalized leaderboard at internet scale. Which approach should they implement? (Choose 2)

  • ✓ B. Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale

  • ✓ D. Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time

The correct options are Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale and Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time.

ElastiCache for Redis keeps the scoreboard entirely in memory and its sorted sets are purpose built for ranking, so score updates and top-N queries return in near real time even at internet scale. DynamoDB with DAX pairs durable, horizontally scalable storage with a managed in-memory cache, which serves repeated reads such as player profiles and scores at very low latency under tens of millions of concurrent requests.
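To make the Redis half concrete, here is a minimal sketch of a sorted set leaderboard using the redis-py client against an ElastiCache for Redis endpoint. The endpoint, key, and player IDs are placeholders.

```python
import redis  # redis-py client

# Placeholder for the ElastiCache for Redis primary endpoint.
r = redis.Redis(host="leaderboard.example.use1.cache.amazonaws.com", port=6379)

# Add or update a player's score; the sorted set stays ordered by score.
r.zadd("global:leaderboard", {"player:42": 18250})

# Top 10 players with scores, highest first.
top10 = r.zrevrange("global:leaderboard", 0, 9, withscores=True)

# A single player's rank (0-based) across all players.
rank = r.zrevrank("global:leaderboard", "player:42")
```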
