Google Cloud Architect Certification Practice Exams

Free GCP Certification Exam Topics Tests

Over the past few months, I’ve been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters who have been displaced by AI and ML technologies pick up new skills and accreditations by getting certified on technologies that are in critically high demand.

In my opinion, one of the most reputable organizations providing credentials is Google, and one of their most respected designations is that of the Certified Google Cloud Architect Professional.

So how do you get Google certified, and how do you do it quickly? I have a simple plan that has now helped thousands of people.

Google Cloud Certification Practice Exams

First, pick your designation of choice. In this case, it’s Google’s Professional Cloud Architect certification.

Then look up the exam objectives and make sure they match your career goals and competencies. The next step is not buying an online course or study guide. Instead, find a Google Cloud Architect exam simulator or a set of practice questions for the GCP Architect exam.

Yes, find a set of Cloud Architect sample questions first and use them to drive your study.

First, go through your practice tests and just look at the GCP exam questions and answers.

That will help you get familiar with what you know and what you don’t know.

When you find topics you don’t know, use AI and Machine Learning powered tools like ChatGPT, Cursor, or Claude to write tutorials for you on the topic.

Really take control of your learning and have the new AI and ML tools help you customize your learning experience by writing tutorials that teach you exactly what you need to know to pass the exam. It’s an entirely new way of learning.

About GCP Exam Dumps

And one thing I will say is to avoid the Google Cloud Architect Professional exam dumps. You want to get certified honestly; you don’t want to pass simply by memorizing somebody’s GCP Architect braindump. There’s no integrity in that.

If you do want some real Google Cloud Architect Professional exam questions, I have over a hundred free exam questions and answers on my website, with almost 300 free exam questions and answers if you register. But there are plenty of other great resources available on LinkedIn Learning, Udemy, and even YouTube, so check those resources out as well to help fine-tune your learning path.

The bottom line? Generative AI is changing the IT landscape in disruptive ways, and IT professionals need to keep up. One way to do that is to constantly update your skills.
Get learning, get certified, and stay on top of all the latest trends. You owe it to your future self to stay trained, stay employable, and stay knowledgeable about how to use and apply all of the latest technologies.

Now for the Google Cloud Architect Professional exam questions.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Google Cloud Architect Professional Practice Exams

Question 1

Nimbus Playworks is preparing to launch a new mobile multiplayer title and has rebuilt its backend on Google Compute Engine with a managed NoSQL store to improve global scale and uptime. Before release, the team needs to validate the Android and iOS clients across dozens of OS versions and hundreds of device models while keeping testing costs under control and minimizing operational effort. Which approach should they use to perform broad and efficient device coverage testing across both platforms?

  • ❏ A. Set up Android and iOS virtual machines on Google Cloud and install the app for manual and scripted tests

  • ❏ B. Use Google Kubernetes Engine to run Android and iOS emulators in containers and execute automated UI tests

  • ❏ C. Use Firebase Test Lab to upload the app and run automated and manual tests on a wide range of real and virtual Android and iOS devices

  • ❏ D. Upload app builds with different configurations to Firebase Hosting and validate behavior from hosted links

Question 2

Which Google Cloud architecture uses Cloud Load Balancing to stay low cost at about 200 daily requests yet reliably absorbs bursts up to 60,000 for public HTTP APIs and static content?

  • ❏ A. Use Cloud CDN for static content, run the APIs on App Engine Standard, and use Cloud SQL

  • ❏ B. Serve static assets through Cloud CDN, run the APIs on a Compute Engine managed instance group, and use Cloud SQL

  • ❏ C. Use Cloud CDN for static content, run the APIs on Cloud Run, and use Cloud SQL

  • ❏ D. Put static files in Cloud Storage, deploy the APIs to a regional GKE Autopilot cluster, and use Cloud Spanner

Question 3

LumaPay is moving over 30 internal applications to Google Cloud and the security operations group requires read only visibility across every resource in the entire organization for audit readiness. You already hold the Organization Administrator role through Resource Manager. Following Google recommended practices, which IAM roles should you grant to the security team?

  • ❏ A. Organization administrator, Project browser

  • ❏ B. Security Center admin, Project viewer

  • ❏ C. Organization viewer, Project viewer

  • ❏ D. Organization viewer, Project owner

Question 4

Which business risks should you consider when adopting Google Cloud Deployment Manager for infrastructure automation? (Choose 2)

  • ❏ A. Requires Cloud Build to run deployments

  • ❏ B. Template errors can delete critical resources

  • ❏ C. Cloud Deployment Manager manages only Google Cloud resources

  • ❏ D. Must use a Google APIs service account

Question 5

After CedarPeak Data deployed a custom Linux kernel module to its Compute Engine batch worker virtual machines to speed up the overnight jobs, three days later roughly 60% of the workers failed during the nightly run. You need to collect the most relevant evidence so the developers can troubleshoot efficiently. Which actions should you take first? (Choose 3)

  • ❏ A. Check the activity log for live migration events on the failed instances

  • ❏ B. Use Cloud Logging to filter for kernel and module logs from the affected instances

  • ❏ C. Review the Compute Engine audit activity log through the API or the console

  • ❏ D. Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs

  • ❏ E. Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics

  • ❏ F. Enable Cloud Trace for the batch application and collect traces during the next window

Question 6

Two Google Cloud Shared VPC host networks in separate organizations have some overlapping IP ranges such as 10.60.0.0/16, and you need private connectivity only for nonoverlapping subnets with minimal redesign. What should you do?

  • ❏ A. Private Service Connect with internal load balancers

  • ❏ B. HA VPN with Cloud Router advertising only nonoverlapping prefixes

  • ❏ C. VPC Network Peering with custom route exchange

  • ❏ D. Dedicated Interconnect with custom route advertisements

Question 7

Orchid Outfitters operates a three layer application in a single VPC on Google Cloud. The web, service and database layers scale independently using managed instance groups. Network flows must go from the web layer to the service layer and then from the service layer to the database layer, and there must be no direct traffic from the web layer to the database layer. How should you configure the network to meet these constraints?

  • ❏ A. Place each layer in separate subnetworks within the VPC

  • ❏ B. Configure Cloud Router with custom dynamic routes and use route priorities to force traffic to traverse the service tier

  • ❏ C. Apply network tags to each layer and create VPC firewall rules that allow web to service and service to database while preventing web to database

  • ❏ D. Add tags to each layer and create custom routes to allow only the desired paths

Question 8

Which Google Cloud storage option minimizes cost for telemetry that will be rarely accessed for the next 18 months?

  • ❏ A. Stream telemetry to BigQuery partitions with long term storage pricing

  • ❏ B. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Coldline

  • ❏ C. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Nearline

  • ❏ D. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Archive

Question 9

VerdantCart wants to move a mission critical web application from an on premises facility to Google Cloud and it must continue serving users if an entire region goes offline with automatic traffic failover. How should you design the deployment?

  • ❏ A. Run the application on a single Compute Engine VM and attach an external HTTP(S) Load Balancer to handle failover between instances

  • ❏ B. Place the application in two Compute Engine managed instance groups in two different regions within one project and use a global external HTTP(S) Load Balancer to fail over between the groups

  • ❏ C. Place the application in two Compute Engine managed instance groups in different regions that live in different projects and configure a global HTTP(S) Load Balancer to shift traffic between them

  • ❏ D. Launch two standalone Compute Engine VMs in separate regions within one project and set up an HTTP(S) Load Balancer to route traffic from one VM to the other when needed

Question 10

How should you migrate Windows Server 2022 Datacenter VMs to Google Cloud so you can keep using existing Microsoft volume licenses in compliance?

  • ❏ A. Compute Engine Windows image

  • ❏ B. Import as Windows 2022 Datacenter BYOL and use Sole Tenant Node

  • ❏ C. Migrate to Virtual Machines with license included images

  • ❏ D. Import disk and run on shared tenancy

Question 11

The compliance team at HarborView Insurance needs to retain Cloud VPN log events for 18 months to meet audit obligations. You must configure Google Cloud so these logs are stored appropriately. What should you do?

  • ❏ A. Build a Cloud Logging dashboard that shows Cloud VPN metrics for the past 18 months

  • ❏ B. Create a Cloud Logging sink with a filter for Cloud VPN entries and export them to a Cloud Storage bucket for long term retention

  • ❏ C. Configure a Cloud Logging export that publishes matching entries to Pub/Sub

  • ❏ D. Enable firewall rule logging on the Compute Engine rules that handle VPN traffic

Question 12

An organization must validate disaster recovery every 60 days using only Google Cloud. Which approach enables repeatable full stack provisioning in a secondary region with actionable telemetry for each drill?

  • ❏ A. Terraform and Google Cloud Observability

  • ❏ B. Deployment Manager and Google Cloud Observability

  • ❏ C. gcloud scripts and Cloud Audit Logs

Question 13

SierraForge Industries is migrating telemetry files from field equipment into Cloud Storage and wants to keep each file for one year while reducing storage costs during that time. What lifecycle configuration should you implement?

  • ❏ A. Create one lifecycle rule that transitions objects to Nearline after 30 days and add a second rule that deletes objects when they reach 365 days

  • ❏ B. Create one lifecycle rule that transitions objects to Archive after 60 days and add a second rule that deletes objects when they reach 365 days

  • ❏ C. Create one lifecycle rule that transitions objects to Coldline after 45 days in Standard and create another lifecycle rule that deletes objects when they reach 366 days in Coldline

  • ❏ D. Create one lifecycle rule that transitions objects to Coldline after 180 days and add a second rule that deletes objects when they reach 1095 days

Question 14

In BigQuery what approach enables reliable deletion of a single individual’s health records upon request?

  • ❏ A. Cloud DLP with Data Catalog

  • ❏ B. Set table or partition expiration to 30 days

  • ❏ C. Use a stable user ID and delete rows by that ID

  • ❏ D. BigQuery dynamic data masking

Question 15

Your team at example.com is preparing to run a stateful service on Google Cloud that can scale out across multiple virtual machines. Every instance must read and write to the same POSIX file system and during peak periods the service needs to sustain up to 180 MB per second of write throughput. Which approach should you choose to meet these requirements while keeping the design managed and reliable?

  • ❏ A. Attach an individual persistent disk to each instance

  • ❏ B. Mount a Cloud Storage bucket on each instance using gcsfuse

  • ❏ C. Create a Cloud Filestore instance and mount it on all virtual machines

  • ❏ D. Set up an NFS server on a Compute Engine VM backed by a large SSD persistent disk and mount it from all instances

Question 16

Which Google Cloud services should you use for high velocity time series ingestion, transactional user profiles and game state, and interactive analytics on 30 TB of historical events?

  • ❏ A. Use Cloud Spanner for time series, use Cloud Spanner for transactions, and export to Cloud Storage for historical analytics

  • ❏ B. Use Cloud Pub/Sub for time series ingestion, use AlloyDB for transactions, and use Cloud Dataproc for historical analytics

  • ❏ C. Use Cloud Bigtable for time series, use BigQuery for historical analytics, and use Cloud Spanner for transactions

Question 17

Riverton Media is preparing to switch traffic to a new marketing site on Google Cloud. You created a managed instance group with autoscaling and attached it as a backend to an external HTTP(S) load balancer. After enabling the backend, you observe that the virtual machines are being recreated roughly every 90 seconds. The instances do not have public IP addresses, and you can successfully curl the service from an internal test host. What should you change to ensure the backend is configured correctly?

  • ❏ A. Add a network tag that matches the load balancer name and create a rule that allows sources with that tag to reach the instances

  • ❏ B. Assign a public IP to each VM and open a firewall rule so the load balancer can reach the public addresses

  • ❏ C. Create a firewall rule that allows traffic from Google health check IP ranges to the instance group on the configured health check ports

  • ❏ D. Increase the autoscaler cool down period so instances are not replaced as often

Question 18

Which Google Cloud solutions let development VMs retain data across reboots and provide ongoing spend visibility without manual reporting? (Choose 2)

  • ❏ A. Local SSD on Compute Engine

  • ❏ B. Export Cloud Billing data with labels to BigQuery and Looker Studio

  • ❏ C. Cloud Billing budgets and alerts

  • ❏ D. Compute Engine with persistent disks

Question 19

A travel booking platform at Northstar Tickets processes card payments and wants to reduce the PCI DSS scope to the smallest footprint while keeping the ability to analyze purchase behavior and payment method trends. Which architectural approach should you adopt to satisfy these requirements?

  • ❏ A. Export Cloud Logging to BigQuery and restrict auditor access using dataset ACLs and authorized views

  • ❏ B. Place all components that handle cardholder data into a separate Google Cloud project

  • ❏ C. Implement a tokenization service and persist only tokens in your systems

  • ❏ D. Create dedicated subnetworks and isolate the services that process cardholder data

  • ❏ E. Label every virtual machine that processes PCI data to simplify audit discovery

Question 20

How should you configure BigQuery IAM so analysts can run queries in the project and only read data in their country’s dataset?

  • ❏ A. Grant bigquery.jobUser and bigquery.dataViewer at the project to a global analysts group

  • ❏ B. Grant bigquery.jobUser at the project to a global analysts group and bigquery.dataViewer only on each country dataset to its country group

  • ❏ C. Use one shared dataset with row level security by country and grant bigquery.jobUser to all analysts

Professional GCP Solutions Architect Practice Exam Answers

Question 1

Nimbus Playworks is preparing to launch a new mobile multiplayer title and has rebuilt its backend on Google Compute Engine with a managed NoSQL store to improve global scale and uptime. Before release, the team needs to validate the Android and iOS clients across dozens of OS versions and hundreds of device models while keeping testing costs under control and minimizing operational effort. Which approach should they use to perform broad and efficient device coverage testing across both platforms?

  • ✓ C. Use Firebase Test Lab to upload the app and run automated and manual tests on a wide range of real and virtual Android and iOS devices

The correct approach is Use Firebase Test Lab to upload the app and run automated and manual tests on a wide range of real and virtual Android and iOS devices.

Firebase Test Lab is a managed testing service that offers broad device coverage for both Android and iOS with real and virtual devices, which fits the need to validate across many OS versions and device models. It supports automated frameworks like Espresso, XCTest and Robo, and it can run tests in parallel across a device matrix to speed feedback while reducing operational effort. Because it is managed and offers virtual devices and transparent pricing, it helps control costs while still letting you scale to hundreds of device and OS combinations.
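
If you want a feel for how little setup this takes, here is a minimal gcloud sketch for the Android side. The APK file names and device models are placeholders and it assumes the builds already exist; the iOS equivalent uses gcloud firebase test ios run.

    # Run an instrumentation test across a small sample of the device matrix
    gcloud firebase test android run \
      --type instrumentation \
      --app app-release.apk \
      --test app-release-androidTest.apk \
      --device model=Pixel2,version=30,locale=en,orientation=portrait \
      --device model=redfin,version=33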

Set up Android and iOS virtual machines on Google Cloud and install the app for manual and scripted tests is not appropriate because it would be highly manual and would not provide the required breadth of real device and OS coverage. Running iOS simulators requires macOS on Apple hardware, which standard Compute Engine virtual machines do not provide, so it would not meet the cross platform requirement or the goal of minimizing operational effort.

Use Google Kubernetes Engine to run Android and iOS emulators in containers and execute automated UI tests increases complexity and operational overhead without delivering real device coverage. iOS simulators are not supported in typical Linux containers and emulator only testing misses hardware and OEM variances that matter for games, so this does not meet the goal of broad and efficient device coverage across both platforms.

Upload app builds with different configurations to Firebase Hosting and validate behavior from hosted links is incorrect because Firebase Hosting serves web content and does not execute native Android or iOS applications. It does not provide device farms, test orchestration or automated UI testing capabilities.

Cameron’s Google Cloud Certification Exam Tip

Look for purpose built managed services when a question emphasizes broad device coverage, both Android and iOS, minimal operations and cost control. Firebase Test Lab is designed for this and keywords like real and virtual devices, device matrix and automated tests are strong indicators.

Question 2

Which Google Cloud architecture uses Cloud Load Balancing to stay low cost at about 200 daily requests yet reliably absorbs bursts up to 60,000 for public HTTP APIs and static content?

  • ✓ C. Use Cloud CDN for static content, run the APIs on Cloud Run, and use Cloud SQL

The correct option is Use Cloud CDN for static content, run the APIs on Cloud Run, and use Cloud SQL.

This design keeps steady state costs very low because Cloud Run scales to zero and bills per request, which matches a workload of about 200 daily requests. It also absorbs sudden spikes to tens of thousands of requests because Cloud Run automatically and quickly scales out with high concurrency. Cloud CDN serves static assets from the edge, which offloads traffic from the origin and helps handle large bursts with low latency. Cloud Load Balancing integrates with Cloud Run through serverless network endpoint groups and with Cloud CDN, which provides global anycast entry, fast TLS termination, and resilient distribution during bursts. Cloud SQL is a good fit for a transactional relational backend at this scale and cost profile.
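
As a rough sketch of how the pieces wire together, the commands below deploy a Cloud Run service and create the serverless network endpoint group that the global load balancer backend points at. The image path, service name and region are placeholders.

    # Deploy the API as a scale-to-zero Cloud Run service
    gcloud run deploy marketing-api \
      --image=us-docker.pkg.dev/my-project/apps/api:latest \
      --region=us-central1 --allow-unauthenticated

    # Create the serverless NEG that a global backend service can reference
    gcloud compute network-endpoint-groups create marketing-api-neg \
      --region=us-central1 --network-endpoint-type=serverless \
      --cloud-run-service=marketing-api

From there you attach the NEG to a backend service on the external HTTP(S) load balancer and enable Cloud CDN on the backend that serves the static content.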

Use Cloud CDN for static content, run the APIs on App Engine Standard, and use Cloud SQL can functionally work, yet it is less cost effective for very low steady traffic with rare large spikes. To avoid cold starts and meet burst needs you often configure minimum instances which adds fixed cost, and its scaling characteristics are not as rapid or cost optimized as Cloud Run for this pattern.

Serve static assets through Cloud CDN, run the APIs on a Compute Engine managed instance group, and use Cloud SQL requires virtual machines to be running even when request volume is low, which raises baseline cost. Instance group scaling also takes longer to add capacity during sudden spikes compared to the near instant scaling of Cloud Run, so this is not the best low cost choice for infrequent large bursts.

Put static files in Cloud Storage, deploy the APIs to a regional GKE Autopilot cluster, and use Cloud Spanner is significantly more expensive and operationally heavier for a workload with only hundreds of daily requests. Spanner is a global, high throughput database with a high minimum cost, and running APIs on GKE Autopilot adds complexity and baseline spend compared to a fully managed serverless platform. This combination does not align with the low cost requirement.

Cameron’s Google Cloud Certification Exam Tip

When you see very low steady traffic with rare large bursts, think serverless for APIs and CDN for static content. Avoid options that keep VMs warm or use premium databases like Spanner unless the question clearly requires their features.

Question 3

LumaPay is moving over 30 internal applications to Google Cloud and the security operations group requires read only visibility across every resource in the entire organization for audit readiness. You already hold the Organization Administrator role through Resource Manager. Following Google recommended practices, which IAM roles should you grant to the security team?

  • ✓ C. Organization viewer, Project viewer

The correct option is Organization viewer, Project viewer.

Organization viewer provides read only access to organization level metadata and policies through Resource Manager which gives the security team visibility into the hierarchy and constraints without the ability to change anything. Project viewer grants read only access to project resources so when it is granted at the organization node it inherits down to all folders and projects which provides consistent visibility across every application while still following least privilege.
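
In gcloud terms, granting both roles at the organization node looks roughly like the sketch below. The organization ID and group address are placeholders, and roles/viewer is the basic role that corresponds to Project viewer.

    # Read-only visibility into organization metadata and policies
    gcloud organizations add-iam-policy-binding 123456789012 \
      --member="group:secops@example.com" \
      --role="roles/resourcemanager.organizationViewer"

    # Read-only access to project resources, inherited by every project in the org
    gcloud organizations add-iam-policy-binding 123456789012 \
      --member="group:secops@example.com" \
      --role="roles/viewer"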

Organization administrator, Project browser is incorrect because Organization administrator is a powerful administrative role that allows modifying IAM and resources which violates the read only requirement. Project browser only lists and gets metadata and does not provide broad read access to resource data which is often needed for audits.

Security Center admin, Project viewer is incorrect because Security Command Center admin is an administrative role and it only covers Security Command Center resources. It neither limits the team to read only actions nor guarantees visibility across all Google Cloud services.

Organization viewer, Project owner is incorrect because Project owner grants full control of projects including write and IAM changes which is not appropriate for an audit focused read only use case.

Cameron’s Google Cloud Certification Exam Tip

When you see a requirement for read only visibility across an entire organization think about inheritance from the organization node. Pair Organization viewer with Project viewer and avoid any role that includes admin or owner. Remember that Browser is limited to metadata and not full read access.

Question 4

Which business risks should you consider when adopting Google Cloud Deployment Manager for infrastructure automation? (Choose 2)

  • ✓ B. Template errors can delete critical resources

  • ✓ C. Cloud Deployment Manager manages only Google Cloud resources

The correct options are Template errors can delete critical resources and Cloud Deployment Manager manages only Google Cloud resources.

Template errors can delete critical resources is a real business risk because updates reconcile the live environment to what your configuration and templates declare. If a template removes or renames a resource or applies an unintended change then an update can delete or recreate production resources. You can mitigate this with previews and careful review but the risk remains if changes are pushed without safeguards.
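
One practical safeguard here is previewing an update before it is applied. A minimal sketch, assuming a deployment named my-deployment and a config.yaml in the current directory:

    # Stage the change without touching live resources, then inspect planned adds and deletes
    gcloud deployment-manager deployments update my-deployment \
      --config=config.yaml --preview

    # Apply the previewed changes only after review
    gcloud deployment-manager deployments update my-deployment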

Cloud Deployment Manager manages only Google Cloud resources is also a business risk because it limits you to Google Cloud resource types. This creates vendor lock in for your infrastructure automation and prevents you from using the same toolchain to orchestrate on premises or other cloud providers.

Requires Cloud Build to run deployments is incorrect because Deployment Manager runs through the API and gcloud and it does not require Cloud Build. You can optionally integrate it into Cloud Build pipelines but that is a choice rather than a requirement.

Must use a Google APIs service account is incorrect because you can deploy using your user credentials or a suitable service account with the necessary IAM roles. Google managed service agents may operate behind the scenes for specific services yet you are not forced to adopt a particular Google APIs service account as a business prerequisite.

Cameron’s Google Cloud Certification Exam Tip

When options include words like only or must check whether they imply lock in or hard requirements. Distinguish real operational risks such as unintended deletions from optional implementation details such as CI integration.

Question 5

After CedarPeak Data deployed a custom Linux kernel module to its Compute Engine batch worker virtual machines to speed up the overnight jobs, three days later roughly 60% of the workers failed during the nightly run. You need to collect the most relevant evidence so the developers can troubleshoot efficiently. Which actions should you take first? (Choose 3)

  • ✓ B. Use Cloud Logging to filter for kernel and module logs from the affected instances

  • ✓ D. Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs

  • ✓ E. Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics

The correct options are Use Cloud Logging to filter for kernel and module logs from the affected instances, Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs, and Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics.

Use Cloud Logging to filter for kernel and module logs from the affected instances gives developers immediate visibility into errors emitted by the custom kernel module and the Linux kernel. Filtering by instance identifiers and keywords related to kernel, module loading, crashes, or taints focuses the results on what changed and when. This surfaces stack traces, oops messages, and module load or unload failures that directly explain why workers failed.

Connect to the VM serial console with gcloud or the Google Cloud console to observe kernel and boot logs is crucial when VMs become unreachable or reboot during failures. The serial port captures early boot messages and kernel panics that may not reach system logs on disk, so it preserves decisive evidence even if the VM hung or crashed mid run.

Narrow the time window in Cloud Logging and Cloud Monitoring to the failure period and examine batch VM metrics helps correlate log events with system behavior such as restarts, CPU spikes, memory exhaustion, or disk errors. Constraining the time range reduces noise and makes patterns across many workers visible so you can confirm scope and timing of the regression introduced by the kernel module.
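
If it helps to picture the evidence gathering, here is a rough gcloud sketch. The instance name, zone, instance ID and the exact log filter are illustrative rather than prescriptive.

    # Pull the serial port output, which captures boot messages and kernel panics
    gcloud compute instances get-serial-port-output batch-worker-17 --zone=us-central1-b

    # Read recent kernel and syslog entries for the affected instance
    gcloud logging read \
      'resource.type="gce_instance" AND resource.labels.instance_id="1234567890123" AND (log_id("syslog") OR "kernel")' \
      --freshness=1d --limit=200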

Check the activity log for live migration events on the failed instances is not a first action because live migration is designed to be transparent and is unlikely to explain widespread module related kernel failures. Even if live migration occurred it would not provide the kernel level detail needed to debug a custom module issue.

Review the Compute Engine audit activity log through the API or the console focuses on administrative and access operations rather than guest OS behavior. Audit logs do not capture kernel panics or module crashes, so they are not the most relevant starting point for this incident.

Enable Cloud Trace for the batch application and collect traces during the next window targets application latency and request flows rather than OS or kernel events. It would not help diagnose the current failures and it introduces delay since it only gathers data in future runs.

Cameron’s Google Cloud Certification Exam Tip

When failures point to the operating system, start closest to the VM with serial console output, kernel logs, and a tight time window in logs and metrics. Correlate errors with restarts and resource spikes before exploring broader platform events.

Question 6

Two Google Cloud Shared VPC host networks in separate organizations have some overlapping IP ranges such as 10.60.0.0/16, and you need private connectivity only for nonoverlapping subnets with minimal redesign. What should you do?

  • ✓ B. HA VPN with Cloud Router advertising only nonoverlapping prefixes

The correct option is HA VPN with Cloud Router advertising only nonoverlapping prefixes. This approach provides private connectivity between the two VPC networks while letting you control which prefixes are exchanged so only nonoverlapping subnets are learned on each side and you avoid conflicts with minimal redesign.

This works because Cloud Router supports custom advertisements so you can advertise only the specific IP ranges that you want the other side to learn. When each side advertises only the nonoverlapping subnets, routing remains private and reachable for those ranges while overlapping ranges are never imported. This setup is supported for VPC to VPC connectivity across projects and even across organizations, and it requires no IP renumbering.
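
The control point is the Cloud Router advertisement mode. A minimal sketch for one side of the HA VPN, where the router name, region and CIDR are placeholders:

    # Advertise only the nonoverlapping prefix to the peer organization
    gcloud compute routers update org-a-vpn-router --region=us-east1 \
      --advertisement-mode=CUSTOM \
      --set-advertisement-ranges=10.80.0.0/16

The same restriction can be applied per BGP session with gcloud compute routers update-bgp-peer if only some tunnels need it.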

Private Service Connect with internal load balancers is for publishing and consuming specific services privately rather than providing general purpose network connectivity between arbitrary subnets. It is not intended to create broad bidirectional connectivity between two VPC networks, so it does not meet the requirement.

VPC Network Peering with custom route exchange does not support overlapping IP ranges and it has no route filtering to selectively block conflicting subnets. Overlapping prefixes cause conflicts and the routes are not imported, therefore it cannot satisfy the requirement.

Dedicated Interconnect with custom route advertisements is designed for private connectivity between on premises networks and Google Cloud. Using it to connect two VPC networks in different organizations would require additional infrastructure and complexity, which is not minimal redesign and is unnecessary for this use case.

Cameron’s Google Cloud Certification Exam Tip

When prefixes overlap, look for solutions that let you control advertisements with BGP. VPC Network Peering does not support overlapping IP ranges and has no route filtering, so prefer Cloud VPN with Cloud Router or Interconnect when selective exchange is required.

Question 7

Orchid Outfitters operates a three layer application in a single VPC on Google Cloud. The web, service and database layers scale independently using managed instance groups. Network flows must go from the web layer to the service layer and then from the service layer to the database layer, and there must be no direct traffic from the web layer to the database layer. How should you configure the network to meet these constraints?

  • ✓ C. Apply network tags to each layer and create VPC firewall rules that allow web to service and service to database while preventing web to database

The correct option is Apply network tags to each layer and create VPC firewall rules that allow web to service and service to database while preventing web to database.

This configuration uses firewall rules as the enforcement point for lateral traffic inside a VPC. You assign distinct tags to the instances in each managed instance group through their instance templates, then create an ingress allow rule that targets the service tier with a source tag that identifies the web tier. You also create an allow rule that targets the database tier with a source tag that identifies the service tier. To ensure there is no direct web to database path, add an explicit deny from the web tag to the database target with a higher priority than any broader allows, or rely on the absence of an allow so the implied deny blocks it.

This works because VPC routing is destination based and does not enforce hop by hop traversal. Firewall rules define which sources can reach which targets, and tags make it easy to select entire tiers in managed instance groups with consistent policy.
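
A minimal sketch of those rules, assuming a VPC named app-vpc, tags named web, service and db, and illustrative ports:

    # Web tier may reach the service tier
    gcloud compute firewall-rules create allow-web-to-service \
      --network=app-vpc --direction=INGRESS --action=ALLOW --rules=tcp:8080 \
      --source-tags=web --target-tags=service

    # Service tier may reach the database tier
    gcloud compute firewall-rules create allow-service-to-db \
      --network=app-vpc --direction=INGRESS --action=ALLOW --rules=tcp:3306 \
      --source-tags=service --target-tags=db

    # Explicit high priority deny so the web tier can never reach the database tier directly
    gcloud compute firewall-rules create deny-web-to-db \
      --network=app-vpc --direction=INGRESS --action=DENY --rules=tcp:3306 \
      --source-tags=web --target-tags=db --priority=900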

Place each layer in separate subnetworks within the VPC is insufficient on its own because subnet boundaries do not block traffic. Traffic can flow between subnets in the same VPC whenever firewall rules allow it, so you still need explicit rules to constrain which tiers can talk.

Configure Cloud Router with custom dynamic routes and use route priorities to force traffic to traverse the service tier does not meet the requirement because Cloud Router exchanges routes with peer networks using BGP and does not control instance to instance traffic inside a VPC. Route priority cannot force traffic to take a middle hop between tiers.

Add tags to each layer and create custom routes to allow only the desired paths is incorrect because VPC routes are destination based and cannot express policies that depend on both source and destination. Custom routes cannot prevent a directly reachable destination from being used, so they cannot reliably enforce tier by tier traversal. You need firewall rules for that control.

Cameron’s Google Cloud Certification Exam Tip

When you must allow some lateral flows but block others inside a VPC, think of VPC firewall rules targeted by network tags or service accounts. Remember that routes and Cloud Router do not enforce middle tier traversal for instance to instance traffic.

Question 8

Which Google Cloud storage option minimizes cost for telemetry that will be rarely accessed for the next 18 months?

  • ✓ B. Compress telemetry every 30 minutes and store snapshots in Cloud Storage Coldline

The correct option is Compress telemetry every 30 minutes and store snapshots in Cloud Storage Coldline.

Coldline is designed for data that is accessed very infrequently and kept for months to years, which aligns with telemetry that will be rarely accessed over 18 months. It offers lower storage cost than classes intended for more frequent access while still providing immediate access when needed. The minimum storage duration for Coldline is reasonable for long lived telemetry and its retrieval and operation costs are appropriate when access is rare, which makes total cost lower than options optimized for more frequent reads.
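
In practice the storage class is a bucket setting, so the cost decision comes down to one flag. A quick sketch with a placeholder bucket name, location and snapshot file:

    # Create a Coldline bucket for the compressed telemetry snapshots
    gcloud storage buckets create gs://example-telemetry-cold \
      --default-storage-class=COLDLINE --location=US

    # Upload a 30 minute snapshot
    gcloud storage cp telemetry-20250101-1030.tar.gz gs://example-telemetry-cold/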

Stream telemetry to BigQuery partitions with long term storage pricing is not optimal for cost minimization of rarely accessed raw telemetry because BigQuery is an analytics warehouse and not a low cost archival store. Streaming inserts and query based access add ongoing costs and BigQuery long term storage still tends to be more expensive than Google Cloud Storage archival classes for large volumes kept primarily for retention.

Compress telemetry every 30 minutes and store snapshots in Cloud Storage Nearline targets data accessed roughly once a month, which is more frequent than this workload requires. Its storage price is higher than Coldline, so it does not minimize cost for data that will be accessed only rarely over 18 months.

Compress telemetry every 30 minutes and store snapshots in Cloud Storage Archive is optimized for data accessed less than once per year and for long term preservation. For telemetry that may still need occasional investigation within the 18 month window, the higher retrieval and operation costs can outweigh the storage savings compared with Coldline, so it is not the best balance for minimizing total cost in this scenario.

Cameron’s Google Cloud Certification Exam Tip

Match the access pattern to the storage class. If data is accessed less than once per quarter use Coldline. If access is monthly use Nearline. Consider Archive only when access is extremely rare and retention is long, and always factor in minimum storage duration and retrieval fees when comparing total cost.

Question 9

VerdantCart wants to move a mission critical web application from an on premises facility to Google Cloud and it must continue serving users if an entire region goes offline with automatic traffic failover. How should you design the deployment?

  • ✓ B. Place the application in two Compute Engine managed instance groups in two different regions within one project and use a global external HTTP(S) Load Balancer to fail over between the groups

The correct option is Place the application in two Compute Engine managed instance groups in two different regions within one project and use a global external HTTP(S) Load Balancer to fail over between the groups.

This design delivers regional resilience because each region runs its own managed instance group and the global external HTTP(S) Load Balancer presents a single anycast IP to users. The load balancer continuously health checks the backends and automatically shifts traffic to the healthy region when a region becomes unavailable. Managed instance groups add autohealing and autoscaling so the application maintains capacity and replaces failed virtual machines without manual action.

Keeping both instance groups in one project keeps configuration, identity and access, logging, and troubleshooting straightforward. The global external HTTP(S) Load Balancer natively supports multi region backends within a single project which is the simplest way to meet the requirement for automatic cross region failover.
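
To make the failover mechanics concrete, the sketch below attaches two regional managed instance groups to one global backend service. The group names, regions and health check are placeholders.

    # Global backend service with health checking, fronted by the external HTTP(S) load balancer
    gcloud compute backend-services create web-backend \
      --global --protocol=HTTP --health-checks=web-hc \
      --load-balancing-scheme=EXTERNAL_MANAGED

    # Add one managed instance group per region
    gcloud compute backend-services add-backend web-backend --global \
      --instance-group=web-mig-us --instance-group-region=us-central1

    gcloud compute backend-services add-backend web-backend --global \
      --instance-group=web-mig-eu --instance-group-region=europe-west1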

Run the application on a single Compute Engine VM and attach an external HTTP(S) Load Balancer to handle failover between instances is incorrect because a single virtual machine in one region cannot survive a regional outage and an external HTTP(S) Load Balancer needs multiple healthy backends to fail over to. This setup offers neither regional redundancy nor managed recovery.

Place the application in two Compute Engine managed instance groups in different regions that live in different projects and configure a global HTTP(S) Load Balancer to shift traffic between them is unnecessary for the goal. While cross project backends can be made to work with additional features, it adds complexity without improving availability for this scenario and the simpler same project design already meets the requirement.

Launch two standalone Compute Engine VMs in separate regions within one project and set up an HTTP(S) Load Balancer to route traffic from one VM to the other when needed is incorrect because standalone instances do not provide autohealing or autoscaling and the load balancer distributes traffic to backends rather than forwarding from one virtual machine to another. This design is brittle and does not ensure reliable failover.

Cameron’s Google Cloud Certification Exam Tip

When you read requirements for regional outage tolerance and automatic failover, think global external HTTP(S) load balancing with backends in different regions and use managed instance groups for autohealing and scaling. Prefer the simplest architecture that meets the goal which often keeps all backends in one project.

Question 10

How should you migrate Windows Server 2022 Datacenter VMs to Google Cloud so you can keep using existing Microsoft volume licenses in compliance?

  • ✓ B. Import as Windows 2022 Datacenter BYOL and use Sole Tenant Node

The correct option is Import as Windows 2022 Datacenter BYOL and use Sole Tenant Node.

This approach lets you bring your existing Microsoft volume licenses while staying compliant because Windows Server BYOL on Google Cloud requires running on dedicated hosts. Using BYOL with Windows Server is supported only when the VMs are placed on dedicated capacity, which is provided by sole tenant nodes. You import the Windows Server 2022 Datacenter image as BYOL and schedule the VMs onto dedicated nodes so your licenses remain isolated and auditable according to Microsoft terms.
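
As a rough sketch of the dedicated capacity piece, assuming the BYOL image has already been imported and using placeholder names, node type and zone:

    # Reserve dedicated hosts for the BYOL workload
    gcloud compute sole-tenancy node-templates create win-byol-template \
      --node-type=n2-node-80-640 --region=us-central1

    gcloud compute sole-tenancy node-groups create win-byol-group \
      --node-template=win-byol-template --target-size=1 --zone=us-central1-a

    # Place the imported Windows Server 2022 Datacenter BYOL VM on the node group
    gcloud compute instances create win2022-app-01 \
      --image=win2022-dc-byol-image --zone=us-central1-a --node-group=win-byol-group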

Compute Engine Windows image is incorrect because Google provided images are license included and you would pay for the Windows license through Google Cloud rather than reusing your existing licenses.

Migrate to Virtual Machines with license included images is incorrect because license included choices bundle the Windows Server license and do not allow you to apply your existing volume licenses.

Import disk and run on shared tenancy is incorrect because BYOL for Windows Server is not permitted on shared tenancy and must run on dedicated hosts to meet Microsoft licensing requirements.

Cameron’s Google Cloud Certification Exam Tip

When a scenario mentions keeping existing Microsoft licenses, look for BYOL plus dedicated capacity. On Google Cloud that means Sole Tenant Node rather than license included images or shared tenancy.

Question 11

The compliance team at HarborView Insurance needs to retain Cloud VPN log events for 18 months to meet audit obligations. You must configure Google Cloud so these logs are stored appropriately. What should you do?

  • ✓ B. Create a Cloud Logging sink with a filter for Cloud VPN entries and export them to a Cloud Storage bucket for long term retention

The correct option is Create a Cloud Logging sink with a filter for Cloud VPN entries and export them to a Cloud Storage bucket for long term retention.

This approach uses a log sink to route only Cloud VPN log entries so you retain precisely what auditors need. Exporting to Cloud Storage gives durable and cost effective storage. You can set a bucket retention policy to 18 months which prevents early deletion and meets the compliance requirement. This also avoids the default retention limits of Cloud Logging so the logs remain available for the full audit window.
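
A minimal sketch of the sink and the retention lock, where the bucket name is a placeholder and the log filter is illustrative of matching Cloud VPN entries. After creating the sink, remember to grant its writer identity permission to create objects in the bucket.

    # Route Cloud VPN log entries to a dedicated Cloud Storage bucket
    gcloud logging sinks create vpn-audit-sink \
      storage.googleapis.com/harborview-vpn-logs \
      --log-filter='resource.type="vpn_gateway"'

    # Enforce 18 months of retention on the bucket (548 days)
    gsutil retention set 548d gs://harborview-vpn-logs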

Build a Cloud Logging dashboard that shows Cloud VPN metrics for the past 18 months is incorrect because dashboards visualize data and do not store it. They cannot extend log retention and cannot guarantee that raw log entries are kept for 18 months.

Configure a Cloud Logging export that publishes matching entries to Pub/Sub is incorrect because Pub/Sub is a messaging service and not an archival store. Message retention is limited to days and without an additional storage destination it cannot satisfy an 18 month retention requirement.

Enable firewall rule logging on the Compute Engine rules that handle VPN traffic is incorrect because firewall logs record allow and deny decisions for firewall rules rather than Cloud VPN tunnel events. Enabling this does not address the need to retain Cloud VPN logs for 18 months.

Cameron’s Google Cloud Certification Exam Tip

When a question emphasizes log retention for months or years, look for an export sink to durable storage such as Cloud Storage with a retention policy. Remember that Pub/Sub is for streaming and dashboards are for visualization.
