Free Google Cloud Architect Certification Topics Test
Despite the title of this article, this is not a Google Cloud Professional Cloud Architect certification braindump in the traditional sense.
I do not believe in cheating.
Traditionally, the term “braindump” referred to someone taking an exam, memorizing the questions, and sharing them online for others to use.
That practice is unethical and violates the certification agreement. It shows no integrity and produces no real learning or professional growth.
This is not a GCP braindump.
All of these questions come from my Google Cloud Architect training materials and from the certificationexams.pro website, which offers hundreds of free GCP Professional Cloud Architect Practice Questions.
Google Certified Cloud Architect Exam Simulator
Each question has been carefully written to align with the official Google Cloud Certified Professional Architect exam objectives.
They mirror the tone, logic, and technical depth of real exam scenarios, but none are copied from the actual test.
Every question is designed to help you learn, reason, and master Google Cloud concepts the right way, covering topics such as network design, identity management, hybrid deployment, cost efficiency, and disaster recovery.
If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the real exam but also gain a deeper understanding of how to architect and manage enterprise-scale cloud environments effectively.
About GCP Exam Dumps
So if you want to call this your Google Cloud Architect Certification Exam Dump, that is fine, but remember that every question here is built to teach, not to cheat.
Each item includes detailed explanations, real-world examples, and insights that help you think like a professional cloud architect.
Study with focus, practice consistently, and approach your certification with integrity. Success as a Google Cloud Architect comes not from memorizing answers but from understanding how system design, networking, and security come together to deliver reliable, scalable cloud solutions.
Use the Google Certified Cloud Architect Exam Simulator and the Google Certified Professional Cloud Architect Practice Test to prepare effectively and move closer to earning your certification.
Now for the GCP Certified Architect Professional exam questions.
GCP Cloud Architect Professional Exam Dump
Question 1
Company background. Pixel Horizon Studios builds session based multiplayer mobile games and has historically leased physical servers from several cloud providers. Sudden popularity spikes have made it hard to scale their audience, application tier, MySQL databases, and analytics, and they currently write gameplay metrics to files and then push them through an ETL process into a central MySQL reporting database.

Solution concept. They will launch a new title on Google Compute Engine so they can ingest streaming telemetry, run heavy analytics, take advantage of autoscaling, and integrate with a managed NoSQL database.

Business goals include expanding globally, improving availability because downtime loses players, increasing cloud efficiency, and reducing latency for users worldwide.

Technical requirements for the game backend include dynamic scaling based on player activity, connectivity to a managed NoSQL service, and the ability to run a customized Linux distribution. Technical requirements for analytics include elastic scaling, real time processing from game servers, handling late arriving mobile events, supporting SQL queries over at least 12 TB of historical data, ingesting files uploaded by devices on a regular basis, and using only fully managed services.

The CEO reports that the last hit game failed to scale and damaged their reputation. The CTO wants to replace MySQL while adopting autoscaling and low latency load balancing and to avoid server maintenance. The CFO needs richer demographic and usage KPIs to improve targeted campaigns and in app sales.

They ask you to define a new testing approach for this platform. How should test coverage change compared to their older backends on prior providers?
❏ A. Drop unit testing and rely solely on full end to end tests
❏ B. Depend only on Cloud Monitoring uptime checks for validation
❏ C. Test coverage must scale far beyond earlier backends to validate behavior during global traffic spikes
❏ D. Introduce testing only after features are released to production
Question 2
How can you centrally enforce that most Compute Engine VMs have no external IPs across all current and future projects while allowing only an approved set of instances to keep external connectivity?
❏ A. VPC Service Controls perimeters
❏ B. Organization Policy compute.vmExternalIpAccess allowlist
❏ C. Hierarchical firewall policy with egress deny and tag exceptions
Question 3
PixelForge Entertainment migrated to Google Cloud and is launching a cross-platform retro arena shooter whose backend will run on Google Kubernetes Engine. They will deploy identical Kubernetes clusters in three Google Cloud regions and require one global entry point that sends each player to the closest healthy region while allowing rapid scaling and low latency. You must design the external ingress to meet these business and technical goals and keep the platform ready for migrating older titles later. What should you implement?
❏ A. Create a global external HTTP(S) Load Balancer in front of a single multi-zonal GKE cluster
❏ B. Use Traffic Director with proxyless gRPC to steer requests between regional services
❏ C. Configure GKE Multi-Cluster Ingress with the global external HTTP(S) Load Balancer across the regional clusters
❏ D. Create a global external HTTP(S) Load Balancer backed by a managed instance group on Compute Engine
Question 4
How should you expose v1 and a beta v2 of a REST API under the same DNS name and TLS certificate for 30 days while keeping separate backends on Google Cloud?
❏ A. Provision two external HTTPS load balancers and migrate with DNS later
❏ B. External HTTPS Load Balancer with one certificate and a path-based URL map
❏ C. Traffic Director
❏ D. Cloud DNS weighted round robin
Question 5
Riverside Outfitters uses Google Cloud with an Organization that contains two folders named Ledger and Storefront. Members of the [email protected] Google Group currently hold the Project Owner role at the Organization level. You need to stop this group from creating resources in projects that belong to the Ledger folder while still allowing them to fully manage resources in projects under the Storefront folder. What change should you make to enforce this requirement?
❏ A. Assign the group the Project Viewer role on the Ledger folder and the Project Owner role on the Storefront folder
❏ B. Assign the group only the Project Viewer role on the Ledger folder
❏ C. Grant the group the Project Owner role on the Storefront folder and remove its Project Owner role at the Organization level
❏ D. Move the Ledger folder into a separate Organization and keep the current group role assignments unchanged
Question 6
Which managed Google Cloud service should you use as the primary store to ingest high volume time series data with very low latency and serve recent records by device key and time range?
❏ A. Cloud Spanner
❏ B. Google Cloud Bigtable
❏ C. BigQuery
❏ D. Firestore
Question 7
Riverview Analytics is preparing a major release and uses a managed instance group as the backend for an external HTTP(S) load balancer. None of the virtual machines have public IP addresses and the group keeps recreating instances roughly every 90 seconds. You need to ensure the backend configuration is correct so the instances remain stable and the service can receive traffic. What should you configure?
❏ A. Assign a public IP to each VM and open a firewall rule so the load balancer can reach the instance public addresses
❏ B. Add a firewall rule that allows client HTTP and HTTPS traffic to the load balancer frontend
❏ C. Create a firewall rule that permits load balancer health check probes to access the instance group on the health check port
❏ D. Configure Cloud NAT for the subnet so instances without public IPs can reach the internet
Question 8
Which Google Cloud connectivity should you use to provide private connectivity that avoids the public internet and meets strict availability and compliance needs for critical workloads when upgrading from Partner Interconnect and Cloud VPN?
❏ A. Direct Peering
❏ B. Use Dedicated Interconnect
❏ C. Increase Partner Interconnect capacity
❏ D. HA VPN
Question 9
Rivertown Analytics keeps regulated customer records in a Cloud Storage bucket and runs batch transformations on the files with Dataproc. The security team requires that the encryption key be rotated every 90 days and they want a solution that aligns with Google guidance and keeps operations simple for the data pipeline. What should you implement to rotate the key for the bucket that stores the sensitive files while preserving secure access for the Dataproc jobs?
❏ A. Use Secret Manager to store and rotate an AES-256 key then encrypt each object before uploading to Cloud Storage
❏ B. Generate and use a customer supplied encryption key for the bucket and pass the key with every object upload and download
❏ C. Create a key in Cloud Key Management Service and set it as the default encryption key on the Cloud Storage bucket then grant the Dataproc service account permission to use that key
❏ D. Call the Cloud KMS encrypt API for each file before upload and manage ciphertext and re encryption during rotations yourself
Question 10
Which Cloud Storage lifecycle configuration minimizes cost by tiering objects older than 60 days to a colder class and deleting objects after 18 months while preserving audit access?
❏ A. Enable Autoclass with an 18 month bucket retention policy
❏ B. Lifecycle rules to move at 60 days to Coldline and delete after 18 months
❏ C. Lifecycle rules to move at 90 days to Nearline then at 180 days to Coldline
Question 11
BrightBay Media keeps 32 GB of security audit logs on an on premises NAS and plans to move them into a new Cloud Storage bucket. The compliance team requires that uploads use customer supplied encryption keys so that the data is encrypted at rest with your own keys. What should you do?
❏ A. Create a bucket with a default Cloud KMS key and copy the files using Storage Transfer Service
❏ B. Add the base64 encoded customer supplied key to the gsutil .boto configuration and upload with gsutil
❏ C. Run gsutil cp and pass the key using the --encryption-key flag
❏ D. Set an encryption key in gcloud config and then copy the files with gsutil
Question 12
Firewall Insights shows no rows for VPC firewall rules in a shared VPC. What should you enable to produce log entries for analysis?
❏ A. Enable Packet Mirroring in the VPC
❏ B. Turn on Firewall Rules Logging for the relevant rules
❏ C. Enable Data Access audit logs for Compute Engine
❏ D. Enable VPC Flow Logs on the subnets
Question 13
Harborline Freight operates a production web portal on Google Cloud and stores sensitive customer data in Cloud SQL. The compliance team requires that the database be encrypted while stored on disk and they want the simplest approach that does not require changing the application or managing encryption keys. What should you do?
❏ A. Enable TLS for connections between the application and Cloud SQL
❏ B. Configure Cloud KMS customer managed keys for the Cloud SQL instance
❏ C. Rely on Cloud SQL default encryption at rest
❏ D. Implement client side encryption in the application before writing to Cloud SQL
Question 14
A new App Engine standard release increased latency. How should you quickly restore user experience and investigate the regression safely?
❏ A. Use App Engine traffic splitting to shift 90% of traffic to the previous version and investigate with Cloud Logging
❏ B. Increase App Engine instance class and raise autoscaling limits
❏ C. Roll back to the stable version, then use a staging project with Cloud Logging and Cloud Trace to diagnose latency
❏ D. Roll back to the previous version, then redeploy the updated build during a low traffic window at 3 AM and troubleshoot in production with Cloud Logging and Cloud Trace
Question 15
Riverton Insights plans to move about eight petabytes of historical analytics data into Google Cloud and the data must be available around the clock. The analysts insist on using a familiar SQL interface for querying. How should the data be stored to make analysis as simple as possible?
❏ A. Migrate the data into Cloud SQL for PostgreSQL
❏ B. Load the dataset into BigQuery tables
❏ C. Keep the files in Cloud Storage and query them using BigQuery external tables
❏ D. Write the data to Cloud Bigtable
Question 16
How should you quickly and reliably upload large batches of files from a Compute Engine staging directory to Cloud Storage within 10 minutes without changing the ETL tool?
❏ A. Use gcsfuse and write files directly to the bucket
❏ B. Use gsutil to move files sequentially
❏ C. Use gsutil with parallel copy
❏ D. Storage Transfer Service scheduled job
Question 17
Orchid Publishing operates about 420 virtual machines in its on premises data center and wants to follow Google best practices to move these workloads to Google Cloud using a lift and shift approach with only minor automatic adjustments while keeping effort low. What should the team do?
❏ A. Create boot disk images for each VM, archive them to Cloud Storage, and manually import them to build Compute Engine instances
❏ B. Use Migrate for Compute Engine with one runbook and one job that moves all VMs in a single event across the environment
❏ C. Assess dependencies and use Migrate for Compute Engine to create waves, then prepare a runbook and a job for each wave and migrate the VMs in that wave together
❏ D. Install VMware or Hyper-V replication agents on every source VM to copy disks to Google Cloud and then clone them into Compute Engine instances
Question 18
You must transfer 25 TB from on premises to Google Cloud within 3 hours during failover and you need encrypted connectivity with redundancy and high throughput. Which network design should you use?
❏ A. HA VPN with Cloud Router
❏ B. Partner Interconnect with dual links only
❏ C. Dedicated Interconnect with HA VPN backup
❏ D. Dedicated Interconnect with Direct Peering backup
Question 19
Northlake Systems plans to deploy a customer portal on Compute Engine and must keep the service available if a whole region becomes unavailable. You need a disaster recovery design that automatically redirects traffic to another region when health checks fail in the primary and that does not require any DNS changes during failover. What should you implement to meet these requirements on Google Cloud?
❏ A. Run two single Compute Engine instances in different regions within the same project and configure an external HTTP(S) load balancer to fail over between them
❏ B. Serve production from a Compute Engine instance in the primary region and configure the external HTTP(S) load balancer to fail over to an on premises VM through Cloud VPN during a disaster
❏ C. Deploy two regional managed instance groups in the same project and place them behind a global external HTTP(S) load balancer with health checks and automatic failover
❏ D. Use Cloud DNS with health checks to switch a public hostname between two regional external IP addresses when the primary region fails
Question 20
To address skills gaps and improve cost efficiency for a new Google Cloud initiative, what should you do next?
❏ A. Enforce labels and budgets with Cloud Billing and quotas across projects
❏ B. Budget for targeted team training and define a role based Google Cloud certification roadmap
❏ C. Set project budget alerts and purchase one year committed use discounts
❏ D. Hire external consultants for delivery and defer internal training
Question 21
Peregrine Outfitters is moving fast on GCP and leadership values rapid releases and flexibility above all else. You must strengthen the delivery workflow so that accidental security flaws are less likely to slip into production while preserving speed. Which actions should you implement? (Choose 2)
❏ A. Mandate that a security specialist approves every code check in before it merges
❏ B. Run automated vulnerability scans in the CI/CD pipeline for both code and dependencies
❏ C. Build stubs and unit tests for every component boundary
❏ D. Set up code signing and publish artifacts only from a private trusted repository that is enforced by the pipeline
❏ E. Configure Cloud Armor policies on your external HTTP load balancer
Question 22
Which Google Cloud services should you combine to guarantee per account ordered delivery and exactly once processing for a streaming pipeline in us-central1 that handles about 9,000 events per second with latency under 800 ms?
❏ A. Cloud Pub/Sub with Cloud Run
❏ B. Cloud Pub/Sub ordering keys and Dataflow streaming with exactly once
❏ C. Cloud Pub/Sub with ordering enabled only
❏ D. Cloud Pub/Sub with Cloud Functions
Question 23
Arcadia Payments processes cardholder transactions through an internal service that runs in its colocated data center and the servers will reach end of support in three months. Leadership has chosen to move the workloads to Google Cloud and the risk team requires adherence to PCI DSS. You plan to deploy the service on Google Kubernetes Engine and you need to confirm whether this approach is appropriate and what else is required. What should you do?
❏ A. Move the workload to App Engine Standard because it is the only compute option on Google Cloud certified for PCI DSS
❏ B. Choose Anthos on premises so that PCI scope remains entirely outside of Google Cloud
❏ C. Use Google Kubernetes Engine and implement the required PCI DSS controls in your application and operations because GKE is within Google Cloud’s PCI DSS scope
❏ D. Assume compliance is automatic because Google Cloud holds a PCI DSS attestation for the platform
Question 24
How is a Google Cloud project’s effective IAM policy determined when policies exist at the organization, folder, and project levels?
❏ A. Only the project policy applies
❏ B. Union of local and inherited bindings
❏ C. Intersection of local and inherited policies
❏ D. Nearest ancestor policy overrides others
Question 25
Rivermark Outfitters has finished moving its systems to Google Cloud and now plans to analyze operational telemetry to improve fulfillment and customer experience. There is no existing analytics codebase so they are open to any approach. They require a single technology that supports both batch and streaming because some aggregations run every 30 minutes and other events must be handled in real time. Which Google Cloud technology should they use?
❏ A. Google Kubernetes Engine with Bigtable
❏ B. Cloud Run with Pub/Sub and BigQuery
❏ C. Google Cloud Dataflow
❏ D. Google Cloud Dataproc
Google Cloud Solutions Architect Professional Braindump
Question 1
Company background. Pixel Horizon Studios builds session based multiplayer mobile games and has historically leased physical servers from several cloud providers. Sudden popularity spikes have made it hard to scale their audience, application tier, MySQL databases, and analytics, and they currently write gameplay metrics to files and then push them through an ETL process into a central MySQL reporting database.

Solution concept. They will launch a new title on Google Compute Engine so they can ingest streaming telemetry, run heavy analytics, take advantage of autoscaling, and integrate with a managed NoSQL database.

Business goals include expanding globally, improving availability because downtime loses players, increasing cloud efficiency, and reducing latency for users worldwide.

Technical requirements for the game backend include dynamic scaling based on player activity, connectivity to a managed NoSQL service, and the ability to run a customized Linux distribution. Technical requirements for analytics include elastic scaling, real time processing from game servers, handling late arriving mobile events, supporting SQL queries over at least 12 TB of historical data, ingesting files uploaded by devices on a regular basis, and using only fully managed services.

The CEO reports that the last hit game failed to scale and damaged their reputation. The CTO wants to replace MySQL while adopting autoscaling and low latency load balancing and to avoid server maintenance. The CFO needs richer demographic and usage KPIs to improve targeted campaigns and in app sales.

They ask you to define a new testing approach for this platform. How should test coverage change compared to their older backends on prior providers?
✓ C. Test coverage must scale far beyond earlier backends to validate behavior during global traffic spikes
The correct option is Test coverage must scale far beyond earlier backends to validate behavior during global traffic spikes.
The new platform relies on global autoscaling, low latency load balancing, and fully managed streaming analytics. Spiky traffic and worldwide expansion demand verification at massive scale. Tests must exercise autoscaling behavior during sudden surges, validate graceful degradation and recovery, and confirm that sessions remain stable while instances are added or removed. Real time analytics must be tested for throughput, ordering sensitivity, and data correctness when events arrive late. Historical queries over large volumes must be validated for performance and reliability. This broader scope ensures the game remains available and responsive as popularity grows.
This approach also supports the business goals. It reduces risk to reputation by proving resilience before launch and it improves efficiency by finding scaling limits early. It verifies the end to end pipeline from telemetry ingestion through processing and storage so the team can trust dashboards and KPIs for campaigns and in app sales.
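To make this concrete, here is a minimal load ramp sketch that is not part of the case study. It uses only the Python standard library, and the staging URL, concurrency steps, and latency budget are hypothetical values you would replace with your own targets.

```python
# Hypothetical load-ramp sketch using only the Python standard library.
# The endpoint, ramp steps, and latency budget are illustrative values,
# not part of the case study.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

STAGING_URL = "https://staging.example.com/healthz"  # hypothetical endpoint
RAMP_STEPS = [50, 200, 800]                          # concurrent requests per step
LATENCY_BUDGET_S = 0.5                               # illustrative SLO target

def timed_request(url: str) -> float:
    """Issue one GET and return its latency in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

for concurrency in RAMP_STEPS:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_request, [STAGING_URL] * concurrency))
    p95 = sorted(latencies)[int(len(latencies) * 0.95) - 1]
    print(f"{concurrency} concurrent requests -> p95 {p95:.3f}s")
    # Fail the run if the simulated spike pushes p95 past the budget.
    assert p95 < LATENCY_BUDGET_S, f"p95 {p95:.3f}s exceeds budget at {concurrency}"
```

A real rehearsal would drive far higher volumes from managed load generators and multiple regions, but the principle is the same, which is to verify latency and autoscaling behavior before players do.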
Drop unit testing and rely solely on full end to end tests is wrong because unit and integration tests provide fast feedback, isolate defects, and make failures easier to diagnose. End to end tests alone are slow and brittle and they do not give sufficient coverage of edge cases in isolation.
Depend only on Cloud Monitoring uptime checks for validation is wrong because uptime checks confirm external availability and basic reachability but they do not validate functional correctness, latency under load, autoscaling behavior, or data accuracy in streaming pipelines.
Introduce testing only after features are released to production is wrong because testing must shift left to pre production environments. Early load and resilience testing prevents incidents and protects user experience during launch spikes.
Cameron’s Google Cloud Certification Exam Tip
When scenarios emphasize global spikes, autoscaling, and real time analytics, favor answers that expand test scope and scale. Be wary of options that remove layers of testing or rely only on monitoring since those do not validate functionality or performance under load.
Question 2
How can you centrally enforce that most Compute Engine VMs have no external IPs across all current and future projects while allowing only an approved set of instances to keep external connectivity?
✓ B. Organization Policy compute.vmExternalIpAccess allowlist
The correct option is Organization Policy compute.vmExternalIpAccess allowlist.
The Organization Policy compute.vmExternalIpAccess allowlist lets you set a deny by default stance on external IPs at the organization or folder level so it automatically applies to all current and future projects through inheritance. You then add only the approved instances to the allowlist so those specific VMs can retain external connectivity while all others cannot be created with or retain an external IP. This provides centralized governance and precise exceptions without per project configuration.
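As a rough illustration only, the sketch below shows how this constraint might be set at the organization node with the google-cloud-org-policy Python client. The organization ID and the approved instance path are placeholders, and the exact client surface should be checked against the library documentation.

```python
# Sketch: set compute.vmExternalIpAccess as an allowlist at the organization level.
# Assumes the google-cloud-org-policy client library. The organization ID and the
# instance path are placeholders, not values from the question.
from google.cloud import orgpolicy_v2

ORG = "organizations/123456789012"  # placeholder organization ID
ALLOWED_INSTANCE = (
    "projects/approved-project/zones/us-central1-a/instances/approved-vm"
)

client = orgpolicy_v2.OrgPolicyClient()
client.create_policy(
    request={
        "parent": ORG,
        "policy": {
            "name": f"{ORG}/policies/compute.vmExternalIpAccess",
            "spec": {
                # Only the listed instances may keep an external IP.
                # Every other VM in the hierarchy inherits the restriction.
                "rules": [{"values": {"allowed_values": [ALLOWED_INSTANCE]}}],
            },
        },
    }
)
```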
VPC Service Controls perimeters focus on protecting access to Google managed APIs and reducing data exfiltration risk and they do not control whether a VM can be assigned an external IP or support instance level allowlists for that capability.
Hierarchical firewall policy with egress deny and tag exceptions can block traffic but it does not prevent the assignment of external IPs to instances and it cannot centrally enforce an organization wide allowlist of specific instances that are permitted to keep external connectivity.
Cameron’s Google Cloud Certification Exam Tip
When a question requires organization wide enforcement with inheritance and a small set of exceptions, think Organization Policy constraints rather than networking features. Match the constraint to the exact resource behavior being controlled.
Question 3
PixelForge Entertainment migrated to Google Cloud and is launching a cross-platform retro arena shooter whose backend will run on Google Kubernetes Engine. They will deploy identical Kubernetes clusters in three Google Cloud regions and require one global entry point that sends each player to the closest healthy region while allowing rapid scaling and low latency. You must design the external ingress to meet these business and technical goals and keep the platform ready for migrating older titles later. What should you implement?
✓ C. Configure GKE Multi-Cluster Ingress with the global external HTTP(S) Load Balancer across the regional clusters
The correct option is Configure GKE Multi-Cluster Ingress with the global external HTTP(S) Load Balancer across the regional clusters. This gives a single global anycast entry point that routes each player to the nearest healthy cluster, provides automatic cross-region failover, and scales quickly to meet traffic spikes while keeping the design ready to onboard additional games later.
Multi-Cluster Ingress uses the global external HTTP(S) Load Balancer to perform proximity based routing and health checking so users are sent to the closest healthy region and requests fail over when a region goes down. It integrates natively with GKE Services through Network Endpoint Groups which keeps latency low and operations simple. Because the load balancer is fully managed it can absorb sudden increases in traffic and maintain a stable global IP for clients.
This approach also preserves flexibility for migrating older titles since you can add or remove clusters and services under the same global front door without forcing client changes. You can iterate region by region while keeping a consistent ingress model across games.
Create a global external HTTP(S) Load Balancer in front of a single multi-zonal GKE cluster is not suitable because a single cluster cannot span multiple regions and therefore cannot direct players to the closest region or provide regional isolation and failover that the scenario requires.
Use Traffic Director with proxyless gRPC to steer requests between regional services does not meet the need for a global public entry point. Traffic Director is a service mesh control plane for L7 service to service traffic within your VPC and would still require a separate external load balancer for internet clients. It adds complexity without delivering the requested global HTTP(S) ingress behavior.
Create a global external HTTP(S) Load Balancer backed by a managed instance group on Compute Engine adds unnecessary layers and operational overhead because the backend is on GKE. While you could proxy from VMs to clusters, it does not leverage native GKE integrations and is less direct for multi-region Kubernetes services compared to Multi-Cluster Ingress.
Cameron’s Google Cloud Certification Exam Tip
Look for keywords like one global entry point, closest healthy region, and GKE across multiple regions. These usually indicate GKE Multi-Cluster Ingress or the newer multi cluster gateway with the global external HTTP(S) Load Balancer rather than single region setups or service mesh control planes.
Question 4
How should you expose v1 and a beta v2 of a REST API under the same DNS name and TLS certificate for 30 days while keeping separate backends on Google Cloud?
✓ B. External HTTPS Load Balancer with one certificate and a path-based URL map
The correct option is External HTTPS Load Balancer with one certificate and a path-based URL map because it allows both API versions to share the same DNS name and TLS certificate while routing requests to separate backend services for the 30 day overlap.
This approach uses a single global frontend with one anycast IP and one certificate, and a URL map that matches paths such as /v1 and /v2 to different backend services. You avoid DNS changes during the coexistence period and can retire or switch a path rule when the beta ends without impacting the hostname or certificate.
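For illustration, the following sketch builds such a URL map with the google-cloud-compute Python client. The project, hostname, and backend service names are placeholders rather than values from the question.

```python
# Sketch: a URL map that keeps one hostname and certificate while routing
# /v1 and /v2 to separate backend services. Assumes the google-cloud-compute
# library; project, host, and backend names are placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder
V1_BACKEND = "projects/my-project/global/backendServices/api-v1"
V2_BACKEND = "projects/my-project/global/backendServices/api-v2-beta"

url_map = compute_v1.UrlMap(
    name="api-url-map",
    default_service=V1_BACKEND,  # unmatched paths fall back to v1
    host_rules=[
        compute_v1.HostRule(hosts=["api.example.com"], path_matcher="api-paths")
    ],
    path_matchers=[
        compute_v1.PathMatcher(
            name="api-paths",
            default_service=V1_BACKEND,
            path_rules=[
                compute_v1.PathRule(paths=["/v1/*"], service=V1_BACKEND),
                compute_v1.PathRule(paths=["/v2/*"], service=V2_BACKEND),
            ],
        )
    ],
)

# Creating the URL map is a long-running operation on the global scope.
operation = compute_v1.UrlMapsClient().insert(
    project=PROJECT, url_map_resource=url_map
)
operation.result()  # wait for completion
```

When the 30 day overlap ends you remove or repoint the /v2 path rule without touching the hostname or the certificate.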
Provision two external HTTPS load balancers and migrate with DNS later is incorrect because it depends on DNS changes and propagation and it cannot route by URL path, so it does not reliably present both versions at the same stable endpoint during the overlap.
Traffic Director is incorrect because it is a control plane for service mesh traffic within your network and does not provide a public edge endpoint or internet facing TLS termination.
Cloud DNS weighted round robin is incorrect because DNS routing policies distribute clients across endpoints by weight rather than by the API version they request, so a caller could land on the wrong backend. DNS cannot see HTTP paths, and caching and TTLs make any split imprecise, so it cannot guarantee a single stable endpoint and TLS certificate for both versions during the overlap.
Cameron’s Google Cloud Certification Exam Tip
When a scenario requires one hostname and certificate with different backends, think of the External HTTP(S) Load Balancer with path based routing and remember that DNS cannot see HTTP paths.
Question 5
Riverside Outfitters uses Google Cloud with an Organization that contains two folders named Ledger and Storefront. Members of the [email protected] Google Group currently hold the Project Owner role at the Organization level. You need to stop this group from creating resources in projects that belong to the Ledger folder while still allowing them to fully manage resources in projects under the Storefront folder. What change should you make to enforce this requirement?
✓ C. Grant the group the Project Owner role on the Storefront folder and remove its Project Owner role at the Organization level
The correct option is Grant the group the Project Owner role on the Storefront folder and remove its Project Owner role at the Organization level. This change removes the broad Organization level ownership that currently grants permissions everywhere and then scopes full control to only the Storefront folder, which lets the team fully manage Storefront projects while preventing them from creating resources in Ledger projects.
IAM policies are inherited from the Organization down to folders and projects, and permissions are additive. Removing the Organization level Owner binding stops those permissions from flowing into the Ledger folder. Granting Project Owner on the Storefront folder then restores full management rights for projects under that folder only. This satisfies least privilege and matches the requirement exactly.
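A possible sketch of the two IAM changes with the google-cloud-resource-manager Python client follows. The organization ID, folder ID, and group address are placeholders, and the same change can equally be made in the console or with gcloud.

```python
# Sketch: remove the group's Owner binding at the organization and grant it on
# the Storefront folder instead. Assumes the google-cloud-resource-manager
# library; the organization ID, folder ID, and group address are placeholders.
from google.cloud import resourcemanager_v3

GROUP = "group:[email protected]"          # placeholder for the scenario's group
ORG = "organizations/123456789012"           # placeholder
STOREFRONT_FOLDER = "folders/987654321098"   # placeholder

org_client = resourcemanager_v3.OrganizationsClient()
folder_client = resourcemanager_v3.FoldersClient()

# Drop roles/owner for the group from the organization-level policy.
org_policy = org_client.get_iam_policy(request={"resource": ORG})
for binding in org_policy.bindings:
    if binding.role == "roles/owner" and GROUP in binding.members:
        binding.members.remove(GROUP)
org_client.set_iam_policy(request={"resource": ORG, "policy": org_policy})

# Grant roles/owner on the Storefront folder only.
folder_policy = folder_client.get_iam_policy(
    request={"resource": STOREFRONT_FOLDER}
)
folder_policy.bindings.add(role="roles/owner", members=[GROUP])
folder_client.set_iam_policy(
    request={"resource": STOREFRONT_FOLDER, "policy": folder_policy}
)
```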
Assign the group the Project Viewer role on the Ledger folder and the Project Owner role on the Storefront folder is incorrect because the existing Organization level Project Owner role would still be inherited by Ledger projects. IAM is additive and adding Viewer on Ledger does not remove or override Owner inherited from the Organization.
Assign the group only the Project Viewer role on the Ledger folder is incorrect for the same reason. The Organization level Project Owner role would continue to grant full rights on Ledger projects, and a Viewer grant cannot restrict those inherited permissions.
Move the Ledger folder into a separate Organization and keep the current group role assignments unchanged is unnecessary and introduces operational complexity. You can meet the requirement simply by removing the Organization level Owner grant and assigning the needed role at the Storefront folder, which avoids a cross organization migration and its constraints.
Cameron’s Google Cloud Certification Exam Tip
When access must differ across folders, remove broad grants at the Organization level and reassign at the needed scope. IAM is additive, so you must remove a higher level role to restrict permissions and then grant only what is needed at the folder or project level.
Question 6
Which managed Google Cloud service should you use as the primary store to ingest high volume time series data with very low latency and serve recent records by device key and time range?
✓ B. Google Cloud Bigtable
The correct option is Google Cloud Bigtable.
Google Cloud Bigtable is a wide column NoSQL database designed for very high write throughput and single digit millisecond reads. Time series is a primary use case and you can model rows by device identifier with a timestamp component in the row key so the data is naturally ordered for efficient range scans. This lets you ingest large volumes with very low latency and then serve recent records quickly by device key and time range. Bigtable scales horizontally without manual sharding and supports predictable performance as load grows.
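As a brief sketch, the snippet below shows the row key pattern and a range scan with the google-cloud-bigtable Python client. The project, instance, table, and device identifiers are placeholders.

```python
# Sketch: row keys of the form <device_id>#<timestamp> keep a device's records
# contiguous so recent data is served with a small range scan. Assumes the
# google-cloud-bigtable library; project, instance, and table names are placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")                 # placeholder project
table = client.instance("telemetry-instance").table("device_metrics")

# Write one measurement keyed by device and timestamp.
row = table.direct_row(b"device-4711#2024-05-01T12:00:00Z")
row.set_cell("metrics", "temperature", b"21.5")
row.commit()

# Range scan: all rows for device-4711 within a time window.
for result in table.read_rows(
    start_key=b"device-4711#2024-05-01T00:00:00Z",
    end_key=b"device-4711#2024-05-02T00:00:00Z",
):
    print(result.row_key, result.cells["metrics"])
```

The row key design is the critical decision because it determines whether reads by device and time translate into a single contiguous scan.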
Cloud Spanner is a relational service that excels at strongly consistent transactions and global schemas. It is not optimized as a primary store for high volume time series ingestion or ultra low latency key and time range lookups at large scale, so it is not the best fit here.
BigQuery is an analytical data warehouse that is optimized for large scale SQL analytics rather than operational serving. Even with streaming inserts, it is better suited for batch analysis and aggregations and not for very low latency per device range queries on recent data.
Firestore is a document database aimed at application backends for mobile and web. It has indexing and query constraints that make wide time series range scans inefficient at large scale and it is not intended as a primary store for high volume time series ingestion with very low latency serving.
Cameron’s Google Cloud Certification Exam Tip
Start with the required access pattern. If you must perform very low latency writes and range scans by a device key and time, choose the storage engine that orders data by key and scales horizontally. Map the row key design to the query pattern before picking a service.
Question 7
Riverview Analytics is preparing a major release and uses a managed instance group as the backend for an external HTTP(S) load balancer. None of the virtual machines have public IP addresses and the group keeps recreating instances roughly every 90 seconds. You need to ensure the backend configuration is correct so the instances remain stable and the service can receive traffic. What should you configure?
✓ C. Create a firewall rule that permits load balancer health check probes to access the instance group on the health check port
The correct option is Create a firewall rule that permits load balancer health check probes to access the instance group on the health check port.
External HTTP(S) load balancers determine backend health by sending probes from Google front ends to the instances on the configured health check port. Because the virtual machines do not have public IP addresses, the probes still reach them over the VPC network and require an allow rule that targets the instance group and permits the health checker source ranges on the health check port. Without this rule the probes fail, the backend service marks instances unhealthy, and the managed instance group autoheals by recreating them in a loop. Allowing the probes stabilizes the group and lets healthy backends receive traffic.
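For reference, a sketch of such a rule created with the google-cloud-compute Python client is shown below. The project, network, target tag, and port are placeholders, while the source ranges are the documented health check ranges for external HTTP(S) load balancing.

```python
# Sketch: allow Google health check probes to reach the backend instances.
# Assumes the google-cloud-compute library; the project, network, target tag,
# and port are placeholders.
from google.cloud import compute_v1

firewall = compute_v1.Firewall(
    name="allow-lb-health-checks",
    network="global/networks/default",          # placeholder VPC
    direction="INGRESS",
    source_ranges=["130.211.0.0/22", "35.191.0.0/16"],  # health check probe ranges
    target_tags=["web-backend"],                 # tag carried by the MIG template
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["80"])],
)

operation = compute_v1.FirewallsClient().insert(
    project="my-project", firewall_resource=firewall
)
operation.result()  # wait for the rule to be created
```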
Assign a public IP to each VM and open a firewall rule so the load balancer can reach the instance public addresses is incorrect because external HTTP(S) load balancing does not require public IPs on backend instances and the load balancer does not connect to backend public addresses. Adding public IPs would not fix the failed probe condition and would increase exposure.
Add a firewall rule that allows client HTTP and HTTPS traffic to the load balancer frontend is incorrect because clients connect to the load balancer external IP that is managed by Google and not to your instances. Your VPC firewall does not govern traffic from clients to the load balancer frontend, and this does not address the failed health checks that trigger instance recreation.
Configure Cloud NAT for the subnet so instances without public IPs can reach the internet is incorrect because health checks for external HTTP(S) load balancers originate from Google front ends and do not require internet egress from the instances. Cloud NAT can enable outbound internet access for package updates or external calls but it does not resolve the health check reachability needed here.
Cameron’s Google Cloud Certification Exam Tip
If a managed instance group keeps recreating instances on a steady cadence, first verify the load balancer health check status and confirm your VPC firewall allows the health checker source ranges to the health check port. Fixing health checks usually stabilizes the group before you change anything else.
Question 8
Which Google Cloud connectivity should you use to provide private connectivity that avoids the public internet and meets strict availability and compliance needs for critical workloads when upgrading from Partner Interconnect and Cloud VPN?
✓ B. Use Dedicated Interconnect
The correct choice is Use Dedicated Interconnect because it delivers private connectivity that bypasses the public internet and provides the strongest availability and compliance posture for critical workloads.
With Dedicated Interconnect your enterprise uses physical cross connects into Google to reach your VPC over private VLAN attachments and Cloud Router. Traffic stays off the public internet and you can design for 99.9 or 99.99 percent availability by deploying redundant links in diverse locations which aligns with strict compliance and uptime requirements and makes it a natural upgrade from partner based or VPN connectivity.
Direct Peering connects your network to Google public services using public IP addresses and it does not connect to your VPC networks. It therefore does not provide private connectivity to your workloads nor does it avoid the public internet path to your resources.
Increase Partner Interconnect capacity only adds bandwidth while keeping you on a partner delivered service and it does not change the reliance on a third party or deliver the highest end to end availability and control that strict compliance often requires.
HA VPN increases availability for tunnels but it still traverses the public internet so it cannot satisfy a requirement to keep critical workload traffic off the internet.
Cameron’s Google Cloud Certification Exam Tip
When a prompt stresses private connectivity and avoiding the public internet with the highest availability, choose Dedicated Interconnect. If it mentions only access to Google public services consider Direct Peering, and if it emphasizes encrypted connectivity over the internet consider HA VPN.
Question 9
Rivertown Analytics keeps regulated customer records in a Cloud Storage bucket and runs batch transformations on the files with Dataproc. The security team requires that the encryption key be rotated every 90 days and they want a solution that aligns with Google guidance and keeps operations simple for the data pipeline. What should you implement to rotate the key for the bucket that stores the sensitive files while preserving secure access for the Dataproc jobs?
✓ C. Create a key in Cloud Key Management Service and set it as the default encryption key on the Cloud Storage bucket then grant the Dataproc service account permission to use that key
The correct option is Create a key in Cloud Key Management Service and set it as the default encryption key on the Cloud Storage bucket then grant the Dataproc service account permission to use that key.
This configuration uses a customer managed key that can be set to rotate automatically every ninety days. Setting it as the bucket default means all new objects are encrypted without any application changes and decryption remains transparent to readers that have permission to use the key. Granting the Dataproc service account the encrypter and decrypter role on the key preserves secure access for the jobs while keeping operations simple.
Rotation occurs by creating new key versions on schedule and Cloud Storage automatically uses the latest version for new writes while continuing to decrypt older objects with the version that encrypted them. There is no need to re encrypt existing data and access is governed by IAM on both the bucket and the key which aligns with Google guidance for regulated data.
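A short sketch of the bucket side of this change with the google-cloud-storage Python client follows. The bucket name and the key resource name are placeholders, and the 90 day rotation schedule lives on the key itself in Cloud KMS.

```python
# Sketch: point the bucket at a CMEK key so new objects are encrypted with it.
# Assumes the google-cloud-storage library; the bucket and key resource names
# are placeholders.
from google.cloud import storage

KMS_KEY = (
    "projects/my-project/locations/us-central1/"
    "keyRings/regulated-data/cryptoKeys/storage-key"   # placeholder key
)

client = storage.Client()
bucket = client.get_bucket("rivertown-regulated-records")  # placeholder bucket
bucket.default_kms_key_name = KMS_KEY
bucket.patch()

# The Dataproc service account still needs
# roles/cloudkms.cryptoKeyEncrypterDecrypter on the key, granted through IAM.
```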
Use Secret Manager to store and rotate an AES-256 key then encrypt each object before uploading to Cloud Storage is wrong because it forces client side encryption and requires you to design key distribution and re encryption during rotations which adds operational complexity and risk and does not integrate with bucket default encryption.
Generate and use a customer supplied encryption key for the bucket and pass the key with every object upload and download is wrong because it requires sending the key with each operation and does not provide automatic rotation which complicates Dataproc access and is not the recommended approach for managed simplicity.
Call the Cloud KMS encrypt API for each file before upload and manage ciphertext and re encryption during rotations yourself is wrong because it again implements client side encryption and makes you responsible for storing ciphertext, tracking key versions, and re encrypting data which is unnecessary when a bucket default key provides a managed path.
Cameron’s Google Cloud Certification Exam Tip
When storage encryption must rotate regularly prefer CMEK set as the bucket default key and grant the job or pipeline service account Encrypter and Decrypter on the key. Avoid CSEK and client side patterns unless the question explicitly requires them.
Question 10
Which Cloud Storage lifecycle configuration minimizes cost by tiering objects older than 60 days to a colder class and deleting objects after 18 months while preserving audit access?
✓ B. Lifecycle rules to move at 60 days to Coldline and delete after 18 months
The correct option is Lifecycle rules to move at 60 days to Coldline and delete after 18 months.
This lifecycle policy directly implements an age based transition to a colder storage class at 60 days which minimizes cost for infrequently accessed data. It then deletes objects after 18 months which removes ongoing storage charges. Audit access is preserved because Coldline objects remain immediately readable through the same bucket and APIs whenever an audit requires them, with only retrieval fees and the minimum storage duration to consider.
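As an illustration, the sketch below applies equivalent rules with the google-cloud-storage Python client. The bucket name is a placeholder and 548 days is used as an approximation of 18 months.

```python
# Sketch: lifecycle rules that move objects to Coldline at 60 days and delete
# them after roughly 18 months. Assumes the google-cloud-storage library; the
# bucket name is a placeholder.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("audit-archive-bucket")  # placeholder bucket

bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=60)  # tier at 60 days
bucket.add_lifecycle_delete_rule(age=548)                        # delete at ~18 months
bucket.patch()

# Confirm the rules that are now attached to the bucket.
for rule in bucket.lifecycle_rules:
    print(rule)
```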