Question 1
Within a business continuity and disaster recovery framework, how is the term Recovery Service Level commonly defined?
-
✓ C. The proportion of normal production performance that must be recovered to meet business continuity and disaster recovery goals
The correct option is The proportion of normal production performance that must be recovered to meet business continuity and disaster recovery goals.
Recovery Service Level describes the required level of performance or capacity that must be restored after an incident so that business functions can continue at an acceptable level. It expresses recovery as a proportion or percentage of normal production capability rather than as a time or a specific tool.
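As a quick illustrative sketch, an RSL target can be checked with simple arithmetic. The function names and the 60 percent figure below are made up for the example and are not part of any standard:

```python
def recovery_service_level(recovered_capacity, normal_capacity):
    """RSL expressed as a percentage of normal production capability."""
    if normal_capacity <= 0:
        raise ValueError("normal capacity must be positive")
    return 100.0 * recovered_capacity / normal_capacity

def meets_rsl_target(recovered_capacity, normal_capacity, target_percent):
    """True when the restored capacity satisfies the planned RSL target."""
    return recovery_service_level(recovered_capacity, normal_capacity) >= target_percent

# A plan might require 60 percent of normal throughput after recovery:
# restoring 600 of 1000 transactions per second meets that target,
# while restoring only 400 does not.
```

Note that the result is a proportion, not a duration, which is exactly what separates RSL from RTO.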
Recovery Service Level is used to define targets in business continuity and disaster recovery planning so stakeholders know how much functionality must be available after recovery. This allows planners to choose appropriate recovery strategies and resources to meet business priorities and to include those targets in agreements.
Cloud Monitoring is incorrect because monitoring is a capability that observes performance and events and it does not define the target proportion of performance that must be recovered. Monitoring can help measure whether recovery targets are met but it is not itself a recovery service level.
The maximum permissible duration that services can remain unavailable during a disaster recovery incident is incorrect because that statement defines the recovery time objective or RTO. RTO is about time and not about the proportion of normal production performance that must be restored.
The mean time typically required to restore services back to standard production operation is incorrect because that describes mean time to repair or restore and it is an operational metric. It does not specify the fraction of service performance that must be recovered to meet business continuity goals.
Cameron’s Certification Exam Tip
When answering these questions, focus on what is being measured. RTO measures time and RPO measures allowable data loss. Recovery service level measures the required performance or capacity that must be restored.
Question 2
Which storage characteristic should be given the highest priority to ensure maximum security for confidential records?
-
✓ B. Replicated across multiple regions
The correct option is Replicated across multiple regions.
Choosing Replicated across multiple regions improves availability and durability, which are core aspects of security when you consider confidentiality, integrity, and availability together. Replicating data to multiple geographic locations reduces the risk of total data loss from regional outages, natural disasters, or targeted incidents, and it supports disaster recovery and continuity for confidential records.
Replication does not remove the need for strong confidentiality controls so Replicated across multiple regions should be used alongside encryption access controls and proper key management to protect the data itself. Replication ensures resilient storage and availability while other controls protect confidentiality and integrity.
Key management service is important for encrypting and protecting keys but it is a supporting service rather than a storage characteristic, so it does not directly answer the question about which storage characteristic to prioritize.
Partitioned by access control groups helps with segmentation and least privilege but partitioning alone does not address availability or resilience against regional failures, and it depends on correct policy configuration to be effective.
Cameron’s Certification Exam Tip
When a question uses the word security, check whether it refers to confidentiality, integrity, or availability. If availability is implied, prioritize answers that mention replication or geographic redundancy.
Question 3
What advantage does hosting workloads in a dedicated private cloud provide when compared with deploying to public hybrid or community cloud environments?
-
✓ C. Stronger security and control
The correct answer is Stronger security and control.
A dedicated private cloud gives an organization exclusive infrastructure which allows tighter security policies and full administrative control. This makes it easier to enforce network segmentation, manage encryption keys, control physical access, and apply customized host configurations that meet strict compliance requirements.
Because the hardware and tenancy are dedicated, there is less risk from noisy neighbors and multi-tenancy vulnerabilities. The environment also allows organizations to implement specific monitoring, patching, and access controls that would be difficult to guarantee in public or community clouds.
Lower total cost of ownership is incorrect because private clouds typically require capital expense for hardware and ongoing operational staff. Public cloud models often reduce upfront costs and can be cheaper for variable workloads.
Faster initial deployment is incorrect because public cloud platforms can provision services almost instantly. Private cloud deployments often take longer due to procurement, setup, and configuration of dedicated infrastructure.
Greater scalability is incorrect because public clouds usually offer the broadest on-demand scalability across regions and services. Private clouds can scale but they are constrained by the physical resources that an organization owns or leases.
Cameron’s Certification Exam Tip
When comparing cloud deployment models look for mentions of control and isolation. Those clues usually point to a private cloud answer.
Question 4
Which United States law is officially titled the Financial Services Modernization Act of 1999?
-
✓ B. Gramm Leach Bliley Act
The correct answer is Gramm Leach Bliley Act.
The Gramm Leach Bliley Act is formally titled the Financial Services Modernization Act of 1999 and it was enacted in 1999 to modernize aspects of the financial services industry. The law removed certain restrictions from earlier legislation and established privacy and information security obligations for financial institutions to protect consumers.
Dodd Frank Act is incorrect because that law was enacted in 2010 in response to the 2008 financial crisis and its formal title is the Dodd Frank Wall Street Reform and Consumer Protection Act rather than the Financial Services Modernization Act of 1999.
Sarbanes Oxley Act is incorrect because that statute was passed in 2002 to strengthen corporate governance and financial reporting and it is known by names associated with corporate accountability rather than the Financial Services Modernization Act of 1999.
Cameron’s Certification Exam Tip
When a question gives a specific year such as 1999, match that year to the law’s enactment date and recall whether the statute is primarily about financial modernization, privacy, or corporate governance.
Question 5
You are the cloud security lead at Meridian Cloud Services and you have found unauthorized alterations to your cloud setup that stray from defined security baselines and create weaknesses. Which security threat is most likely to arise from these unauthorized configuration changes?
-
✓ D. Security misconfiguration
The correct option is Security misconfiguration.
Security misconfiguration best matches the scenario because it describes unauthorized or incorrect settings that deviate from defined security baselines and introduce weaknesses across cloud services, permissions, network rules, and management interfaces. These kinds of changes are configuration issues by nature and they create attack surfaces that were not intended by the baseline.
Insufficient logging and monitoring is focused on the absence of detection and auditing controls and does not directly name unauthorized configuration changes. It can be a consequence of misconfiguration but it is not the primary classification of the described problem.
Insecure direct object references refers to improper access control where internal object identifiers are exposed and can be manipulated. That vulnerability is about access to objects and does not describe stray or altered configuration settings in the cloud environment.
Sensitive data exposure describes failures to protect data at rest or in transit and is an outcome that can result from many causes. While misconfiguration can lead to data exposure, the question specifically points to unauthorized changes to configuration and baselines which is best categorized as security misconfiguration.
Cameron’s Certification Exam Tip
When a question mentions deviations from baselines or unexpected settings across cloud components, think security misconfiguration because it covers incorrect or unauthorized configuration changes rather than specific data or logging issues.
Question 6
Which interfaces provide application features and allow administrators to manage a hosted cloud environment?
APIs is the correct option.
APIs provide programmatic interfaces that let applications invoke cloud features and let administrators automate provisioning, configuration, monitoring, and control of hosted resources. They expose endpoints and operations that SDKs and automation tools use so features can be integrated into applications and scripts without human interaction.
APIs are the foundation for infrastructure as code, CI/CD pipelines, and service integrations. They enable role based access, audit logging, and granular controls that administrators rely on to manage environments at scale.
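As an illustration only, a programmatic call to a cloud API might be assembled as below. The endpoint URL, path, and token are entirely made up for the sketch; real cloud SDKs wrap calls like this behind higher level clients:

```python
import json
import urllib.request

def build_provision_request(base_url, token, instance_spec):
    """Assemble (but do not send) an authenticated API call that would
    provision a resource, the kind of request automation tools issue
    instead of clicking through a management console."""
    body = json.dumps(instance_spec).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/instances",   # hypothetical endpoint
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",  # token based access, auditable
            "Content-Type": "application/json",
        },
    )

req = build_provision_request("https://api.example.com", "token-123",
                              {"name": "web-1", "size": "small"})
```

The same request could be issued by a script, a CI/CD pipeline, or an infrastructure as code tool, which is the programmatic quality that distinguishes APIs from the console.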
Management Console is a human oriented graphical interface that administrators use to view and change settings interactively. It does not itself provide the programmatic hooks that applications use to enable features or to automate large scale management, although the Management Console typically calls the underlying APIs.
Object Storage is a storage service for blobs and files and it represents a resource rather than a management interface. It may expose its own APIs for storing and retrieving objects but the option names a service and not the general interface used to enable application features and manage the overall hosted environment.
Cameron’s Certification Exam Tip
Focus on whether the choice describes programmatic access or automation. If it does, then APIs are usually the correct answer when the question asks about enabling application features and managing a hosted cloud environment.
Question 7
Which term describes the ability to independently verify the source and authenticity of data with a high degree of confidence?
Non-repudiation is correct because it names the property of being able to independently verify the source and authenticity of data with a high degree of confidence.
This property is typically achieved by combining cryptographic methods such as Digital signatures with secure key management and trusted timestamps. A signature shows that a specific private key signed the data and a certificate issued by a Public key infrastructure links that key to an identity which supports the non-repudiation claim.
Hashing is incorrect because a hash demonstrates content integrity but it does not prove who created or signed the data. A hash alone offers no evidence tying the data to a particular originator.
Public key infrastructure is incorrect because it is an ecosystem for issuing and managing keys and certificates that supports non-repudiation but it is not the term that describes the property itself.
Digital signatures is incorrect because signatures are the mechanism that can provide non-repudiation when used with proper key management and certificates. The question asked for the property that describes the ability to verify source and authenticity rather than the mechanism.
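To make the distinction concrete, here is a small standard library sketch. It shows why a hash alone proves only integrity, and uses a keyed MAC as a stand-in to show origin binding between two key holders. Note the MAC still falls short of non-repudiation, because either party holding the shared key could have produced the tag; true non-repudiation requires an asymmetric signature, which the Python standard library does not provide:

```python
import hashlib
import hmac

message = b"transfer $100 to account 42"

# A hash proves only that content is unchanged. Anyone can compute the
# same value, so it carries no evidence of who produced the message.
digest = hashlib.sha256(message).hexdigest()
forged = hashlib.sha256(message).hexdigest()  # an attacker computes it too
assert digest == forged

# A keyed MAC binds the message to a shared secret, authenticating origin
# between the two key holders, but a third party cannot tell WHICH holder
# produced it, so it is not non-repudiation.
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
```

A digital signature differs because only the signer holds the private key, so a verifier with the public key gains evidence that supports the non-repudiation property.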
Cameron’s Certification Exam Tip
When a question asks for a security property look for abstract goals such as non-repudiation rather than implementation mechanisms like digital signatures or hashing.
Question 8
Which responsibility typically falls outside a cloud provider’s service level agreement and therefore remains the responsibility of the organization?
-
✓ B. The organization’s internal audit calendar that describes the timing and scope of internal assessments
The organization’s internal audit calendar that describes the timing and scope of internal assessments is correct because internal audit planning is an internal governance activity that remains the responsibility of the organization rather than the cloud provider.
This answer is correct because internal audits concern organizational risk acceptance choices and compliance schedules and they depend on internal policies and audit scope decisions. The cloud provider can supply logs and evidence to support audits but the design and timing of an internal audit calendar stays with the customer.
Performance targets and measurement standards for the cloud services is incorrect because those items are commonly defined in the provider’s SLA and detail how the provider measures and reports service performance.
Infrastructure patching for provider managed systems is incorrect because patching of systems that the provider manages normally falls under the provider’s operational responsibilities or under a clear shared responsibility statement depending on the service model.
Agreed service uptime percentages and availability metrics the provider must meet is incorrect because uptime and availability metrics are classic SLA elements that the provider commits to and measures.
Cameron’s Certification Exam Tip
When you read SLA questions look for whether the task is an internal governance decision or a service delivery promise. Internal audit planning is typically a customer responsibility while uptime and managed system patching are usually addressed in provider SLAs and shared responsibility documents.
Question 9
Which action demonstrates data sanitization when decommissioning storage hardware?
-
✓ C. A technician crushed an old hard disk after replacing it so that its data could not be retrieved
The correct answer is A technician crushed an old hard disk after replacing it so that its data could not be retrieved.
Crushing a hard disk is a direct form of data sanitization because it physically destroys the storage media and prevents reconstruction of the platters and recovery of data. Physical destruction is an accepted sanitization method for decommissioning storage hardware when the device will not be reused and when irreversible removal of data is required.
An administrator changed their password every 90 days to reduce the chance of account compromise is incorrect because changing passwords protects user accounts and access controls and it does not remove or sanitize data on retired storage devices.
An engineer fitted new locks on the server cabinet to prevent unauthorized physical entry is incorrect because improving physical security can prevent theft or tampering but it does not sanitize or destroy data on hardware that is being decommissioned.
Google Cloud KMS is incorrect because a key management service handles encryption keys and access to encrypted data rather than physically removing or destroying data on decommissioned drives. While cryptographic erasure by destroying keys can serve as a sanitization method in some contexts, the KMS product name alone does not describe physical media sanitization.
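The idea behind cryptographic erasure can be sketched with a deliberately toy cipher. This XOR stream is for illustration only and is not real cryptography; actual crypto erasure relies on strong ciphers such as AES and on verifiably destroying the key material:

```python
import os

def xor_stream(data, key):
    """Toy cipher for illustration only. Real cryptographic erasure
    uses a strong cipher such as AES, not XOR."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(32)
plaintext = b"confidential record"
ciphertext = xor_stream(plaintext, key)

# If only the ciphertext is ever written to disk, then destroying the
# key renders the stored data unrecoverable. This is cryptographic
# erasure, an accepted alternative to physically crushing the media.
key = None
```

Physical destruction and cryptographic erasure both qualify as sanitization because neither leaves a practical path back to the original data.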
Cameron’s Certification Exam Tip
When a question asks about decommissioning storage think about concrete sanitization methods like physical destruction or cryptographic erasure rather than access controls or routine account maintenance.
Question 10
In which data protection process are masking, obfuscation, and anonymization used?
-
✓ B. Data deidentification
The correct answer is Data deidentification.
This process uses techniques such as masking, obfuscation, and anonymization to reduce or remove the ability to identify individuals from datasets. These techniques change or suppress identifying attributes so that the data can be used for analytics or testing without exposing personal identifiers.
Encryption is incorrect because it is a cryptographic transformation that protects confidentiality by making data unreadable without keys. Encryption does not remove identifying attributes and it is generally reversible with the correct key, so it is not categorized as the masking or anonymization process asked about here.
Tokenization is incorrect in this question because it replaces sensitive values with tokens that map back to the original value via a secure vault. Tokenization is often reversible and is treated as a distinct technique used for protecting specific values, so it is not the same category as the masking and anonymization methods referenced by data deidentification.
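A minimal sketch of two deidentification techniques follows. The function names and masking format are illustrative choices, not a standard, and the salted hash shown is pseudonymization rather than full anonymization because anyone holding the salt can still link values:

```python
import hashlib

def mask_email(email):
    """Masking: keep enough structure for testing or analytics
    while hiding the identifying part of the value."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def pseudonymize(value, salt):
    """Obfuscation via a salted hash. This is pseudonymization, not
    anonymization: with the salt, records remain linkable."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]
```

A masked dataset can be handed to a test team without exposing the original identifiers, which is the use case the question describes.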
Cameron’s Certification Exam Tip
When you see words about removing or reducing identifiability prefer answers mentioning deidentification or anonymization and avoid answers that focus on cryptographic protection like encryption.
Question 11
Which responsibility remains exclusively with the cloud customer regardless of the cloud deployment model they choose?
-
✓ D. Governance and compliance
The correct answer is Governance and compliance.
Governance and compliance remain the customer responsibility across all cloud deployment models because legal obligations and organizational policies cannot be delegated to a vendor. Providers can supply controls and compliance documentation, but the customer must set governance policies, classify data, accept risk, and demonstrate compliance to auditors and regulators.
Identity and access management controls is not exclusively the customer responsibility because providers operate and secure the IAM infrastructure and sometimes manage identities for customers. In many models IAM is shared between provider and customer depending on the service.
Application code and runtime is not exclusively the customer responsibility because platform and software as a service offerings place the runtime and application management with the provider. Responsibility shifts depending on whether the service is IaaS, PaaS, or SaaS.
Physical infrastructure and networking is not the customer responsibility because cloud providers own and operate the data center hardware and the underlying network. Those components are typically managed by the provider in all public cloud models.
Cameron’s Certification Exam Tip
When you see questions about shared responsibility focus on what cannot be transferred to a vendor such as legal obligations and policy decisions. Pay attention to whether the task is about controls and tools or about accepting risk and demonstrating compliance.
Question 12
Which audit report may be published to demonstrate a cloud provider’s security controls to the general public?
The correct answer is SOC 3.
SOC 3 is a general use attestation report that is explicitly designed to be shared publicly to demonstrate that an organization has effective controls related to security, availability, processing integrity, confidentiality, or privacy, depending on the engagement. It gives a high level attestation without the detailed control descriptions and testing evidence, so it is suitable for the general public.
SOC 2 is incorrect because that report contains detailed descriptions of controls and the auditor's testing results, and it is intended for customers and other stakeholders rather than broad public distribution. Organizations normally share SOC 2 reports under nondisclosure agreements.
ISO 27001 is incorrect because it is a certification of an information security management system rather than a public attestation report in the SOC format. A certificate or statement of certification may be published but the formal audit evidence and detailed reports are not typically released to the general public and the certificate does not serve as the same public assurance document as a SOC 3.
Cameron’s Certification Exam Tip
Look for the phrase general use versus restricted in the question. If the exam asks which report can be shared with the general public choose the general use report such as SOC 3.
Question 13
Which mechanism should a cloud administrator use to prioritize and allocate user resource requests when the infrastructure cannot satisfy all incoming demands?
Shares is correct because it provides a way to assign relative priority or weights so that when total demand exceeds available capacity the system divides resources proportionally according to those weights.
This method is used by hypervisors and operating systems to implement fair allocation under contention. Administrators set shares to express the importance of workloads without permanently reserving capacity and the scheduler uses those weights to prioritize and allocate scarce resources.
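The proportional behavior of shares can be sketched in a few lines. The function below is an illustrative model, not any particular hypervisor's scheduler:

```python
def allocate_by_shares(capacity, shares):
    """Divide scarce capacity among workloads in proportion to their
    share weights, the behavior schedulers apply under contention."""
    total = sum(shares.values())
    return {name: capacity * weight / total for name, weight in shares.items()}

# With 1200 units of capacity and prod weighted twice as heavily as
# test, prod receives two thirds of the capacity and test one third.
allocation = allocate_by_shares(1200, {"prod": 2, "test": 1})
```

Contrast this with a quota or limit, which would cap each workload at a fixed ceiling regardless of how much spare capacity remained.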
Quotas set maximum allowable usage per user or project and they prevent overconsumption but they do not define how remaining capacity is distributed among competing requests when resources are constrained.
Limits impose absolute caps on resource consumption for a given object and they prevent a single tenant from exceeding a ceiling but they do not provide a proportional prioritization mechanism for dividing scarce resources.
Reservations guarantee capacity for a specific tenant or workload by holding resources aside and they ensure availability for that entity but they are not a general method for prioritizing or fairly allocating resources among multiple competing requests.
Cameron’s Certification Exam Tip
Look for wording that implies proportional or weighted distribution when demand exceeds supply and choose the option that describes relative priority rather than fixed caps or holds.
Question 14
What security benefit does maintaining a comprehensive cloud archive and backup program provide for protecting data integrity and ensuring availability?
-
✓ B. Allows restoration to known good states after tampering or deletion
The correct answer is Allows restoration to known good states after tampering or deletion.
Allows restoration to known good states after tampering or deletion is correct because a comprehensive archive and backup program creates isolated copies and versioned snapshots so organizations can recover data integrity after corruption or malicious alteration and restore availability when primary data is lost.
A robust backup program typically includes immutability, offsite or logically separated copies, checksums or other integrity checks, and regular restore testing. These elements together make it possible to perform reliable point in time recovery and meet recovery time and recovery point objectives.
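The checksum element mentioned above can be sketched as follows. The helper names are illustrative; real backup tools record digests in a catalog alongside each backup set:

```python
import hashlib

def checksum(data):
    """Digest recorded at backup time so later restores can be verified."""
    return hashlib.sha256(data).hexdigest()

# At backup time, capture a checksum of the known good data.
backup = b"customer-records-v1"
recorded = checksum(backup)

def verify_restore(restored_data, recorded_checksum):
    """A restore counts as a known good state only if it matches the
    checksum taken before the tampering or deletion occurred."""
    return checksum(restored_data) == recorded_checksum
```

A restore that fails this check signals that the copy itself was corrupted or altered, which is why restore testing belongs in the program alongside the backups.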
Identity and access management is incorrect because IAM focuses on controlling who can access resources and what actions they can perform. IAM does not by itself provide historical copies or point in time recovery needed to restore data after tampering or deletion.
Supports regulatory compliance reporting is incorrect because although archives and backups can assist with audits and evidence retention, their primary security benefit for integrity and availability is the ability to recover data. Compliance reporting is a secondary outcome rather than the core recovery capability.
Question 15
Which statement accurately describes who is responsible for maintenance and version control across cloud service models such as IaaS, PaaS, and SaaS?
-
✓ C. In PaaS, the cloud customer maintains and versions the applications they buy or build, while the provider maintains the platform, tools, and underlying infrastructure
The correct option is In PaaS, the cloud customer maintains and versions the applications they buy or build, while the provider maintains the platform, tools, and underlying infrastructure.
This is correct because in PaaS the cloud provider supplies and maintains the platform runtime, middleware, operating system, virtualization, and physical hardware while the customer is responsible for their own application code, dependencies, deployments, updates, and version control. The customer therefore manages application maintenance and versioning and the provider manages the platform and infrastructure maintenance.
The cloud provider arranges update and patch schedules with clients for both SaaS and PaaS is incorrect because providers do not arrange or perform application level update schedules for customer built applications on PaaS. Providers handle platform and SaaS application patches, but customers control updates and versioning for their own applications on PaaS.
In an IaaS deployment the cloud customer manages hardware networking storage and the virtualization layer is incorrect because in IaaS the provider is responsible for the physical hardware, networking, storage, and the hypervisor or virtualization layer. The customer manages the operating systems, middleware, runtimes, and applications running on the provisioned virtual machines.
In a SaaS offering the customer is responsible for maintenance and versioning of every component is incorrect because SaaS vendors deliver and maintain the application and its underlying components. Customers generally only manage their data, configuration choices, and access controls rather than the application code or platform stack.
Cameron’s Certification Exam Tip
When you see service model questions map each layer from hardware up to application and ask who controls the application code. Remember that in IaaS the customer controls the OS and apps, in PaaS the customer controls the apps only, and in SaaS the provider controls almost everything.
Question 16
Which capability do information rights management systems not typically provide?
The correct answer is Deletion.
Information rights management systems are designed to protect content by controlling who can open files and what actions they can take on them. They implement encryption, apply usage policies, and allow revocation and expiration of rights, but they do not typically perform a guaranteed Deletion that removes every copy of a file from every device and storage location.
Some vendors may offer integrations with device management or remote wipe tools to help remove files from managed endpoints and those features can approximate Deletion. Those features are separate from the core IRM capability and they depend on endpoint management rather than rights enforcement alone.
User authentication is incorrect because IRM solutions depend on authenticating users so that rights can be bound to identities and enforced.
Policy enforcement is incorrect because enforcing access and usage policies is a primary function of IRM and is how the systems control printing, copying, editing, and expiration of protected content.
Cameron’s Certification Exam Tip
When answering, separate features that control and restrict data from those that remove data. IRM is about enforcing access and usage policies and revoking rights, and it does not usually guarantee complete removal of all copies without endpoint management.
Question 17
You are a cloud architect responsible for a regional payment platform’s infrastructure and you must design the environment to remain operational when individual components fail. Which architectural principle should be prioritized to ensure continuous service availability?
-
✓ C. Distributed redundancy
The correct option is Distributed redundancy.
Distributed redundancy is the architectural principle that replicates critical components across independent failure domains so the service can continue when individual parts fail. This includes placing instances in multiple availability zones or regions, using load balancers and health checks for automatic failover, and keeping data replicated so clients can be served from other replicas without interruption.
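The failover behavior described above can be sketched simply. The replica names and the health check callback below are made up for the illustration; real load balancers poll health endpoints rather than consult an in-memory set:

```python
def pick_healthy_replica(replicas, is_healthy):
    """Route traffic to the first replica whose health check passes.
    The service survives individual component failures as long as at
    least one replica in any failure domain remains healthy."""
    for replica in replicas:
        if is_healthy(replica):
            return replica
    raise RuntimeError("all replicas failed")

# Replicas spread across three availability zones; zone-a has failed.
replicas = ["zone-a/payments-1", "zone-b/payments-2", "zone-c/payments-3"]
down = {"zone-a/payments-1"}
chosen = pick_healthy_replica(replicas, lambda r: r not in down)
```

The key design point is that the replicas live in independent failure domains, so a single zone outage cannot take out every candidate at once.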
Centralized operations management is important for monitoring and coordination but it concentrates control and does not by itself remove single points of failure in the runtime path. It helps operations but it does not guarantee that the service stays up when components fail.
Isolated tenant environments improve security and limit the blast radius between tenants, but isolation alone does not create redundant replicas or automatic failover that preserve availability when components fail.
Elastic scaling helps handle changes in load by adding or removing capacity, but scaling addresses capacity and performance more than resilience to individual component failure. Elastic scaling can complement redundancy but it is not the primary design principle for surviving component failures.
Cameron’s Certification Exam Tip
When you see wording about staying operational despite component failures look for answers that mention replication, multiple failure domains, or automatic failover. Those phrases usually point to availability through redundancy rather than scaling or centralization.
Question 18
What is a significant drawback of storing data fragments in multiple legal jurisdictions?
-
✓ C. Cross border data movement
Cross border data movement is the correct answer. Dispersing data fragments across multiple legal jurisdictions can create serious legal and compliance challenges because different countries have different rules about data protection, lawful access, and export of personal information. The movement of fragments across borders can trigger data transfer restrictions, require contractual safeguards or approvals, and expose the data to foreign legal process even when no single location holds the complete data set.
For example, regulators may treat reassembled or reconstructible fragments as subject to local privacy laws and oversight. Organizations that fragment data to improve resilience can still face obligations under regimes such as the GDPR and similar national laws when any fragment crosses a jurisdictional boundary. Those obligations often demand technical, contractual, or legal safeguards and can significantly complicate a deployment compared with keeping data within a single legal territory.
Reconstruction and reassembly overhead is not the best answer because while there is some technical cost to reassembling fragments, that is primarily a performance and architectural concern and not the major legal drawback posed by storing fragments in different countries.
Distributed erasure coding is not correct because it names a technique used to disperse data rather than a drawback. Erasure coding is often chosen to improve durability and availability and it is not inherently a legal issue.
More complex key lifecycle management is not the best choice because key management can be made consistent with centralised key services or hardware security modules. Key lifecycle complexity is a technical and operational challenge but it does not capture the primary legal risk that arises when fragments cross national borders.
Cameron’s Certification Exam Tip
When you see answers that mention jurisdiction, legal, or cross border risk, think about compliance and data sovereignty first because these concerns often outweigh pure technical costs on regulatory exams.
Question 19
A technology company named Meridian Cloud is adopting OpenID Connect for single sign on and wants to know which authorization framework OpenID Connect is built on and uses to authenticate users.
The correct answer is OAuth 2.0.
OpenID Connect is an identity layer built on top of OAuth 2.0 and it uses the OAuth 2.0 authorization flows to authenticate users and issue tokens such as the ID token and access token. OpenID Connect adds standardized identity claims and discovery on top of the OAuth 2.0 framework and commonly relies on the authorization code flow for secure authentication.
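The relationship is visible in the authorization request itself. The sketch below builds an OIDC authorization code flow request; the endpoint, client id, and redirect URI are placeholders for the example:

```python
from urllib.parse import urlencode

def build_oidc_auth_request(authorize_endpoint, client_id, redirect_uri,
                            state, nonce):
    """OIDC reuses the OAuth 2.0 authorization code flow. The 'openid'
    scope and the nonce are the additions OIDC layers on top."""
    params = {
        "response_type": "code",          # OAuth 2.0 authorization code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",  # 'openid' marks an OIDC request
        "state": state,                   # OAuth 2.0 CSRF protection
        "nonce": nonce,                   # OIDC replay protection for the ID token
    }
    return authorize_endpoint + "?" + urlencode(params)

url = build_oidc_auth_request("https://idp.example.com/authorize",
                              "meridian-app", "https://app.example.com/cb",
                              "st-1", "n-1")
```

Strip out the openid scope and the nonce and what remains is a plain OAuth 2.0 request, which is why OIDC is described as an identity layer on top of OAuth 2.0 rather than a separate protocol.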
WS Federation is an older Microsoft web services federation protocol and it is not the base protocol for OpenID Connect. Microsoft and other vendors have moved toward OAuth 2.0 and OpenID Connect for modern single sign on so WS Federation is less relevant on newer exams.
LDAP is a directory access protocol used for querying and managing directory information and it is not an authorization framework that OpenID Connect is built on. LDAP serves different purposes and does not provide the OAuth style grants that OIDC depends on.
SAML 2.0 is an XML based federation and single sign on standard that operates separately from OpenID Connect. SAML 2.0 can be used for SSO in many environments but it is not the underlying authorization framework for OpenID Connect which is built on OAuth 2.0.
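To make the relationship concrete, an OpenID Connect authentication request is simply an OAuth 2.0 authorization code request with the openid scope and a nonce added. The sketch below builds such a request URL; the endpoint, client ID, and redirect URI are hypothetical placeholders, not real Meridian Cloud values.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and client values for illustration only.
AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"

params = {
    "response_type": "code",          # standard OAuth 2.0 authorization code flow
    "client_id": "meridian-app",      # hypothetical client registration
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile email",  # "openid" turns this into an OIDC request
    "state": "af0ifjsldkj",           # CSRF protection, random per request
    "nonce": "n-0S6_WzA2Mj",          # binds the resulting ID token to this request
}

auth_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(auth_url)
```

The openid scope and the nonce are what OIDC layers on top of the plain OAuth 2.0 code flow, and the subsequent token response then returns an ID token alongside the usual access token.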
Cameron’s Certification Exam Tip
When you see OpenID Connect on an exam think of an identity layer on top of an OAuth style authorization framework and match it to the OAuth based option rather than directory or XML federation protocols.
Question 20
Which type of incident describes sensitive records being disclosed to someone who is not authorized to receive them?
-
✓ B. Unauthorized data exposure
The correct answer is Unauthorized data exposure.
Unauthorized data exposure describes an incident in which sensitive records are disclosed to someone who does not have permission to see them. This covers accidental exposures from misconfigured permissions and deliberate disclosures where confidentiality is broken, so it matches the description exactly.
Insider threat is not the best choice because it describes the actor or source of risk rather than the specific incident type. An insider threat can cause an unauthorized exposure but the phrase does not specifically mean records were disclosed without authorization.
Data loss is also incorrect because it usually refers to loss of access, deletion, or destruction of data rather than the unauthorized disclosure of sensitive records to an outside or unauthorized party.
Cameron’s Certification Exam Tip
When a question mentions words like disclosed or exposed focus on confidentiality related incident types rather than options that describe actors or availability issues.
Question 21
When securing the management plane of a cloud deployment which factor is least important to prioritize?
When securing the management plane of a cloud deployment, the least important factor to prioritize is Data backups.
Data backups are essential for resilience and for restoring data and configurations after loss or corruption. They do not, however, directly reduce the risk of unauthorized access to control interfaces or credentials, so they are a lower priority when the question is specifically about hardening the management plane.
Network isolation for management interfaces is critical because isolating management networks reduces the attack surface and limits lateral movement. Strong network controls ensure that only authorized administrators can reach management endpoints and they are therefore a high priority for management plane security.
Identity and access management controls are essential because they determine who can perform management actions and how credentials are protected. Enforcing least privilege and multifactor authentication directly defends the management plane and makes IAM a top priority.
Management activity logging is important because it provides detection and forensic capability for suspected misuse or compromise of management interfaces. Audit logs enable timely response and investigation so logging is also a high priority for the management plane.
Cameron’s Certification Exam Tip
For “least important” questions, ask whether the control prevents compromise or supports recovery. Prioritize prevention controls such as isolation, strong IAM, and logging, and treat backups as recovery focused.
Question 22
Which type of organization is subject to FISMA requirements and would be assessed by a third party security assessor?
Government agency is correct.
FISMA is a federal law that requires federal agencies to develop, document, and implement an information security program and to undergo periodic independent assessments. A government agency is the entity that is directly subject to FISMA and that must engage or be assessed by a third party security assessor as part of its compliance and authorization processes.
Cloud service provider is not the primary entity covered by FISMA. Cloud providers that host federal systems may be assessed under FedRAMP when they serve federal customers, but the FISMA responsibility and formal authorization rests with the federal agency that owns the system.
Healthcare provider is generally regulated by laws such as HIPAA and HITECH and is not directly subject to FISMA unless it is a federal health agency. Private healthcare organizations are not assessed under FISMA in the same way that federal agencies are assessed.
Cameron’s Certification Exam Tip
When a question asks about FISMA think about whether the organization is a federal agency. FISMA applies to federal information systems and requires independent assessments rather than applying directly to most private sector organizations.
Question 23
In cloud security it is important to distinguish between incidents that destroy or make data unavailable and incidents that expose data to unauthorized parties. Which of the following scenarios is misclassified as data loss and should instead be treated as a data breach?
-
✓ B. Sensitive data stolen by attackers exploiting vulnerabilities in an application
Sensitive data stolen by attackers exploiting vulnerabilities in an application is correct because it describes confidential information being accessed and taken by unauthorized parties and therefore should be treated as a data breach.
This scenario involves external actors exploiting a vulnerability to exfiltrate information, which means confidentiality has been violated. A data breach requires containment of the attacker and triggers notification obligations when sensitive information has been exposed to unauthorized parties.
An administrator unintentionally deletes rows from a production database is wrong because that scenario describes accidental destruction or unavailability of data rather than exposure to outsiders. The primary response is recovery and improving change controls and backups rather than breach notification.
A misconfigured Cloud Storage lifecycle rule causing objects to be permanently removed is wrong because the objects are being removed and thus become unavailable or destroyed. This is a data loss incident that requires restoration and configuration fixes and not a breach unless the objects were also disclosed to unauthorized parties.
Loss of encryption keys that makes stored backups unreadable is wrong in this phrasing because the data becomes unusable and unavailable and confidentiality may remain intact. This is a data loss event. If the keys were stolen or otherwise exposed then it could become a breach but loss alone without exposure is not a breach.
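The loss versus breach distinction applied above can be captured in a tiny triage rule. This is an illustrative sketch with made-up category names, not an official incident taxonomy:

```python
def classify_incident(exposed_to_unauthorized: bool,
                      destroyed_or_unavailable: bool) -> str:
    """Toy triage rule: any exposure means breach, regardless of loss."""
    if exposed_to_unauthorized:
        return "data breach"   # confidentiality violated -> breach handling
    if destroyed_or_unavailable:
        return "data loss"     # availability impact only -> recovery focus
    return "no incident"

# Attackers exfiltrated records through an application vulnerability:
print(classify_incident(True, False))   # -> data breach
# Lost encryption keys made backups unreadable, nothing was disclosed:
print(classify_incident(False, True))   # -> data loss
```

Note that exposure dominates: an incident that both destroys and discloses data is still handled as a breach because the notification duties attach to the disclosure.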
Cameron’s Certification Exam Tip
When deciding between loss and breach ask whether data was exposed to unauthorized parties. If exposure occurred treat it as a breach. If data was destroyed or rendered unreadable treat it as loss.
Question 24
What term describes a cloud platform that automatically scales compute and storage resources to match changing workload demands?
Rapid elasticity is correct because it explicitly refers to a cloud platform automatically scaling compute and storage to match changing workload demands.
Rapid elasticity describes the capability to quickly add or remove resources such as compute instances and storage so capacity aligns with demand. This characteristic emphasizes automatic and dynamic scaling that can occur without manual intervention and it is the cloud trait that maps directly to the scenario in the question.
Resource pooling is incorrect because it refers to the provider pooling computing resources to serve multiple consumers and it focuses on multi-tenancy and efficient resource use rather than automatic scaling behavior.
On-demand self-service is incorrect because it means customers can provision resources themselves without human interaction from the provider and it does not by itself describe automatic, dynamic scaling to match changing workloads.
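The automatic scaling behavior that defines rapid elasticity can be sketched as a toy policy. The function, per-instance capacity, and bounds below are illustrative assumptions, not any provider's actual autoscaler:

```python
import math

def desired_instances(current_load: float, capacity_per_instance: float,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """Toy elasticity policy: run just enough instances to cover the
    current load, clamped between a floor and a ceiling."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# Demand rises and falls; capacity follows automatically without
# manual intervention, which is the essence of rapid elasticity.
for load in (50, 480, 2500, 120):
    print(load, "->", desired_instances(load, capacity_per_instance=100))
```

Scaling up and back down as load changes is what distinguishes elasticity from merely provisioning resources on demand.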
Cameron’s Certification Exam Tip
When you see a question about automatic scaling look for the term elasticity or wording about dynamically adding and removing resources rather than terms about provisioning or pooling.
Question 25
While negotiating terms with a prospective client at BluePeak Cloud you are clarifying policies about how long customer records are retained and how they are securely erased when they are no longer required. Which section of a service level agreement is most relevant to these concerns?
The correct option is Data governance.
Data governance is the SLA section that defines how customer data is managed across its lifecycle. This section typically specifies retention periods, legal and regulatory obligations, the responsibilities of the provider and the customer, and the required processes for secure erasure or return of records when they are no longer needed.
Incident response procedures is about detecting, reporting, and remediating security incidents and breaches. It does not set retention schedules or describe how customer records are securely erased.
Service performance targets cover availability levels, throughput, response times, and other operational metrics. They do not address data lifecycle management or deletion practices.
Billing and invoice terms govern pricing, invoicing cycles, payment methods, and dispute resolution for charges. They do not define data retention policies or secure disposal requirements.
Cameron’s Certification Exam Tip
Scan the question for keywords like retention, deletion, or lifecycle. Those terms usually point to the data governance section of an SLA rather than performance or billing sections.
Question 26
Which of the following is an example of an Internet of Things device found in a home?
-
✓ B. Connected refrigerator that sends a shopping list to the owner’s phone
The correct answer is Connected refrigerator that sends a shopping list to the owner’s phone.
This is an Internet of Things device because it is a physical appliance with embedded sensors and network connectivity that communicates state and actionable data to a user and other systems. The connected refrigerator monitors inventory or usage and sends the shopping list to the owner’s phone over the network, which matches the common definition of an IoT device in a home.
Google Cloud Pub/Sub is incorrect because it is a cloud messaging service used to move data between applications and services rather than a physical device in a home. It can be part of an IoT solution for transporting messages but it is not itself an IoT device.
A system that infers and carries out tasks without being explicitly programmed is incorrect because that describes artificial intelligence or autonomous software rather than a tangible home device. Such systems may augment IoT devices but the statement does not describe an example of a home IoT device.
Cameron’s Certification Exam Tip
When asked to identify an IoT device look for a real world object with sensors and network connectivity that interacts with people or other devices and not a cloud service or a general AI description.
Question 27
When a web service uses SOAP to exchange data what structure does it wrap the message in?