What business analysis deliverable would be an essential input when designing an audit log report?
Access Control Requirements
Risk Log
Future State Business Process
Internal Audit Report
Designing an audit log report requires clarity on who is allowed to do what, which actions are considered security-relevant, and what evidence must be captured to demonstrate accountability. Access Control Requirements are the essential business analysis deliverable because they define roles, permissions, segregation of duties, privileged functions, approval workflows, and the conditions under which access is granted or denied. From these requirements, the logging design can specify exactly which events must be recorded, such as authentication attempts, authorization decisions, privilege elevation, administrative changes, access to sensitive records, data exports, configuration changes, and failed access attempts. They also help determine how logs should attribute actions to unique identities, including service accounts and delegated administration, which is critical for auditability and non-repudiation.
Access control requirements also drive necessary log fields and report structure: user or role, timestamp, source, target object, action, outcome, and reason codes for denials or policy exceptions. Without these requirements, an audit log report can become either too sparse to support investigations and compliance, or too noisy to be operationally useful.
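The log fields listed above can be sketched as a minimal record layout. This is an illustrative sketch only, assuming a Python implementation; the field names come from the list in the explanation, not from any prescribed standard:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditLogRecord:
    """One security-relevant event; all field names are illustrative."""
    user: str              # unique identity, including service accounts
    role: str              # role or entitlement active at the time of the action
    timestamp: str         # ISO 8601, ideally from a trusted clock source
    source: str            # originating host, IP address, or terminal
    target: str            # object acted upon (record, file, configuration item)
    action: str            # e.g. "login", "read", "export", "privilege-elevation"
    outcome: str           # "allowed" or "denied"
    reason_code: str = ""  # populated for denials or policy exceptions

# A hypothetical denied export, attributed to a service account:
record = AuditLogRecord(
    user="svc-reporting",
    role="report-admin",
    timestamp=datetime.now(timezone.utc).isoformat(),
    source="10.0.4.17",
    target="customer_ledger",
    action="export",
    outcome="denied",
    reason_code="SOD-VIOLATION",
)
```

Structuring each event this way keeps the report queryable by identity, target, and outcome, which is what investigations and compliance reviews need.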
A risk log can influence priorities, but it does not define the authoritative set of access events and entitlements that must be auditable. A future state process can provide context, yet it is not as precise as access rules for determining what to log. An internal audit report may highlight gaps, but it is not the primary design input compared to formal access control requirements.
Why would a Business Analyst include current technology when documenting the current state business processes surrounding a solution being replaced?
To ensure the future state business processes are included in user training
To identify potential security impacts to integrated systems within the value chain
To identify and meet internal security governance requirements
To classify the data elements so that information confidentiality, integrity, and availability are protected
A Business Analyst documents current technology in the “as-is” state because business processes are rarely isolated; they depend on applications, interfaces, data exchanges, identity services, and shared infrastructure. From a cybersecurity perspective, replacing one solution can unintentionally change trust boundaries, authentication flows, authorization decisions, logging coverage, and data movement across integrated systems. Option B is correct because understanding the current technology landscape helps identify where security impacts may occur across the value chain, including upstream data providers, downstream consumers, third-party services, and internal platforms that rely on the existing system.
Cybersecurity documents emphasize that integration points are common attack surfaces. APIs, file transfers, message queues, single sign-on, batch jobs, and shared databases can introduce risks such as broken access control, insecure data transmission, data leakage, privilege escalation, and gaps in monitoring. If the BA captures current integrations, dependencies, and data flows, the delivery team can properly perform threat modeling, define security requirements, and avoid breaking compensating controls that other systems depend on. This also supports planning for secure decommissioning, migration, and cutover, ensuring credentials, keys, service accounts, and network paths are rotated or removed appropriately.
The other options are less precise for the question. Training is not the core driver for documenting current technology. Governance requirements apply broadly but do not explain why current tech must be included. Data classification is important, but it is a separate activity from capturing technology dependencies needed to assess integration security impacts.
Recovery Point Objectives and Recovery Time Objectives are based on what system attribute?
Sensitivity
Vulnerability
Cost
Criticality
Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are continuity and resilience targets that define how quickly a system must be restored and how much data loss is acceptable after an interruption. These objectives are derived primarily from system criticality, meaning how essential the system is to business operations, safety, revenue, legal obligations, and customer commitments. Highly critical systems support mission-essential functions or time-sensitive services, so they require shorter RTOs (restore fast) and smaller RPOs (lose little or no data). Less critical systems can tolerate longer outages and larger data gaps, allowing longer RTOs and RPOs.
Cybersecurity and business continuity documents tie RTO/RPO determination to business impact analysis results. The BIA identifies maximum tolerable downtime, operational dependencies, and the consequences of service disruption and data unavailability. From there, organizations set RTO/RPO targets that align with risk appetite and required service levels. Those targets then drive technical and operational controls such as backup frequency, replication methods, high availability architecture, failover design, disaster recovery procedures, monitoring, and routine recovery testing.
Sensitivity focuses on confidentiality needs and may influence encryption and access controls, but it does not directly define acceptable downtime or data loss. Vulnerability describes weakness exposure and is used for threat/risk management, not recovery objectives. Cost is a constraint when selecting recovery solutions, but RTO/RPO are defined by business need and system importance first—then solutions are chosen to meet those targets within budget.
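The criticality-to-objective relationship can be sketched as a simple tiering table. The tier names and the specific RTO/RPO numbers below are invented for illustration; real targets must come from a business impact analysis, not from a lookup table:

```python
# Illustrative tiers only: actual targets are set by business impact analysis.
RECOVERY_TIERS = {
    # criticality rating: (RTO in hours, RPO in minutes)
    "mission-critical": (1, 0),      # restore within the hour, near-zero data loss
    "business-critical": (4, 15),
    "important": (24, 240),
    "deferrable": (72, 1440),
}

def recovery_targets(criticality: str) -> tuple:
    """Look up the (RTO, RPO) pair implied by a system's criticality rating."""
    return RECOVERY_TIERS[criticality]

rto_hours, rpo_minutes = recovery_targets("mission-critical")
```

Note the direction of the relationship: as criticality rises, both numbers shrink, which in turn drives more expensive controls such as replication and hot failover.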
If a threat is expected to have a serious adverse effect, according to NIST SP 800-30 it would be rated with a severity level of:
moderate.
severe.
severely low.
very severe.
NIST SP 800-30 Rev. 1 defines qualitative risk severity levels using consistent impact language. In its assessment scale, “Moderate” is explicitly tied to events that can be expected to have a serious adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
A “serious adverse effect” is described as outcomes such as a significant degradation in mission capability where the organization can still perform its primary functions but with significantly reduced effectiveness, significant damage to organizational assets, significant financial loss, or significant harm to individuals that does not involve loss of life or life-threatening injuries. This phrasing is used to distinguish “Moderate” from “Low” (limited adverse effect) and from “High” (severe or catastrophic adverse effect).
This classification matters in enterprise risk because it drives prioritization and control selection. A “Moderate” rating typically triggers stronger treatment actions than “Low,” such as tighter access controls, enhanced monitoring, more frequent vulnerability remediation, stronger configuration management, and improved incident response readiness. It also helps leaders compare risks consistently across systems and business processes by anchoring severity to clear operational and harm-based criteria rather than subjective judgment.
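The mapping between impact language and severity level can be written down directly. The wording below paraphrases the five-level scale in NIST SP 800-30 Rev. 1 (Appendix H); treating it as a dictionary is just a convenient way to show that the phrases are fixed anchors, not subjective judgments:

```python
# Paraphrased from the NIST SP 800-30 Rev. 1 qualitative impact scale.
ADVERSE_EFFECT_TO_LEVEL = {
    "negligible": "very low",
    "limited": "low",
    "serious": "moderate",
    "severe or catastrophic": "high",
    "multiple severe or catastrophic": "very high",
}

def severity_for(adverse_effect: str) -> str:
    """Map the impact phrase used in an assessment to its severity level."""
    return ADVERSE_EFFECT_TO_LEVEL[adverse_effect]

level = severity_for("serious")   # the case asked about in the question
```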
Which of the following is a cybersecurity risk that should be addressed by business analysis during solution development?
Project budgets may prevent developers from implementing the full set of security measures
QA may fail to identify all possible security vulnerabilities during system testing
The solution may not be understood well enough to reliably identify security risks
Code may be implemented in ways that introduce new vulnerabilities
Business analysis is responsible for ensuring the solution is correctly understood in terms of business purpose, process flows, data handling, user roles, integrations, and non-functional requirements such as security and privacy. If the solution is not understood well enough, security risks will be missed early, leading to gaps that are expensive and difficult to correct later. This is why option C is the best answer: inadequate understanding prevents reliable identification of threats, sensitive data paths, trust boundaries, and misuse cases during requirements and design stages.
Cybersecurity documents emphasize “security by design” and “shift-left” practices, meaning risks should be identified and addressed before build and test. Business analysis contributes by eliciting and documenting security requirements, clarifying data classification and retention needs, defining user access and privilege expectations, identifying regulatory and policy constraints, and ensuring interfaces and third-party dependencies are known and assessed. BA also supports threat modeling inputs by providing accurate context about actors, workflows, and data movement, which are essential for identifying where controls like authentication, authorization, logging, encryption, and validation must exist.
Other options align to different roles or stages: budgets are governance and project management constraints, QA limitations are testing risks, and coding-introduced vulnerabilities are primarily addressed through secure coding standards, code review, and developer practices. BA’s key cybersecurity risk is incomplete understanding that prevents correct security requirements and risk identification.
If a system contains data with differing security categories, how should this be addressed in the categorization process?
Security for the system should be in line with the highest impact value across all categories
The data should be segregated across multiple systems so that they can have the appropriate security level for each
The data types should be merged into a single category and reevaluated
Security for the system should be in line with the lowest impact value across all categories
When a system processes multiple information types with different security categorizations, cybersecurity standards require the system’s overall security categorization to reflect the highest impact level among those information types. This is commonly called the high-water mark approach. The reason is straightforward: the system is only as secure as the protection applied to the most sensitive or most mission-critical data it handles. If the system were categorized at the lowest impact value, an attacker could target the weaker control baseline and still reach higher-impact information, creating an unacceptable gap in confidentiality, integrity, or availability protection.
In practice, categorization evaluates the potential impact of loss for each of the three security objectives and then selects the highest level for each objective across all information types handled by the system. That resulting system categorization then drives control selection, assurance activities, and the rigor of monitoring and incident response expectations. This approach also supports consistent governance: it prevents under-protecting systems that contain a mix of low and high sensitivity information and aligns control strength with worst-case business impact.
Segregating data across systems can be a valid architecture decision to reduce cost or scope, but it is not the required categorization rule; it is an optional design strategy that must be justified and implemented securely. Merging categories or using the lowest value contradicts risk-based protection principles and would likely fail compliance and audit scrutiny.
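The per-objective selection described above can be sketched as a small function. This is a simplified illustration of the high-water mark idea (FIPS 199 style), assuming impact levels are expressed as the strings "low", "moderate", and "high":

```python
def system_categorization(info_types):
    """High-water mark: for each security objective, take the highest
    impact level across all information types handled by the system."""
    order = {"low": 0, "moderate": 1, "high": 2}
    result = {}
    for objective in ("confidentiality", "integrity", "availability"):
        result[objective] = max(
            (t[objective] for t in info_types),
            key=lambda level: order[level],
        )
    return result

# Two hypothetical information types on one system:
cat = system_categorization([
    {"confidentiality": "low",  "integrity": "moderate", "availability": "low"},
    {"confidentiality": "high", "integrity": "low",      "availability": "moderate"},
])
# The system inherits the worst case per objective, not per information type.
```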
What is defined as an internal computerized table of access rules regarding the levels of computer access permitted to login IDs and computer terminals?
Access Control List
Access Control Entry
Relational Access Database
Directory Management System
An Access Control List (ACL) is a structured, system-maintained list of authorization rules that specifies who or what is allowed to access a resource and what actions are permitted. In many operating systems, network devices, and applications, an ACL functions as an internal table that maps identities such as user IDs, group IDs, service accounts, or even device/terminal identifiers to permissions like read, write, execute, modify, delete, or administer. When a subject attempts to access an object, the system consults the ACL to determine whether the requested operation should be allowed or denied, enforcing the organization’s security policy at runtime.
The description in the question matches the classic definition of an ACL as a computerized table of access rules tied to login IDs and sometimes the originating endpoint or terminal context. ACLs are central to implementing discretionary access control and are also widely used in networking (for example, permitting or denying traffic flows based on source/destination and ports) and file systems (controlling access to folders and files).
An Access Control Entry (ACE) is only a single line item within an ACL (one rule for one subject). A “Relational Access Database” is not a standard security control term for authorization tables. A “Directory Management System” manages identities and groups, but it is not the same as the enforcement list attached to a specific resource. Therefore, the correct answer is Access Control List.
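The "internal computerized table" idea can be sketched in a few lines. The object name, subjects, and permissions below are hypothetical; the point is the lookup pattern, where each inner mapping entry plays the role of an ACE within the ACL:

```python
# A minimal ACL: each entry (an "ACE") maps a subject to the actions
# it may perform on an object. Subjects can be login IDs or terminals.
ACL = {
    "payroll_file": {
        "alice": {"read", "write"},
        "bob": {"read"},
        "terminal-07": {"read"},   # endpoint/terminal identifiers can appear too
    },
}

def is_permitted(subject: str, obj: str, action: str) -> bool:
    """Consult the ACL at access time; default-deny when no entry matches."""
    return action in ACL.get(obj, {}).get(subject, set())
```

The default-deny fallthrough is deliberate: a subject with no matching entry gets nothing, which mirrors how ACL enforcement behaves in practice.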
Public & Private key pairs are an example of what technology?
Virtual Private Network
IoT
Encryption
Network Segregation
Public and private key pairs are the foundation of asymmetric encryption, also called public key cryptography. In this model, each entity has two mathematically related keys: a public key that can be shared widely and a private key that must be kept secret. The keys are designed so that what one key does, only the other key can undo. This enables two core security functions used throughout cybersecurity architectures.
First, confidentiality: data encrypted with a recipient’s public key can only be decrypted with the recipient’s private key. This allows secure communication without having to share a secret key in advance, which is especially important on untrusted networks like the internet. Second, digital signatures: a sender can sign data with their private key, and anyone can verify the signature using the sender’s public key. This provides authenticity (proof the sender possessed the private key), integrity (the data was not altered), and supports non-repudiation when combined with proper key custody and audit practices.
These mechanisms underpin widely used security controls such as TLS for secure web connections, secure email standards, code signing, and certificate-based authentication. A VPN may use public key cryptography during key exchange, but the key pair itself is specifically an encryption technology. IoT and network segregation are unrelated categories.
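The "what one key does, only the other can undo" property can be demonstrated with textbook RSA on deliberately tiny numbers. This is a teaching toy only and is completely insecure; real systems use vetted libraries with keys of 2048 bits or more:

```python
# Toy RSA with classic textbook primes. NEVER use this for real security.
p, q = 61, 53
n = p * q            # modulus, shared by both keys
e = 17               # public exponent: (e, n) is the public key
d = 2753             # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

message = 65                         # a message encoded as an integer < n
ciphertext = pow(message, e, n)      # "encrypt with the recipient's public key"
recovered = pow(ciphertext, d, n)    # "decrypt with the recipient's private key"
# recovered equals the original message; only the holder of d can undo
# what the public key (e, n) did, which is the confidentiality function.
# Swapping the exponents (sign with d, verify with e) gives the signature function.
```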
What common mitigation tool is used for directly handling or treating cyber risks?
Exit Strategy
Standards
Control
Business Continuity Plan
In cybersecurity risk management, risk treatment is the set of actions used to reduce risk to an acceptable level. The most common tool used to directly treat or mitigate cyber risk is a control because controls are the specific safeguards that prevent, detect, or correct adverse events. Cybersecurity frameworks describe controls as measures implemented to reduce either the likelihood of a threat event occurring or the impact if it does occur. Controls can be technical (such as multifactor authentication, encryption, endpoint protection, network segmentation, logging and monitoring), administrative (policies, standards, training, access approvals, change management), or physical (badges, locks, facility protections). Regardless of type, controls are the direct mechanism used to mitigate identified risks.
An exit strategy is typically a vendor or outsourcing risk management concept focused on how to transition away from a provider or system; it supports resilience but is not the primary tool for directly mitigating a specific cyber risk. Standards guide consistency by defining required practices and configurations, but the standard itself is not the mitigation—controls implemented to meet the standard are. A business continuity plan supports availability and recovery after disruption, which is important, but it primarily addresses continuity and recovery rather than directly reducing the underlying cybersecurity risk in normal operations. Therefore, the best answer is the one that represents the direct implementation of safeguards: controls.
Analyst B has discovered unauthorized access to data. What has she discovered?
Breach
Hacker
Threat
Ransomware
Unauthorized access to data is the defining condition of a data breach. In standard cybersecurity terminology, a breach occurs when confidentiality is compromised—meaning data is accessed, acquired, viewed, or exfiltrated by an entity that is not authorized to do so. This is distinct from a “threat,” which is only the potential for harm, and distinct from a “hacker,” which describes an actor rather than the security outcome.

A breach can result from external attackers, malicious insiders, credential theft, misconfigurations, unpatched vulnerabilities, or poor access controls. Cybersecurity guidance typically frames breaches as realized security incidents with measurable impact: exposure of regulated data, loss of intellectual property, fraud risk, reputational harm, and legal/regulatory consequences.

Once unauthorized access is confirmed, incident response procedures generally require containment (limit further access), preservation of evidence (logs, system images where appropriate), eradication (remove persistence), and recovery (restore secure operations). Organizations also assess scope—what data types were accessed, how many records, which systems, and the dwell time—and then determine notification obligations where laws or contracts apply. In short, the discovery describes an actual compromise of data confidentiality, which is precisely a breach.
Analyst B has discovered multiple sources which can harm the organization’s systems. What has she discovered?
Breach
Hacker
Threat
Ransomware
Multiple sources that can harm an organization’s systems are classified as threats. In cybersecurity risk terminology, a threat is any circumstance, event, actor, or condition with the potential to adversely impact confidentiality, integrity, or availability. Threats can be human (external attackers, insiders, third-party compromises), technical (malware, ransomware campaigns, exploit kits), operational (misconfigurations, weak processes, inadequate monitoring), or environmental (power disruption, natural disasters).

This differs from a breach, which is the realized outcome where unauthorized access or disclosure has already occurred. It also differs from hacker, which refers to one type of threat actor rather than the broader category of potential harm. Ransomware is a specific threat type (malware that encrypts data and demands payment), not a general term for multiple sources of harm.

Cybersecurity documents commonly pair “threats” with “vulnerabilities” and “controls”: threats exploit vulnerabilities to create risk; controls reduce either the likelihood of exploitation or the impact if exploitation occurs. Identifying “multiple sources which can harm systems” is essentially threat identification—an early and ongoing step in risk management used to inform security architecture, monitoring, and incident preparedness. Therefore, the correct concept is threat.
Which of the following should be addressed by functional security requirements?
System reliability
User privileges
Identified vulnerabilities
Performance and stability
Functional security requirements define what security capabilities a system must provide to protect information and enforce policy. They describe required security functions such as identification and authentication, authorization, role-based access control, privilege management, session handling, auditing/logging, segregation of duties, and account lifecycle processes. Because of this, user privileges are a direct and core concern of functional security requirements: the system must support controlling who can access what, under which conditions, and with what level of permission.
In cybersecurity requirement documentation, “privileges” include permission assignment (roles, groups, entitlements), enforcement of least privilege, privileged access restrictions, elevation workflows, administrative boundaries, and the ability to review and revoke permissions. These are functional because they require specific system behaviors and features—for example, the ability to define roles, prevent unauthorized actions, log privileged activities, and enforce timeouts or re-authentication for sensitive operations.
The other options are typically classified differently. System reliability and performance/stability are generally non-functional requirements (quality attributes) describing service levels, resilience, and operational characteristics rather than security functions. Identified vulnerabilities are findings from assessments that drive remediation work and risk treatment; they inform security improvements but are not themselves functional requirements. Therefore, the option best aligned with functional security requirements is user privileges.
Which of the following control methods is used to protect integrity?
Principle of Least Privilege
Biometric Verification
Anti-Malicious Code Detection
Backups and Redundancy
Integrity means information and systems remain accurate, complete, and protected from unauthorized or improper modification. The Principle of Least Privilege is a direct integrity protection control because it limits who can change data and what changes they are allowed to make. Under least privilege, users, applications, and service accounts receive only the minimum permissions needed to perform approved tasks, and nothing more. This reduces the chance that an attacker using a compromised account can alter records, manipulate transactions, or change configurations, and it also reduces accidental changes by well-meaning users who do not need write or administrative rights.
Least privilege is commonly enforced through role-based access control, separation of duties, restricted administrative roles, just-in-time elevation for privileged tasks, and periodic access reviews to remove excess permissions. These practices are emphasized in cybersecurity frameworks because integrity failures often occur when excessive access allows unauthorized edits to sensitive data, logs, security settings, or application code.
The other options relate to security but are less directly tied to integrity as the primary objective. Biometric verification is an authentication method that helps confirm identity; it supports access control broadly, but it does not by itself limit modification capability once access is granted. Anti-malicious code detection helps prevent malware that could corrupt data, but it is primarily a detection/prevention tool rather than the foundational control for authorized modification. Backups and redundancy primarily support availability and recovery after corruption, not the prevention of unauthorized changes.
NIST 800-30 defines cyber risk as a function of the likelihood of a given threat-source exercising a potential vulnerability, and:
the pre-disposing conditions of the vulnerability.
the probability of detecting damage to the infrastructure.
the effectiveness of the control assurance framework.
the resulting impact of that adverse event on the organization.
NIST SP 800-30 describes risk using a classic risk model: risk is a function of likelihood and impact. In this model, a threat-source may exploit a vulnerability, producing a threat event that results in adverse consequences. The likelihood component reflects how probable it is that a threat event will occur and successfully cause harm, considering factors such as threat capability and intent (or in non-adversarial cases, the frequency of hazards), the existence and severity of vulnerabilities, exposure, and the strength of current safeguards. However, likelihood alone does not define risk; a highly likely event that causes minimal harm may be less important than a less likely event that causes severe harm.
The second required component is the impact—the magnitude of harm to the organization if the adverse event occurs. Impact is commonly evaluated across mission and business outcomes, including financial loss, operational disruption, legal or regulatory consequences, reputational damage, and loss of confidentiality, integrity, or availability. This is why option D is correct: NIST’s definition explicitly ties the risk expression to the resulting impact on the organization.
The other options may influence likelihood assessment or control selection, but they are not the missing definitional element. Detection probability and control assurance relate to monitoring and governance; predisposing conditions can shape likelihood. None of them replaces impact, the component that, together with likelihood, completes the definition of risk.
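The risk-as-a-function-of-likelihood-and-impact model can be sketched as a small qualitative combiner. The five-level scale matches common NIST usage, but the averaging rule below is an illustrative simplification; SP 800-30 combines the two inputs through assessment tables, not arithmetic:

```python
# Qualitative risk = f(likelihood, impact). The combination rule here
# (average the two scale positions, rounding down) is illustrative only.
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_level(likelihood: str, impact: str) -> str:
    """Combine qualitative likelihood and impact into a qualitative risk level."""
    score = (LEVELS.index(likelihood) + LEVELS.index(impact)) // 2
    return LEVELS[score]
```

The structural point survives the simplification: neither input alone determines the result, so a definition of risk that omits impact is incomplete.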
The hash function supports data in transit by ensuring:
validation that a message originated from a particular user.
a message was modified in transit.
a public key is transitioned into a private key.
encrypted messages are not shared with another party.
A cryptographic hash function supports data in transit primarily by providing integrity assurance. When a sender computes a hash (digest) of a message and the receiver recomputes the hash after receipt, the two digests should match if the message arrived unchanged. If the message is altered in any way while traveling across the network—whether by an attacker, a faulty intermediary device, or transmission errors—the recomputed digest will differ from the original. This difference is the key signal that the message was modified in transit, which is what option B expresses.

In practical secure-transport designs, hashes are typically combined with a secret key or digital signature so an attacker cannot simply modify the message and generate a new valid digest. Examples include HMAC for message authentication and digital signatures that hash the content and then sign the hash with a private key. These mechanisms provide integrity and, when keyed or signed, also provide authentication and non-repudiation properties.
Option A is more specifically about authentication of origin, which requires a keyed construction such as HMAC or a signature scheme; a plain hash alone cannot prove who sent the message. Option C is incorrect because keys are not “converted” from public to private. Option D relates to confidentiality, which is provided by encryption, not hashing. Therefore, the best answer is B because hashing enables detection of message modification during transit.
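The keyed-hash detection described above can be shown with Python's standard library. The key and messages are hypothetical; the point is that any in-transit change makes the receiver's recomputed tag disagree with the sender's:

```python
import hashlib
import hmac

key = b"shared-secret"                     # hypothetical pre-shared key
message = b"TRANSFER 100 TO ACCT 42"

# Sender computes a keyed digest (HMAC) over the message before sending.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Suppose an attacker alters the message in transit.
tampered = b"TRANSFER 900 TO ACCT 42"

# Receiver recomputes the HMAC over what actually arrived and compares.
tag_check = hmac.new(key, tampered, hashlib.sha256).digest()
modified_in_transit = not hmac.compare_digest(tag, tag_check)
# modified_in_transit is True: the digests disagree, so the change is detected.
```

`hmac.compare_digest` is used instead of `==` because constant-time comparison avoids leaking information through timing differences.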
What stage of incident management would "strengthen the security from lessons learned" fall into?
Response
Recovery
Detection
Remediation
“Strengthen the security from lessons learned” fits the remediation stage because it focuses on eliminating root causes and improving controls so the same incident is less likely to recur. In incident management lifecycles, response is about immediate actions to contain and manage the incident (triage, containment, eradication actions in progress, communications, and preserving evidence). Detection is the identification and confirmation stage (alerts, analysis, validation, and initial classification). Recovery is restoring services to normal operation and verifying stability, including bringing systems back online, validating data integrity, and meeting recovery objectives.
After the environment is stable, organizations conduct a post-incident review and then implement corrective and preventive actions. That work is remediation: closing exploited vulnerabilities, hardening configurations, rotating credentials and keys, tightening access and privileged account controls, improving monitoring and logging coverage, updating firewall rules or segmentation, refining secure development practices, and correcting process gaps such as weak change management or incomplete asset inventory. Remediation also includes updating policies and playbooks, enhancing detection rules based on observed attacker techniques, and training targeted groups if human factors contributed.
Cybersecurity guidance emphasizes documenting lessons learned, assigning owners and deadlines, validating fixes, and tracking completion because “lessons learned” without implemented change does not reduce risk. The defining characteristic is durable improvement to the control environment, which is why this activity belongs to remediation rather than response, detection, or recovery.
Where SaaS is the delivery of a software service, what service does PaaS provide?
Load Balancers
Storage
Subscriptions
Operating System
Cloud service models are commonly described as stacked layers of responsibility. Software as a Service delivers a complete application to the customer, while the provider manages the underlying platform and infrastructure. Platform as a Service sits one level below SaaS: it provides the managed platform needed to build, deploy, and run applications without the customer having to manage the underlying servers and most core system software.
A defining feature of PaaS is that the provider supplies and manages key platform components such as the operating system, runtime environment, middleware, web/application servers, and often supporting services like managed databases, messaging, scaling, and patching of the platform layer. The customer typically remains responsible for their application code, configuration, identities and access in the application, data classification and protection choices, and secure development practices. This shared responsibility model is central in cybersecurity guidance because it determines which security controls the provider enforces by default and which controls the customer must implement.
Given the answer options, Operating System is the best match because it is a core part of the platform layer that PaaS customers generally do not manage directly. Load balancers and storage can be consumed in multiple models, including IaaS and PaaS, and subscriptions describe a billing approach, not the technical service layer. Therefore, option D correctly reflects what PaaS provides compared to SaaS.
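The shared responsibility split can be sketched as a lookup. The layer names below are a simplified, hypothetical stack; real provider responsibility matrices are more detailed, but the pattern is the same—each model up the stack shifts more layers to the provider:

```python
# Illustrative shared-responsibility split across the common service models.
PROVIDER_MANAGED = {
    "IaaS": {"facility", "network", "hypervisor"},
    "PaaS": {"facility", "network", "hypervisor",
             "operating system", "runtime", "middleware"},
    "SaaS": {"facility", "network", "hypervisor",
             "operating system", "runtime", "middleware", "application"},
}

def customer_manages(model: str, layer: str) -> bool:
    """Anything the provider does not manage falls to the customer."""
    return layer not in PROVIDER_MANAGED[model]
```

Notice that "operating system" flips from customer-managed under IaaS to provider-managed under PaaS, which is exactly the distinction the question tests.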
What terms are often used to describe the relationship between a sub-directory and the directory in which it is cataloged?
Primary and Secondary
Multi-factor Tokens
Parent and Child
Embedded Layers
Directories are commonly organized in a hierarchical structure, where each directory can contain sub-directories and files. In this hierarchy, the directory that contains another directory is referred to as the parent, and the contained sub-directory is referred to as the child. This parent–child relationship is foundational to how file systems and many directory services represent and manage objects, including how paths are constructed and how inheritance can apply.
From a cybersecurity perspective, understanding parent and child relationships matters because access control and administration often follow the hierarchy. For example, permissions applied at a parent folder may be inherited by child folders unless inheritance is explicitly broken or overridden. This can simplify administration by allowing consistent access patterns, but it also introduces risk: overly permissive settings at a parent level can unintentionally grant broad access to many child locations, increasing the chance of unauthorized data exposure. Security documents therefore emphasize careful design of directory structures, least privilege at higher levels of the hierarchy, and regular permission reviews to detect privilege creep and misconfigurations.
The other options do not describe this standard hierarchy terminology. “Primary and Secondary” is more commonly used for redundancy or replication roles, not directory relationships. “Multi-factor Tokens” relates to authentication factors. “Embedded Layers” is not a standard term for describing this relationship.
How does Transport Layer Security ensure the reliability of a connection?
By ensuring a stateful connection between client and server
By conducting a message integrity check to prevent loss or alteration of the message
By ensuring communications use TCP/IP
By using public and private keys to verify the identities of the parties to the data transfer
Transport Layer Security (TLS) strengthens the trustworthiness of application communications by ensuring that data exchanged over an untrusted network is not silently modified and is coming from the expected endpoint. While TCP provides delivery features such as sequencing and retransmission, TLS contributes to what many cybersecurity documents describe as “reliable” secure communication by adding cryptographic integrity protections. TLS uses integrity checks (such as message authentication codes in older versions/cipher suites, or authenticated encryption modes like AES-GCM and ChaCha20-Poly1305 in modern TLS) so that any alteration of data in transit is detected. If an attacker intercepts traffic and tries to change commands, session data, or application content, the integrity verification fails and the connection is typically terminated, preventing corrupted or manipulated messages from being accepted as valid.
This is distinct from merely being “stateful” (a transport-layer property) or “using TCP/IP” (a networking stack choice). TLS can run over TCP and relies on TCP for delivery reliability, but TLS itself is focused on confidentiality, integrity, and endpoint authentication. Public/private keys and certificates are used during the TLS handshake to authenticate servers (and optionally clients) and to establish shared session keys, but the ongoing protection that prevents undetected tampering is the integrity check on each protected record. Therefore, the best match to how TLS ensures secure, dependable communication is the message integrity mechanism described in option B.
If a Business Analyst is asked to document the current state of the organization's web-based business environment, and recommend where cost savings could be realized, what risk factor must be included in the analysis?
Organizational Risk Tolerance
Impact Severity
Application Vulnerabilities
Threat Likelihood
When analyzing a web-based business environment for potential cost savings, the Business Analyst must account for application vulnerabilities because they directly affect the organization’s exposure to cyber attack and the true cost of operating a system. Vulnerabilities are weaknesses in application code, configuration, components, or dependencies that can be exploited to compromise confidentiality, integrity, or availability. In web environments, common examples include insecure authentication, injection flaws, broken access control, misconfigurations, outdated libraries, and weak session management.
Cost-saving recommendations frequently involve consolidating platforms, reducing tooling, lowering support effort, retiring controls, delaying upgrades, or moving to shared services. Without including known or likely vulnerabilities, the analysis can unintentionally recommend changes that reduce preventive and detective capability, increase attack surface, or extend the time vulnerabilities remain unpatched. Cybersecurity governance guidance emphasizes that technology rationalization must consider security posture: vulnerable applications often require additional controls (patching cadence, WAF rules, monitoring, code fixes, penetration testing, secure SDLC work) that carry ongoing cost. These costs are part of the system’s “total cost of ownership” and should be weighed against proposed savings.
While impact severity and threat likelihood are important for overall risk scoring, the question asks what risk factor must be included when documenting the current state of a web-based environment. The most essential factor that ties directly to the environment’s condition and drives remediation cost and exposure is application vulnerabilities.
What is an external audit?
A review of security-related measures in place intended to identify possible vulnerabilities
A process that the cybersecurity team follows to ensure that they have implemented the proper controls
A review of security expenditures by an independent party
A review of security-related activities by an independent party to ensure compliance
An external audit is an independent evaluation performed by a party outside the organization to determine whether security-related activities, controls, and evidence meet defined requirements. Those requirements are typically drawn from laws and regulations, contractual obligations, and recognized standards or control frameworks. The defining characteristics are independence and attestation: the auditor is not part of the operational team being assessed and provides an objective conclusion about compliance or control effectiveness.
Unlike a vulnerability-focused review (often called a security assessment or technical audit) that primarily seeks weaknesses to remediate, an external audit emphasizes whether controls are designed appropriately, implemented consistently, and operating effectively over time. External auditors usually test governance processes, risk management practices, policies, access control procedures, change management, logging and monitoring, incident response readiness, and evidence of periodic reviews. They also validate documentation and sampling records to confirm that what is written is actually performed.
Option B describes an internal assurance activity, such as self-assessment or internal audit preparation, where the security team checks its own implementation. Option C is closer to a financial or procurement review and is not the typical definition of an external security audit. Therefore, the best answer is the one that clearly captures an independent party reviewing security activities to ensure compliance with established criteria.
Which scenario is an example of the principle of least privilege being followed?
An application administrator has full permissions to only the applications they support
All application and database administrators have full permissions to every application in the company
Certain users are granted administrative access to their network account, in case they need to install a web-app
A manager who is conducting performance appraisals is granted access to HR files for all employees
The principle of least privilege requires that users, administrators, services, and applications are granted only the minimum access necessary to perform authorized job functions, and nothing more. Option A follows this principle because the administrator’s elevated permissions are limited in scope to the specific applications they are responsible for supporting. This reduces the attack surface and limits blast radius: if that administrator account is compromised, the attacker’s reach is constrained to only those applications rather than the entire enterprise environment.
Least privilege is typically implemented through role-based access control, separation of duties, and privileged access management practices. These controls ensure privileges are assigned based on defined roles, reviewed regularly, and removed when no longer required. They also promote using standard user accounts for routine tasks and reserving administrative actions for controlled, auditable sessions. In addition, least privilege supports stronger accountability through logging and change tracking, because fewer people have the ability to make high-impact changes across systems.
The other scenarios violate least privilege. Option B grants excessive enterprise-wide permissions, creating unnecessary risk and enabling widespread damage from mistakes or compromise. Option C provides “just in case” administrative access, which cybersecurity guidance explicitly discourages because it increases exposure without a validated business need. Option D is overly broad because access to all HR files exceeds what is required for performance appraisals, which typically should be limited to relevant employee records only.
Copyright © 2021-2026 CertsTopics. All Rights Reserved