NIST SP800-30 Risk Assessment Methodology
1 Step 1 - Prepare for risk assessment
1.1 1-1 -- Identify purpose
1.1.1 TASK 1-1: Identify the purpose of the risk assessment in terms of the information the assessment is intended to produce and the decisions the assessment is intended to support.
1.1.1.1 Supplemental Guidance
1.1.1.1.1 The purpose of the risk assessment is explicitly stated in sufficient detail to fully inform and guide the conduct of the assessment and to ensure that the purpose is achieved. The purpose of the risk assessment is influenced by whether the assessment is:
1.1.1.1.1.1 (i) an initial assessment; or
1.1.1.1.1.2 (ii) an updated assessment initiated from the risk response or risk monitoring steps in the risk management process.
1.1.1.1.2 For an initial assessment, the purpose can include, for example:
1.1.1.1.2.1 (i) establishing a baseline assessment of risk; or
1.1.1.1.2.2 (ii) identifying threats and vulnerabilities, impacts to organizational operations and assets, individuals, other organizations, and the Nation, and other risk factors to be monitored or tracked over time as part of risk monitoring.
1.1.1.1.3 For a reassessment initiated from the risk response step, the purpose can include, for example, recommending (or providing a comparative analysis of) alternative risk response courses of action.
1.1.1.1.4 Alternatively, for a reassessment initiated from the risk monitoring step, the purpose can include, for example, updating the risk assessment based on:
1.1.1.1.4.1 (i) ongoing determinations of the effectiveness of security controls in organizational information systems or environments of operation;
1.1.1.1.4.2 (ii) changes to organizational information systems or environments of operation (e.g., changes to hardware, firmware, software; changes to system-specific, hybrid, or common controls; changes to mission/business processes, common infrastructure and support services, threats, vulnerabilities, or facilities); and
1.1.1.1.4.3 (iii) results from compliance verification activities.
1.1.2 Initial assessment
1.1.2.1 Establish a baseline assessment of risk
1.1.2.2 Identifying threats and vulnerabilities, impacts, and other risk factors
1.1.3 Updated assessment
1.1.3.1 Recommending alternative risk responses
1.1.3.2 Updating a risk assessment based on:
1.1.3.2.1 Ongoing determinations of effectiveness of security controls
1.1.3.2.2 Changes to information systems
1.1.3.2.3 Changes to mission or business processes
1.1.3.2.4 Results from compliance verification activities
1.2 1-2 -- Identify scope
1.2.1 TASK 1-2: Identify the scope of the risk assessment in terms of organizational applicability, time frame supported, and architectural/technology considerations.
1.2.1.1 Supplemental Guidance
1.2.1.1.1 The scope of the risk assessment determines the boundary of the assessment and can include one or more tiers in the risk management hierarchy as described in NIST Special Publication 800-39. Risk assessment scope affects the range of information available to make risk-based decisions and is determined by the organizational official requesting the assessment. Establishing the scope of risk assessments helps organizations determine:
1.2.1.1.1.1 (i) what tiers are addressed in risk assessments;
1.2.1.1.1.2 (ii) what parts of organizations are affected by risk assessments and how they are affected;
1.2.1.1.1.3 (iii) what decisions risk assessment results support;
1.2.1.1.1.4 (iv) how long risk assessment results are relevant; and
1.2.1.1.1.5 (v) what influences the need to update risk assessments.
1.2.1.1.2 Organizational Applicability
1.2.1.1.2.1 Organizational applicability describes which parts of the organization or sub-organizations are affected by the risk assessment and the risk-based decisions resulting from the assessment (including the parts of the organization/sub-organizations responsible for implementing the activities and tasks related to the decisions). For example, the risk assessment can inform decisions regarding information systems supporting a particular organizational mission/business function or mission/business process. This can include decisions regarding the selection, tailoring, or supplementation of security controls for specific information systems or the selection of common controls. Alternatively, the risk assessment can inform decisions regarding a set of closely related missions/business functions or mission/business processes. The scope of the risk assessment can include not only the missions/business functions, mission/business processes, common infrastructure, or shared services on which the organization currently depends, but also those which the organization might use under specific operational conditions.
1.2.1.1.3 Effectiveness Time Frame
1.2.1.1.3.1 Organizations determine how long the results of particular risk assessments can be used to legitimately inform risk-based decisions. The time frame is usually related to the purpose of the assessment. For example, a risk assessment to inform Tier 1 policy-related decisions needs to be relevant for an extended period of time since the governance process for policy changes can be time-consuming in many organizations. A risk assessment conducted to inform a Tier 3 decision on the use of a compensating security control for an information system may be relevant only until the next release of the information technology product providing the required security capability. Organizations determine the useful life of risk assessment results and under what conditions the current assessment results become ineffective or irrelevant. Risk monitoring can be used to help determine effectiveness time frames for risk assessments.
1.2.1.1.4 Architectural/Technology Considerations
1.2.1.1.4.1 Organizations determine the types of system architectures, information systems, and environments of operation to which risk assessments and the resulting risk-based decisions apply. For example, a risk assessment can be used to inform decisions regarding command and control systems in fixed, land-based facilities. A risk assessment can also be used to inform decisions regarding industrial/process control systems supporting nuclear power plant operations, a service-oriented architecture supporting a just-in-time logistics operation, or mobile/wireless technologies supporting first responders.
1.2.2 Organizational applicability
1.2.2.1 Providers of "The DNS"
1.2.2.1.1 ICANN
1.2.2.1.2 Root server operators
1.2.2.1.3 TLD server operators
1.2.2.1.4 "TLD-like" 3rd-level operators (eg. 3rd-level ccTLD operators)
1.2.2.2 Providers of lower levels of the DNS hierarchy
1.2.2.2.1 Registrars
1.2.2.2.2 Registrants
1.2.2.3 Consumers of the DNS
1.2.3 Time frame
1.2.3.1 Tier 1 -- relevant for an extended period (since governance processes can be time-consuming)
1.2.3.2 Tier 2 -- somewhere in between
1.2.3.3 Tier 3 -- can be as short as until the next release of the underlying technology
1.2.4 Architecture and technology
1.2.4.1 "The DNS"
1.2.4.1.1 Root servers and associated infrastructure
1.2.4.1.2 TLD servers and associated infrastructure
1.3 1-3 -- Identify assumptions and constraints
1.3.1 TASK 1-3: Identify the specific assumptions and constraints under which the risk assessment is conducted.
1.3.1.1 Supplemental Guidance
1.3.1.1.1 Organizations provide direction for the assumptions and constraints that guide and inform risk assessments. By making assumptions explicit and providing realistic constraints, there is greater clarity in the risk model selected for the risk assessment, increased reproducibility/repeatability of assessment results, and an increased opportunity for reciprocity among organizations. Organizations identify assumptions and provide guidance in several areas including, for example:
1.3.1.1.1.1 (i) threat sources;
1.3.1.1.1.2 (ii) threat events;
1.3.1.1.1.3 (iii) vulnerabilities/predisposing conditions;
1.3.1.1.1.4 (iv) impacts; and
1.3.1.1.1.5 (v) assessment and analytic approaches.
1.3.1.1.2 Organizations identify constraints in several areas including, for example:
1.3.1.1.2.1 (i) resources available for the risk assessment;
1.3.1.1.2.2 (ii) skills and expertise required for the risk assessment; and
1.3.1.1.2.3 (iii) operational considerations related to mission/business activities.
1.3.1.1.3 Assessments of threats and impacts, for example, can range from worst-case projections to best-case projections or anything in between those endpoints. Organizations also consider the uncertainty with regard to any assumptions made or any other information related to or used in risk assessments. Uncertainty in assumptions can affect organizational risk tolerance. For example, assumptions based on a lack of specific and/or credible information may reduce an organization’s risk tolerance because of the inherent uncertainty influencing the assumptions. The following sections provide some representative examples of areas where assumptions/constraints for risk assessments are needed and appropriate.
1.3.1.1.4 Threat Sources
1.3.1.1.4.1 Organizations determine which types of threat sources are to be considered during risk assessments. Risk assessments can address all types of threat sources, a single broad threat source (e.g., adversarial), or a specific threat source (e.g., trusted insider). Table D-2 provides a sample taxonomy of threat sources that can be considered by organizations in identifying assumptions for risk assessments. See Task 2-1 for additional guidance on identifying threat sources.
1.3.1.1.5 Threat Events
1.3.1.1.5.1 Organizations determine the level of detail in describing threat events that are to be considered during risk assessments. Descriptions of threat events can be expressed in highly general terms (e.g., phishing, distributed denial-of-service), in more descriptive terms using tactics, techniques, and procedures, or in highly specific terms (e.g., the names of specific information systems, technologies, organizations, roles, or locations). In addition, organizations consider:
1.3.1.1.5.1.1 (i) what representative set of threat events can serve as a starting point for the identification of the specific threat events in the risk assessment; and
1.3.1.1.5.1.2 (ii) what degree of confirmation is needed for threat events to be considered relevant for purposes of the risk assessment.
1.3.1.1.5.2 For example, organizations may consider only those threat events that have been observed (either internally or by organizations that are peers/partners) or all possible threat events. Table E-2 and Table E-3 provide representative examples of adversarial and non-adversarial threat events. See Task 2-2 for additional guidance on identifying threat events.
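Note -- the observed-only versus all-possible choice above amounts to a simple filtering step over whatever representative catalog is chosen. A minimal sketch, with placeholder event names standing in for entries drawn from Tables E-2/E-3 and the internal sources identified in Task 1-4:

```python
# Illustrative sketch: scoping a candidate threat-event catalog by confirmation policy.
# Event names are placeholders; a real catalog would be tailored from Tables E-2/E-3.
catalog = ["phishing", "distributed denial of service", "zero-day attack", "disk error"]
observed = {"distributed denial of service", "disk error"}  # e.g., from incident reports

def relevant_events(policy: str = "all_possible") -> list[str]:
    """Return the threat events considered relevant under the chosen policy."""
    if policy == "observed_only":
        return [event for event in catalog if event in observed]
    return list(catalog)  # "all_possible": keep the full representative set

print(relevant_events("observed_only"))  # ['distributed denial of service', 'disk error']
```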
1.3.1.1.6 Vulnerabilities and Predisposing Conditions
1.3.1.1.6.1 Organizations determine the types of vulnerabilities that are to be considered during risk assessments and the level of detail provided in the vulnerability descriptions. Vulnerabilities can be associated with organizational information systems (e.g., hardware, software, firmware, internal controls, and security procedures) or the environments in which those systems operate (e.g., organizational governance, external relationships, mission/business processes, enterprise architectures, information security architectures). Organizations also determine the types of predisposing conditions that are to be considered during risk assessments. Table F-4 provides representative examples of such predisposing conditions. See Task 2-3 for additional guidance on identifying vulnerabilities and predisposing conditions.
1.3.1.1.7 Impacts
1.3.1.1.7.1 Organizations determine potential adverse impacts in terms of organizational operations (i.e., missions, functions, image, and reputation), organizational assets, individuals, other organizations, and the Nation. Organizations address impacts at a level of detail that includes, for example, specific mission/business processes or information resources (e.g., information, personnel, equipment, funds, and information technology). Organizations may include information from Business Impact Analyses with regard to providing impact information for risk assessments. Table H-2 provides representative examples of types of impacts (i.e., harm) that can be considered by organizations. See Task 2-4 for additional guidance on identifying potential adverse impacts.
1.3.1.1.8 Risk Tolerance and Uncertainty
1.3.1.1.8.1 Organizations determine the levels of risk, types of risk, and degree of risk uncertainty that are acceptable. Of particular concern is how organizations analyze and determine risks when a high degree of uncertainty exists. This is especially important when organizations consider advanced persistent threats since assessments of the likelihood of threat event occurrence can have a great degree of uncertainty. Organizations can take a variety of approaches to determine likelihood, ranging from assuming the worst-case likelihood (certain to happen sometime in the foreseeable future) to assuming that if an event has not been observed, it is unlikely to happen. Organizations also determine what levels of risk (combination of likelihood and impact) indicate that no further analysis of any risk factors is needed.
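Note -- a minimal sketch of the two ends of the likelihood-assumption range described above, plus a simple screening rule for when no further analysis of a risk factor is needed. The event names, numeric values, and threshold are illustrative assumptions, not values from the publication.

```python
# Hedged sketch of the two likelihood assumptions described above, plus a screening rule.
# All event names and numbers are illustrative assumptions.
observed_events = {"phishing", "distributed denial of service"}

def likelihood(event: str, assumption: str = "worst_case") -> int:
    """Return a 0-100 likelihood value under the stated assumption."""
    if assumption == "worst_case":
        return 100  # treat the event as certain to happen in the foreseeable future
    if assumption == "observed_only":
        return 80 if event in observed_events else 5  # unobserved => treated as unlikely
    raise ValueError(f"unknown assumption: {assumption}")

def needs_further_analysis(likelihood_value: int, impact_value: int, threshold: int = 20) -> bool:
    """Screen out (likelihood, impact) combinations below an organization-defined threshold."""
    return (likelihood_value * impact_value / 100) >= threshold

print(likelihood("zero-day attack", "observed_only"))  # 5
print(needs_further_analysis(5, 90))                   # False: below the screening threshold
```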
1.3.1.1.9 Analytic Approach
1.3.1.1.9.1 Organizations determine the degree of detail, and the form, in which threats are analyzed, including the level of granularity used to describe threat events or threat scenarios. Different analysis approaches are possible, including, for example, event/TTP coverage analysis, attack tree/threat scenario analysis, and layers of protection analysis. Different analysis approaches can lead to different levels of detail in characterizing the adverse events for which likelihoods are determined. For example, an adverse event could be characterized in several ways (with increasing levels of detail):
1.3.1.1.9.1.1 (i) a threat event (for which the likelihood is determined by taking the maximum over all threat sources);
1.3.1.1.9.1.2 (ii) a pairing of a threat event and a threat source; or
1.3.1.1.9.1.3 (iii) a detailed threat scenario/attack tree.
1.3.1.1.9.2 In general, organizations can be expected to require more detail for highly critical mission/business functions, common infrastructures, or shared services on which multiple missions or business functions depend (as common points of failure), and information systems with high criticality or sensitivity. Mission/business owners may amplify this guidance for risk hot spots (information systems, services, or critical infrastructure components of particular concern) in mission/business segments.
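Note -- a small sketch of the first two levels of detail listed above: a single per-event likelihood taken as the maximum over all threat sources, versus explicit event/source pairings. The pairings and values are hypothetical; an attack tree (level iii) would add further structure beneath each pairing.

```python
# Illustrative sketch of two granularities for characterizing adverse events.
# Pairings and likelihood values (0-100) are hypothetical.
pair_likelihood = {
    ("distributed denial of service", "established group"): 70,
    ("distributed denial of service", "individual outsider"): 40,
    ("configuration error", "privileged user"): 55,
}

def event_likelihood(event: str) -> int:
    """Level (i): a per-event likelihood, taken as the maximum over all threat sources."""
    values = [v for (e, _source), v in pair_likelihood.items() if e == event]
    return max(values, default=0)

# Level (ii): keep the event/source pairings themselves as the unit of analysis.
print(event_likelihood("distributed denial of service"))                        # 70
print(pair_likelihood[("distributed denial of service", "established group")])  # 70
```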
1.3.2 Threat sources
1.3.2.1 Range of sources
1.3.2.1.1 Broad (eg all sources, adversarial and non-adversarial)
1.3.2.1.2 Narrow (eg one specific threat source)
1.3.2.2 Table D-2 provides a sample taxonomy that can be considered
1.3.2.2.1 Adversarial
1.3.2.2.1.1 Description
1.3.2.2.1.1.1 Individuals, groups, organizations or states that seek to exploit the organization's dependence on cyber resources
1.3.2.2.1.2 Characteristics
1.3.2.2.1.2.1 Capability
1.3.2.2.1.2.2 Intent
1.3.2.2.1.2.3 Targeting
1.3.2.2.1.3 Types
1.3.2.2.1.3.1 Individual
1.3.2.2.1.3.1.1 Outsider
1.3.2.2.1.3.1.2 Insider
1.3.2.2.1.3.1.3 Trusted Insider
1.3.2.2.1.3.1.4 Privileged Insider
1.3.2.2.1.3.2 Group
1.3.2.2.1.3.2.1 Ad Hoc
1.3.2.2.1.3.2.2 Established
1.3.2.2.1.3.2.3 Organization
1.3.2.2.1.3.2.4 Nation-state
1.3.2.2.2 Accidental
1.3.2.2.2.1 Description
1.3.2.2.2.1.1 Erroneous actions taken by individuals in the course of executing their everyday responsibilities
1.3.2.2.2.2 Characteristics
1.3.2.2.2.2.1 Range of effects
1.3.2.2.2.3 Types
1.3.2.2.2.3.1 Ordinary users
1.3.2.2.2.3.2 Privileged users
1.3.2.2.3 Structural
1.3.2.2.3.1 Description
1.3.2.2.3.1.1 Failures of equipment, environmental controls or software due to aging, resource depletion or other circumstances which exceed expected operating parameters
1.3.2.2.3.2 Characteristics
1.3.2.2.3.2.1 Range of effects
1.3.2.2.3.3 Types
1.3.2.2.3.3.1 IT Equipment
1.3.2.2.3.3.1.1 Storage
1.3.2.2.3.3.1.2 Processing
1.3.2.2.3.3.1.3 Communications
1.3.2.2.3.3.1.4 Display
1.3.2.2.3.3.1.5 Sensor
1.3.2.2.3.3.1.6 Controller
1.3.2.2.3.3.2 Environmental
1.3.2.2.3.3.2.1 Temperature/humidity controls
1.3.2.2.3.3.2.2 Power supply
1.3.2.2.3.3.3 Software
1.3.2.2.3.3.3.1 Operating system
1.3.2.2.3.3.3.2 Networking
1.3.2.2.3.3.3.3 General-purpose applications
1.3.2.2.3.3.3.4 Mission-specific applications
1.3.2.2.4 Environmental
1.3.2.2.4.1 Description
1.3.2.2.4.1.1 Natural disasters and failures of critical infrastructures on which the organization depends, but which are outside the control of the organization
1.3.2.2.4.1.1.1 Note: Natural and man-made disasters can also be characterized in terms of their severity and/or duration. However, because the threat source and the threat event are strongly identified, severity and duration can be included in the description of the threat event (eg Category 5 hurricane causes extensive damage to the facilities housing mission-critical systems, making those systems unavailable for three weeks).
1.3.2.2.4.2 Characteristics
1.3.2.2.4.2.1 Range of effects
1.3.2.2.4.3 Types
1.3.2.2.4.3.1 Natural or man-made disaster
1.3.2.2.4.3.1.1 Fire
1.3.2.2.4.3.1.2 Flood/Tsunami
1.3.2.2.4.3.1.3 Windstorm/tornado
1.3.2.2.4.3.1.4 Hurricane
1.3.2.2.4.3.1.5 Earthquake
1.3.2.2.4.3.1.6 Bombing
1.3.2.2.4.3.1.7 Overrun
1.3.2.2.4.3.2 Unusual natural event (eg sunspots)
1.3.2.2.4.3.3 Infrastructure failure/outage
1.3.2.2.4.3.3.1 Telecommunications
1.3.2.2.4.3.3.2 Power
1.3.2.3 Task 2-1 has additional guidance for identifying threat sources
1.3.3 Threat events
1.3.3.1 Determine level of detail
1.3.3.1.1 General (eg phishing, DDOS)
1.3.3.1.2 More descriptive (tactics, techniques, procedures)
1.3.3.1.2.1 Maybe drill down into one or two? eg. DDoS?
1.3.3.1.3 Specific (names of systems, technologies, organizations, roles or locations)
1.3.3.2 What representative set of events to use as a starting point to identify specific threat events -- see Tables E-2 and E-3 for a sample taxonomy
1.3.3.2.1 Table E-2 -- Adversarial threat events (giant list -- we need to radically thin this one)
1.3.3.2.1.1 Access sensitive information through network sniffing.
1.3.3.2.1.1.1 Adversary gains access to the exposed wired or wireless data channels that organizations (or organizational personnel) use to transmit information, and intercept communications. Adversary actions might include, for example, targeting public kiosks or hotel networking connections.
1.3.3.2.1.2 Adapt cyber attacks based on detailed surveillance.
1.3.3.2.1.2.1 Adversary adapts attacks in response to surveillance of organizations and the protective measures that organizations employ.
1.3.3.2.1.3 Exploit recently discovered vulnerabilities.
1.3.3.2.1.3.1 Adversary exploits recently discovered vulnerabilities in organizational information systems in an attempt to attack the systems before mitigation measures are available or in place.
1.3.3.2.1.4 Employ brute force login attempts/password guessing.
1.3.3.2.1.4.1 Adversary attempts to gain access to organizational information systems by random or systematic guessing of passwords, possibly supported by password cracking utilities.
1.3.3.2.1.5 Cause degradation or denial of attacker selected services or capabilities.
1.3.3.2.1.5.1 Adversary launches attacks specifically intended to impede the ability of organizations to function.
1.3.3.2.1.6 Cause deterioration/destruction of critical information system components and functions.
1.3.3.2.1.6.1 Adversary attempts to destroy or deteriorate critical information system components for purposes of impeding or eliminating the ability of organizations to carry out missions or business functions. Detection of this action is not a concern.
1.3.3.2.1.7 Combine internal and external attacks across multiple information systems and information technologies to achieve a breach or compromise.
1.3.3.2.1.7.1 Adversary combines attacks that require both physical presence within organizations and cyber methods to achieve success. Physical components may be as simple as convincing maintenance personnel to leave doors or cabinets open.
1.3.3.2.1.8 Compromise critical information systems via physical access by outsiders.
1.3.3.2.1.8.1 Adversary without authorized access to organizational information systems attempts to physically gain access to the systems.
1.3.3.2.1.9 Compromise mission critical information.
1.3.3.2.1.9.1 Adversary takes action to compromise the integrity of mission critical information, thus preventing/impeding the ability of the organizations to which the information is supplied to carry out operations.
1.3.3.2.1.10 Compromise information systems or devices used externally and reintroduce into the enterprise.
1.3.3.2.1.10.1 Adversary manages to install malware on information systems or devices while the systems/devices are external to organizations for purposes of subsequently infecting organizations when reconnected.
1.3.3.2.1.11 Compromise design, manufacture, and/or distribution of information system components (including hardware, software, and firmware) organizations are known to use.
1.3.3.2.1.11.1 Adversary is able to compromise the design, manufacturing, and/or distribution of critical information system components at selected suppliers.
1.3.3.2.1.12 Conduct reconnaissance, surveillance, and target acquisition of targeted organizations.
1.3.3.2.1.12.1 Adversary uses various means (e.g., scanning, physical observation) to examine and assess organizations and ascertain points of vulnerability.
1.3.3.2.1.13 Conduct phishing attacks.
1.3.3.2.1.13.1 Adversary attempts to acquire sensitive information such as usernames, passwords, or SSNs, by pretending to be communications from a legitimate/trustworthy source. Typical attacks occur via email, instant messaging, or comparable means; commonly directing users to Web sites that appear to be legitimate sites, while actually stealing the entered information.
1.3.3.2.1.14 Continuous, adaptive and changing cyber attacks based on detailed surveillance of organizations.
1.3.3.2.1.14.1 Adversary attacks continually change in response to surveillance of organizations and protective measures that organizations take.
1.3.3.2.1.15 Coordinating cyber attacks on organizations using external (outsider), internal (insider), and supply chain (supplier) attack vectors.
1.3.3.2.1.15.1 Adversary employs continuous, coordinated attacks, potentially using all three attack vectors for the purpose of impeding organizational operations.
1.3.3.2.1.16 Create and operate false front organizations that operate within the critical life cycle path to inject malicious information system components into the supply chain.
1.3.3.2.1.16.1 Adversary creates the appearance of legitimate suppliers that then inject corrupted/malicious information system components into the supply chain of organizations.
1.3.3.2.1.17 Deliver known malware to internal organizational information systems (e.g., virus via email).
1.3.3.2.1.17.1 Adversary uses common delivery mechanisms (e.g., email) to install/insert known malware (e.g., malware whose existence is known) into organizational information systems.
1.3.3.2.1.18 Deliver modified malware to internal organizational information systems.
1.3.3.2.1.18.1 Adversary uses more sophisticated means (e.g., Web traffic, instant messaging, FTP) to deliver malware and possibly modifications of known malware to gain access to internal organizational information systems.
1.3.3.2.1.19 Devise attacks specifically based on deployed information technology environment.
1.3.3.2.1.19.1 Adversary develops attacks, using known and unknown attacks that are designed to take advantage of adversary knowledge of the information technology infrastructure.
1.3.3.2.1.20 Discovering and accessing sensitive data/information stored on publicly accessible information systems.
1.3.3.2.1.20.1 Adversary attempts to scan or mine information on publicly accessible servers and Web pages of organizations with the intent of finding information that is sensitive (i.e., not approved for public release).
1.3.3.2.1.21 Distributed Denial of Service (DDoS) attack.
1.3.3.2.1.21.1 Adversary uses multiple compromised information systems to attack a single target, thereby causing denial of service for users of the targeted information systems.
1.3.3.2.1.22 Exploit known vulnerabilities in mobile systems (e.g., laptops, PDAs, smart phones).
1.3.3.2.1.22.1 Adversary takes advantage of fact that transportable information systems are outside physical protection of organizations and logical protection of corporate firewalls, and compromises the systems based on known vulnerabilities to gather information from those systems.
1.3.3.2.1.23 Exploiting vulnerabilities in information systems timed with organizational mission/business operations tempo.
1.3.3.2.1.23.1 Adversary launches attacks on organizations in a time and manner consistent with organizational needs to conduct mission/business operations.
1.3.3.2.1.24 Externally placed adversary sniffing and intercepting of wireless network traffic.
1.3.3.2.1.24.1 Adversary is strategically positioned to intercept wireless communications of organizations.
1.3.3.2.1.25 Hijacking information system sessions of data traffic between the organization and external entities.
1.3.3.2.1.25.1 Adversary takes control of (hijacks) already established, legitimate information system sessions between organizations and external entities (e.g., users connecting from off-site locations).
1.3.3.2.1.26 Injecting false but believable data/information into organizational information systems.
1.3.3.2.1.26.1 Adversary injects false but believable data into organizational information systems. This action by the adversary may impede the ability of organizations to carry out missions/business functions correctly and/or undercut the credibility other entities may place in the information or services provided by organizations.
1.3.3.2.1.27 Insert subverted individuals into privileged positions in organizations.
1.3.3.2.1.27.1 Adversary has individuals in privileged positions within organizations that are willing and able to carry out actions to cause harm to organizational missions/business functions. Subverted individuals may be active supporters of adversary, supporting adversary (albeit under duress), or unknowingly supporting adversary (e.g., false flag). Adversary may target privileged functions to gain access to sensitive information (e.g., user accounts, system files, etc.) and may leverage access to one privileged capability to get to another capability.
1.3.3.2.1.28 Counterfeit/Spoofed Web site.
1.3.3.2.1.28.1 Adversary creates duplicates of legitimate Web sites and directs users to counterfeit sites to gather information.
1.3.3.2.1.29 Deliver targeted Trojan for control of internal systems and exfiltration of data.
1.3.3.2.1.29.1 Adversary manages to install software containing Trojan horses that are specifically designed to take control of internal organizational information systems, identify sensitive information, exfiltrate the information back to adversary, and conceal these actions.
1.3.3.2.1.30 Employ open source discovery of organizational information useful for future cyber attacks.
1.3.3.2.1.30.1 Adversary mines publicly accessible information with the goal of discerning information about information systems, users, or organizational personnel that the adversary can subsequently employ in support of an attack.
1.3.3.2.1.31 Exploit vulnerabilities on internal organizational information systems.
1.3.3.2.1.31.1 Adversary searches for known vulnerabilities in organizational internal information systems and exploits those vulnerabilities.
1.3.3.2.1.32 Inserting malicious code into organizational information systems to facilitate exfiltration of data/information.
1.3.3.2.1.32.1 Adversary successfully implants malware into internal organizational information systems, where the malware over time identifies and then successfully exfiltrates valuable information.
1.3.3.2.1.33 Installing general-purpose sniffers on organization-controlled information systems or networks.
1.3.3.2.1.33.1 Adversary manages to install sniffing software onto internal organizational information systems or networks.
1.3.3.2.1.34 Leverage traffic/data movement allowed across perimeter (e.g., email communications, removable storage) to compromise internal information systems (e.g., using open ports to exfiltrate information).
1.3.3.2.1.34.1 Adversary makes use of permitted information flows (e.g., email communications) to facilitate compromises to internal information systems (e.g., phishing attacks to direct users to go to Web sites containing malware) which allows adversary to obtain and exfiltrate sensitive information through perimeters.
1.3.3.2.1.35 Insert subverted individuals into the organizations.
1.3.3.2.1.35.1 Adversary has individuals in place within organizations that are willing and able to carry out actions to cause harm to organizational missions/business functions. Subverted individuals may be active supporters of adversary, supporting adversary (albeit under duress), or unknowingly supporting adversary (e.g., false flag).
1.3.3.2.1.36 Insert counterfeited hardware into the supply chain.
1.3.3.2.1.36.1 Adversary intercepts hardware from legitimate suppliers. Adversary modifies the hardware or replaces it with faulty or otherwise modified hardware.
1.3.3.2.1.37 Inserting malicious code into organizational information systems and information system components (e.g., commercial information technology products) known to be used by organizations.
1.3.3.2.1.37.1 Adversary inserts malware into information systems specifically targeted to the hardware, software, and firmware used by organizations (resulting from the reconnaissance of organizations by adversary).
1.3.3.2.1.38 Inserting specialized, non-detectable, malicious code into organizational information systems based on system configurations.
1.3.3.2.1.38.1 Adversary launches multiple, potentially changing attacks specifically targeting critical information system components based on reconnaissance and placement within organizational information systems.
1.3.3.2.1.39 Insider-based session hijacking.
1.3.3.2.1.39.1 Adversary places an entity within organizations in order to gain access to organizational information systems or networks for the express purpose of taking control of (hijacking) an already established, legitimate session either between organizations and external entities (e.g., users connecting from remote locations) or between two locations within internal networks.
1.3.3.2.1.40 Installing persistent and targeted sniffers on organizational information systems and networks.
1.3.3.2.1.40.1 Adversary places within the internal organizational information systems or networks software designed to (over a continuous period of time) collect (sniff) network traffic.
1.3.3.2.1.41 Intercept/decrypt weak or unencrypted communication traffic and protocols.
1.3.3.2.1.41.1 Adversary takes advantage of communications that are either unencrypted or use weak encryption (e.g., encryption containing publicly known flaws), targets those communications, and gains access to transmitted information and channels.
1.3.3.2.1.42 Jamming wireless communications.
1.3.3.2.1.42.1 Adversary takes measures to interfere with the wireless communications so as to impede or prevent communications from reaching intended recipients.
1.3.3.2.1.43 Malicious activity using unauthorized ports, protocols, and services.
1.3.3.2.1.43.1 Adversary conducts attacks using ports, protocols, and services for ingress and egress that are not authorized for use by organizations.
1.3.3.2.1.44 Malicious creation, deletion, and/or modification of files on publicly accessible information systems (e.g., Web defacement).
1.3.3.2.1.44.1 Adversary vandalizes, or otherwise makes unauthorized changes to organizational Web sites or files on Web sites.
1.3.3.2.1.45 Mapping and scanning organization-controlled (internal) networks and information systems from within (inside) organizations.
1.3.3.2.1.45.1 Adversary installs malware inside perimeter that allows the adversary to scan network to identify targets of opportunity. Because the scanning does not cross the perimeter, it is not detected by externally placed intrusion detection systems.
1.3.3.2.1.46 Mishandling of critical and/or sensitive information by authorized users.
1.3.3.2.1.46.1 Authorized users inadvertently expose critical/sensitive information.
1.3.3.2.1.47 Multistage attacks (e.g., hopping).
1.3.3.2.1.47.1 Adversary moves attack location from one compromised information system to other information systems making identification of source difficult.
1.3.3.2.1.48 Network traffic modification (man in the middle) attacks by externally placed adversary.
1.3.3.2.1.48.1 Adversary intercepts/eavesdrops on sessions between organizations and external entities. Adversary then relays messages between the organizations and external entities, making them believe that they are talking directly to each other over a private connection, when in fact the entire communication is controlled by the adversary.
1.3.3.2.1.49 Network traffic modification (man in the middle) attacks by internally placed adversary.
1.3.3.2.1.49.1 Adversary operating within the infrastructure of organizations intercepts and corrupts data sessions.
1.3.3.2.1.50 Non-target specific insertion of malware into downloadable software and/or into commercial information technology products.
1.3.3.2.1.50.1 Adversary corrupts or inserts malware into common freeware, shareware, or commercial information technology products. Adversary is not targeting specific organizations in this attack, simply looking for entry points into internal organizational information systems.
1.3.3.2.1.51 Operate across organizations to acquire specific information or achieve desired outcome.
1.3.3.2.1.51.1 Adversary does not limit planning to the targeting of one organization. Adversary observes multiple organizations to acquire necessary information on targets of interest.
1.3.3.2.1.52 Opportunistically stealing or scavenging information systems/components.
1.3.3.2.1.52.1 Adversary takes advantage of opportunities (due to advantageous positioning) to steal information systems or components (e.g., laptop computers or data storage media) that are left unattended outside of the physical perimeters of organizations.
1.3.3.2.1.53 Perimeter network reconnaissance/scanning.
1.3.3.2.1.53.1 Adversary uses commercial or free software to scan organizational perimeters with the goal of obtaining information that provides the adversary with a better understanding of the information technology infrastructure and facilitates the ability of the adversary to launch successful attacks.
1.3.3.2.1.54 Pollution of critical data.
1.3.3.2.1.54.1 Adversary implants corrupted and incorrect data in the critical data that organizations use to cause organizations to take suboptimal actions or to subsequently disbelieve reliable inputs.
1.3.3.2.1.55 Poorly configured or unauthorized information systems exposed to the Internet.
1.3.3.2.1.55.1 Adversary gains access through the Internet to information systems that are not authorized for such access or that do not meet the specified configuration requirements of organizations.
1.3.3.2.1.56 Salting the physical perimeter of organizations with removable media containing malware.
1.3.3.2.1.56.1 Adversary places removable media (e.g., flash drives) containing malware in locations external to the physical perimeters of organizations but where employees are likely to find the media and use them on organizational information systems.
1.3.3.2.1.57 Simple Denial of Service (DoS) Attack.
1.3.3.2.1.57.1 Adversary attempts to make an Internet-accessible resource unavailable to intended users, or prevent the resource from functioning efficiently or at all, temporarily or indefinitely.
1.3.3.2.1.58 Social engineering by insiders within organizations to convince other insiders to take harmful actions.
1.3.3.2.1.58.1 Internally placed adversaries take actions (e.g., using email, phone) so that individuals within organizations reveal critical/sensitive information (e.g., personally identifiable information).
1.3.3.2.1.59 Social engineering by outsiders to convince insiders to take harmful actions.
1.3.3.2.1.59.1 Externally placed adversaries take actions (using email, phone) with the intent of persuading or otherwise tricking individuals within organizations into revealing critical/sensitive information (e.g., personally identifiable information).
1.3.3.2.1.60 Spear phishing attack.
1.3.3.2.1.60.1 Adversary employs phishing attacks targeted at high-value targets (e.g., senior leaders/executives).
1.3.3.2.1.61 Spill sensitive information.
1.3.3.2.1.61.1 Adversary contaminates organizational information systems (including devices and networks) by placing on the systems or sending to/over the systems, information of a classification/sensitivity which the systems have not been authorized to handle. The information is exposed to individuals that are not authorized access to such information, and the information system, device, or network is unavailable while the spill is investigated and mitigated.
1.3.3.2.1.62 Spread attacks across organizations from existing footholds.
1.3.3.2.1.62.1 Adversary builds upon existing footholds within organizations and works to extend the footholds to other parts of organizations including organizational infrastructure. Adversary places itself in positions to further undermine the ability for organizations to carry out missions/business functions.
1.3.3.2.1.63 Successfully compromise software of critical information systems within organizations.
1.3.3.2.1.63.1 Adversary inserts malware or otherwise corrupts critical internal organizational information systems.
1.3.3.2.1.64 Tailgate authorized staff to gain access to organizational facilities.
1.3.3.2.1.64.1 Adversary follows authorized individuals into secure/controlled locations with the goal of gaining access to facilities, circumventing physical security checks.
1.3.3.2.1.65 Tailored zero-day attacks on organizational information systems.
1.3.3.2.1.65.1 Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Zero-day attacks are based on adversary insight into the information systems and applications used by organizations as well as adversary reconnaissance of organizations.
1.3.3.2.1.66 Tamper with critical organizational information system components and inject the components into the systems.
1.3.3.2.1.66.1 Adversary replaces, through supply chain, subverted insider, or some combination thereof, critical information system components with modified or corrupted components that operate in such a manner as to severely disrupt organizational missions/business functions or operations.
1.3.3.2.1.67 Targeting and compromising home computers (including personal digital assistants and smart phones) of critical employees within organizations.
1.3.3.2.1.67.1 Adversary targets key employees of organizations outside the security perimeters established by organizations by placing malware in the personally owned information systems and devices of individuals (e.g., laptop/notebook computers, personal digital assistants, smart phones). The intent is to take advantage of any instances where employees use personal information systems or devices to convey critical/sensitive information.
1.3.3.2.1.68 Targeting and exploiting critical hardware, software, or firmware (both commercial off-the-shelf and custom information systems and components).
1.3.3.2.1.68.1 Adversary targets and attempts to compromise the operation of software (e.g., through malware injections) that performs critical functions for organizations. This is largely accomplished via supply chain attacks.
1.3.3.2.1.69 Unauthorized internal information system access by insiders.
1.3.3.2.1.69.1 Adversary is an individual who has authorized access to organizational information systems, but gains (or attempts to gain) access that exceeds authorization.
1.3.3.2.1.70 Undermine the ability of organizations to detect attacks.
1.3.3.2.1.70.1 Adversary takes actions to inhibit the effectiveness of the intrusion detection systems or auditing capabilities within organizations.
1.3.3.2.1.71 Use remote information system connections of authorized users as bridge to gain unauthorized access to internal networks (i.e., split tunneling).
1.3.3.2.1.71.1 Adversary takes advantage of external information systems (e.g., laptop computers at remote locations) that are simultaneously connected securely to organizations and to nonsecure remote connections gaining unauthorized access to organizations via nonsecure, open channels.
1.3.3.2.1.72 Using postal service or other commercial delivery services to insert malicious scanning devices (e.g., wireless sniffers) inside facilities.
1.3.3.2.1.72.1 Adversary uses courier service to deliver to organizational mailrooms a device that is able to scan wireless communications accessible from within the mailrooms and then wirelessly transmit information back to adversary.
1.3.3.2.1.73 Zero-day attacks (non-targeted).
1.3.3.2.1.73.1 Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Attacks are not based on any adversary insights into specific vulnerabilities of organizations.
1.3.3.2.2 Table E-3 -- Non-adversarial threat events (opposite problem -- the list is too thin -- we need to expand)
1.3.3.2.2.1 Threat source - accidental ordinary user
1.3.3.2.2.1.1 Threat event - spill sensitive information
1.3.3.2.2.1.1.1 Description - Authorized user erroneously contaminates a device, information system, or network by placing on it or sending to it information of a classification/sensitivity which it has not been authorized to handle. The information is exposed to access by unauthorized individuals, and as a result, the device, system, or network is unavailable while the spill is investigated and mitigated.
1.3.3.2.2.2 Threat source - Accidental Privileged User or Administrator
1.3.3.2.2.2.1 Threat event - Mishandling of critical and/or sensitive information by authorized users
1.3.3.2.2.2.1.1 Description - Authorized privileged user inadvertently exposes critical/sensitive information.
1.3.3.2.2.3 Threat source - Communication
1.3.3.2.2.3.1 Threat event - Communications contention
1.3.3.2.2.3.1.1 Description - Degraded communications performance due to contention.
1.3.3.2.2.4 Threat source - Earthquake
1.3.3.2.2.4.1 Threat event - Earthquake at primary facility
1.3.3.2.2.4.1.1 Description - Earthquake of organization-defined magnitude at primary facility makes facility inoperable.
1.3.3.2.2.5 Threat source - Fire
1.3.3.2.2.5.1 Threat event - Fire at primary facility
1.3.3.2.2.5.1.1 Description - Fire (not due to adversarial activity) at primary facility makes facility inoperable.
1.3.3.2.2.6 Threat source - Processing
1.3.3.2.2.6.1 Threat event - Resource depletion
1.3.3.2.2.6.1.1 Description - Degraded processing performance due to resource depletion.
1.3.3.2.2.7 Threat source - Storage
1.3.3.2.2.7.1 Threat event - disk error
1.3.3.2.2.7.1.1 Description - Corrupted storage due to a disk error.
1.3.3.2.2.8 Threat source - Storage
1.3.3.2.2.8.1 Threat event - pervasive disk error
1.3.3.2.2.8.1.1 Description - Multiple disk errors due to aging of a set of devices all acquired at the same time, from the same supplier.
1.3.3.2.3 Our "threats and vulnerabilities" work fits here -- but we may want to revise our taxonomy a bit
1.3.3.3 What degree of confirmation is needed for threat events to be considered relevant to the risk assessment?
1.3.3.3.1 Only those that have been observed, or
1.3.3.3.2 All possible threat events
1.3.4 Vulnerabilities and predisposing conditions
1.3.4.1 Determine types of vulnerabilities to be considered
1.3.4.1.1 Vulnerabilities of information systems (hardware, software, firmware, internal controls, security procedures)
1.3.4.1.2 Environmental vulnerabilities (Organization governance, external relationships, mission/business processes, enterprise architecture, information security architecture)
1.3.4.1.3 Our "threats and vulnerabilities" work fits here -- but we may want to revise our taxonomy a bit
1.3.4.2 Determine level of detail in vulnerability descriptions to be used
1.3.4.3 Determine types of predisposing conditions to be considered
1.3.4.3.1 Table F-4 -- representative samples
1.3.4.3.1.1 Information related
1.3.4.3.1.1.1 Description:
1.3.4.3.1.1.1.1 Needs to handle information (as it is created, transmitted, stored, processed, and/or displayed) in a specific manner, due to its sensitivity (or lack of sensitivity), legal or regulatory requirements, and/or contractual or other organizational agreements.
1.3.4.3.1.1.2 Examples:
1.3.4.3.1.1.2.1 - Classified National Security Information
1.3.4.3.1.1.2.2 - Compartments
1.3.4.3.1.1.2.3 - Controlled Unclassified Information
1.3.4.3.1.1.2.4 - Personally Identifiable Information
1.3.4.3.1.1.2.5 - Special Access Programs
1.3.4.3.1.1.2.6 - Agreement-Determined (eg proprietary)
1.3.4.3.1.2 Technical
1.3.4.3.1.2.1 Description
1.3.4.3.1.2.1.1 Needs to use technologies in specific ways.
1.3.4.3.1.2.2 Examples:
1.3.4.3.1.2.2.1 - Architectural
1.3.4.3.1.2.2.1.1 - Compliance with technical standards
1.3.4.3.1.2.2.1.2 - Use of specific products or product lines
1.3.4.3.1.2.2.1.3 - Solutions for and/or approaches to user-based collaboration and information sharing
1.3.4.3.1.2.2.1.4 - Allocation of specific security functionality to common controls
1.3.4.3.1.2.2.2 - Functional
1.3.4.3.1.2.2.2.1 - Networked multiuser
1.3.4.3.1.2.2.2.2 - Single-user
1.3.4.3.1.2.2.2.3 - Stand-alone / nonnetworked
1.3.4.3.1.2.2.2.4 - Restricted functionality (e.g., communications, sensors, embedded controllers)
1.3.4.3.1.3 Operational / Environmental
1.3.4.3.1.3.1 Description:
1.3.4.3.1.3.1.1 Ability to rely upon physical, procedural, and personnel controls provided by the operational environment.
1.3.4.3.1.3.2 Examples:
1.3.4.3.1.3.2.1 - Mobility
1.3.4.3.1.3.2.1.1 - Fixed-site (specify location)
1.3.4.3.1.3.2.1.2 - Semi-mobile
1.3.4.3.1.3.2.1.2.1 - Land-based (e.g., van)
1.3.4.3.1.3.2.1.2.2 - Airborne
1.3.4.3.1.3.2.1.2.3 - Sea-based
1.3.4.3.1.3.2.1.2.4 - Space-based
1.3.4.3.1.3.2.1.3 - Mobile (e.g., handheld device)
1.3.4.3.1.3.2.2 - Population with physical and/or logical access to components of the information system, mission/business process, EA segment
1.3.4.3.1.3.2.2.1 - Size of population
1.3.4.3.1.3.2.2.2 - Clearance/vetting of population
1.3.4.3.2 I don't "get" this section yet -- more reading required -- Mikey
1.3.5 Impacts
1.3.5.1 Determine Potential adverse impacts
1.3.5.1.1 Organizational
1.3.5.1.1.1 Mission
1.3.5.1.1.2 Functions
1.3.5.1.1.3 Image
1.3.5.1.1.4 Reputation
1.3.5.1.1.5 Assets
1.3.5.1.2 Other organizations
1.3.5.1.2.1 Mission
1.3.5.1.2.2 Functions
1.3.5.1.2.3 Image
1.3.5.1.2.4 Reputation
1.3.5.1.2.5 Assets
1.3.5.1.3 Nations and the world
1.3.5.1.3.1 Mission
1.3.5.1.3.2 Functions
1.3.5.1.3.3 Image
1.3.5.1.3.4 Reputation
1.3.5.1.3.5 Assets
1.3.5.1.4 Table H-2 -- Representative samples
1.3.5.1.4.1 Harm to operations
1.3.5.1.4.1.1 Inability to perform current missions/business functions
1.3.5.1.4.1.1.1 In a sufficient and timely manner
1.3.5.1.4.1.1.2 With sufficient confidence and/or correctness
1.3.5.1.4.1.1.3 Within planned resource constraints
1.3.5.1.4.1.2 Inability to restore missions/business functions
1.3.5.1.4.1.2.1 In a sufficiently timely manner
1.3.5.1.4.1.2.2 With sufficient confidence or correctness
1.3.5.1.4.1.2.3 Within planned resource constraints
1.3.5.1.4.1.3 Harms (eg financial costs, sanctions) due to noncompliance
1.3.5.1.4.1.3.1 With applicable laws or regulations
1.3.5.1.4.1.3.2 With requirements in contracts or agreements
1.3.5.1.4.1.4 Direct financial costs
1.3.5.1.4.1.5 Relational harms
1.3.5.1.4.1.5.1 Damage to trust relationships
1.3.5.1.4.1.5.2 Damage to image or reputation (and thus future trust relationships)
1.3.5.1.4.2 Harm to assets
1.3.5.1.4.2.1 Physical facilities
1.3.5.1.4.2.2 Information systems or networks
1.3.5.1.4.2.3 Information technology or equipment
1.3.5.1.4.2.4 Component parts or supplies
1.3.5.1.4.2.5 Information assets
1.3.5.1.4.2.6 Intellectual property
1.3.5.1.4.3 Harm to individuals
1.3.5.1.4.3.1 Identity theft
1.3.5.1.4.3.2 Loss of personally identifiable information
1.3.5.1.4.3.3 Injury or loss of life
1.3.5.1.4.3.4 Damage to image or reputation
1.3.5.1.4.3.5 Physical or psychological mistreatment
1.3.5.1.4.4 Harms to other organizations
1.3.5.1.4.4.1 Harms due to noncompliance
1.3.5.1.4.4.1.1 With laws or regulations
1.3.5.1.4.4.1.2 With requirements in contracts or agreements
1.3.5.1.4.4.2 Direct financial costs
1.3.5.1.4.4.3 Relational harms
1.3.5.1.4.4.3.1 Damage to trust relationships
1.3.5.1.4.4.3.2 Damage to image or reputation (and thus future trust relationships)
1.3.5.1.4.5 Harm to nations and the world
1.3.5.1.4.5.1 Damage to or incapacitation of a critical infrastructure sector
1.3.5.1.4.5.2 Loss of governmental continuity of operations
1.3.5.1.4.5.3 Relational harms
1.3.5.1.4.5.3.1 Damage to trust relationships between governments or regions
1.3.5.1.4.5.3.2 Damage to governmental or regional reputation
1.3.5.1.4.5.4 Damage to ability to meet national or global objectives
1.3.6 Risk tolerance and uncertainty
1.3.6.1 Determine the levels of risk that are acceptable
1.3.6.1.1 Likelihood
1.3.6.1.1.1 Assume worst-case, vs
1.3.6.1.1.2 Assume that unobserved events are unlikely
1.3.6.1.2 Impact
1.3.6.2 Determine the types of risk that are acceptable
1.3.6.3 Determine the degree of risk uncertainty that is acceptable
1.3.7 Analytic approach
1.3.7.1 Determine the degree of detail (or the form) in which threats are analyzed -- the level of granularity used to describe threat events and threat scenarios
1.3.7.1.1 Threat events
1.3.7.1.1.1 Maybe we choose to do some of these...
1.3.7.1.2 Pairings of threat events and threat sources
1.3.7.1.2.1 Some of these...
1.3.7.1.3 Detailed threat scenario/attack-tree
1.3.7.1.3.1 And one or two of these?
1.3.7.1.4 NOTE: In general, organizations can be expected to require more detail for highly critical mission/business functions, common infrastructures, or shared services on which multiple missions or business functions depend (as common points of failure), and information systems with high criticality or sensitivity.
1.4 1-4 -- Identify information sources
1.4.1 TASK 1-4: Identify the sources of threat, vulnerability, and impact information to be used in the risk assessment.
1.4.1.1 Supplemental Guidance
1.4.1.1.1 Sources of threat information (as described in Tables D-1, E-1, F-1, G-1, H-1, and I-1) can be either internal or external to organizations.
1.4.1.1.2 Internal sources can provide insights into specific threats to organizations and can include, for example, incident reports, security logs, trouble tickets, and monitoring results.
1.4.1.1.3 Mission/business owners are encouraged to identify not only common infrastructure and/or support services they depend on, but also those they might use under specific operational circumstances.
1.4.1.1.4 External sources of threat information can include cross-community organizations (e.g., US Computer Emergency Readiness Team [US-CERT]), sector partners (e.g., Defense Industrial Base [DIB] using the DoD-Defense Industrial Base Collaborative Information Sharing Environment [DCISE], Information Sharing and Analysis Centers [ISACs] for critical infrastructure sectors), research and nongovernmental organizations (e.g., Carnegie Mellon University, Software Engineering Institute-CERT), and security service providers.
1.4.1.1.5 Organizations using external sources consider the timeliness, specificity, and relevance of threat information. Similar to sources of threat information, sources of vulnerability information can also be either internal or external to organizations. Internal sources can provide insights into specific vulnerabilities to organizations and can include, for example, security assessment reports, vulnerability assessment reports, risk assessment reports, incident reports, security logs, trouble tickets, and monitoring results. External sources of vulnerability information are similar to those sources identified above for threat information. Sources of impact information can include, for example, mission/business impact analyses, asset inventories, and FIPS Publication 199 security categorizations.
1.4.2 Internal
1.4.2.1 Incident reports
1.4.2.2 Trouble tickets
1.4.2.3 Monitoring results
1.4.3 External
1.4.3.1 Cross-community organizations
1.4.3.1.1 CERTs
1.4.3.1.2 Information sharing and analysis centers (ISACs) for critical infrastructure sectors
1.4.3.1.3 Research and NGOs
1.5 1-5 -- Define risk model
1.5.1 TASK 1-5: Define (or refine) the risk model to be used in the risk assessment.
1.5.1.1 Supplemental Guidance
1.5.1.1.1 Organizations define one or more risk models for use in conducting risk assessments (see Section 2.1.1). To facilitate reciprocity of risk assessment results, organization-specific risk models include (or can be translated into) the risk factors defined in the appendices. For each assessable risk factor, the appendices include three assessment scales with correspondingly different representations. Organizations typically define (or select and tailor from the appendices), the assessment scales to be used in their risk assessments, annotating with common anchoring examples for specific values and defining break points between bins for semi-quantitative approaches. In addition, mission/business owners can provide further annotations with mission/business-specific examples.
1.5.2 Mikey note -- we're using the risk model that is called "semi-quantitative." Numbers, rather than words, are used to describe the scales -- which should allow for some helpful arithmetic downstream. Since this is the first time through, there is no "anchoring" to other scales, but hopefully this will provide a baseline for subsequent studies.
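Note -- a rough illustration of the arithmetic this enables. The bins and break points below echo the exemplary 0-100 semi-quantitative scales in the SP 800-30 appendices, but the specific values and the combination rule are assumptions to be tailored, not prescribed by the publication.

```python
# Illustrative semi-quantitative scale: numeric values mapped onto qualitative bins.
# Break points and the combination rule are assumptions, not prescribed values.
BINS = [
    (0, 4, "Very Low"),
    (5, 20, "Low"),
    (21, 79, "Moderate"),
    (80, 95, "High"),
    (96, 100, "Very High"),
]

def to_bin(value: int) -> str:
    """Map a 0-100 semi-quantitative value onto its qualitative bin."""
    for low, high, label in BINS:
        if low <= value <= high:
            return label
    raise ValueError(f"value {value} is outside the 0-100 scale")

def risk(likelihood: int, impact: int) -> tuple[int, str]:
    """Combine likelihood and impact (both 0-100); the normalized product is one
    possible piece of 'helpful arithmetic downstream', not a method mandated by SP 800-30."""
    score = round(likelihood * impact / 100)
    return score, to_bin(score)

print(risk(85, 60))  # (51, 'Moderate')
```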
2 Step 2 - Conduct risk assessment
2.1 2-1 -- Identify threat sources
2.1.1 TASK 2-1: Identify and characterize the threat sources of concern to the organization, including the nature of the threats and, for adversarial threats, the capability, intent, and targeting characteristics.
2.1.1.1 Supplemental Guidance
2.1.1.1.1 Organizations identify threat sources of concern and determine the characteristics associated with those threat sources.
2.1.1.1.2 Certain characteristics (e.g., capabilities, intentions, and targeting) may define specific types of threat sources to be addressed. For threat sources identified by type or by name, the characteristics associated with the threat sources are also identified. The prepare step for the risk assessment includes organizational direction and guidance for conducting threat source identification and characterization including, for example:
2.1.1.1.2.1 (i) sources for obtaining threat information;
2.1.1.1.2.2 (ii) threat sources to consider (by type/name);
2.1.1.1.2.3 (iii) threat taxonomy to be used; and
2.1.1.1.2.4 (iv) process for identifying which threat sources are of concern for the risk assessment.
2.1.1.1.3 Organizations make explicit any assumptions concerning threat sources, including decisions regarding the identification of threat sources when specific and credible threat information is unavailable. The identification and characterization of Advanced Persistent Threats (APTs) can involve considerable uncertainty. Organizations annotate such threat sources with appropriate rationale and references (and provide classifications as necessary).
2.1.1.1.4 Appendix D provides a set of exemplary tables for use in identifying threat sources:
2.1.1.1.4.1 Table D-1 provides a set of exemplary inputs to the threat source identification task;
2.1.1.1.4.2 Table D-2 provides an exemplary taxonomy that can be used to identify and characterize threat sources;
2.1.1.1.4.3 Tables D-3, D-4, and D-5 provide exemplary assessment scales to assess the risk factors (i.e., characteristics) of adversarial threat sources with regard to capability, intent, and targeting;
2.1.1.1.4.4 Table D-6 provides an exemplary assessment scale for assessing the ranges of effects from threat events initiated by non-adversarial threat sources; and
2.1.1.1.4.5 Tables D-7 and D-8 provide templates for summarizing and documenting the results of threat source identification and characterization.
2.1.1.1.5 If a particular type of threat source is outside the scope of the risk assessment or not relevant to the organization, the information in Tables D-7 and D-8 can be truncated accordingly. The information produced in Task 2-1 provides threat source inputs to the risk tables in Appendix I.
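Note -- a hedged sketch of the kind of summary record the D-7/D-8 templates call for. The field names follow the characteristics named above (capability, intent, and targeting for adversarial sources; range of effects for non-adversarial sources); the layout and sample values are assumptions, not the tables themselves.

```python
# Hedged sketch of a threat-source summary record (loosely modeled on Tables D-7/D-8).
# Semi-quantitative values (0-100) and the example entries are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatSource:
    name: str                                # e.g., "Nation-state", "Privileged user"
    category: str                            # Adversarial / Accidental / Structural / Environmental
    in_scope: bool = True                    # out-of-scope sources can be flagged or omitted
    capability: Optional[int] = None         # adversarial only (Table D-3)
    intent: Optional[int] = None             # adversarial only (Table D-4)
    targeting: Optional[int] = None          # adversarial only (Table D-5)
    range_of_effects: Optional[int] = None   # non-adversarial only (Table D-6)

sources = [
    ThreatSource("Established group", "Adversarial", capability=80, intent=70, targeting=60),
    ThreatSource("Privileged user", "Accidental", range_of_effects=50),
]
print(sources[0])
```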
2.1.2 Provide threat source inputs
2.1.2.1 Threat information sources (from Task 1-4)
2.1.2.2 Taxonomy of threat sources (tailored version of Table D-2)
2.1.2.2.1 Table D-2 -- Taxonomy of threat sources
2.1.2.2.1.1 Adversarial
2.1.2.2.1.1.1 Description
2.1.2.2.1.1.1.1 Individuals, groups, organizations or states that seek to exploit the organization's dependence on cyber resources
2.1.2.2.1.1.2 Characteristics
2.1.2.2.1.1.2.1 Capability
2.1.2.2.1.1.2.2 Intent
2.1.2.2.1.1.2.3 Targeting
2.1.2.2.1.1.3 Types
2.1.2.2.1.1.3.1 Individual
2.1.2.2.1.1.3.1.1 Outsider
2.1.2.2.1.1.3.1.2 Insider
2.1.2.2.1.1.3.1.3 Trusted Insider
2.1.2.2.1.1.3.1.4 Privileged Insider
2.1.2.2.1.1.3.2 Group
2.1.2.2.1.1.3.2.1 Ad Hoc
2.1.2.2.1.1.3.2.1.1 Rogue elements
2.1.2.2.1.1.3.2.2 Established
2.1.2.2.1.1.3.2.3 Organization
2.1.2.2.1.1.3.2.3.1 Terrorism
2.1.2.2.1.1.3.2.3.1.1 Acts of war/terror
2.1.2.2.1.1.3.2.3.2 Geo-political groups
2.1.2.2.1.1.3.2.3.3 Alternate DNS root operators
2.1.2.2.1.1.3.2.3.4 Organized crime
2.1.2.2.1.1.3.2.4 Nation-state
2.1.2.2.1.1.3.2.4.1 Regulatory-imposed shutdown
2.1.2.2.1.1.3.2.4.1.1 A court, government or government agency could attempt to order a registry operator to halt its operations.
2.1.2.2.1.1.3.2.4.2 Government Seizure of Registry Operator
2.1.2.2.1.1.3.2.4.2.1 A government could assume control over a registry operator, either through seizure of registry operations or nationalization of operations. Re-delegation of ccTLDs from individuals to government agencies provides examples of government assumption of control over registry operations. Re-delegation of a registry should include measures to ensure stable transition of registry operations.
2.1.2.2.1.1.3.2.4.3 Government Takeover/Coup
2.1.2.2.1.1.3.2.4.3.1 A change of government by takeover, revolution or coup could lead to instability or failure for a registry operator. Political instability has not to date had a direct impact on registry operations, but direct intervention by governments into registry operations could occur in the future.
2.1.2.2.1.1.3.2.4.4 State-sponsored "hacktivism"
2.1.2.2.1.2 Accidental
2.1.2.2.1.2.1 Description
2.1.2.2.1.2.1.1 Erroneous actions taken by individuals in the course of executing their everyday responsibilities
2.1.2.2.1.2.2 Characteristics
2.1.2.2.1.2.2.1 Range of effects
2.1.2.2.1.2.3 Types
2.1.2.2.1.2.3.1 Ordinary users
2.1.2.2.1.2.3.2 Privileged users
2.1.2.2.1.2.3.2.1 Name, web, database, and transaction server configuration errors
2.1.2.2.1.2.3.2.2 Operating system configuration errors
2.1.2.2.1.2.3.2.3 Security system configuration errors
2.1.2.2.1.2.3.3 Business failure of key providers
2.1.2.2.1.2.3.3.1 Use cases
2.1.2.2.1.2.3.3.1.1 Registry business failure
2.1.2.2.1.2.3.3.1.1.1 As with any business, registry operators must properly manage financial assets, funding, and cash flow or face potential financial failure. Businesses and entities interested in entering the registry market should study the examples set by current registry operators in order to understand the business of domain names. Business failure examples include bankruptcy, buy-out, loss of funding, liquidation, management failure, marketing failure, litigation-related or -induced failure, or termination of payment processing capability.
2.1.2.2.1.2.3.3.1.1.2 Failure modes - vulnerabilities
2.1.2.2.1.2.3.3.1.1.2.1 Marketing Failure
2.1.2.2.1.2.3.3.1.1.2.2 Litigation-related Failure
2.1.2.2.1.2.3.3.1.1.2.3 Termination of payment processing capability
2.1.2.2.1.2.3.3.1.1.2.4 General Business Failure
2.1.2.2.1.2.3.3.1.1.3 Palage White Paper (http://forum.icann.org/lists/new-gtld-questions/msg00006.html) recommendations -- mitigation
2.1.2.2.1.2.3.3.1.1.3.1 All registry operators be required to operate on the current EPP standard
2.1.2.2.1.2.3.3.1.1.3.2 ICANN listed as direct beneficiary of data escrow agreement, with active script verification and periodic download
2.1.2.2.1.2.3.3.1.1.3.3 ICANN access to zone files
2.1.2.2.1.2.3.3.1.1.3.4 Education on existence and function of Auth Codes
2.1.2.2.1.2.3.3.1.1.3.5 Bonding requirement
2.1.2.2.1.2.3.3.1.1.3.6 Discussion of "thick" vs "thin" registries
2.1.2.2.1.2.3.3.1.2 Registrar business failure
2.1.2.2.1.2.3.3.2 Scope-- under discussion
2.1.2.2.1.2.3.3.2.1 Doubtful that the failure of a registry (perhaps with the exception of .com/.net) will have a substantial impact on the DNS
2.1.2.2.1.2.3.4 Governmental interventions with accidental or unintended consequences
2.1.2.2.1.2.3.4.1 Legislation -- eg "Protect IP"
2.1.2.2.1.3 Structural
2.1.2.2.1.3.1 Description
2.1.2.2.1.3.1.1 Failures of equipment, environmental controls or software due to aging, resource depletion or other circumstances which exceed expected operating parameters
2.1.2.2.1.3.2 Characteristics
2.1.2.2.1.3.2.1 Range of effects
2.1.2.2.1.3.3 Types
2.1.2.2.1.3.3.1 IT Equipment
2.1.2.2.1.3.3.1.1 Storage
2.1.2.2.1.3.3.1.1.1 Database server processor fails
2.1.2.2.1.3.3.1.1.2 Database disk drive fails
2.1.2.2.1.3.3.1.1.3 Database crashes
2.1.2.2.1.3.3.1.2 Processing
2.1.2.2.1.3.3.1.2.1 Applications-cluster processor fails
2.1.2.2.1.3.3.1.3 Communications
2.1.2.2.1.3.3.1.3.1 Internet or VPN link fails
2.1.2.2.1.3.3.1.3.2 Router or firewall fails
2.1.2.2.1.3.3.1.3.3 Physical site becomes inoperable for more than 24 hours
2.1.2.2.1.3.3.1.3.3.1 impact - move
2.1.2.2.1.3.3.1.3.4 Both the primary and secondary data centers become inoperable
2.1.2.2.1.3.3.1.3.4.1 impact - move
2.1.2.2.1.3.3.1.4 Display
2.1.2.2.1.3.3.1.5 Sensor
2.1.2.2.1.3.3.1.6 Controller
2.1.2.2.1.3.3.2 Environmental
2.1.2.2.1.3.3.2.1 Temperature/humidity controls
2.1.2.2.1.3.3.2.2 Power supply
2.1.2.2.1.3.3.3 Software
2.1.2.2.1.3.3.3.1 Operating system
2.1.2.2.1.3.3.3.1.1 Operating system fails
2.1.2.2.1.3.3.3.2 Networking
2.1.2.2.1.3.3.3.2.1 Authentication server fails
2.1.2.2.1.3.3.3.3 General-purpose applications
2.1.2.2.1.3.3.3.3.1 Application software fails
2.1.2.2.1.3.3.3.4 Mission-specific applications
2.1.2.2.1.3.3.3.4.1 Web server processor fails
2.1.2.2.1.3.3.3.4.2 Whois-cluster processor fails
2.1.2.2.1.3.3.3.4.3 EPP/RRP server processor fails
2.1.2.2.1.3.3.3.4.4 Billing and collections server fails
2.1.2.2.1.3.3.3.4.5 Root scaling
2.1.2.2.1.4 Environmental
2.1.2.2.1.4.1 Description
2.1.2.2.1.4.1.1 Natural disasters and failures of critical infrastructures on which the organization depends, but which are outside the control of the organization
2.1.2.2.1.4.1.1.1 Note: Natural and man-made disasters can also be characterized in terms of their severity and/or duration. However, because the threat source and the threat event are strongly identified with each other, severity and duration can be included in the description of the threat event (eg a Category 5 hurricane causes extensive damage to the facilities housing mission-critical systems, making those systems unavailable for three weeks).
2.1.2.2.1.4.2 Characteristics
2.1.2.2.1.4.2.1 Range of effects
2.1.2.2.1.4.3 Types
2.1.2.2.1.4.3.1 Natural or man-made disaster
2.1.2.2.1.4.3.1.1 Fire
2.1.2.2.1.4.3.1.2 Flood/Tsunami
2.1.2.2.1.4.3.1.2.1 While no registries are currently located in a tsunami-danger zone, future registry operators in tsunami-prone areas should have contingency plans in place to ensure the stability of registry operations.
2.1.2.2.1.4.3.1.3 Windstorm/tornado
2.1.2.2.1.4.3.1.4 Hurricane
2.1.2.2.1.4.3.1.4.1 Hurricane Katrina (23-31 August 2005) is estimated to be responsible for over $75 billion USD in damages. When Hurricane Katrina hit New Orleans 27-30 August 2005, it caused a temporary outage for ICANN-accredited registrar Intercosmos Media Group. Intercosmos was able to avoid a prolonged outage because it had a plan for the backup of critical registrar resources. Although Intercosmos is a registrar, it may serve as an example for registries facing potential disaster scenarios.
2.1.2.2.1.4.3.1.5 Earthquake
2.1.2.2.1.4.3.1.5.1 A strong earthquake could cause a temporary failure for a registry. A registry located in an earthquake-prone location should have contingency plans in place to ensure continuity of operations.
2.1.2.2.1.4.3.1.6 Bombing
2.1.2.2.1.4.3.1.7 Overrun
2.1.2.2.1.4.3.2 Unusual natural event (eg sunspots)
2.1.2.2.1.4.3.3 Infrastructure failure/outage
2.1.2.2.1.4.3.3.1 Telecommunications
2.1.2.2.1.4.3.3.2 Power
2.1.2.2.1.4.3.3.2.1 A large-scale power outage could impact registry operators that have not implemented protections against localized outages at registry operations centers.
2.1.2.3 Characterization of adversarial and non-adversarial threat sources
2.1.2.3.1 Adversary capability, intent and targeting (tailored versions of Tables D-3, D-4, D-5)
2.1.2.3.1.1 Table D-3 -- Adversary capability
2.1.2.3.1.1.1 Very high -- The adversary has a very sophisticated level of expertise, is well-resourced, and can generate opportunities to support multiple successful, continuous, and coordinated attacks.
2.1.2.3.1.1.2 High -- The adversary has a sophisticated level of expertise, with significant resources and opportunities to support multiple successful coordinated attacks.
2.1.2.3.1.1.3 Moderate -- The adversary has moderate resources, expertise, and opportunities to support multiple successful attacks.
2.1.2.3.1.1.4 Low -- The adversary has limited resources, expertise, and opportunities to support a successful attack.
2.1.2.3.1.1.5 Very low -- The adversary has very limited resources, expertise, and opportunities to support a successful attack.
2.1.2.3.1.2 Table D-4 -- Adversary intent
2.1.2.3.1.2.1 Very high -- The adversary seeks to undermine, severely impede, or destroy a core mission or business function, program, or enterprise by exploiting a presence in the organization’s information systems or infrastructure. The adversary is concerned about disclosure of tradecraft only to the extent that it would impede its ability to complete stated goals.
2.1.2.3.1.2.2 High -- The adversary seeks to undermine/impede critical aspects of a core mission or business function, program, or enterprise, or place itself in a position to do so in the future, by maintaining a presence in the organization’s information systems or infrastructure. The adversary is very concerned about minimizing attack detection/disclosure of tradecraft, particularly while preparing for future attacks.
2.1.2.3.1.2.3 Moderate -- The adversary seeks to obtain or modify specific critical or sensitive information or usurp/disrupt the organization’s cyber resources by establishing a foothold in the organization’s information systems or infrastructure. The adversary is concerned about minimizing attack detection/disclosure of tradecraft, particularly when carrying out attacks over long time periods. The adversary is willing to impede aspects of the organization’s mission/business functions to achieve these ends.
2.1.2.3.1.2.4 Low -- The adversary actively seeks to obtain critical or sensitive information or to usurp/disrupt the organization’s cyber resources, and does so without concern about attack detection/disclosure of tradecraft.
2.1.2.3.1.2.5 Very low -- The adversary seeks to usurp, disrupt, or deface the organization’s cyber resources, and does so without concern about attack detection/disclosure of tradecraft.
2.1.2.3.1.3 Table D-5 -- Adversary targeting
2.1.2.3.1.3.1 Very high -- The adversary analyzes information obtained via reconnaissance and attacks to target persistently a specific organization, enterprise, program, mission or business function, focusing on specific high-value or mission-critical information, resources, supply flows, or functions; specific employees or positions; supporting infrastructure providers/suppliers; or partnering organizations.
2.1.2.3.1.3.2 High -- The adversary analyzes information obtained via reconnaissance to target persistently a specific organization, enterprise, program, mission or business function, focusing on specific high-value or mission-critical information, resources, supply flows, or functions, specific employees supporting those functions, or key positions.
2.1.2.3.1.3.3 Moderate -- The adversary analyzes publicly available information to target persistently specific high-value organizations (and key positions, such as Chief Information Officer), programs, or information.
2.1.2.3.1.3.4 Low -- The adversary uses publicly available information to target a class of high-value organizations or information, and seeks targets of opportunity within that class.
2.1.2.3.1.3.5 Very low -- The adversary may or may not target any specific organizations or classes of organizations.
2.1.2.3.2 Range of effects of non-adversarial threat sources (tailored version of Table D-6)
2.1.2.3.2.1 Table D-6 -- Range of effects
2.1.2.3.2.1.1 Very high -- The effects of the error, accident, or act of nature are sweeping, involving almost all of the cyber resources of the [Tier 3: information systems; Tier 2: mission/business processes or EA segments, common infrastructure, or support services; Tier 1: organization/governance structure].
2.1.2.3.2.1.2 High -- The effects of the error, accident, or act of nature are extensive, involving most of the cyber resources of the [Tier 3: information systems; Tier 2: mission/business processes or EA segments, common infrastructure, or support services; Tier 1: organization/governance structure], including many critical resources.
2.1.2.3.2.1.3 Moderate -- The effects of the error, accident, or act of nature are wide-ranging, involving a significant portion of the cyber resources of the [Tier 3: information systems; Tier 2: mission/business processes or EA segments, common infrastructure, or support services; Tier 1: organization/governance structure], including some critical resources.
2.1.2.3.2.1.4 Low -- The effects of the error, accident, or act of nature are limited, involving some of the cyber resources of the [Tier 3: information systems; Tier 2: mission/business processes or EA segments, common infrastructure, or support services; Tier 1: organization/governance structure], but involving no critical resources.
2.1.2.3.2.1.5 Very low -- The effects of the error, accident, or act of nature are minimal, involving few if any of the cyber resources of the [Tier 3: information systems; Tier 2: mission/business processes or EA segments, common infrastructure, or support services; Tier 1: organization/governance structure], and involving no critical resources.
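If the semi-quantitative approach noted under the preparation step is used, each of the five descriptive levels in the tailored D-3 through D-6 scales can be recorded as a number as well as a label. A small sketch in which the 10/8/5/2/0 anchors are assumed placeholders for a 0-10 scale, not values taken from the tables above:

    # Sketch only: record ratings against the tailored D-3 (capability), D-4 (intent),
    # D-5 (targeting), and D-6 (range of effects) scales as numbers for later arithmetic.
    LEVELS = ["very high", "high", "moderate", "low", "very low"]  # order of the rows above
    ANCHOR = dict(zip(LEVELS, [10, 8, 5, 2, 0]))                   # assumed 0-10 anchors

    def rate(level: str) -> int:
        """Translate a qualitative level from a tailored D-3..D-6 table into its numeric anchor."""
        return ANCHOR[level.lower()]

    # Example: an adversarial source rated against D-3/D-4/D-5, a non-adversarial one against D-6.
    capability, intent, targeting = rate("High"), rate("Moderate"), rate("Moderate")
    range_of_effects = rate("Low")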
2.1.3 Use (and tailor) Table D-2 to identify threat sources, updating Tables D-7 and D-8 (adversarial and non-adversarial threat sources, respectively)
2.1.3.1 Table D-7 -- Adversarial threat sources
2.1.3.1.1 Identifier (defined by us)
2.1.3.1.2 Threat source (Task 1-4 and Table D-2)
2.1.3.1.3 Source of information
2.1.3.1.4 In scope? (yes/no)
2.1.3.1.5 Capability (tailored Table D-3)
2.1.3.1.6 Intent (tailored Table D-4)
2.1.3.1.7 Targeting (tailored Table D-5)
2.1.3.2 Table D-8 -- Non-adversarial threat sources
2.1.3.2.1 Identifier (defined by us)
2.1.3.2.2 Threat source (Task 1-4 and Table D-2)
2.1.3.2.3 Source of information
2.1.3.2.4 In scope? (yes/no)
2.1.3.2.5 Range of effects (tailored Table D-6)
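A minimal sketch of the two row templates just listed, so that D-7 and D-8 entries can be kept as structured records rather than free text; the field names follow the columns above, while the identifiers, example values, and the use of numeric ratings are illustrative assumptions:

    # Sketch only: records mirroring the Table D-7 and D-8 columns listed above.
    from dataclasses import dataclass

    @dataclass
    class AdversarialThreatSource:      # one Table D-7 row
        identifier: str                 # defined by us
        threat_source: str              # from Task 1-4 / tailored Table D-2
        source_of_information: str
        in_scope: bool
        capability: int                 # rating against tailored Table D-3
        intent: int                     # rating against tailored Table D-4
        targeting: int                  # rating against tailored Table D-5

    @dataclass
    class NonAdversarialThreatSource:   # one Table D-8 row
        identifier: str
        threat_source: str
        source_of_information: str
        in_scope: bool
        range_of_effects: int           # rating against tailored Table D-6

    # Illustrative entries only -- not assessed values.
    d7_example = AdversarialThreatSource("TS-A-01", "Organized crime", "Task 1-4 sources", True, 8, 5, 5)
    d8_example = NonAdversarialThreatSource("TS-N-01", "Earthquake", "Task 1-4 sources", True, 5)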
2.2 2-2 -- Identify threat events
2.2.1 TASK 2-2: Identify potential threat events, relevance to the organization, and the threat sources that could initiate the events.
2.2.1.1 Supplemental Guidance
2.2.1.1.1 Threat events are characterized by the threat sources that could initiate the events and, for adversarial events, the tactics, techniques, and procedures (TTPs) used to carry out attacks.
2.2.1.1.2 Organizations define these threat events with sufficient detail to accomplish the purpose of the risk assessment. Multiple threat sources can initiate a single threat event. Conversely, a single threat source can initiate multiple threat events. Therefore, there can be a many-to-many relationship between threat events and threat sources which can potentially increase the complexity of the analysis and the risk assessment.
2.2.1.1.3 Organizations tailor the general descriptions of threat events to identify how each event could potentially harm organizational operations (including mission, functions, image, or reputation) and assets, individuals, other organizations, or the Nation.
2.2.1.1.4 For non-adversarial threat events, organizations use the range of effects to identify the affected operations, assets, or individuals (see Task 2-5).
2.2.1.1.5 For adversarial threat events, organizations use the event description and adversary targeting and intent to identify the affected operations, assets, or individuals.
2.2.1.1.6 For each threat event identified, organizations determine the relevance of the event. Table E-4 provides a range of values for relevance of threat events. The values selected by organizations have a direct linkage to organizational risk tolerance. The more risk averse, the greater the range of values considered. Organizations accepting greater risk or having a greater risk tolerance are more likely to require substantive evidence before giving consideration to threat events.
2.2.1.1.7 If a threat event is deemed to be irrelevant, no further consideration is given. For relevant threat events, organizations identify all potential threat sources that could initiate the events. Organizations can identify each pairing of threat source and threat event separately since the likelihood of threat initiation and success could be different for each pairing.
2.2.1.1.8 Alternatively, organizations can assess likelihoods by considering the set of all possible threat sources that could potentially initiate a threat event (both bookkeeping options are sketched below, after this guidance).
2.2.1.1.9 Organizations make explicit any assumptions and decisions when identifying threat events. Organizations also make explicit the process used for identifying threat events and the information sources used to identify the events.
2.2.1.1.10 Finally, organizations capture information to support the determinations of uncertainty.
2.2.1.1.11 Appendix E provides a set of exemplary tables for use in identifying threat events:
2.2.1.1.11.1 Table E-1 provides a set of exemplary inputs to the threat event identification task;
2.2.1.1.11.2 Table E-2 provides representative examples of adversarial threat events expressed as TTPs;
2.2.1.1.11.3 Table E-3 provides representative examples of non-adversarial threat events;
2.2.1.1.11.4 Table E-4 provides exemplary values for the relevance of threat events to organizations; and
2.2.1.1.11.5 Table E-5 provides a template for summarizing and documenting the results of threat event identification.
2.2.1.1.12 The information produced in Task 2-2 provides threat event inputs to the risk tables in Appendix I.
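A rough sketch of the two bookkeeping options described in this guidance -- one likelihood per threat source/threat event pairing versus one aggregate likelihood per event over all possible initiators; the event names, source identifiers, and structure are illustrative, not outputs of this assessment:

    # Sketch only: the many-to-many relationship between threat events and threat sources.
    # event -> threat sources that could initiate it
    initiators = {
        "Distributed Denial of Service (DDoS) attack": ["TS-A-01", "TS-A-03"],
        "Earthquake at primary facility": ["TS-N-01"],
    }

    # Option 1: one likelihood judgment per (threat source, threat event) pairing.
    pairings = [(source, event) for event, sources in initiators.items() for source in sources]

    # Option 2: one aggregate likelihood per event, over the set of all possible initiators.
    events = list(initiators)

    print(len(pairings), "source/event pairings vs", len(events), "events")

Per-pairing bookkeeping is more work but preserves the fact that initiation and success likelihoods can differ by source; the aggregate option keeps the analysis smaller.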
2.2.2 Provide threat event inputs (Table E-1)
2.2.2.1 Threat information sources (from Task 1-4)
2.2.2.2 Taxonomy of adversarial threat events (tailored version of Table E-2)
2.2.2.2.1 Table E-2 -- Adversarial threat events (giant list -- we need to radically thin this one)
2.2.2.2.1.1 Access sensitive information through network sniffing.
2.2.2.2.1.1.1 Adversary gains access to the exposed wired or wireless data channels that organizations (or organizational personnel) use to transmit information, and intercepts communications. Adversary actions might include, for example, targeting public kiosks or hotel networking connections.
2.2.2.2.1.2 Adapt cyber attacks based on detailed surveillance.
2.2.2.2.1.2.1 Adversary adapts attacks in response to surveillance of organizations and the protective measures that organizations employ.
2.2.2.2.1.3 Exploit recently discovered vulnerabilities.
2.2.2.2.1.3.1 Adversary exploits recently discovered vulnerabilities in organizational information systems in an attempt to attack the systems before mitigation measures are available or in place.
2.2.2.2.1.4 Employ brute force login attempts/password guessing.
2.2.2.2.1.4.1 Adversary attempts to gain access to organizational information systems by random or systematic guessing of passwords, possibly supported by password cracking utilities.
2.2.2.2.1.5 Cause degradation or denial of attacker selected services or capabilities.
2.2.2.2.1.5.1 Adversary launches attacks specifically intended to impede the ability of organizations to function.
2.2.2.2.1.6 Cause deterioration/destruction of critical information system components and functions.
2.2.2.2.1.6.1 Adversary attempts to destroy or deteriorate critical information system components for purposes of impeding or eliminating the ability of organizations to carry out missions or business functions. Detection of this action is not a concern.
2.2.2.2.1.7 Combine internal and external attacks across multiple information systems and information technologies to achieve a breach or compromise.
2.2.2.2.1.7.1 Adversary combines attacks that require both physical presence within organizations and cyber methods to achieve success. Physical components may be as simple as convincing maintenance personnel to leave doors or cabinets open.
2.2.2.2.1.8 Compromise critical information systems via physical access by outsiders.
2.2.2.2.1.8.1 Adversary without authorized access to organizational information systems attempts to physically gain access to the systems.
2.2.2.2.1.9 Compromise mission critical information.
2.2.2.2.1.9.1 Adversary takes action to compromise the integrity of mission critical information, thus preventing or impeding the ability of the organizations to which the information is supplied to carry out operations.
2.2.2.2.1.10 Compromise information systems or devices used externally and reintroduce into the enterprise.
2.2.2.2.1.10.1 Adversary manages to install malware on information systems or devices while the systems/devices are external to organizations for purposes of subsequently infecting organizations when reconnected.
2.2.2.2.1.11 Compromise design, manufacture, and/or distribution of information system components (including hardware, software, and firmware) organizations are known to use.
2.2.2.2.1.11.1 Adversary is able to compromise the design, manufacturing, and/or distribution of critical information system components at selected suppliers.
2.2.2.2.1.12 Conduct reconnaissance, surveillance, and target acquisition of targeted organizations.
2.2.2.2.1.12.1 Adversary uses various means (e.g., scanning, physical observation) to examine and assess organizations and ascertain points of vulnerability.
2.2.2.2.1.13 Conduct phishing attacks.
2.2.2.2.1.13.1 Adversary attempts to acquire sensitive information such as usernames, passwords, or SSNs, by pretending to be communications from a legitimate/trustworthy source. Typical attacks occur via email, instant messaging, or comparable means; commonly directing users to Web sites that appear to be legitimate sites, while actually stealing the entered information.
2.2.2.2.1.14 Continuous, adaptive and changing cyber attacks based on detailed surveillance of organizations.
2.2.2.2.1.14.1 Adversary attacks continually change in response to surveillance of organizations and protective measures that organizations take.
2.2.2.2.1.15 Coordinating cyber attacks on organizations using external (outsider), internal (insider), and supply chain (supplier) attack vectors.
2.2.2.2.1.15.1 Adversary employs continuous, coordinated attacks, potentially using all three attack vectors for the purpose of impeding organizational operations.
2.2.2.2.1.16 Create and operate false front organizations that operate within the critical life cycle path to inject malicious information system components into the supply chain.
2.2.2.2.1.16.1 Adversary creates the appearance of legitimate suppliers that then inject corrupted/malicious information system components into the supply chain of organizations.
2.2.2.2.1.17 Deliver known malware to internal organizational information systems (e.g., virus via email).
2.2.2.2.1.17.1 Adversary uses common delivery mechanisms (e.g., email) to install/insert known malware (e.g., malware whose existence is known) into organizational information systems.
2.2.2.2.1.18 Deliver modified malware to internal organizational information systems.
2.2.2.2.1.18.1 Adversary uses more sophisticated means (e.g., Web traffic, instant messaging, FTP) to deliver malware and possibly modifications of known malware to gain access to internal organizational information systems.
2.2.2.2.1.19 Devise attacks specifically based on deployed information technology environment.
2.2.2.2.1.19.1 Adversary develops attacks, using known and unknown attacks that are designed to take advantage of adversary knowledge of the information technology infrastructure.
2.2.2.2.1.20 Discovering and accessing sensitive data/information stored on publicly accessible information systems.
2.2.2.2.1.20.1 Adversary attempts to scan or mine information on publicly accessible servers and Web pages of organizations with the intent of finding information that is sensitive (i.e., not approved for public release).
2.2.2.2.1.21 Distributed Denial of Service (DDoS) attack.
2.2.2.2.1.21.1 Adversary uses multiple compromised information systems to attack a single target, thereby causing denial of service for users of the targeted information systems.
2.2.2.2.1.22 Exploit known vulnerabilities in mobile systems (e.g., laptops, PDAs, smart phones).
2.2.2.2.1.22.1 Adversary takes advantage of the fact that transportable information systems are outside the physical protection of organizations and the logical protection of corporate firewalls, and compromises the systems based on known vulnerabilities to gather information from those systems.
2.2.2.2.1.23 Exploiting vulnerabilities in information systems timed with organizational mission/business operations tempo.
2.2.2.2.1.23.1 Adversary launches attacks on organizations in a time and manner consistent with organizational needs to conduct mission/business operations.
2.2.2.2.1.24 Externally placed adversary sniffing and intercepting of wireless network traffic.
2.2.2.2.1.24.1 Adversary is strategically positioned to intercept wireless communications of organizations.
2.2.2.2.1.25 Hijacking information system sessions of data traffic between the organization and external entities.
2.2.2.2.1.25.1 Adversary takes control of (hijacks) already established, legitimate information system sessions between organizations and external entities (e.g., users connecting from off-site locations).
2.2.2.2.1.26 Injecting false but believable data/information into organizational information systems.
2.2.2.2.1.26.1 Adversary injects false but believable data into organizational information systems. This action by the adversary may impede the ability of organizations to carry out missions/business functions correctly and/or undercut the credibility other entities may place in the information or services provided by organizations.
2.2.2.2.1.27 Insert subverted individuals into privileged positions in organizations.
2.2.2.2.1.27.1 Adversary has individuals in privileged positions within organizations that are willing and able to carry out actions to cause harm to organizational missions/business functions. Subverted individuals may be active supporters of adversary, supporting adversary (albeit under duress), or unknowingly supporting adversary (e.g., false flag). Adversary may target privileged functions to gain access to sensitive information (e.g., user accounts, system files, etc.) and may leverage access to one privileged capability to get to another capability.
2.2.2.2.1.28 Counterfeit/Spoofed Web site.
2.2.2.2.1.28.1 Adversary creates duplicates of legitimate Web sites and directs users to counterfeit sites to gather information.
2.2.2.2.1.29 Deliver targeted Trojan for control of internal systems and exfiltration of data.
2.2.2.2.1.29.1 Adversary manages to install software containing Trojan horses that are specifically designed to take control of internal organizational information systems, identify sensitive information, exfiltrate the information back to adversary, and conceal these actions.
2.2.2.2.1.30 Employ open source discovery of organizational information useful for future cyber attacks.
2.2.2.2.1.30.1 Adversary mines publicly accessible information with the goal of discerning information about information systems, users, or organizational personnel that the adversary can subsequently employ in support of an attack.
2.2.2.2.1.31 Exploit vulnerabilities on internal organizational information systems.
2.2.2.2.1.31.1 Adversary searches for known vulnerabilities in organizational internal information systems and exploits those vulnerabilities.
2.2.2.2.1.32 Inserting malicious code into organizational information systems to facilitate exfiltration of data/information.
2.2.2.2.1.32.1 Adversary successfully implants malware into internal organizational information systems, where the malware over time identifies and then successfully exfiltrates valuable information.
2.2.2.2.1.33 Installing general-purpose sniffers on organization-controlled information systems or networks.
2.2.2.2.1.33.1 Adversary manages to install sniffing software onto internal organizational information systems or networks.
2.2.2.2.1.34 Leverage traffic/data movement allowed across perimeter (e.g., email communications, removable storage) to compromise internal information systems (e.g., using open ports to exfiltrate information).
2.2.2.2.1.34.1 Adversary makes use of permitted information flows (e.g., email communications) to facilitate compromises to internal information systems (e.g., phishing attacks to direct users to go to Web sites containing malware) which allows adversary to obtain and exfiltrate sensitive information through perimeters.
2.2.2.2.1.35 Insert subverted individuals into the organizations.
2.2.2.2.1.35.1 Adversary has individuals in place within organizations that are willing and able to carry out actions to cause harm to organizational missions/business functions. Subverted individuals may be active supporters of adversary, supporting adversary (albeit under duress), or unknowingly supporting adversary (e.g., false flag).
2.2.2.2.1.36 Insert counterfeited hardware into the supply chain.
2.2.2.2.1.36.1 Adversary intercepts hardware from legitimate suppliers. Adversary modifies the hardware or replaces it with faulty or otherwise modified hardware.
2.2.2.2.1.37 Inserting malicious code into organizational information systems and information system components (e.g., commercial information technology products) known to be used by organizations.
2.2.2.2.1.37.1 Adversary inserts malware into information systems specifically targeted to the hardware, software, and firmware used by organizations (resulting from the reconnaissance of organizations by adversary).
2.2.2.2.1.38 Inserting specialized, non-detectable, malicious code into organizational information systems based on system configurations.
2.2.2.2.1.38.1 Adversary launches multiple, potentially changing attacks specifically targeting critical information system components based on reconnaissance and placement within organizational information systems.
2.2.2.2.1.39 Insider-based session hijacking.
2.2.2.2.1.39.1 Adversary places an entity within organizations in order to gain access to organizational information systems or networks for the express purpose of taking control (hijacking) an already established, legitimate session either between organizations and external entities (e.g., users connecting from remote locations) or between two locations within internal networks.
2.2.2.2.1.40 Installing persistent and targeted sniffers on organizational information systems and networks.
2.2.2.2.1.40.1 Adversary places within the internal organizational information systems or networks software designed to (over a continuous period of time) collect (sniff) network traffic.
2.2.2.2.1.41 Intercept/decrypt weak or unencrypted communication traffic and protocols.
2.2.2.2.1.41.1 Adversary takes advantage of communications that are either unencrypted or use weak encryption (e.g., encryption containing publicly known flaws), targets those communications, and gains access to transmitted information and channels.
2.2.2.2.1.42 Jamming wireless communications.
2.2.2.2.1.42.1 Adversary takes measures to interfere with the wireless communications so as to impede or prevent communications from reaching intended recipients.
2.2.2.2.1.43 Malicious activity using unauthorized ports, protocols, and services.
2.2.2.2.1.43.1 Adversary conducts attacks using ports, protocols, and services for ingress and egress that are not authorized for use by organizations.
2.2.2.2.1.44 Malicious creation, deletion, and/or modification of files on publicly accessible information systems (e.g., Web defacement).
2.2.2.2.1.44.1 Adversary vandalizes, or otherwise makes unauthorized changes to organizational Web sites or files on Web sites.
2.2.2.2.1.45 Mapping and scanning organization-controlled (internal) networks and information systems from within (inside) organizations.
2.2.2.2.1.45.1 Adversary installs malware inside perimeter that allows the adversary to scan network to identify targets of opportunity. Because the scanning does not cross the perimeter, it is not detected by externally placed intrusion detection systems.
2.2.2.2.1.46 Mishandling of critical and/or sensitive information by authorized users.
2.2.2.2.1.46.1 Authorized users inadvertently expose critical/sensitive information.
2.2.2.2.1.47 Multistage attacks (e.g., hopping).
2.2.2.2.1.47.1 Adversary moves attack location from one compromised information system to other information systems making identification of source difficult.
2.2.2.2.1.48 Network traffic modification (man in the middle) attacks by externally placed adversary.
2.2.2.2.1.48.1 Adversary intercepts/eavesdrops on sessions between organizations and external entities. Adversary then relays messages between the organizations and external entities, making them believe that they are talking directly to each other over a private connection, when in fact the entire communication is controlled by the adversary.
2.2.2.2.1.49 Network traffic modification (man in the middle) attacks by internally placed adversary.
2.2.2.2.1.49.1 Adversary operating within the infrastructure of organizations intercepts and corrupts data sessions.
2.2.2.2.1.50 Non-target specific insertion of malware into downloadable software and/or into commercial information technology products.
2.2.2.2.1.50.1 Adversary corrupts or inserts malware into common freeware, shareware, or commercial information technology products. Adversary is not targeting specific organizations in this attack, simply looking for entry points into internal organizational information systems.
2.2.2.2.1.51 Operate across organizations to acquire specific information or achieve desired outcome.
2.2.2.2.1.51.1 Adversary does not limit planning to the targeting of one organization. Adversary observes multiple organizations to acquire necessary information on targets of interest.
2.2.2.2.1.52 Opportunistically stealing or scavenging information systems/components.
2.2.2.2.1.52.1 Adversary takes advantage of opportunities (due to advantageous positioning) to steal information systems or components (e.g., laptop computers or data storage media) that are left unattended outside of the physical perimeters of organizations.
2.2.2.2.1.53 Perimeter network reconnaissance/scanning.
2.2.2.2.1.53.1 Adversary uses commercial or free software to scan organizational perimeters with the goal of obtaining information that provides the adversary with a better understanding of the information technology infrastructure and facilitates the ability of the adversary to launch successful attacks.
2.2.2.2.1.54 Pollution of critical data.
2.2.2.2.1.54.1 Adversary implants corrupted and incorrect data in the critical data that organizations use to cause organizations to take suboptimal actions or to subsequently disbelieve reliable inputs.
2.2.2.2.1.55 Poorly configured or unauthorized information systems exposed to the Internet.
2.2.2.2.1.55.1 Adversary gains access through the Internet, to information systems that are not authorized for such access or that do not meet the specified configuration requirements of organizations.
2.2.2.2.1.56 Salting the physical perimeter of organizations with removable media containing malware.
2.2.2.2.1.56.1 Adversary places removable media (e.g., flash drives) containing malware in locations external to the physical perimeters of organizations but where employees are likely to find and install on organizational information systems.
2.2.2.2.1.57 Simple Denial of Service (DoS) Attack.
2.2.2.2.1.57.1 Adversary attempts to make an Internet-accessible resource unavailable to intended users, or prevent the resource from functioning efficiently or at all, temporarily or indefinitely.
2.2.2.2.1.58 Social engineering by insiders within organizations to convince other insiders to take harmful actions.
2.2.2.2.1.58.1 Internally placed adversaries take actions (e.g., using email, phone) so that individuals within organizations reveal critical/sensitive information (e.g., personally identifiable information).
2.2.2.2.1.59 Social engineering by outsiders to convince insiders to take harmful actions.
2.2.2.2.1.59.1 Externally placed adversaries take actions (using email, phone) with the intent of persuading or otherwise tricking individuals within organizations into revealing critical/sensitive information (e.g., personally identifiable information).
2.2.2.2.1.60 Spear phishing attack.
2.2.2.2.1.60.1 Adversary employs phishing attacks targeted at high-value targets (e.g., senior leaders/executives).
2.2.2.2.1.61 Spill sensitive information.
2.2.2.2.1.61.1 Adversary contaminates organizational information systems (including devices and networks) by placing on the systems or sending to/over the systems, information of a classification/sensitivity which the systems have not been authorized to handle. The information is exposed to individuals that are not authorized access to such information, and the information system, device, or network is unavailable while the spill is investigated and mitigated.
2.2.2.2.1.62 Spread attacks across organizations from existing footholds.
2.2.2.2.1.62.1 Adversary builds upon existing footholds within organizations and works to extend the footholds to other parts of organizations including organizational infrastructure. Adversary places itself in positions to further undermine the ability for organizations to carry out missions/business functions.
2.2.2.2.1.63 Successfully compromise software of critical information systems within organizations.
2.2.2.2.1.63.1 Adversary inserts malware or otherwise corrupts critical internal organizational information systems.
2.2.2.2.1.64 Tailgate authorized staff to gain access to organizational facilities.
2.2.2.2.1.64.1 Adversary follows authorized individuals into secure/controlled locations with the goal of gaining access to facilities, circumventing physical security checks.
2.2.2.2.1.65 Tailored zero-day attacks on organizational information systems.
2.2.2.2.1.65.1 Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Zero-day attacks are based on adversary insight into the information systems and applications used by organizations as well as adversary reconnaissance of organizations.
2.2.2.2.1.66 Tamper with critical organizational information system components and inject the components into the systems.
2.2.2.2.1.66.1 Adversary replaces, through supply chain, subverted insider, or some combination thereof, critical information system components with modified or corrupted components that operate in such a manner as to severely disrupt organizational missions/business functions or operations.
2.2.2.2.1.67 Targeting and compromising home computers (including personal digital assistants and smart phones) of critical employees within organizations.
2.2.2.2.1.67.1 Adversary targets key employees of organizations outside the security perimeters established by organizations by placing malware in the personally owned information systems and devices of individuals (e.g., laptop/notebook computers, personal digital assistants, smart phones). The intent is to take advantage of any instances where employees use personal information systems or devices to convey critical/sensitive information.
2.2.2.2.1.68 Targeting and exploiting critical hardware, software, or firmware (both commercial off-the-shelf and custom information systems and components).
2.2.2.2.1.68.1 Adversary targets and attempts to compromise the operation of software (e.g., through malware injections) that performs critical functions for organizations. This is largely accomplished via supply chain attacks.
2.2.2.2.1.69 Unauthorized internal information system access by insiders.
2.2.2.2.1.69.1 Adversary is an individual who has authorized access to organizational information systems, but gains (or attempts to gain) access that exceeds authorization.
2.2.2.2.1.70 Undermine the ability of organizations to detect attacks.
2.2.2.2.1.70.1 Adversary takes actions to inhibit the effectiveness of the intrusion detection systems or auditing capabilities within organizations.
2.2.2.2.1.71 Use remote information system connections of authorized users as bridge to gain unauthorized access to internal networks (i.e., split tunneling).
2.2.2.2.1.71.1 Adversary takes advantage of external information systems (e.g., laptop computers at remote locations) that are simultaneously connected securely to organizations and to nonsecure remote connections, gaining unauthorized access to organizations via nonsecure, open channels.
2.2.2.2.1.72 Using postal service or other commercial delivery services to insert malicious scanning devices (e.g., wireless sniffers) inside facilities.
2.2.2.2.1.72.1 Adversary uses courier service to deliver to organizational mailrooms a device that is able to scan wireless communications accessible from within the mailrooms and then wirelessly transmit information back to adversary.
2.2.2.2.1.73 Zero-day attacks (non-targeted).
2.2.2.2.1.73.1 Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Attacks are not based on any adversary insights into specific vulnerabilities of organizations.
2.2.2.3 Taxonomy of non-adversarial threat events (tailored version of Table E-3)
2.2.2.3.1 Table E-3 -- Non-adversarial threat events (opposite problem -- the list is too thin -- we need to expand)
2.2.2.3.1.1 Threat source - Accidental ordinary user
2.2.2.3.1.1.1 Threat event - Spill sensitive information
2.2.2.3.1.1.1.1 Description - Authorized user erroneously contaminates a device, information system, or network by placing on it or sending to it information of a classification/sensitivity which it has not been authorized to handle. The information is exposed to access by unauthorized individuals, and as a result, the device, system, or network is unavailable while the spill is investigated and mitigated.
2.2.2.3.1.2 Threat source - Accidental Privileged User or Administrator
2.2.2.3.1.2.1 Threat event - Mishandling of critical and/or sensitive information by authorized users
2.2.2.3.1.2.1.1 Description - Authorized privileged user inadvertently exposes critical/sensitive information.
2.2.2.3.1.3 Threat source - Communication
2.2.2.3.1.3.1 Threat event - Communications contention
2.2.2.3.1.3.1.1 Description - Degraded communications performance due to contention.
2.2.2.3.1.4 Threat source - Earthquake
2.2.2.3.1.4.1 Threat event - Earthquake at primary facility
2.2.2.3.1.4.1.1 Description - Earthquake of organization-defined magnitude at primary facility makes facility inoperable.
2.2.2.3.1.5 Threat source - Fire
2.2.2.3.1.5.1 Threat event - Fire at primary facility
2.2.2.3.1.5.1.1 Description - Fire (not due to adversarial activity) at primary facility makes facility inoperable.
2.2.2.3.1.6 Threat source - Processing
2.2.2.3.1.6.1 Threat event - Resource depletion
2.2.2.3.1.6.1.1 Description - Degraded processing performance due to resource depletion.
2.2.2.3.1.7 Threat source - Storage
2.2.2.3.1.7.1 Threat event - Disk error
2.2.2.3.1.7.1.1 Description - Corrupted storage due to a disk error.
2.2.2.3.1.8 Threat source - Storage
2.2.2.3.1.8.1 Threat event - Pervasive disk error
2.2.2.3.1.8.1.1 Description - Multiple disk errors due to aging of a set of devices all acquired at the same time, from the same supplier.
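Since this list needs to grow, it may help to hold each E-3 entry as a small structured record following the source/event/description pattern above; the two rows shown are taken from the entries above and the layout itself is only a suggestion:

    # Sketch only: Table E-3 entries as (threat source, threat event, description) records,
    # which makes it easy to append rows as the non-adversarial list is expanded.
    from typing import NamedTuple

    class NonAdversarialEvent(NamedTuple):
        threat_source: str
        threat_event: str
        description: str

    table_e3 = [
        NonAdversarialEvent("Storage", "Disk error",
                            "Corrupted storage due to a disk error."),
        NonAdversarialEvent("Fire", "Fire at primary facility",
                            "Fire (not due to adversarial activity) at primary facility makes facility inoperable."),
        # additional rows appended here as the list is expanded
    ]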
2.2.2.4 Assessment scale for assessing the relevance of threat events (tailored version of Table E-4)
2.2.2.4.1 Table E-4 -- Relevance of threat events
2.2.2.4.1.1 Confirmed
2.2.2.4.1.1.1 Seen by the organization
2.2.2.4.1.2 Expected
2.2.2.4.1.2.1 Seen by the organization's peers or partners
2.2.2.4.1.3 Anticipated
2.2.2.4.1.3.1 Reported by a trusted source
2.2.2.4.1.4 Predicted
2.2.2.4.1.4.1 Predicted by a trusted source
2.2.2.4.1.5 Possible
2.2.2.4.1.5.1 Described by a somewhat credible source
2.2.2.4.1.6 N/A
2.2.2.4.1.6.1 Not currently applicable.
2.2.2.4.1.6.1.1 For example, a threat event or TTP could assume specific technologies, architectures, or processes that are not present in the organization, mission/business process, EA segment, or information system; or predisposing conditions that are not present (e.g., location in a flood plain). Alternately, if the organization is using detailed or specific threat information, a threat event or TTP could be deemed inapplicable because information indicates that no adversary is expected to initiate the threat event or use the TTP.
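A minimal sketch of how the tailored E-4 relevance values could be applied when carrying threat events forward into an E-5-style worksheet in Task 2-2: events rated N/A drop out, and the rest keep their relevance and candidate initiating sources. The column names assumed for E-5 and the example rows are illustrative only:

    # Sketch only: apply the Table E-4 relevance values above and keep only relevant events.
    RELEVANCE = ["confirmed", "expected", "anticipated", "predicted", "possible", "n/a"]

    candidate_events = [
        # (threat event, relevance per tailored Table E-4, possible initiating threat sources)
        ("Distributed Denial of Service (DDoS) attack", "expected", ["TS-A-01", "TS-A-03"]),
        ("Earthquake at primary facility", "possible", ["TS-N-01"]),
        ("Jamming wireless communications", "n/a", []),
    ]

    # Events deemed not applicable get no further consideration (per the Task 2-2 guidance).
    table_e5 = [
        {"threat_event": event, "threat_sources": sources, "relevance": relevance}
        for event, relevance, sources in candidate_events
        if relevance != "n/a"
    ]

    # Optionally order the worksheet from most to least relevant.
    table_e5.sort(key=lambda row: RELEVANCE.index(row["relevance"]))

    for row in table_e5:
        print(row["relevance"], "--", row["threat_event"])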
2.2.2.5 Threat events identified in previous risk assessments, if appropriate
2.2.3 Use tailored versions of Tables E-2 and E-3 to identify threat events -- document those in Table E-5
2.2.3.1 Table E-2 -- Adversarial threat events (giant list -- we need to radically thin this one)
2.2.3.1.1 Access sensitive information through network sniffing.
2.2.3.1.1.1 Adversary gains access to the exposed wired or wireless data channels that organizations (or organizational personnel) use to transmit information, and intercepts communications. Adversary actions might include, for example, targeting public kiosks or hotel networking connections.
2.2.3.1.2 Adapt cyber attacks based on detailed surveillance.
2.2.3.1.2.1 Adversary adapts attacks in response to surveillance of organizations and the protective measures that organizations employ.
2.2.3.1.3 Exploit recently discovered vulnerabilities.
2.2.3.1.3.1 Adversary exploits recently discovered vulnerabilities in organizational information systems in an attempt to attack the systems before mitigation measures are available or in place.
2.2.3.1.4 Employ brute force login attempts/password guessing.
2.2.3.1.4.1 Adversary attempts to gain access to organizational information systems by random or systematic guessing of passwords, possibly supported by password cracking utilities.
2.2.3.1.5 Cause degradation or denial of attacker selected services or capabilities.
2.2.3.1.5.1 Adversary launches attacks specifically intended to impede the ability of organizations to function.
2.2.3.1.6 Cause deterioration/destruction of critical information system components and functions.
2.2.3.1.6.1 Adversary attempts to destroy or deteriorate critical information system components for purposes of impeding or eliminating the ability of organizations to carry out missions or business functions. Detection of this action is not a concern.
2.2.3.1.7 Combine internal and external attacks across multiple information systems and information technologies to achieve a breach or compromise.
2.2.3.1.7.1 Adversary combines attacks that require both physical presence within organizations and cyber methods to achieve success. Physical components may be as simple as convincing maintenance personnel to leave doors or cabinets open.
2.2.3.1.8 Compromise critical information systems via physical access by outsiders.
2.2.3.1.8.1 Adversary without authorized access to organizational information systems attempts to physically gain access to the systems.
2.2.3.1.9 Compromise mission critical information.
2.2.3.1.9.1 Adversary takes action to compromise the integrity of mission critical information, thus preventing or impeding the ability of the organizations to which the information is supplied to carry out operations.
2.2.3.1.10 Compromise information systems or devices used externally and reintroduce into the enterprise.
2.2.3.1.10.1 Adversary manages to install malware on information systems or devices while the systems/devices are external to organizations for purposes of subsequently infecting organizations when reconnected.
2.2.3.1.11 Compromise design, manufacture, and/or distribution of information system components (including hardware, software, and firmware) organizations are known to use.
2.2.3.1.11.1 Adversary is able to compromise the design, manufacturing, and/or distribution of critical information system components at selected suppliers.
2.2.3.1.12 Conduct reconnaissance, surveillance, and target acquisition of targeted organizations.
2.2.3.1.12.1 Adversary uses various means (e.g., scanning, physical observation) to examine and assess organizations and ascertain points of vulnerability.
2.2.3.1.13 Conduct phishing attacks.
2.2.3.1.13.1 Adversary attempts to acquire sensitive information such as usernames, passwords, or SSNs, by pretending to be communications from a legitimate/trustworthy source. Typical attacks occur via email, instant messaging, or comparable means; commonly directing users to Web sites that appear to be legitimate sites, while actually stealing the entered information.
2.2.3.1.14 Continuous, adaptive and changing cyber attacks based on detailed surveillance of organizations.
2.2.3.1.14.1 Adversary attacks continually change in response to surveillance of organizations and protective measures that organizations take.
2.2.3.1.15 Coordinating cyber attacks on organizations using external (outsider), internal (insider), and supply chain (supplier) attack vectors.
2.2.3.1.15.1 Adversary employs continuous, coordinated attacks, potentially using all three attack vectors for the purpose of impeding organizational operations.
2.2.3.1.16 Create and operate false front organizations that operate within the critical life cycle path to inject malicious information system components into the supply chain.
2.2.3.1.16.1 Adversary creates the appearance of legitimate suppliers that then inject corrupted/malicious information system components into the supply chain of organizations.
2.2.3.1.17 Deliver known malware to internal organizational information systems (e.g., virus via email).
2.2.3.1.17.1 Adversary uses common delivery mechanisms (e.g., email) to install/insert known malware (e.g., malware whose existence is known) into organizational information systems.
2.2.3.1.18 Deliver modified malware to internal organizational information systems.
2.2.3.1.18.1 Adversary uses more sophisticated means (e.g., Web traffic, instant messaging, FTP) to deliver malware and possibly modifications of known malware to gain access to internal organizational information systems.
2.2.3.1.19 Devise attacks specifically based on deployed information technology environment.
2.2.3.1.19.1 Adversary develops attacks, using known and unknown attacks that are designed to take advantage of adversary knowledge of the information technology infrastructure.
2.2.3.1.20 Discovering and accessing sensitive data/information stored on publicly accessible information systems.
2.2.3.1.20.1 Adversary attempts to scan or mine information on publicly accessible servers and Web pages of organizations with the intent of finding information that is sensitive (i.e., not approved for public release).
2.2.3.1.21 Distributed Denial of Service (DDoS) attack.
2.2.3.1.21.1 Adversary uses multiple compromised information systems to attack a single target, thereby causing denial of service for users of the targeted information systems.
2.2.3.1.22 Exploit known vulnerabilities in mobile systems (e.g., laptops, PDAs, smart phones).
2.2.3.1.22.1 Adversary takes advantage of the fact that transportable information systems are outside the physical protection of organizations and the logical protection of corporate firewalls, and compromises the systems based on known vulnerabilities to gather information from those systems.
2.2.3.1.23 Exploiting vulnerabilities in information systems timed with organizational mission/business operations tempo.
2.2.3.1.23.1 Adversary launches attacks on organizations in a time and manner consistent with organizational needs to conduct mission/business operations.
2.2.3.1.24 Externally placed adversary sniffing and intercepting of wireless network traffic.
2.2.3.1.24.1 Adversary is strategically positioned to intercept wireless communications of organizations.
2.2.3.1.25 Hijacking information system sessions of data traffic between the organization and external entities.
2.2.3.1.25.1 Adversary takes control of (hijacks) already established, legitimate information system sessions between organizations and external entities (e.g., users connecting from off-site locations).
2.2.3.1.26 Injecting false but believable data/information into organizational information systems.
2.2.3.1.26.1 Adversary injects false but believable data into organizational information systems. This action by the adversary may impede the ability of organizations to carry out missions/business functions correctly and/or undercut the credibility other entities may place in the information or services provided by organizations.
2.2.3.1.27 Insert subverted individuals into privileged positions in organizations.
2.2.3.1.27.1 Adversary has individuals in privileged positions within organizations that are willing and able to carry out actions to cause harm to organizational missions/business functions. Subverted individuals may be active supporters of adversary, supporting adversary (albeit under duress), or unknowingly supporting adversary (e.g., false flag). Adversary may target privileged functions to gain access to sensitive information (e.g., user accounts, system files, etc.) and may leverage access to one privileged capability to get to another capability.
2.2.3.1.28 Counterfeit/Spoofed Web site.
2.2.3.1.28.1 Adversary creates duplicates of legitimate Web sites and directs users to counterfeit sites to gather information.
2.2.3.1.29 Deliver targeted Trojan for control of internal systems and exfiltration of data.
2.2.3.1.29.1 Adversary manages to install software containing Trojan horses that are specifically designed to take control of internal organizational information systems, identify sensitive information, exfiltrate the information back to adversary, and conceal these actions.
2.2.3.1.30 Employ open source discovery of organizational information useful for future cyber attacks.
2.2.3.1.30.1 Adversary mines publicly accessible information with the goal of discerning information about information systems, users, or organizational personnel that the adversary can subsequently employ in support of an attack.
2.2.3.1.31 Exploit vulnerabilities on internal organizational information systems.
2.2.3.1.31.1 Adversary searches for known vulnerabilities in organizational internal information systems and exploits those vulnerabilities.
2.2.3.1.32 Inserting malicious code into organizational information systems to facilitate exfiltration of data/information.
2.2.3.1.32.1 Adversary successfully implants malware into internal organizational information systems, where the malware over time identifies and then successfully exfiltrates valuable information.
2.2.3.1.33 Installing general-purpose sniffers on organization-controlled information systems or networks.
2.2.3.1.33.1 Adversary manages to install sniffing software onto internal organizational information systems or networks.
2.2.3.1.34 Leverage traffic/data movement allowed across perimeter (e.g., email communications, removable storage) to compromise internal information systems (e.g., using open ports to exfiltrate information).
2.2.3.1.34.1 Adversary makes use of permitted information flows (e.g., email communications) to facilitate compromises to internal information systems (e.g., phishing attacks to direct users to go to Web sites containing malware) which allows adversary to obtain and exfiltrate sensitive information through perimeters.
2.2.3.1.35 Insert subverted individuals into the organizations.
2.2.3.1.35.1 Adversary has individuals in place within organizations that are willing and able to carry out actions to cause harm to organizational missions/business functions. Subverted individuals may be active supporters of adversary, supporting adversary (albeit under duress), or unknowingly supporting adversary (e.g., false flag).
2.2.3.1.36 Insert counterfeited hardware into the supply chain.
2.2.3.1.36.1 Adversary intercepts hardware from legitimate suppliers. Adversary modifies the hardware or replaces it with faulty or otherwise modified hardware.
2.2.3.1.37 Inserting malicious code into organizational information systems and information system components (e.g., commercial information technology products) known to be used by organizations.
2.2.3.1.37.1 Adversary inserts malware into information systems specifically targeted to the hardware, software, and firmware used by organizations (resulting from the reconnaissance of organizations by adversary).
2.2.3.1.38 Inserting specialized, non-detectable, malicious code into organizational information systems based on system configurations.
2.2.3.1.38.1 Adversary launches multiple, potentially changing attacks specifically targeting critical information system components based on reconnaissance and placement within organizational information systems.
2.2.3.1.39 Insider-based session hijacking.
2.2.3.1.39.1 Adversary places an entity within organizations in order to gain access to organizational information systems or networks for the express purpose of taking control of (hijacking) an already established, legitimate session either between organizations and external entities (e.g., users connecting from remote locations) or between two locations within internal networks.
2.2.3.1.40 Installing persistent and targeted sniffers on organizational information systems and networks.
2.2.3.1.40.1 Adversary places within the internal organizational information systems or networks software designed to (over a continuous period of time) collect (sniff) network traffic.
2.2.3.1.41 Intercept/decrypt weak or unencrypted communication traffic and protocols.
2.2.3.1.41.1 Adversary takes advantage of communications that are either unencrypted or use weak encryption (e.g., encryption containing publicly known flaws), targets those communications, and gains access to transmitted information and channels.
2.2.3.1.42 Jamming wireless communications.
2.2.3.1.42.1 Adversary takes measures to interfere with the wireless communications so as to impede or prevent communications from reaching intended recipients.
2.2.3.1.43 Malicious activity using unauthorized ports, protocols, and services.
2.2.3.1.43.1 Adversary conducts attacks using ports, protocols, and services for ingress and egress that are not authorized for use by organizations.
2.2.3.1.44 Malicious creation, deletion, and/or modification of files on publicly accessible information systems (e.g., Web defacement).
2.2.3.1.44.1 Adversary vandalizes or otherwise makes unauthorized changes to organizational Web sites or files on Web sites.
2.2.3.1.45 Mapping and scanning organization-controlled (internal) networks and information systems from within (inside) organizations.
2.2.3.1.45.1 Adversary installs malware inside perimeter that allows the adversary to scan network to identify targets of opportunity. Because the scanning does not cross the perimeter, it is not detected by externally placed intrusion detection systems.
2.2.3.1.46 Mishandling of critical and/or sensitive information by authorized users.
2.2.3.1.46.1 Authorized users inadvertently expose critical/sensitive information.
2.2.3.1.47 Multistage attacks (e.g., hopping).
2.2.3.1.47.1 Adversary moves attack location from one compromised information system to other information systems making identification of source difficult.
2.2.3.1.48 Network traffic modification (man in the middle) attacks by externally placed adversary.
2.2.3.1.48.1 Adversary intercepts/eavesdrops on sessions between organizations and external entities. Adversary then relays messages between the organizations and external entities, making them believe that they are talking directly to each other over a private connection, when in fact the entire communication is controlled by the adversary.
2.2.3.1.49 Network traffic modification (man in the middle) attacks by internally placed adversary.
2.2.3.1.49.1 Adversary operating within the infrastructure of organizations intercepts and corrupts data sessions.
2.2.3.1.50 Non-target specific insertion of malware into downloadable software and/or into commercial information technology products.
2.2.3.1.50.1 Adversary corrupts or inserts malware into common freeware, shareware, or commercial information technology products. Adversary is not targeting specific organizations in this attack, simply looking for entry points into internal organizational information systems.
2.2.3.1.51 Operate across organizations to acquire specific information or achieve desired outcome.
2.2.3.1.51.1 Adversary does not limit planning to the targeting of one organization. Adversary observes multiple organizations to acquire necessary information on targets of interest.
2.2.3.1.52 Opportunistically stealing or scavenging information systems/components.
2.2.3.1.52.1 Adversary takes advantage of opportunities (due to advantageous positioning) to steal information systems or components (e.g., laptop computers or data storage media) that are left unattended outside of the physical perimeters of organizations.
2.2.3.1.53 Perimeter network reconnaissance/scanning.
2.2.3.1.53.1 Adversary uses commercial or free software to scan organizational perimeters with the goal of obtaining information that provides the adversary with a better understanding of the information technology infrastructure and facilitates the ability of the adversary to launch successful attacks.
2.2.3.1.54 Pollution of critical data.
2.2.3.1.54.1 Adversary implants corrupted and incorrect data in the critical data that organizations use, causing organizations to take suboptimal actions or to subsequently disbelieve reliable inputs.
2.2.3.1.55 Poorly configured or unauthorized information systems exposed to the Internet.
2.2.3.1.55.1 Adversary gains access, through the Internet, to information systems that are not authorized for such access or that do not meet the specified configuration requirements of organizations.
2.2.3.1.56 Salting the physical perimeter of organizations with removable media containing malware.
2.2.3.1.56.1 Adversary places removable media (e.g., flash drives) containing malware in locations external to the physical perimeters of organizations but where employees are likely to find the media and use them on organizational information systems.
2.2.3.1.57 Simple Denial of Service (DoS) Attack.
2.2.3.1.57.1 Adversary attempts to make an Internet-accessible resource unavailable to intended users, or prevent the resource from functioning efficiently or at all, temporarily or indefinitely.
2.2.3.1.58 Social engineering by insiders within organizations to convince other insiders to take harmful actions.
2.2.3.1.58.1 Internally placed adversaries take actions (e.g., using email, phone) to induce individuals within organizations to reveal critical/sensitive information (e.g., personally identifiable information).
2.2.3.1.59 Social engineering by outsiders to convince insiders to take harmful actions.
2.2.3.1.59.1 Externally placed adversaries take actions (using email, phone) with the intent of persuading or otherwise tricking individuals within organizations into revealing critical/sensitive information (e.g., personally identifiable information).
2.2.3.1.60 Spear phishing attack.
2.2.3.1.60.1 Adversary employs phishing attacks targeted at high-value targets (e.g., senior leaders/executives).
2.2.3.1.61 Spill sensitive information.
2.2.3.1.61.1 Adversary contaminates organizational information systems (including devices and networks) by placing on the systems, or sending to/over the systems, information of a classification/sensitivity that the systems have not been authorized to handle. The information is exposed to individuals who are not authorized to access such information, and the information system, device, or network is unavailable while the spill is investigated and mitigated.
2.2.3.1.62 Spread attacks across organizations from existing footholds.
2.2.3.1.62.1 Adversary builds upon existing footholds within organizations and works to extend the footholds to other parts of organizations including organizational infrastructure. Adversary places itself in positions to further undermine the ability of organizations to carry out missions/business functions.
2.2.3.1.63 Successfully compromise software of critical information systems within organizations.
2.2.3.1.63.1 Adversary inserts malware or otherwise corrupts critical internal organizational information systems.
2.2.3.1.64 Tailgate authorized staff to gain access to organizational facilities.
2.2.3.1.64.1 Adversary follows authorized individuals into secure/controlled locations with the goal of gaining access to facilities, circumventing physical security checks.
2.2.3.1.65 Tailored zero-day attacks on organizational information systems.
2.2.3.1.65.1 Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Zero-day attacks are based on adversary insight into the information systems and applications used by organizations as well as adversary reconnaissance of organizations.
2.2.3.1.66 Tamper with critical organizational information system components and inject the components into the systems.
2.2.3.1.66.1 Adversary replaces, through the supply chain, a subverted insider, or some combination thereof, critical information system components with modified or corrupted components that operate in such a manner as to severely disrupt organizational missions/business functions or operations.
2.2.3.1.67 Targeting and compromising home computers (including personal digital assistants and smart phones) of critical employees within organizations.
2.2.3.1.67.1 Adversary targets key employees of organizations outside the security perimeters established by organizations by placing malware in the personally owned information systems and devices of individuals (e.g., laptop/notebook computers, personal digital assistants, smart phones). The intent is to take advantage of any instances where employees use personal information systems or devices to convey critical/sensitive information.
2.2.3.1.68 Targeting and exploiting critical hardware, software, or firmware (both commercial off-the-shelf and custom information systems and components).
2.2.3.1.68.1 Adversary targets and attempts to compromise the operation of software (e.g., through malware injections) that performs critical functions for organizations. This is largely accomplished via supply chain attacks.
2.2.3.1.69 Unauthorized internal information system access by insiders.
2.2.3.1.69.1 Adversary is an individual who has authorized access to organizational information systems, but gains (or attempts to gain) access that exceeds authorization.
2.2.3.1.70 Undermine the ability of organizations to detect attacks.
2.2.3.1.70.1 Adversary takes actions to inhibit the effectiveness of the intrusion detection systems or auditing capabilities within organizations.
2.2.3.1.71 Use remote information system connections of authorized users as bridge to gain unauthorized access to internal networks (i.e., split tunneling).
2.2.3.1.71.1 Adversary takes advantage of external information systems (e.g., laptop computers at remote locations) that are simultaneously connected securely to organizations and to nonsecure remote connections, gaining unauthorized access to organizations via nonsecure, open channels.
2.2.3.1.72 Using postal service or other commercial delivery services to insert malicious scanning devices (e.g., wireless sniffers) inside facilities.
2.2.3.1.72.1 Adversary uses courier service to deliver to organizational mailrooms a device that is able to scan wireless communications accessible from within the mailrooms and then wirelessly transmit information back to adversary.
2.2.3.1.73 Zero-day attacks (non-targeted).
2.2.3.1.73.1 Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Attacks are not based on any adversary insights into specific vulnerabilities of organizations.
2.2.3.2 Table E-3 -- Non-adversarial threat events (note: this list is thin and needs to be expanded)
2.2.3.2.1 Threat source - accidental ordinary user
2.2.3.2.1.1 Threat event - spill sensitive information
2.2.3.2.1.1.1 Description - Authorized user erroneously contaminates a device, information system, or network by placing on it or sending to it information of a classification/sensitivity which it has not been authorized to handle. The information is exposed to access by unauthorized individuals, and as a result, the device, system, or network is unavailable while the spill is investigated and mitigated.
2.2.3.2.2 Threat source - Accidental Privileged User or Administrator
2.2.3.2.2.1 Threat event - Mishandling of critical and/or sensitive information by authorized users
2.2.3.2.2.1.1 Description - Authorized privileged user inadvertently exposes critical/sensitive information.
2.2.3.2.3 Threat source - Communication
2.2.3.2.3.1 Threat event - Communications contention
2.2.3.2.3.1.1 Description - Degraded communications performance due to contention.
2.2.3.2.4 Threat source - Earthquake
2.2.3.2.4.1 Threat event - Earthquake at primary facility
2.2.3.2.4.1.1 Description - Earthquake of organization-defined magnitude at primary facility makes facility inoperable.
2.2.3.2.5 Threat source - Fire
2.2.3.2.5.1 Threat event - Fire at primary facility
2.2.3.2.5.1.1 Description - Fire (not due to adversarial activity) at primary facility makes facility inoperable.
2.2.3.2.6 Threat source - Processing
2.2.3.2.6.1 Threat event - Resource depletion
2.2.3.2.6.1.1 Description - Degraded processing performance due to resource depletion.
2.2.3.2.7 Threat source - Storage
2.2.3.2.7.1 Threat event - disk error
2.2.3.2.7.1.1 Description - Corrupted storage due to a disk error.
2.2.3.2.8 Threat source - Storage
2.2.3.2.8.1 Threat event - pervasive disk error
2.2.3.2.8.1.1 Description - Multiple disk errors due to aging of a set of devices all acquired at the same time, from the same supplier.
2.2.3.3 Table E-5 -- Threat events
2.2.3.3.1 Identifier (defined by us)
2.2.3.3.2 Threat event
2.2.3.3.3 Threat source
2.2.3.3.4 Relevance
2.2.4 Use tables D-7 and D-8 (from Task 2-1) to identify threat sources that could initiate threat events -- document those in table E-5
2.2.4.1 Table D-7 -- Adversarial threat sources
2.2.4.1.1 Identifier (defined by us)
2.2.4.1.2 Threat source (Task 1-4 and Table D-2)
2.2.4.1.3 Source of information
2.2.4.1.4 In scope? (yes/no)
2.2.4.1.5 Capability (tailored Table D-3)
2.2.4.1.6 Intent (tailored Table D-4)
2.2.4.1.7 Targeting (tailored Table D-5)
2.2.4.2 Table D-8 -- Non-adversarial threat sources
2.2.4.2.1 Identifier (defined by us)
2.2.4.2.2 Threat source (Task 1-4 and Table D-2)
2.2.4.2.3 Source of information
2.2.4.2.4 In scope? (yes/no)
2.2.4.2.5 Range of effects (tailored Table D-6)
2.2.4.3 Table E-5 -- Threat events
2.2.4.3.1 Identifier (defined by us)
2.2.4.3.2 Threat event
2.2.4.3.3 Threat source
2.2.4.3.4 Relevance
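To keep the two source worksheets above easy to cross-reference from Table E-5, a minimal sketch of a Table D-7 row and a Table D-8 row as plain records follows. The field names, the use of 0-10 values standing in for the tailored D-3/D-4/D-5 scales, and the example entry are illustrative assumptions; SP 800-30 defines only the column headings and the tailoring tables.

```python
# Minimal sketch of Table D-7 / D-8 worksheet rows (illustrative field names;
# the 0-10 values stand in for the tailored D-3/D-4/D-5 assessment scales).
from dataclasses import dataclass

@dataclass
class AdversarialThreatSource:      # Table D-7 columns
    identifier: str                 # defined by us
    source: str                     # Task 1-4 / Table D-2
    info_source: str                # source of information
    in_scope: bool
    capability: int                 # tailored Table D-3
    intent: int                     # tailored Table D-4
    targeting: int                  # tailored Table D-5

@dataclass
class NonAdversarialThreatSource:   # Table D-8 columns
    identifier: str
    source: str
    info_source: str
    in_scope: bool
    range_of_effects: str           # tailored Table D-6

# Hypothetical example entry; the values are not from SP 800-30.
ts_01 = AdversarialThreatSource("TS-01", "Outside adversary", "threat intelligence feed",
                                True, capability=8, intent=8, targeting=5)
```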
2.2.5 Use the assessment scale from Table E-4 to assess the relevance of threat events -- document these in table E-5
2.2.5.1 Table E-4 -- Relevance of threat events
2.2.5.1.1 Confirmed
2.2.5.1.1.1 Seen by the organization
2.2.5.1.2 Expected
2.2.5.1.2.1 Seen by the organization's peers or partners
2.2.5.1.3 Anticipated
2.2.5.1.3.1 Reported by a trusted source
2.2.5.1.4 Predicted
2.2.5.1.4.1 Predicted by a trusted source
2.2.5.1.5 Possible
2.2.5.1.5.1 Described by a somewhat credible source
2.2.5.1.6 N/A
2.2.5.1.6.1 Not currently applicable.
2.2.5.1.6.1.1 For example, a threat event or TTP could assume specific technologies, architectures, or processes that are not present in the organization, mission/business process, EA segment, or information system; or predisposing conditions that are not present (e.g., location in a flood plain). Alternatively, if the organization is using detailed or specific threat information, a threat event or TTP could be deemed inapplicable because information indicates that no adversary is expected to initiate the threat event or use the TTP.
2.2.5.2 Table E-5 -- Threat events
2.2.5.2.1 Identifier (defined by us)
2.2.5.2.2 Threat event
2.2.5.2.3 Threat source
2.2.5.2.4 Relevance
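As a working illustration of the screening role that the Table E-4 relevance values play in the risk tables, the sketch below orders the relevance levels and drops Table E-5 rows that fall below an organization-defined threshold. The enum ordering, record fields, threshold, and sample rows are assumptions made for the example.

```python
# Sketch of Table E-4 relevance values used to screen Table E-5 rows.
# Ordering, field names, and the example threshold are illustrative only.
from dataclasses import dataclass
from enum import IntEnum

class Relevance(IntEnum):           # Table E-4 values
    NOT_APPLICABLE = 0
    POSSIBLE = 1
    PREDICTED = 2
    ANTICIPATED = 3
    EXPECTED = 4
    CONFIRMED = 5

@dataclass
class ThreatEventRow:               # Table E-5 columns
    identifier: str                 # defined by us
    event: str
    source: str
    relevance: Relevance

def screen(rows, threshold=Relevance.POSSIBLE):
    """Keep only events whose relevance meets the organization's criteria;
    screened-out rows are not carried into the remaining risk-table columns."""
    return [r for r in rows if r.relevance >= threshold]

rows = [ThreatEventRow("TE-12", "Spear phishing attack", "TS-01", Relevance.CONFIRMED),
        ThreatEventRow("TE-40", "Jamming wireless communications", "TS-03", Relevance.NOT_APPLICABLE)]
print([r.identifier for r in screen(rows)])   # -> ['TE-12']
```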
2.2.6 Use Table E-5 and D-7 to update columns 1-6 in Table I-5 (adversary risk)
2.2.6.1 Table E-5 -- Threat events
2.2.6.1.1 Identifier (defined by us)
2.2.6.1.2 Threat event
2.2.6.1.3 Threat source
2.2.6.1.4 Relevance
2.2.6.2 Table D-7 -- Adversarial threat sources
2.2.6.2.1 Identifier (defined by us)
2.2.6.2.2 Threat source (Task 1-4 and Table D-2)
2.2.6.2.3 Source of information
2.2.6.2.4 In scope? (yes/no)
2.2.6.2.5 Capability (tailored Table D-3)
2.2.6.2.6 Intent (tailored Table D-4)
2.2.6.2.7 Targeting (tailored Table D-5)
2.2.6.3 Table I-5 -- Adversarial risk
2.2.6.3.1 Threat event
2.2.6.3.1.1 Identify threat event -- Task 2-2, Table E-2, Table E-5, Table I-5
2.2.6.3.2 Threat sources
2.2.6.3.2.1 Identify threat sources that could initiate the threat event -- Task 2-1, Table D-1, Table D-2, Table D-7, Table I-5
2.2.6.3.3 Capability
2.2.6.3.3.1 Assess threat source capability -- Task 2-1, Table D-3, Table D-7, Table I-5
2.2.6.3.4 Intent
2.2.6.3.4.1 Assess threat source intent -- Task 2-1, Table D-4, Table D-7, Table I-5
2.2.6.3.5 Targeting
2.2.6.3.5.1 Assess threat source targeting -- Task 2-1, Table D-5, Table D-7, Table I-5
2.2.6.3.6 Relevance
2.2.6.3.6.1 Determine relevance of threat event -- use as a screen: if the relevance criteria are not met, do not complete subsequent columns -- Task 2-2, Table E-1, Table E-4, Table E-5, Table I-5
2.2.6.3.7 Likelihood of attack initiation
2.2.6.3.7.1 Determine the likelihood that one or more of the threat sources initiates the threat event, taking into consideration capability, intent, and targeting (Task 2-4, Table G-1, Table G-2, Table I-5)
2.2.6.3.8 Vulnerabilities Predisposing Conditions
2.2.6.3.8.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-3; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-5.)
2.2.6.3.9 Likelihood that Initiated Attack Succeeds
2.2.6.3.9.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration threat source capability, vulnerabilities, and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-5.)
2.2.6.3.10 Overall Likelihood
2.2.6.3.10.1 Determine the likelihood that the threat event will be initiated and result in adverse impact (i.e., combination of likelihood of attack initiation and likelihood that initiated attack succeeds). (Task 2-4; Table G-1; Table G-5; Table I-5.)
2.2.6.3.11 Level of Impact
2.2.6.3.11.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1; Table H-2; Table H-3; Table H-4; Table H-5; Table I-5.)
2.2.6.3.12 Risk
2.2.6.3.12.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-5.)
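One convenient bookkeeping device for the twelve Table I-5 columns is a single record per threat event that is filled in as Tasks 2-1 through 2-6 are completed; a sketch follows. The field names, the Optional typing, and the assumption of 0-10 semi-quantitative values are our own; only the column meanings come from the descriptions above.

```python
# Sketch of one Table I-5 (adversarial risk) row. Columns 1-6 are filled from
# Tables E-5 and D-7; the rest are completed in Tasks 2-4 through 2-6.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AdversarialRiskRow:
    threat_event: str                                   # col 1  (Table E-5)
    threat_sources: List[str]                           # col 2  (Table D-7)
    capability: int                                     # col 3  (Table D-3 scale)
    intent: int                                         # col 4  (Table D-4 scale)
    targeting: int                                      # col 5  (Table D-5 scale)
    relevance: str                                      # col 6  (Table E-4) -- screening column
    likelihood_of_initiation: Optional[int] = None      # col 7  (Table G-2)
    vulnerabilities_conditions: List[str] = field(default_factory=list)  # col 8 (Tables F-3/F-6)
    likelihood_attack_succeeds: Optional[int] = None    # col 9  (Table G-4)
    overall_likelihood: Optional[int] = None            # col 10 (Table G-5)
    level_of_impact: Optional[int] = None               # col 11 (Tables H-1 through H-5)
    risk: Optional[int] = None                          # col 12 (Tables I-1 through I-3)
```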
2.2.7 Use Table E-5 and D-8 to update columns 1-4 in Table I-7 (non-adversary risk)
2.2.7.1 Table E-5 -- Threat events
2.2.7.1.1 Identifier (defined by us)
2.2.7.1.2 Threat event
2.2.7.1.3 Threat source
2.2.7.1.4 Relevance
2.2.7.2 Table D-8 -- Non-adversarial threat sources
2.2.7.2.1 Identifier (defined by us)
2.2.7.2.2 Threat source (Task 1-4 and Table D-2)
2.2.7.2.3 Source of information
2.2.7.2.4 In scope? (yes/no)
2.2.7.2.5 Range of effects (tailored Table D-6)
2.2.7.3 Table I-7 -- Non-adversarial risk
2.2.7.3.1 Threat Event
2.2.7.3.1.1 Identify threat event. (Task 2-2; Table E-1; Table E-3; Table E-5; Table I-7.)
2.2.7.3.2 Threat Sources
2.2.7.3.2.1 Identify threat sources that could initiate the threat event. (Task 2-1; Table D-1; Table D-2; Table D-8; Table I-7.)
2.2.7.3.3 Range of Effects
2.2.7.3.3.1 Identify the ranges of effects from the threat source. (Task 2-1; Table D-1; Table D-6; Table I-7.)
2.2.7.3.4 Relevance
2.2.7.3.4.1 Determine relevance of threat event. (Task 2-2; Table E-1; Table E-4; Table E-5; Table I-7.) If the relevance of the threat event does not meet the organization’s criteria for further consideration, do not complete the remaining columns.
2.2.7.3.5 Likelihood of Threat Event Occurring
2.2.7.3.5.1 Determine the likelihood that the threat event will occur. (Task 2-4; Table G-1; Table G-3; Table I-7.)
2.2.7.3.6 Vulnerabilities Predisposing Conditions
2.2.7.3.6.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-3; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-7.)
2.2.7.3.7 Likelihood that Threat Event Results in Adverse Impact
2.2.7.3.7.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration vulnerabilities and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-7.)
2.2.7.3.8 Overall Likelihood
2.2.7.3.8.1 Determine the likelihood that the threat event will occur and result in adverse impacts (i.e., combination of likelihood of threat occurring and likelihood that the threat event results in adverse impact). (Task 2-4; Table G-1; Table G-5; Table I-7.)
2.2.7.3.9 Level of Impact
2.2.7.3.9.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1; Table H-2; Table H-3; Table H-4; Table H-5; Table I-7.)
2.2.7.3.10 Risk
2.2.7.3.10.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-7.)
2.3 2-3 -- Identify vulnerabilities and predisposing conditions
2.3.1 TASK 2-3: Identify vulnerabilities and predisposing conditions that affect the likelihood that threat events of concern result in adverse impacts to the organization.
2.3.1.1 Supplemental Guidance
2.3.1.1.1 The primary purpose of vulnerability assessments is to understand the nature and degree to which organizations, mission/business processes, and information systems are vulnerable to threat sources identified in Task 2-1 and the threat events identified in Task 2-2 that can be initiated by those threat sources.
2.3.1.1.2 There is potentially a many-to-many relationship between threat events and vulnerabilities. Multiple threat events can exploit a single vulnerability, and conversely, multiple vulnerabilities can be exploited by a single threat event. Vulnerabilities can be identified at varying degrees of granularity and specificity. The level of detail provided in any particular vulnerability assessment is consistent with the purpose of the risk assessment and the type of inputs needed to support follow-on likelihood and impact determinations.
2.3.1.1.3 Many risk assessments tend to rely on threat-vulnerability pairs as the focal point of the assessments. However, due to the ever-increasing complexity within organizations, mission/business processes, and the information systems supporting those processes, the number of vulnerabilities tends to be large. Therefore, the vulnerability identification task is used to understand the general nature of the vulnerabilities (including scope, number, and type) relevant to the assessment (see Task 1-3) and to catalog specific vulnerabilities only as necessary to support that purpose.
2.3.1.1.4 Organizations determine which vulnerabilities are relevant to which threat events in order to reduce the space of potential risks to be assessed. Organizations also make explicit:
2.3.1.1.4.1 (i) the process used to conduct vulnerability assessments;
2.3.1.1.4.2 (ii) assumptions related to the assessments;
2.3.1.1.4.3 (iii) credible sources and methods for obtaining vulnerability information; and
2.3.1.1.4.4 (iv) the process/rationale for the conclusions reached as to how vulnerable organizations are to the identified threat events of concern.
2.3.1.1.5 And finally, organizations capture information to support determination of uncertainty.
2.3.1.1.6 In addition to identifying vulnerabilities, organizations also identify any predisposing conditions which may affect susceptibility to certain vulnerabilities. Predisposing conditions that exist within organizations (including mission/business processes, information systems, and environments of operation) can contribute to (i.e., increase or decrease) the likelihood that one or more threat events, once initiated by threat sources, result in adverse impacts to organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.3.1.1.7 Organizations determine which predisposing conditions are relevant to which threat events in order to reduce the space of potential risks to be assessed.
2.3.1.1.8 Appendix F provides a set of exemplary tables for use in identifying vulnerabilities and predisposing conditions:
2.3.1.1.8.1 Table F-1 provides a set of exemplary inputs to the vulnerability and predisposing condition identification task;
2.3.1.1.8.2 Table F-2 provides an exemplary assessment scale for assessing the severity of identified vulnerabilities;
2.3.1.1.8.3 Table F-3 provides a template for summarizing/documenting the results of vulnerability identification;
2.3.1.1.8.4 Table F-4 provides an exemplary taxonomy that can be used to identify and characterize predisposing conditions;
2.3.1.1.8.5 Table F-5 provides an exemplary assessment scale for assessing the pervasiveness of predisposing conditions; and
2.3.1.1.8.6 Table F-6 provides a template for summarizing/documenting the results of identifying predisposing conditions.
2.3.1.1.9 The information produced in Task 2-3 provides vulnerability and predisposing condition inputs to the risk tables in Appendix I.
2.3.2 Provide vulnerability and predisposing condition inputs (Table F-1)
2.3.2.1 TABLE F-1: INPUTS – VULNERABILITIES AND PREDISPOSING CONDITIONS
2.3.2.1.1 From Tier 1 (organization level)
2.3.2.1.1.1 Sources of vulnerability information deemed to be credible (e.g., open source and/or classified vulnerabilities, previous risk/vulnerability assessments, Mission and/or Business Impact Analyses). (Section 3.1, Task 1-4.)
2.3.2.1.1.2 Vulnerability information and guidance specific to Tier 1 (e.g., vulnerabilities related to organizational governance, core missions/business functions, management/operational policies, procedures, and structures, external mission/business relationships).
2.3.2.1.1.3 Taxonomy of predisposing conditions, annotated by the organization, if necessary. (Table F-4)
2.3.2.1.1.4 Characterization of vulnerabilities and predisposing conditions.
2.3.2.1.1.4.1 Assessment scale for assessing the severity of vulnerabilities, annotated by the organization, if necessary. (Table F-2)
2.3.2.1.1.4.2 Assessment scale for assessing the pervasiveness of predisposing conditions, annotated by the organization, if necessary. (Table F-5)
2.3.2.1.2 From Tier 2 (mission or business-process level)
2.3.2.1.2.1 Vulnerability information and guidance specific to Tier 2 (e.g., vulnerabilities related to organizational mission/business processes, EA segments, common infrastructure, support services, common controls, and external dependencies).
2.3.2.1.3 From Tier 3 (information system level)
2.3.2.1.3.1 Vulnerability information and guidance specific to Tier 3 (e.g., vulnerabilities related to information systems, information technologies, information system components, applications, networks, environments of operation).
2.3.2.1.3.2 Security assessment reports (i.e., deficiencies in assessed controls identified as vulnerabilities).
2.3.2.1.3.3 Results of monitoring activities (e.g., automated and nonautomated data feeds).
2.3.2.1.3.4 Vulnerability assessments, Red Team reports, or other reports from analyses of information systems, subsystems, information technology products, devices, networks, or applications.
2.3.2.1.3.5 Contingency Plans, Disaster Recovery Plans, Incident Reports.
2.3.2.1.3.6 Vendor/manufacturer vulnerability reports.
2.3.3 Identify vulnerabilities (Table F-3)
2.3.3.1 TABLE F-3: TEMPLATE – IDENTIFICATION OF VULNERABILITIES
2.3.3.1.1 Columns
2.3.3.1.1.1 Identifier (defined by us)
2.3.3.1.1.2 Vulnerability (and Source of Information)
2.3.3.1.1.2.1 Task 2-3, Task 1-4 or organization-defined
2.3.3.1.1.3 Vulnerability severity
2.3.3.1.1.3.1 Table F-2 assessment scale or organization-defined
2.3.4 Assess the severity of identified vulnerabilities (use Table F-2 for scale) and update Table F-3
2.3.4.1 TABLE F-2: ASSESSMENT SCALE – VULNERABILITY SEVERITY
2.3.4.1.1 Very High
2.3.4.1.1.1 10
2.3.4.1.1.1.1 Relevant security control or other remediation is not implemented and not planned; or no security measure can be identified to remediate the vulnerability.
2.3.4.1.2 High
2.3.4.1.2.1 8
2.3.4.1.2.1.1 Relevant security control or other remediation is planned but not implemented.
2.3.4.1.3 Moderate
2.3.4.1.3.1 5
2.3.4.1.3.1.1 Relevant security control or other remediation is partially implemented and somewhat effective.
2.3.4.1.4 Low
2.3.4.1.4.1 2
2.3.4.1.4.1.1 Relevant security control or other remediation is fully implemented and somewhat effective.
2.3.4.1.5 Very Low
2.3.4.1.5.1 0
2.3.4.1.5.1.1 Relevant security control or other remediation is fully implemented, assessed, and effective.
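The Table F-2 scale above pairs each qualitative severity level with a single semi-quantitative value, so carrying those values into the Table F-3 worksheet reduces to a lookup; a sketch follows. The dictionary name and strict level spellings are our own convention, and organizations that tailor the scale would substitute their own values.

```python
# Sketch of the Table F-2 semi-quantitative severity values shown above.
SEVERITY_SCORES = {      # qualitative level -> representative value
    "Very High": 10,
    "High": 8,
    "Moderate": 5,
    "Low": 2,
    "Very Low": 0,
}

def severity_score(level: str) -> int:
    """Look up the value recorded in the Table F-3 vulnerability severity column."""
    return SEVERITY_SCORES[level]

print(severity_score("Moderate"))   # -> 5
```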
2.3.5 Assess the pervasiveness of the predisposing conditions (use Table F-5 for scale) and update Table F-6
2.3.5.1 TABLE F-4: TAXONOMY OF PREDISPOSING CONDITIONS
2.3.5.1.1 INFORMATION RELATED -- Needs to handle information (as it is created, transmitted, stored, processed, and/or displayed) in a specific manner, due to its sensitivity (or lack of sensitivity), legal or regulatory requirements, and/or contractual or other organizational agreements.
2.3.5.1.1.1 Classified National Security Information
2.3.5.1.1.2 Compartments
2.3.5.1.1.3 Controlled Unclassified Information
2.3.5.1.1.4 Personally Identifiable Information
2.3.5.1.1.5 Special Access Programs
2.3.5.1.1.6 Agreement-Determined
2.3.5.1.1.6.1 NOFORN
2.3.5.1.1.6.2 Proprietary
2.3.5.1.2 TECHNICAL -- Needs to use technologies in specific ways.
2.3.5.1.2.1 Architectural
2.3.5.1.2.1.1 Compliance with technical standards
2.3.5.1.2.1.2 Use of specific products or product lines
2.3.5.1.2.1.3 Solutions for and/or approaches to user-based collaboration and information sharing
2.3.5.1.2.1.4 Allocation of specific security functionality to common controls
2.3.5.1.2.2 Functional
2.3.5.1.2.2.1 Networked multiuser
2.3.5.1.2.2.2 Single-user
2.3.5.1.2.2.3 Stand-alone / nonnetworked
2.3.5.1.2.2.4 Restricted functionality (e.g., communications, sensors, embedded controllers)
2.3.5.1.3 OPERATIONAL/ENVIRONMENTAL -- Ability to rely upon physical, procedural, and personnel controls provided by the operational environment.
2.3.5.1.3.1 OPERATIONAL / ENVIRONMENTAL
2.3.5.1.3.1.1 Mobility
2.3.5.1.3.1.1.1 Fixed-site (specify location)
2.3.5.1.3.1.1.2 Semi-mobile
2.3.5.1.3.1.1.2.1 Land-based (e.g., van)
2.3.5.1.3.1.1.2.2 Sea-based
2.3.5.1.3.1.1.2.3 Airborne
2.3.5.1.3.1.1.2.4 Space-based
2.3.5.1.3.1.1.3 Mobile (e.g., handheld device)
2.3.5.1.3.2 Population with physical and/or logical access to components of the information system, mission/business process, EA segment
2.3.5.1.3.2.1 Size of population
2.3.5.1.3.2.2 Clearance/vetting of population
2.3.6 Summarize results. Using Tables F-3 and F-6, update Table I-5 (adversarial risk) and Table I-7 (non-adversarial risk)
2.3.6.1 TABLE F-5: ASSESSMENT SCALE – PERVASIVENESS OF PREDISPOSING CONDITIONS
2.3.6.1.1 Very High
2.3.6.1.1.1 10
2.3.6.1.1.1.1 Applies to all organizational missions/business functions (Tier 1), mission/business processes (Tier 2), or information systems (Tier 3).
2.3.6.1.2 High
2.3.6.1.2.1 8
2.3.6.1.2.1.1 Applies to most organizational missions/business functions (Tier 1), mission/business processes (Tier 2), or information systems (Tier 3).
2.3.6.1.3 Moderate
2.3.6.1.3.1 5
2.3.6.1.3.1.1 Applies to many organizational missions/business functions (Tier 1), mission/business processes (Tier 2), or information systems (Tier 3).
2.3.6.1.4 Low
2.3.6.1.4.1 2
2.3.6.1.4.1.1 Applies to some organizational missions/business functions (Tier 1), mission/business processes (Tier 2), or information systems (Tier 3).
2.3.6.1.5 Very Low
2.3.6.1.5.1 0
2.3.6.1.5.1.1 Applies to few organizational missions/business functions (Tier 1), mission/business processes (Tier 2), or information systems (Tier 3).
2.3.6.2 TABLE F-6: TEMPLATE – IDENTIFICATION OF PREDISPOSING CONDITIONS
2.3.6.2.1 Identifier (defined by us)
2.3.6.2.2 Predisposing Condition (and Source of Information)
2.3.6.2.2.1 Table F-4, Task 1-4 or organization-defined
2.3.6.2.3 Pervasiveness of condition
2.3.6.2.3.1 Table F-5 assessment scale or organization-defined
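Because Tables I-5 and I-7 collapse vulnerabilities and predisposing conditions into a single column, it can help to keep the Table F-3 and Table F-6 rows as separate records and flatten them only when the risk tables are filled in. The sketch below does this; the field names and the flattened text format are illustrative assumptions.

```python
# Sketch of Table F-3 and Table F-6 rows, flattened into the combined
# "Vulnerabilities / Predisposing Conditions" column of Table I-5 or I-7.
from dataclasses import dataclass

@dataclass
class Vulnerability:                # Table F-3 row
    identifier: str                 # defined by us
    description: str                # vulnerability (and source of information)
    severity: int                   # Table F-2 value

@dataclass
class PredisposingCondition:        # Table F-6 row
    identifier: str
    description: str                # condition (and source), per the Table F-4 taxonomy
    pervasiveness: int              # Table F-5 value

def risk_table_entry(vulns, conditions):
    """Produce the text placed in the combined column of Table I-5 or I-7."""
    parts = [f"{v.identifier} (severity {v.severity})" for v in vulns]
    parts += [f"{c.identifier} (pervasiveness {c.pervasiveness})" for c in conditions]
    return "; ".join(parts)
```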
2.3.6.3 Table I-5 -- Adversarial risk
2.3.6.3.1 Threat event
2.3.6.3.1.1 Identify threat event -- Task 2-2, Table E-2, Table E-5, Table I-5
2.3.6.3.2 Threat sources
2.3.6.3.2.1 Identify threat sources that could initiate the threat event -- Task 2-1, Table D-1, Table D-2, Table D-7, Table I-5
2.3.6.3.3 Capability
2.3.6.3.3.1 Assess threat source capability -- Task 2-1, Table D-3, Table D-7, Table I-5
2.3.6.3.4 Intent
2.3.6.3.4.1 Assess threat source intent -- Task 2-1, Table D-4, Table D-7, Table I-5
2.3.6.3.5 Targeting
2.3.6.3.5.1 Assess threat source targeting -- Task 2-1, Table D-5, Table D-7, Table I-5
2.3.6.3.6 Relevance
2.3.6.3.6.1 Determine relevance of threat event -- use as a screen: if the relevance criteria are not met, do not complete subsequent columns -- Task 2-2, Table E-1, Table E-4, Table E-5, Table I-5
2.3.6.3.7 Likelihood of attack initiation
2.3.6.3.7.1 Determine the likelihood that one or more of the threat sources initiates the threat event, taking into consideration capability, intent, and targeting (Task 2-4, Table G-1, Table G-2, Table I-5)
2.3.6.3.8 Vulnerabilities Predisposing Conditions
2.3.6.3.8.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-3; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-5.)
2.3.6.3.9 Likelihood that Initiated Attack Succeeds
2.3.6.3.9.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration threat source capability, vulnerabilities, and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-5.)
2.3.6.3.10 Overall Likelihood
2.3.6.3.10.1 Determine the likelihood that the threat event will be initiated and result in adverse impact (i.e., combination of likelihood of attack initiation and likelihood that initiated attack succeeds). (Task 2-4; Table G-1; Table G-5; Table I-5.)
2.3.6.3.11 Level of Impact
2.3.6.3.11.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1; Table H-2; Table H-3; Table H-4; Table H-5; Table I-5.)
2.3.6.3.12 Risk
2.3.6.3.12.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-5.)
2.3.6.4 Table I-7 -- Non-adversarial risk
2.3.6.4.1 Threat Event
2.3.6.4.1.1 Identify threat event. (Task 2-2; Table E-1; Table E-3; Table E-5; Table I-7.)
2.3.6.4.2 Threat Sources
2.3.6.4.2.1 Identify threat sources that could initiate the threat event. (Task 2-1; Table D-1; Table D-2; Table D-8; Table I-7.)
2.3.6.4.3 Range of Effects
2.3.6.4.3.1 Identify the ranges of effects from the threat source. (Task 2-1; Table D-1; Table D-6; Table I-7.)
2.3.6.4.4 Relevance
2.3.6.4.4.1 Determine relevance of threat event. (Task 2-2; Table E-1; Table E-4; Table E-5; Table I-7.) If the relevance of the threat event does not meet the organization’s criteria for further consideration, do not complete the remaining columns.
2.3.6.4.5 Likelihood of Threat Event Occurring
2.3.6.4.5.1 Determine the likelihood that the threat event will occur. (Task 2-4; Table G-1; Table G-3; Table I-7.)
2.3.6.4.6 Vulnerabilities Predisposing Conditions
2.3.6.4.6.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-3; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-7.)
2.3.6.4.7 Likelihood that Threat Event Results in Adverse Impact
2.3.6.4.7.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration vulnerabilities and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-7.)
2.3.6.4.8 Overall Likelihood
2.3.6.4.8.1 Determine the likelihood that the threat event will occur and result in adverse impacts (i.e., combination of likelihood of threat occurring and likelihood that the threat event results in adverse impact). (Task 2-4; Table G-1; Table G-5; Table I-7.)
2.3.6.4.9 Level of Impact
2.3.6.4.9.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1; Table H-2; Table H-3; Table H-4; Table H-5; Table I-7.)
2.3.6.4.10 Risk
2.3.6.4.10.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-7.)
2.4 2-4 -- Determine likelihood
2.4.1 TASK 2-4: Determine the likelihood that threat events of concern result in adverse impacts to the organization, considering: (i) the characteristics of the threat sources that could initiate the events; (ii) the vulnerabilities and predisposing conditions identified; and (iii) organizational susceptibility reflecting safeguards/countermeasures planned or implemented to impede such events.
2.4.1.1 Supplemental Guidance
2.4.1.1.1 Organizations employ a three-step process to determine the overall likelihood of threat events.
2.4.1.1.1.1 First, organizations assess the likelihood that threat events will be initiated (for adversarial threat events) or will occur (for non-adversarial threat events).
2.4.1.1.1.2 Second, organizations assess the likelihood that threat events once initiated or occurring, will result in adverse impacts to organizational operations and assets, individuals, other organizations, or the Nation.
2.4.1.1.1.3 Finally, organizations assess the overall likelihood as a combination of likelihood of initiation/occurrence and likelihood of resulting in adverse impact.
2.4.1.1.2 Organizations also make explicit:
2.4.1.1.2.1 (i) the process used to conduct likelihood determinations;
2.4.1.1.2.2 (ii) assumptions related to the determinations;
2.4.1.1.2.3 (iii) credible sources/methods for obtaining likelihood information; and
2.4.1.1.2.4 (iv) the rationale for the conclusions reached with regard to the likelihood determinations.
2.4.1.1.3 And finally, organizations capture information to support determination of uncertainty.
2.4.1.1.4 Appendix G provides a set of exemplary tables for use in determining likelihood of threat events:
2.4.1.1.4.1 Table G-1 provides a set of exemplary inputs to the likelihood determination task;
2.4.1.1.4.2 Table G-2 provides an exemplary assessment scale for assessing the likelihood of adversarial threat events;
2.4.1.1.4.3 Table G-3 provides an exemplary assessment scale for assessing the likelihood of non-adversarial threat events occurring;
2.4.1.1.4.4 Table G-4 provides an exemplary assessment scale for assessing the likelihood of threat events having adverse impacts if the events are initiated (adversarial) or occur (non-adversarial); and
2.4.1.1.4.5 Table G-5 provides an exemplary assessment scale for assessing the overall likelihood of threat events (i.e., a combination of the likelihood of initiation/occurrence and the likelihood of impact).
2.4.1.1.5 Organizations assess the likelihood of threat event initiation by taking into consideration the characteristics of the threat sources of concern including capability, intent, and targeting (see Task 2-1 and Appendix D).
2.4.1.1.6 If threat events require more capability than adversaries possess (and adversaries are cognizant of this fact), then the adversaries are not expected to initiate the events.
2.4.1.1.7 If adversaries do not expect to achieve intended objectives by executing threat events, then the adversaries are not expected to initiate the events.
2.4.1.1.8 And finally, if adversaries are not actively targeting specific organizations or their mission/business functions, adversaries are not expected to initiate threat events.
2.4.1.1.9 Organizations can use the assessment scale in Table G-2 and provide a rationale for the assessment allowing explicit consideration of deterrence and threat shifting. Threat shifting is the response of adversaries to perceived safeguards, countermeasures, or obstructions, in which adversaries change some characteristic of their intent to do harm in order to avoid and/or overcome those safeguards, countermeasures, or obstacles.
2.4.1.1.10 Threat shifting can occur in one or more domains including:
2.4.1.1.10.1 (i) the time domain (e.g., a delay in attack or illegal entry to conduct additional surveillance);
2.4.1.1.10.2 (ii) the target domain (selecting a different, less-protected target);
2.4.1.1.10.3 (iii) the resource domain (e.g., adding resources to the attack in order to reduce uncertainty or overcome countermeasures); or
2.4.1.1.10.4 (iv) the attack planning/attack method domain (e.g., changing the attack weapon or attack path).
2.4.1.1.11 Threat shifting is a natural consequence of a dynamic set of interactions between threat sources and the asset types they target. With more sophisticated threat sources, it also tends to default to the path of least resistance to exploit particular vulnerabilities, and the responses are not always predictable. In addition to the safeguards and countermeasures applied and the impact of a successful exploit of an organizational vulnerability, another influence on threat shifting is the perceived benefit to the attacker, which can affect how much and when threat shifting occurs.
2.4.1.1.12 Organizations can assess the likelihood of threat event occurrence (non-adversarial) using Table G-3 and provide a similar rationale for the assessment.
2.4.1.1.13 Organizations assess the likelihood that threat events result in adverse impacts by taking into consideration the set of identified vulnerabilities and predisposing conditions (see Task 2-3 and Appendix F). For threat events initiated by adversaries, organizations consider characteristics of associated threat sources. For non-adversarial threat events, organizations take into account the anticipated severity and duration of the event (as included in the description of the event). Organizations can use the assessment scale in Table G-4 and provide a rationale for the assessment allowing explicit consideration as stated above.
2.4.1.1.14 Threat events for which no vulnerabilities or predisposing conditions are identified have a very low likelihood of resulting in adverse impacts. Such threat events can be highlighted and moved to the end of the table (or to a separate table), so that they can be tracked for consideration in follow-on risk assessments. However, no further consideration during the current assessment is warranted.
2.4.1.1.15 The overall likelihood of a threat event is a combination of:
2.4.1.1.15.1 (i) the likelihood that the event will occur (e.g., due to human error or natural disaster) or be initiated by an adversary; and
2.4.1.1.15.2 (ii) the likelihood that the initiation/occurrence will result in adverse impacts.
2.4.1.1.16 Organizations assess the overall likelihood of threat events by using inputs from Tables G-2, G-3 and G-4. Any specific algorithm or rule for combining the determined likelihood values depends on:
2.4.1.1.16.1 (i) general organizational attitudes toward risk, including overall risk tolerance and tolerance for uncertainty;
2.4.1.1.16.2 (ii) specific tolerances toward uncertainty in different risk factors; and
2.4.1.1.16.3 (iii) organizational weighting of risk factors.
2.4.1.1.17 For example, organizations could use any of the following rules, or could define a different rule (a sketch implementing these options follows the list):
2.4.1.1.17.1 (i) use the maximum of the two likelihood values;
2.4.1.1.17.2 (ii) use the minimum of the two likelihood values;
2.4.1.1.17.3 (iii) consider likelihood of initiation/occurrence only, assuming that if threat events are initiated or occur, the events will result in adverse impacts;
2.4.1.1.17.4 (iv) consider likelihood of impact only, assuming that if threat events could result in adverse impacts, adversaries will initiate the events; or
2.4.1.1.17.5 (v) take a weighted average of the two likelihood values.
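The sketch below implements rules (i) through (v) over the 0-10 semi-quantitative likelihood values used in Tables G-2/G-3 and G-4. The function signature, the default rule, and the rounding of the weighted average are our choices; SP 800-30 leaves the choice of rule, and any weighting, to the organization.

```python
# Sketch of the example combining rules for overall likelihood (Table G-5 input).
def overall_likelihood(l_initiation: int, l_impact: int,
                       rule: str = "max", weight: float = 0.5) -> int:
    if rule == "max":               # rule (i): maximum of the two values
        return max(l_initiation, l_impact)
    if rule == "min":               # rule (ii): minimum of the two values
        return min(l_initiation, l_impact)
    if rule == "initiation_only":   # rule (iii): assume initiated events cause impact
        return l_initiation
    if rule == "impact_only":       # rule (iv): assume impactful events get initiated
        return l_impact
    if rule == "weighted":          # rule (v): weighted average of the two values
        return round(weight * l_initiation + (1 - weight) * l_impact)
    raise ValueError(f"unknown rule: {rule}")

print(overall_likelihood(8, 5, rule="weighted", weight=0.7))   # -> 7
```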
2.4.1.1.18 Threat-vulnerability pairing is undesirable when analyzing and assessing likelihood at the mission/business function level and, in many cases, is discouraged even at the information system level. This analysis approach typically drives the level of detail in identifying threat events and vulnerabilities, rather than allowing organizations to make effective use of sources of threat information and/or to identify threats at a level of detail that is meaningful.
2.4.1.1.19 Depending on the level of detail in threat specification, a given threat event could exploit multiple weaknesses and dependencies. In assessing likelihoods, organizations need to look not only at vulnerabilities that threat events could exploit, but also at mission susceptibility to events for which no security controls (or viable implementations of security controls) exist (e.g., due to functional dependencies, particularly to external dependencies).
2.4.1.1.20 In certain situations, the most effective way to reduce mission/business risk attributable to information security risk is to redesign mission/business processes so there are potential work-arounds when information systems are compromised.
2.4.1.1.21 The information produced in Task 2-4 provides threat event likelihood inputs to the risk tables in Appendix I.
2.4.2 Provide "likelihood" inputs (use Table G-1)
2.4.2.1 TABLE G-1: INPUTS – DETERMINATION OF LIKELIHOOD
2.4.2.1.1 From Tier 1 (Organization level)
2.4.2.1.1.1 Sources of threat information identified for organization-wide use (e.g., specific information that may be useful in determining likelihoods such as adversary capabilities, intent, and targeting objectives).
2.4.2.1.1.2 Likelihood information and guidance specific to Tier 1 (e.g., likelihood information related to organizational governance, core missions/business functions, management/operational policies, procedures, and structures, external mission/business relationships).
2.4.2.1.1.3 Guidance on organization-wide levels of likelihood needing no further consideration.
2.4.2.1.1.4 Assessment scale for assessing the likelihood of threat event initiation (adversarial threat events), annotated by the organization, if necessary. (Table G-2)
2.4.2.1.1.5 Assessment scale for assessing the likelihood of threat event occurrence (non-adversarial threat events), annotated by the organization, if necessary. (Table G-3)
2.4.2.1.1.6 Assessment scale for assessing the likelihood of threat events resulting in adverse impacts, annotated by the organization, if necessary. (Table G-4)
2.4.2.1.1.7 Assessment scale for assessing the overall likelihood of threat events being initiated or occurring and resulting in adverse impacts, annotated by the organization, if necessary. (Table G-5)
2.4.2.1.2 From Tier 2: (Mission/business process level)
2.4.2.1.2.1 Likelihood information and guidance specific to Tier 2 (e.g., likelihood information related to mission/business processes, EA segments, common infrastructure, support services, common controls, and external dependencies).
2.4.2.1.3 From Tier 3: (Information system level)
2.4.2.1.3.1 Likelihood information and guidance specific to Tier 3 (e.g., likelihood information related to information systems, information technologies, information system components, applications, networks, environments of operation).
2.4.2.1.3.2 Historical data on successful and unsuccessful cyber attacks; attack detection rates.
2.4.2.1.3.3 Security assessment reports (i.e., deficiencies in assessed controls identified as vulnerabilities).
2.4.2.1.3.4 Results of monitoring activities (e.g., automated and nonautomated data feeds).
2.4.2.1.3.5 Vulnerability assessments, Red Team reports, or other reports from analyses of information systems, subsystems, information technology products, devices, networks, or applications.
2.4.2.1.3.6 Contingency Plans, Disaster Recovery Plans, Incident Reports.
2.4.2.1.3.7 Vendor/manufacturer vulnerability reports.
2.4.3 Identify likelihood determination factors (from organization-defined information sources)
2.4.4 Assess the likelihood of threat event initiation (for adversary threats) and the likelihood of threat event occurrence (for non‐adversary threats). Use assessment scales in Table G‐2 and Table G‐3, as extended or modified by the organization
2.4.4.1 TABLE G-2: ASSESSMENT SCALE – LIKELIHOOD OF THREAT EVENT INITIATION (ADVERSARIAL)
2.4.4.1.1 Very High
2.4.4.1.1.1 10
2.4.4.1.1.1.1 Adversary is almost certain to initiate the threat event.
2.4.4.1.2 High
2.4.4.1.2.1 8
2.4.4.1.2.1.1 Adversary is highly likely to initiate the threat event.
2.4.4.1.3 Moderate
2.4.4.1.3.1 5
2.4.4.1.3.1.1 Adversary is somewhat likely to initiate the threat event.
2.4.4.1.4 Low
2.4.4.1.4.1 2
2.4.4.1.4.1.1 Adversary is unlikely to initiate the threat event.
2.4.4.1.5 Very Low
2.4.4.1.5.1 0
2.4.4.1.5.1.1 Adversary is highly unlikely to initiate the threat event.
2.4.4.2 TABLE G-3: ASSESSMENT SCALE – LIKELIHOOD OF THREAT EVENT OCCURRENCE (NON-ADVERSARIAL)
2.4.4.2.1 Very High
2.4.4.2.1.1 10
2.4.4.2.1.1.1 Error, accident, or act of nature is almost certain to occur; or occurs more than 100 times a year.
2.4.4.2.2 High
2.4.4.2.2.1 8
2.4.4.2.2.1.1 Error, accident, or act of nature is highly likely to occur; or occurs between 10-100 times a year.
2.4.4.2.3 Moderate
2.4.4.2.3.1 5
2.4.4.2.3.1.1 Error, accident, or act of nature is somewhat likely to occur; or occurs between 1-10 times a year.
2.4.4.2.4 Low
2.4.4.2.4.1 2
2.4.4.2.4.1.1 Error, accident, or act of nature is unlikely to occur; or occurs less than once a year, but more than once every 10 years.
2.4.4.2.5 Very Low
2.4.4.2.5.1 0
2.4.4.2.5.1.1 Error, accident, or act of nature is highly unlikely to occur; or occurs less than once every 10 years.
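The Table G-3 criteria combine a qualitative judgment with an approximate annual frequency. Where an organization has incident history, a simple binning of the estimated rate of occurrence, sketched below, can seed the qualitative assessment; the treatment of boundary values (e.g., exactly 10 per year) is our assumption.

```python
# Sketch mapping an estimated annual rate of occurrence to the Table G-3 levels.
def occurrence_level(events_per_year: float) -> str:
    if events_per_year > 100:
        return "Very High"   # occurs more than 100 times a year
    if events_per_year >= 10:
        return "High"        # occurs between 10 and 100 times a year
    if events_per_year >= 1:
        return "Moderate"    # occurs between 1 and 10 times a year
    if events_per_year >= 0.1:
        return "Low"         # less than once a year, more than once every 10 years
    return "Very Low"        # occurs less than once every 10 years

print(occurrence_level(3))   # -> 'Moderate'
```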
2.4.5 Assess the likelihood of threat events resulting in adverse impacts, given initiation or occurrence. Use assessment scale in Table G‐4, as extended or modified by the organization.
2.4.5.1 TABLE G-4: ASSESSMENT SCALE – LIKELIHOOD OF THREAT EVENT RESULTING IN ADVERSE IMPACTS
2.4.5.1.1 Very High
2.4.5.1.1.1 10
2.4.5.1.1.1.1 If the threat event is initiated or occurs, it is almost certain to have adverse impacts.
2.4.5.1.2 High
2.4.5.1.2.1 8
2.4.5.1.2.1.1 If the threat event is initiated or occurs, it is highly likely to have adverse impacts.
2.4.5.1.3 Moderate
2.4.5.1.3.1 5
2.4.5.1.3.1.1 If the threat event is initiated or occurs, it is somewhat likely to have adverse impacts.
2.4.5.1.4 Low
2.4.5.1.4.1 2
2.4.5.1.4.1.1 If the threat event is initiated or occurs, it is unlikely to have adverse impacts.
2.4.5.1.5 Very Low
2.4.5.1.5.1 0
2.4.5.1.5.1.1 If the threat event is initiated or occurs, it is highly unlikely to have adverse impacts.
2.4.6 Assess the overall likelihood of threat event initiation/occurrence and the threat events resulting in adverse impacts. Use assessment scale in Table G‐5, as extended or modified by the organization
2.4.6.1 TABLE G-5: ASSESSMENT SCALE – OVERALL LIKELIHOOD
2.4.6.1.1 Matrix combining the likelihood of threat event initiation/occurrence (rows, from Table G-2 or G-3) with the likelihood that the event results in adverse impacts (columns, from Table G-4) to yield the overall likelihood.
2.4.7 Summarize results -- Use Table G‐2, Table G‐4, and Table G‐5 to update Columns 7, 9, and 10 in Table I‐5 (adversary risk) and Table G‐3, Table G‐4, and Table G‐5 to update Columns 5, 7, and 8 in Table I‐7 (non‐adversary risk), as appropriate.
2.4.7.1 Table I-5 -- Adversarial risk
2.4.7.1.1 Threat event
2.4.7.1.1.1 Identify threat event -- Task 2-2, Table E-2, Table E-5, Table I-5
2.4.7.1.2 Threat sources
2.4.7.1.2.1 Identify threat sources that could initiate the threat event -- Task 2-1, Table D-1, Table D-2, Table D-7, Table I-5
2.4.7.1.3 Capability
2.4.7.1.3.1 Assess threat source capability -- Task 2-1, Table D-3, Table D-7, Table I-5
2.4.7.1.4 Intent
2.4.7.1.4.1 Assess threat source intent -- Task 2-1, Table D-4, Table D-7, Table I-5
2.4.7.1.5 Targeting
2.4.7.1.5.1 Assess threat source targeting -- Task 2-1, Table D-5, Table D-7, Table I-5
2.4.7.1.6 Relevance
2.4.7.1.6.1 Determine relevance of threat event -- use as a screen: if the relevance criteria are not met, do not complete subsequent columns -- Task 2-2, Table E-1, Table E-4, Table E-5, Table I-5
2.4.7.1.7 Likelihood of attack initiation
2.4.7.1.7.1 Determine the likelihood that one or more of the threat sources initiates the threat event, taking into consideration capability, intent, and targeting (Task 2-4, Table G-1, Table G-2, Table I-5)
2.4.7.1.8 Vulnerabilities Predisposing Conditions
2.4.7.1.8.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-3; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-5.)
2.4.7.1.9 Likelihood that Initiated Attack Succeeds
2.4.7.1.9.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration threat source capability, vulnerabilities, and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-5.)
2.4.7.1.10 Overall Likelihood
2.4.7.1.10.1 Determine the likelihood that the threat event will be initiated and result in adverse impact (i.e., combination of likelihood of attack initiation and likelihood that initiated attack succeeds). (Task 2-4; Table G-1; Table G-5; Table I-5.)
2.4.7.1.11 Level of Impact
2.4.7.1.11.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1; Table H-2; Table H-3; Table H-4; Table H-5; Table I-5.)
2.4.7.1.12 Risk
2.4.7.1.12.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-5.)
2.4.7.2 Table I-7 -- Non-adversarial risk
2.4.7.2.1 Threat Event
2.4.7.2.1.1 Identify threat event. (Task 2-2; Table E-1; Table E-3; Table E-5; Table I-7.)
2.4.7.2.2 Threat Sources
2.4.7.2.2.1 Identify threat sources that could initiate the threat event. (Task 2-1; Table D-1; Table D-2; Table D-8; Table I-7.)
2.4.7.2.3 Range of Effects
2.4.7.2.3.1 Identify the ranges of effects from the threat source. (Task 2-1; Table D-1; Table D-6; Table I-7.)
2.4.7.2.4 Relevance
2.4.7.2.4.1 Determine relevance of threat event. (Task 2-2; Table E-1; Table E-4; Table E-5; Table I-7.) If the relevance of the threat event does not meet the organization’s criteria for further consideration, do not complete the remaining columns.
2.4.7.2.5 Likelihood of Threat Event Occurring
2.4.7.2.5.1 Determine the likelihood that the threat event will occur. (Task 2-4; Table G-1; Table G-3; Table I-7.)
2.4.7.2.6 Vulnerabilities Predisposing Conditions
2.4.7.2.6.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-3; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-7.)
2.4.7.2.7 Likelihood that Threat Event Results in Adverse Impact
2.4.7.2.7.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration vulnerabilities and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-7.)
2.4.7.2.8 Overall Likelihood
2.4.7.2.8.1 Determine the likelihood that the threat event will occur and result in adverse impacts (i.e., combination of likelihood of threat occurring and likelihood that the threat event results in adverse impact). (Task 2-4; Table G-1; Table G-5; Table I-7.)
2.4.7.2.9 Level of Impact
2.4.7.2.9.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1; Table H-2; Table H-3; Table H-4; Table H-5; Table I-7.)
2.4.7.2.10 Risk
2.4.7.2.10.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-7.)
2.5 2-5 -- Determine level of impact
2.5.1 TASK 2-5: Determine the adverse impacts to the organization from threat events of concern considering: (i) the characteristics of the threat sources that could initiate the events; (ii) the vulnerabilities and predisposing conditions identified; and (iii) organizational susceptibility reflecting the safeguards/countermeasures planned or implemented to impede such events.
2.5.1.1 Supplemental Guidance
2.5.1.1.1 Organizations describe adverse impacts in terms of the potential harm caused to organizational operations and assets, individuals, other organizations, or the Nation. Organizations can also describe impacts in terms of failure to achieve one or more security objectives (i.e., confidentiality, integrity, or availability). Organizations make explicit:
2.5.1.1.1.1 (i) the process used to conduct impact determinations;
2.5.1.1.1.2 (ii) assumptions related to impact determinations;
2.5.1.1.1.3 (iii) credible sources and methods for obtaining impact information; and
2.5.1.1.1.4 (iv) the rationale for the conclusions reached with regard to impact determinations.
2.5.1.1.2 Assessing impact can involve identifying assets or potential targets of threat sources, including information resources (e.g., information, information systems, information technologies, applications, data repositories, communications links), people, and physical resources (e.g., buildings, power supplies), which could be affected by threat events. The focus is on high-value assets (i.e., those assets for which loss, damage, or compromise could result in significant adverse impacts to organizations).
2.5.1.1.3 Organizations may explicitly identify how established priorities and values guide the identification of high-value assets and impacts to organizational stakeholders. If not, priorities and values related to identifying targets of threat sources and organizational impacts can typically be derived from strategic planning and policies. For example, security categorization levels indicate the organizational impacts of compromising different types of information; Privacy Impact Assessments and criticality levels (when defined as part of continuity-of-operations planning or Mission/Business Impact Analysis) indicate the impacts of destruction, corruption, or loss of accountability for information resources to organizational stakeholders. Strategic plans and policies also assert or imply the relative priorities of immediate or near-term mission/business function accomplishment and long-term organizational viability (which can be undermined by reputation loss or by sanctions resulting from compromise of sensitive information).
2.5.1.1.4 Organizations can also consider the range of effects of threat events, including the relative size of the set of resources affected, when making final impact determinations.
2.5.1.1.5 Organizational risk tolerance assumptions may state that threat events with an impact below a specific value do not warrant further analysis; a brief sketch of this screening step appears at the end of this supplemental guidance.
2.5.1.1.6 Finally, organizations capture information to support the determination of uncertainty.
2.5.1.1.7 Appendix H provides a set of exemplary tables for use in determining adverse impacts:
2.5.1.1.7.1 Table H-1 provides a set of exemplary inputs to the impact determination task;
2.5.1.1.7.2 Table H-2 provides representative examples of adverse impacts to organizations focusing on harm to organizational operations and assets, individuals, other organizations, and the Nation;
2.5.1.1.7.3 Table H-3 provides an exemplary assessment scale for assessing the impact of threat events;
2.5.1.1.7.4 Table H-4 provides an exemplary assessment scale for assessing the range of effects of threat events; and
2.5.1.1.7.5 Table H-5 provides a template for summarizing/documenting adverse impacts.
2.5.1.1.8 The information produced in Task 2-5 provides adverse impact inputs to the risk tables in Appendix I.
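Where an organization has stated a risk-tolerance threshold (see the guidance above on impacts below a specific value not warranting further analysis), the screening step can be expressed very simply. The Python sketch below assumes the semi-quantitative impact scores of Table H-3 (Very High = 10 down to Very Low = 0); the threshold value and the sample threat events are illustrative assumptions, not content from the publication.

# Sketch of impact-based screening: drop threat events whose assessed impact
# falls below an organization-defined tolerance threshold. Scores follow the
# Table H-3 scale; the threshold and example events are illustrative.
IMPACT_SCORE = {"Very High": 10, "High": 8, "Moderate": 5, "Low": 2, "Very Low": 0}

def screen_events(events, threshold):
    """Keep only events whose impact score meets or exceeds the threshold."""
    return [e for e in events if IMPACT_SCORE[e["impact"]] >= threshold]

events = [
    {"id": "TE-01", "impact": "High"},
    {"id": "TE-02", "impact": "Low"},
    {"id": "TE-03", "impact": "Moderate"},
]
print(screen_events(events, threshold=5))   # TE-01 and TE-03 remain for analysis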
2.5.2 Provide "impact" inputs (use Table H-1)
2.5.2.1 TABLE H-1: INPUTS – DETERMINATION OF IMPACT
2.5.2.1.1 From Tier 1 (Organization level)
2.5.2.1.1.1 Sources of impact information identified for organization-wide use (e.g., specific information that may be useful in determining impacts).
2.5.2.1.1.2 Impact information and guidance specific to Tier 1 (e.g., impact information related to organizational governance, core missions/business functions, management and operational policies, procedures, and structures, external mission/business relationships).
2.5.2.1.1.3 Guidance on organization-wide levels of impact needing no further consideration.
2.5.2.1.1.4 Identification of critical missions/business functions.
2.5.2.1.1.5 Exemplary set of impacts, annotated by the organization, if necessary. (Table H-2)
2.5.2.1.1.6 Assessment scale for assessing the impact of threat events, annotated by the organization, if necessary. (Table H-3)
2.5.2.1.1.7 Assessment scale for assessing the range of threat effects, annotated by the organization, if necessary. (Table H-4)
2.5.2.1.2 From Tier 2: (Mission/business process level)
2.5.2.1.2.1 Impact information and guidance specific to Tier 2 (e.g., impact information related to mission/business processes, EA segments, common infrastructure, support services, common controls, and external dependencies).
2.5.2.1.2.2 Identification of high-value assets.
2.5.2.1.3 From Tier 3: (Information system level)
2.5.2.1.3.1 Impact information and guidance specific to Tier 3 (e.g., impact information affecting information systems, information technologies, information system components, applications, networks, environments of operation).
2.5.2.1.3.2 Historical data on successful and unsuccessful cyber attacks; attack detection rates.
2.5.2.1.3.3 Security assessment reports (i.e., deficiencies in assessed controls identified as vulnerabilities).
2.5.2.1.3.4 Results of continuous monitoring activities (e.g., automated and nonautomated data feeds).
2.5.2.1.3.5 Vulnerability assessments, Red Team reports, or other reports from analyses of information systems, subsystems, information technology products, devices, networks, or applications.
2.5.2.1.3.6 Contingency Plans, Disaster Recovery Plans, Incident Reports.
2.5.3 Identify impact determination factors (from organization-defined information sources)
2.5.4 Use Table H‐2, as extended or modified by the organization, to identify adverse impacts and affected assets, updating Table H‐5.
2.5.4.1 TABLE H-2: EXAMPLES OF ADVERSE IMPACTS
2.5.4.1.1 HARM TO OPERATIONS
2.5.4.1.1.1 Inability to perform current missions/business functions.
2.5.4.1.1.1.1 In a sufficiently timely manner.
2.5.4.1.1.1.2 With sufficient confidence and/or correctness.
2.5.4.1.1.1.3 Within planned resource constraints.
2.5.4.1.1.2 Inability, or limited ability, to perform missions/business functions in the future.
2.5.4.1.1.2.1 Inability to restore missions/business functions.
2.5.4.1.1.2.2 In a sufficiently timely manner.
2.5.4.1.1.2.3 With sufficient confidence and/or correctness.
2.5.4.1.1.2.4 Within planned resource constraints.
2.5.4.1.1.3 Harms (e.g., financial costs, sanctions) due to noncompliance.
2.5.4.1.1.3.1 With applicable laws or regulations.
2.5.4.1.1.3.2 With contractual requirements or other requirements in other binding agreements.
2.5.4.1.1.4 Direct financial costs.
2.5.4.1.1.5 Relational harms.
2.5.4.1.1.5.1 Damage to trust relationships.
2.5.4.1.1.5.2 Damage to image or reputation (and hence future or potential trust relationships).
2.5.4.1.2 HARM TO ASSETS
2.5.4.1.2.1 Damage to or loss of physical facilities.
2.5.4.1.2.2 Damage to or loss of information systems or networks.
2.5.4.1.2.3 Damage to or loss of information technology or equipment.
2.5.4.1.2.4 Damage to or loss of component parts or supplies.
2.5.4.1.2.5 Damage to or loss of information assets.
2.5.4.1.2.6 Loss of intellectual property.
2.5.4.1.3 HARM TO INDIVIDUALS
2.5.4.1.3.1 Identity theft.
2.5.4.1.3.2 Loss of Personally Identifiable Information.
2.5.4.1.3.3 Injury or loss of life.
2.5.4.1.3.4 Damage to image or reputation.
2.5.4.1.3.5 Physical or psychological mistreatment.
2.5.4.1.4 HARM TO OTHER ORGANIZATIONS
2.5.4.1.4.1 Harms (e.g., financial costs, sanctions) due to noncompliance.
2.5.4.1.4.1.1 With applicable laws or regulations.
2.5.4.1.4.1.2 With contractual requirements or other requirements in other binding agreements.
2.5.4.1.4.2 Direct financial costs.
2.5.4.1.4.3 Relational harms.
2.5.4.1.4.3.1 Damage to trust relationships.
2.5.4.1.4.3.2 Damage to reputation (and hence future or potential trust relationships).
2.5.4.1.5 HARM TO THE NATION
2.5.4.1.5.1 Damage to or incapacitation of a critical infrastructure sector.
2.5.4.1.5.2 Loss of government continuity of operations.
2.5.4.1.5.3 Relational harms.
2.5.4.1.5.3.1 Damage to trust relationships with other governments or with nongovernmental entities.
2.5.4.1.5.3.2 Damage to national reputation (and hence future or potential trust relationships).
2.5.4.1.5.4 Damage to current or future ability to achieve national objectives.
2.5.4.2 TABLE H-5: TEMPLATE – IDENTIFICATION OF ADVERSE IMPACTS
2.5.4.2.1 Identifier (defined by us)
2.5.4.2.2 Impact (and Affected Asset)
2.5.4.2.2.1 Table H-2 or organization-defined
2.5.4.2.3 Security Objectives Not Achieved
2.5.4.2.4 Maximum Impact
2.5.4.2.4.1 Table H-3 and H-4 or organization-defined
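The four columns of the H-5 template map directly onto a small record type. The Python sketch below is one possible encoding; the field names and the example row are illustrative assumptions, not part of the publication.

# Sketch of one Table H-5 row ("Identification of Adverse Impacts").
# Columns mirror the template above; all names and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdverseImpact:
    identifier: str                       # defined by the assessing organization
    impact: str                           # Table H-2 entry or organization-defined
    affected_asset: str                   # asset to which the impact applies
    objectives_not_achieved: List[str] = field(default_factory=list)  # e.g., ["confidentiality"]
    maximum_impact: str = "Moderate"      # Table H-3/H-4 level or organization-defined

row = AdverseImpact(
    identifier="IMP-001",
    impact="Damage to or loss of information systems or networks",
    affected_asset="payroll system",
    objectives_not_achieved=["availability", "integrity"],
    maximum_impact="High",
)
print(row)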
2.5.5 Assess the impact of threat events using the assessment scales in Table H-3 and Table H-4, as extended or modified by the organization, and update Table H-5; a brief sketch of combining the two scales follows Table H-4 below.
2.5.5.1 TABLE H-3: ASSESSMENT SCALE – IMPACT OF THREAT EVENTS
2.5.5.1.1 Very High
2.5.5.1.1.1 10
2.5.5.1.1.1.1 The threat event could be expected to have multiple severe or catastrophic adverse effects on organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.5.5.1.2 High
2.5.5.1.2.1 8
2.5.5.1.2.1.1 The threat event could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation. A severe or catastrophic adverse effect means that, for example, the threat event might:
2.5.5.1.2.1.1.1 (i) cause a severe degradation in or loss of mission capability to an extent and duration that the organization is not able to perform one or more of its primary functions;
2.5.5.1.2.1.1.2 (ii) result in major damage to organizational assets;
2.5.5.1.2.1.1.3 (iii) result in major financial loss; or
2.5.5.1.2.1.1.4 (iv) result in severe or catastrophic harm to individuals involving loss of life or serious life-threatening injuries.
2.5.5.1.3 Moderate
2.5.5.1.3.1 5
2.5.5.1.3.1.1 The threat event could be expected to have a serious adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation. A serious adverse effect means that, for example, the threat event might:
2.5.5.1.3.1.1.1 (i) cause a significant degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is significantly reduced;
2.5.5.1.3.1.1.2 (ii) result in significant damage to organizational assets;
2.5.5.1.3.1.1.3 (iii) result in significant financial loss; or
2.5.5.1.3.1.1.4 (iv) result in significant harm to individuals that does not involve loss of life or serious life-threatening injuries.
2.5.5.1.4 Low
2.5.5.1.4.1 2
2.5.5.1.4.1.1 The threat event could be expected to have a limited adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation. A limited adverse effect means that, for example, the threat event might:
2.5.5.1.4.1.1.1 (i) cause a degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is noticeably reduced;
2.5.5.1.4.1.1.2 (ii) result in minor damage to organizational assets;
2.5.5.1.4.1.1.3 (iii) result in minor financial loss; or
2.5.5.1.4.1.1.4 (iv) result in minor harm to individuals.
2.5.5.1.5 Very Low
2.5.5.1.5.1 0
2.5.5.1.5.1.1 The threat event could be expected to have a negligible adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.5.5.2 TABLE H-4: ASSESSMENT SCALE – RANGE OF EFFECTS OF THREAT EVENTS
2.5.5.2.1 Very High
2.5.5.2.1.1 10
2.5.5.2.1.1.1 The effects of the error, accident, or act of nature are sweeping, involving almost all of the cyber resources of the organization.
2.5.5.2.2 High
2.5.5.2.2.1 8
2.5.5.2.2.1.1 The effects of the error, accident, or act of nature are extensive, involving most of the cyber resources of the organization, including many critical resources.
2.5.5.2.3 Moderate
2.5.5.2.3.1 5
2.5.5.2.3.1.1 The effects of the error, accident, or act of nature are substantial, involving a significant portion of the cyber resources of the organization, including some critical resources.
2.5.5.2.4 Low
2.5.5.2.4.1 2
2.5.5.2.4.1.1 The effects of the error, accident, or act of nature are limited, involving some of the cyber resources of the organization, but involving no critical resources.
2.5.5.2.5 Very Low
2.5.5.2.5.1 0
2.5.5.2.5.1.1 The effects of the error, accident, or act of nature are minimal or negligible, involving few if any of the cyber resources of the organization, and involving no critical resources.
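Both scales above use the same five semi-quantitative values, and the "Maximum Impact" column of Table H-5 draws on them together. The Python sketch below encodes the shared scale and derives a maximum impact by taking the more severe of the two assessed levels; that combination rule is an assumption made for illustration, not a rule stated in the publication.

# Sketch: encode the shared H-3/H-4 scale and derive a "maximum impact" value.
# Taking the more severe of the two levels is an assumed combination rule.
SCALE = {"Very High": 10, "High": 8, "Moderate": 5, "Low": 2, "Very Low": 0}
LEVEL_BY_SCORE = {score: level for level, score in SCALE.items()}

def maximum_impact(impact_of_event: str, range_of_effects: str) -> str:
    """Return the more severe of the Table H-3 and Table H-4 assessments."""
    return LEVEL_BY_SCORE[max(SCALE[impact_of_event], SCALE[range_of_effects])]

print(maximum_impact("Moderate", "High"))   # -> High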
2.5.6 Summarize results -- use Table H‐5 to update Column 11 in Table I‐5 (adversary risk) and Column 9 in Table I‐7 (non‐adversary risk), as appropriate.
2.5.6.1 TABLE H-5: TEMPLATE – IDENTIFICATION OF ADVERSE IMPACTS
2.5.6.1.1 Identifier (defined by us)
2.5.6.1.2 Impact (and Affected Asset)
2.5.6.1.2.1 Table H-2 or organization-defined
2.5.6.1.3 Security Objectives Not Achieved
2.5.6.1.4 Maximum Impact
2.5.6.1.4.1 Table H-3 and H-4 or organization-defined
2.5.6.2 Table I-5 -- Adversarial risk
2.5.6.2.1 Threat event
2.5.6.2.1.1 Identify threat event -- Task 2-2, Table E-2, Table E-5, Table I-5
2.5.6.2.2 Threat sources
2.5.6.2.2.1 Identify threat sources that could initiate the threat event -- Task 2-1, Table D-1, Table D-2, Table D-7, Table I-5
2.5.6.2.3 Capability
2.5.6.2.3.1 Assess threat source capability -- Task 2-1, Table D-3, Table D-7, Table I-5
2.5.6.2.4 Intent
2.5.6.2.4.1 Assess threat source intent -- Task 2-1, Table D-4, Table D-7, Table I-5
2.5.6.2.5 Targeting
2.5.6.2.5.1 Assess threat source targeting -- Task 2-1, Table D-5, Table D-7, Table I-5
2.5.6.2.6 Relevance
2.5.6.2.6.1 Determine relevance of threat event -- use as a screen; if the relevance criteria are not met, do not complete subsequent columns -- Task 2-2, Table E-1, Table E-4, Table E-5, Table I-5
2.5.6.2.7 Likelihood of attack initiation
2.5.6.2.7.1 Determine the likelihood that one or more of the threat sources initiates the threat event, taking into consideration capability, intent, and targeting -- Task 2-4, Table G-1, Table G-2, Table I-5
2.5.6.2.8 Vulnerabilities and Predisposing Conditions
2.5.6.2.8.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-5; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-5.)
2.5.6.2.9 Likelihood that Initiated Attack Succeeds
2.5.6.2.9.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration threat source capability, vulnerabilities, and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-5.)
2.5.6.2.10 Overall Likelihood
2.5.6.2.10.1 Determine the likelihood that the threat event will be initiated and result in adverse impact (i.e., combination of likelihood of attack initiation and likelihood that initiated attack succeeds). (Task 2-4; Table G-1; Table G-5; Table I-5.)
2.5.6.2.11 Level of Impact
2.5.6.2.11.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1, Table H-2; Table H-3; Table H-4; Table H-5; Table I-5.)
2.5.6.2.12 Risk
2.5.6.2.12.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-5.)
2.5.6.3 Table I-7 -- Non-adversarial risk
2.5.6.3.1 Threat Event
2.5.6.3.1.1 Identify threat event. (Task 2-2; Table E-1; Table E-3; Table E-5; Table I-7.)
2.5.6.3.2 Threat Sources
2.5.6.3.2.1 Identify threat sources that could initiate the threat event. (Task 2-1; Table D-1; Table D-2; Table D-8; Table I-7.)
2.5.6.3.3 Range of Effects
2.5.6.3.3.1 Identify the range of effects from the threat source. (Task 2-1; Table D-1; Table D-6; Table I-7.)
2.5.6.3.4 Relevance
2.5.6.3.4.1 Determine relevance of threat event. (Task 2-2; Table E-1; Table E-4; Table E-5; Table I-7.) If the relevance of the threat event does not meet the organization’s criteria for further consideration, do not complete the remaining columns.
2.5.6.3.5 Likelihood of Threat Event Occurring
2.5.6.3.5.1 Determine the likelihood that the threat event will occur. (Task 2-4; Table G-1; Table G-3; Table I-7.)
2.5.6.3.6 Vulnerabilities and Predisposing Conditions
2.5.6.3.6.1 Identify vulnerabilities which could be exploited by threat sources initiating the threat event, the severity of the vulnerabilities, the predisposing conditions which could increase the likelihood of adverse impacts, and the pervasiveness of the predisposing conditions. (Task 2-5; Table F-1; Table F-2; Table F-3; Table F-4; Table F-5; Table F-6; Table I-7.)
2.5.6.3.7 Likelihood that Threat Event Results in Adverse Impact
2.5.6.3.7.1 Determine the likelihood that the threat event, once initiated, will result in adverse impact, taking into consideration vulnerabilities and predisposing conditions. (Task 2-4; Table G-1; Table G-4; Table I-7.)
2.5.6.3.8 Overall Likelihood
2.5.6.3.8.1 Determine the likelihood that the threat event will occur and result in adverse impacts (i.e., combination of likelihood of threat occurring and likelihood that the threat event results in adverse impact). (Task 2-4; Table G-1; Table G-5; Table I-7.)
2.5.6.3.9 Level of Impact
2.5.6.3.9.1 Determine the adverse impact (i.e., potential harm to organizational operations, organizational assets, individuals, other organizations, or the Nation) from the threat event. (Task 2-5; Table H-1, Table H-2; Table H-3; Table H-4; Table H-5; Table I-7.)
2.5.6.3.10 Risk
2.5.6.3.10.1 Determine the level of risk as a combination of likelihood and impact. (Task 2-6; Table I-1; Table I-2; Table I-3; Table I-7.)
2.6 2-6 -- Determine risk
2.6.1 TASK 2-6: Determine the risk to the organization from threat events of concern considering: (i) the impact that would result from the events; and (ii) the likelihood of the events occurring.
2.6.1.1 Supplemental Guidance
2.6.1.1.1 Organizations assess the risks from threat events as a combination of likelihood and impact. The level of risk associated with identified threat events represents a determination of the degree to which organizations are threatened by such events. Organizations make explicit the uncertainty in the risk determinations, including, for example, organizational assumptions and subjective judgments/decisions.
2.6.1.1.2 Organizations update the list of threat events, including information regarding identification of targeting information, impacts, and the determination of the risk associated with the events.
2.6.1.1.3 Organizations can order the list of threat events of concern by the level of risk determined during the risk assessment, with the greatest attention going to high-risk events. One consistent factor in determining risk is that at certainty (i.e., one hundred percent probability), the risk level equals the impact level. Each risk corresponds to a specific threat event with a level of impact if that event occurs. In general, the risk level is not higher than the impact level, and likelihood can serve to reduce risk below that impact level. However, when addressing organization-wide risk management issues involving a large number of missions/business functions, mission/business processes, and supporting information systems, this upper bound on risk may not hold because of the potential for aggregation of risk. When multiple risks materialize, even if each is at the moderate level, those moderate-level risks can combine to produce a higher overall level of risk for the organization (a small numeric sketch of this aggregation effect follows below).
2.6.1.1.4 To address situations where harm occurs multiple times, organizations can define a threat event as multiple occurrences of harm and an impact level associated with the cumulative degree of harm.
2.6.1.1.5 During the execution of Tasks 2-1 through 2-5, organizations capture key information related to uncertainties in risk assessments. These uncertainties arise from sources such as missing information, subjective determinations, and assumptions made. The effectiveness of risk assessment results depends in part on the ability of decision makers to determine the continued applicability of the assumptions made as part of the assessment. Information related to uncertainty is compiled and presented in a manner that readily supports informed risk management decisions.
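The aggregation point above can be made concrete with a small numeric sketch. The escalation rule used here (three or more co-occurring risks at the same level promote the aggregate one level) and the example values are illustrative assumptions; the publication does not prescribe an aggregation formula.

# Sketch of risk aggregation: several moderate-level risks, taken together, may
# warrant treatment at a higher organizational risk level. The promotion rule
# below is an illustrative assumption only.
SCALE = ["Very Low", "Low", "Moderate", "High", "Very High"]

def aggregate_risk(levels, promote_threshold=3):
    """Return an aggregate level, promoting when many risks share the top level."""
    top = max(levels, key=SCALE.index)
    if top != "Very High" and sum(1 for lvl in levels if lvl == top) >= promote_threshold:
        return SCALE[SCALE.index(top) + 1]
    return top

print(aggregate_risk(["Moderate", "Moderate", "Moderate", "Low"]))   # -> High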
2.6.1.1.6 Appendix I provides a set of exemplary tables for use in determining risk:
2.6.1.1.6.1 Table I-1 provides a set of exemplary inputs to the risk and uncertainty determination task;
2.6.1.1.6.2 Table I-2 and Table I-3 provide exemplary assessment scales for assessing levels of risk;
2.6.1.1.6.3 Tables I-4 and I-6 provide descriptions of column headings for key data elements used in risk determinations for adversarial and non-adversarial threat events, respectively; and
2.6.1.1.6.4 Tables I-5 and I-7 provide templates for summarizing/documenting key data elements used in risk determinations for adversarial and non-adversarial threat events, respectively.
2.6.1.1.7 The information produced in Task 2-6 provides risk inputs to the risk tables in Appendix I.
2.6.2 Provide "risk and uncertainty" inputs (use Table I-1)
2.6.2.1 TABLE I-1: INPUTS – RISK
2.6.2.1.1 From Tier 1 (Organization level)
2.6.2.1.1.1 Sources of risk and uncertainty information identified for organization-wide use (e.g., specific information that may be useful in determining levels of risk and associated uncertainty).
2.6.2.1.1.2 Guidance on organization-wide levels of risk (including uncertainty) needing no further consideration.
2.6.2.1.1.3 Criteria for uncertainty determinations.
2.6.2.1.1.4 List of high-risk events from previous risk assessments.
2.6.2.1.1.5 Assessment scale for assessing level of risk, annotated by the organization, if necessary. (Table I-2)
2.6.2.1.1.6 Assessment scale for assessing the level of risk as a combination of likelihood and impact, annotated by the organization, if necessary. (Table I-3)
2.6.2.1.2 From Tier 2: (Mission/business process level)
2.6.2.1.2.1 Risk-related information and guidance specific to Tier 2 (e.g., risk and uncertainty information related to mission/business processes, EA segments, common infrastructure, support services, common controls, and external dependencies).
2.6.2.1.3 From Tier 3: (Information system level)
2.6.2.1.3.1 Risk-related information and guidance specific to Tier 3 (e.g., risk-related information affecting information systems, information technologies, information system components, applications, networks, environments of operation).
2.6.3 Use Table I‐2 and Table I‐3, as extended or modified by the organization, to determine risk, updating Column 13 in Table I‐5 (adversary risk) and Column 11 in Table I‐7 (non‐adversary risk), as appropriate.
2.6.3.1 TABLE I-2: ASSESSMENT SCALE – LEVEL OF RISK
2.6.3.1.1 Very High
2.6.3.1.1.1 10
2.6.3.1.1.1.1 Very high risk means that a threat event could be expected to have multiple severe or catastrophic adverse effects on organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.6.3.1.2 High
2.6.3.1.2.1 8
2.6.3.1.2.1.1 High risk means that a threat event could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.6.3.1.3 Moderate
2.6.3.1.3.1 5
2.6.3.1.3.1.1 Moderate risk means that a threat event could be expected to have a serious adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.6.3.1.4 Low
2.6.3.1.4.1 2
2.6.3.1.4.1.1 Low risk means that a threat event could be expected to have a limited adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
2.6.3.1.5 Very Low
2.6.3.1.5.1 0
2.6.3.1.5.1.1 Very low risk means that a threat event could be expected to have a negligible adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
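Task 2-6 combines the overall likelihood with the level of impact to arrive at one of the Table I-2 levels above. The Python sketch below shows the shape of that lookup; the combination rule stands in for Table I-3, whose cell values are not reproduced in this outline, so the rule itself is an illustrative assumption. It does preserve the property noted in the supplemental guidance that, at certainty, risk equals impact and that lower likelihood can only pull risk below the impact level.

# Sketch of the likelihood x impact determination behind Task 2-6. Levels follow
# Table I-2; the combination rule is a placeholder for Table I-3.
LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]

def risk_level(overall_likelihood: str, level_of_impact: str) -> str:
    """Placeholder rule: risk never exceeds impact; lower likelihood pulls it down."""
    l, i = LEVELS.index(overall_likelihood), LEVELS.index(level_of_impact)
    return LEVELS[min(i, (l + i + 1) // 2)]

for lk, im in [("Very High", "High"), ("Low", "High"), ("Very Low", "Moderate")]:
    print(f"likelihood={lk}, impact={im} -> risk={risk_level(lk, im)}")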
3 Step 3 - Maintain risk assessment
3.1 Overview
3.1.1 The third step in the risk assessment process is to maintain the assessment. The objective of this step is to keep current, over time, the specific knowledge of the risk that organizations incur. The results of risk assessments inform risk decisions and risk responses by organizations.
3.1.2 To support ongoing risk management decisions (e.g., authorization decisions for information systems and common controls), organizations maintain risk assessments to incorporate any changes detected through risk monitoring. [37] Risk monitoring provides organizations with the means to, on an ongoing basis:
3.1.2.1 (i) verify compliance;[38]
3.1.2.2 (ii) determine the effectiveness of risk response measures; and
3.1.2.3 (iii) identify risk-impacting changes to organizational information systems and the environments in which those systems operate.[39]
3.1.3 Maintaining risk assessments includes the following specific tasks:
3.1.4 Monitoring risk factors identified in risk assessments on an ongoing basis and understanding subsequent changes to those factors; and
3.1.5 Updating key components of risk assessments reflecting the monitoring activities carried out by organizations.
3.1.6 [37] Risk monitoring, the fourth step in the risk management process, is described in NIST Special Publication 800-39. The step in the risk assessment process to maintain the assessment results overlaps to some degree with the risk monitoring step in the risk management process. This reinforces the important concept that many of the activities in the risk management process are complementary and mutually reinforcing.
3.1.7 [38] Compliance verification ensures that organizations have implemented required risk response measures and that information security requirements derived from and traceable to organizational missions/business functions, federal legislation, directives, regulations, policies, and standards/guidelines are satisfied.
3.1.8 [39] Draft NIST Special Publication 800-137 provides guidance on the ongoing monitoring of organizational information systems and environments of operation.
3.2 3-1 -- Monitor risk factors
3.2.1 TASK 3-1: Conduct ongoing monitoring of the factors that contribute to changes in risk to organizational operations and assets, individuals, other organizations, or the Nation.
3.3 3-2 -- Update risk assessment
3.3.1 TASK 3-2: Update existing risk assessment using the results from ongoing monitoring of risk factors.
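Tasks 3-1 and 3-2 together amount to a monitor-then-update loop over the assessment produced in Step 2. The Python sketch below shows one way a maintained assessment record could be refreshed when monitoring reports a change in a risk factor; the record fields, the change event, and the recomputation hook are all illustrative assumptions.

# Sketch of Step 3: fold a monitored change (Task 3-1) back into the stored
# risk assessment (Task 3-2). All field names and values are illustrative.
from datetime import date

assessment = {
    "TE-01": {"likelihood": "Moderate", "impact": "High", "risk": "Moderate",
              "last_reviewed": date(2024, 1, 15)},
}

def apply_monitoring_result(assessment, event_id, factor, new_value, recompute_risk):
    """Update one factor for a threat event and re-derive its risk level."""
    entry = assessment[event_id]
    entry[factor] = new_value                 # e.g., likelihood raised by new threat information
    entry["risk"] = recompute_risk(entry)     # re-derive risk from the updated factors
    entry["last_reviewed"] = date.today()

# Example: monitoring reports an increased likelihood for threat event TE-01.
apply_monitoring_result(assessment, "TE-01", "likelihood", "High",
                        recompute_risk=lambda e: e["impact"])   # placeholder recomputation
print(assessment["TE-01"])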