Commit

Preparing sphinx docs for release
mehaase committed Apr 9, 2024
1 parent 0d744c2 commit b0cd66b
Showing 21 changed files with 711 additions and 349 deletions.
Binary file removed docs/_static/CTIDresources.jpg
Binary file modified docs/_static/fin6advemu.png
Binary file added docs/_static/m3tid-components.png
Binary file added docs/_static/projects-triangle.png
Binary file modified docs/_static/stp.png
Binary file added docs/_static/tid.png
Binary file modified docs/_static/topattackttp.png
75 changes: 38 additions & 37 deletions docs/components/cti.rst
@@ -2,75 +2,76 @@
Cyber Threat Intelligence
=========================

This section outlines the key components that have been identified for the CTI dimension
as well as maturity levels within the components. These components and levels form the
basis for assessing how threat-informed an organization’s CTI program is. This
assessment can be conducted using the companion spreadsheet published with this white
paper.
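
As a purely illustrative sketch (not the published spreadsheet’s actual formula), the
same kind of roll-up can be expressed in a few lines of Python: record the 1-5 level
chosen for each CTI component and average them into a rough dimension score. The
component levels below are hypothetical examples.

.. code-block:: python

   # Hypothetical self-assessment: the 1-5 maturity level chosen for each CTI
   # component. The real companion spreadsheet may weight components differently.
   cti_levels = {
       "Depth of Threat Data": 3,
       "Breadth of Threat Information": 2,
       "Relevance of Threat Data": 4,
       "Utilization of Threat Information": 3,
       "Dissemination of Threat Reporting": 2,
   }

   # Simple unweighted average as a rough CTI dimension score.
   cti_score = sum(cti_levels.values()) / len(cti_levels)
   print(f"CTI dimension score: {cti_score:.1f} / 5")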

Depth of Threat Data [#f1]_
----------------------------

What level of information (roughly relative to the Pyramid of Pain) is being used to
track adversaries?

1. None
2. Ephemeral IOCs: hashes, IPs, domains: data sources an adversary can change easily
3. Tools / Software used by adversaries: tools or software which can be swapped or
modified by an adversary to evade detection
4. Techniques and Tactics used by adversaries: the techniques and behaviors that are
harder to change for an adversary
5. Low-variance adversary behaviors and associated observables: specific actions most
implementations of a technique must use so it is very difficult for an adversary to
change or avoid

Breadth of Threat Information
-----------------------------

Complementary to the depth component score above, this component reflects roughly how
many relevant Techniques are understood at that level of depth.

1. None
2. Single Technique
3. Multiple Techniques
4. All top-priority Techniques relevant to the organization
5. All Techniques relevant to the organization [#f2]_

Relevance of Threat Data
------------------------

Where is the threat information coming from and how timely is it?

1. None
2. Generic reports or freely available reporting
3. Internal reports
4. Recent, in-depth reporting (often requires a subscription)
5. Customized briefings

Utilization of Threat Information
---------------------------------

How is the threat information being used by an organization?

1. None
2. Lightly / occasionally read
3. Regularly ingested for analysis
4. Analyzed automatically [#f3]_ and/or by trained analysts
5. Contextualized in disseminated reports for other internal stakeholders to operationalize

Dissemination of Threat Reporting
---------------------------------

What threat information is passed along within an organization? [#f4]_

1. None
2. Tactical reporting with highly perishable information (IOCs)
3. Tactical reporting focused on adversary behavior (TTPs)
4. Operational reporting on pertinent security trends
5. Strategic reporting on business impacts of security trends

.. rubric:: References

.. [#f1] https://center-for-threat-informed-defense.github.io/summiting-the-pyramid/levels/
.. [#f2] https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/our-work/top-attack-techniques/
.. [#f3] https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/our-work/threat-report-attck-mapper-tram/
.. [#f4] https://github.com/center-for-threat-informed-defense/cti-blueprints/wiki
88 changes: 53 additions & 35 deletions docs/components/dm.rst
@@ -2,7 +2,11 @@
Defensive Measures
==================

This section outlines the key components that have been identified for the Defensive
Measures dimension as well as maturity levels within the components. These components
and levels form the basis for assessing how threat-informed an organization’s Defensive
Measures program is. This assessment can be conducted using the companion spreadsheet
published with this white paper.

Foundational Security [#f1]_
----------------------------
@@ -11,58 +15,74 @@

The degree to which threat informs and prioritizes preventative security measures.

1. None
2. Ad Hoc patching, limited asset inventory, basic security measures
3. Several mitigations and security controls [#f2]_ connected to relevant threats
implemented, key attack surfaces and critical assets identified
4. Knowledge of threat informs a risk management process to prioritize a set of
mitigations and controls
5. Prioritized [#f3]_ automated patching [#f4]_, attack surfaces understood, full asset
inventory mapped to business operations and threats, hygiene best-practices
implemented

Data Collection
----------------

Is the right data being collected based on the needs identified from analysis of threat
intelligence? A sketch of mapping prioritized techniques to required data sources
follows the maturity levels below.

1. None
2. Minimal visibility (e.g., single network sensor at network boundary)
3. Compliant with best practices for network and devices (e.g., logs collected from each
device according to the manufacturer’s recommendations)
4. Threat-informed detection requirements guide sensor configuration and deployment
[#f5]_ (e.g., additional Sysmon configuration driven by detection needs for ATT&CK
Techniques)
5. Threat-Optimized (Sensors evaluated, configured, and deployed to meet all
threat-informed detection needs)
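
As a hedged illustration of the level 4 idea above (threat-informed detection
requirements guiding sensor configuration), the sketch below maps a few prioritized
ATT&CK techniques to the data sources needed to detect them and reports which sources
are not yet collected. The technique-to-data-source mapping and the collected-source
list are hypothetical examples, not authoritative ATT&CK data.

.. code-block:: python

   # Hypothetical mapping of prioritized ATT&CK techniques to required data sources.
   required_sources = {
       "T1059 Command and Scripting Interpreter": {"process creation", "command line"},
       "T1021 Remote Services": {"authentication logs", "network traffic"},
       "T1003 OS Credential Dumping": {"process access", "process creation"},
   }

   # Data sources the organization currently collects (also hypothetical).
   collected = {"process creation", "network traffic"}

   for technique, needed in sorted(required_sources.items()):
       gaps = needed - collected
       status = "covered" if not gaps else "missing: " + ", ".join(sorted(gaps))
       print(f"{technique}: {status}")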

Detection Engineering
------------------------

How much are detection analytics designed, tested, and tuned to optimize precision,
recall, and robustness for relevant malicious behaviors? A short scoring sketch follows
the maturity levels below.

1. None
2. Import rules / analytics from open repository
3. Prioritize and tune imported rules / analytics from repository
4. Testing and tuning of custom detection analytics
5. Detection analytics developed based on knowledge of low-variance behaviors,
customized to reduce false positives while maintaining robust [#f6]_ recall [#f7]_
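
Precision and recall carry their usual meanings here; as a minimal sketch, assuming a
labeled test set with hypothetical counts, an analytic’s results can be scored as
follows. Robustness (how well the analytic holds up against evasive variations of the
behavior) is not captured by these two numbers and is assessed separately.

.. code-block:: python

   # Hypothetical test results for a single detection analytic.
   true_positives = 18   # malicious test cases the analytic fired on
   false_positives = 4   # benign events the analytic fired on
   false_negatives = 2   # malicious test cases the analytic missed

   precision = true_positives / (true_positives + false_positives)
   recall = true_positives / (true_positives + false_negatives)

   print(f"precision = {precision:.2f}, recall = {recall:.2f}")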

Incident Response
------------------

How automated, strategic, and effective are responsive measures against top-priority
threats?

1. None
2. Ad Hoc, Manual, Reactive
3. Playbook-enabled, partially automated
4. Informed by knowledge of threat actor (e.g., initial detection leads to follow-on
investigation to detect other malicious actions expected in the campaign based on
CTI). Proactive hunts are conducted, driven by threat information rather than only by
alerts from existing analytics.
5. Strategic, holistic, optimized to deter future events (e.g., with an understanding of
the full campaign and the adversary’s likely reaction to defensive response, the
defenders take decisive and coordinated actions that effectively evict the adversary
such that it is not easy for them to return)

Deception Operations [#f8]_
---------------------------------

How extensive and effective are deception operations to enable defensive objectives and
the collection of new threat intelligence?

1. None
2. Sandboxing of suspicious executables (e.g., email attachment detonation before
delivery)
3. One to several Honey* (pot, token, document…) deployed and monitored, enabling
detection of malicious use and early warning
4. Honey network deployed and monitored
5. Intentional, long-term deception operations in a realistic honey network

.. rubric:: References

@@ -74,5 +94,3 @@
.. [#f6] https://center-for-threat-informed-defense.github.io/summiting-the-pyramid/
.. [#f7] https://center-for-threat-informed-defense.github.io/summiting-the-pyramid/definitions/
.. [#f8] https://engage.mitre.org/
11 changes: 8 additions & 3 deletions docs/components/index.rst
@@ -2,13 +2,18 @@
Appendix A - Key Components and Maturity Levels
===============================================

.. figure:: ../_static/m3tid-components.png
   :alt: Threat-Informed Defense: Dimensions and Components
   :align: center

   Threat-Informed Defense: Dimensions and Components

This appendix includes detailed definitions of the threat-informed defense dimensions
and components.

.. toctree::
   :maxdepth: 1

   cti
   dm
   tne


68 changes: 35 additions & 33 deletions docs/components/tne.rst
@@ -2,67 +2,70 @@
Test & Evaluation
==================

This section outlines the key components that have been identified for the Test &
Evaluation dimension as well as maturity levels within the components. These components
and levels form the basis for assessing how threat-informed an organization’s T&E
program is. This assessment can be conducted using the companion spreadsheet published
with this white paper.

Type of Testing
----------------

Are cybersecurity tests focused on helping defenders improve against prioritized
threats?

1. None
2. Security Control / Risk Assessment (reactive, compliance-focused)
3. Vulnerability Assessment / Penetration Test (reactive, threat-focused)
4. Adversary Emulation (proactive, threat-focused) [#f1]_ [#f2]_
5. Purple Teaming (proactive, threat-focused, collaborative)

Frequency of Testing
-----------------------------

Do your tests keep pace with changing adversaries and defended technologies?

1. None
2. Annual
3. Semi-Annual
4. Monthly
5. Continuous

Test Planning
------------------------

Are tests coordinated and prioritized on the most relevant threat behaviors?

1. None
2. Ad hoc
3. Deliberately planned and scoped, informed by Threat Actor or prioritized TTPs [#f3]_
4. Collaboratively planned with Defenders, focused on known gaps and validating coverage
5. Collaboratively planned with Defenders, linked to organizational Metrics or KPIs

Test Execution
---------------------------------

Does testing cover adversary TTPs in addition to traditional IOCs? A sketch of
exercising multiple implementations of a single technique follows the maturity levels
below.

1. None
2. Scanners or other tooling, not threat-focused
3. Commodity tooling, IOC-focused
4. Commodity tooling, TTP-focused, minimum 1 implementation of a technique [#f4]_
5. Commodity or Custom tooling, TTP-focused, multiple (including evasive [#f5]_ )
implementations of a technique
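
As a hedged sketch of the level 5 idea above, the loop below iterates over several
hypothetical procedure-level implementations of a single technique and tallies which
ones were detected; the variant names and outcomes are invented for illustration.

.. code-block:: python

   # Invented example: detection outcomes for several implementations (procedures)
   # of one credential-dumping technique, including an evasive variant.
   variants = {
       "T1003 via comsvcs.dll MiniDump": True,
       "T1003 via direct LSASS handle access": False,
       "T1003 via evasive handle-duplication variant": False,
   }

   detected = sum(variants.values())
   print(f"Detected {detected} of {len(variants)} tested implementations")
   for name, hit in sorted(variants.items()):
       print(f"  [{'x' if hit else ' '}] {name}")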

Test Results
---------------------------------

How effectively do test results cause improvements in defensive measures?

1. None
2. Results generated
3. Results generated, leadership interest, actions taken
4. Results formally tracked; findings drive detection improvements and architectural
changes
5. Results formally tracked; findings drive organizational programs, hiring, training,
and other significant investments

.. rubric:: References

@@ -71,4 +74,3 @@
.. [#f3] https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/our-work/attack-flow/
.. [#f4] https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/our-work/micro-emulation-plans/
.. [#f5] https://posts.specterops.io/reactive-progress-and-tradecraft-innovation-b616f85b6c0a
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -17,7 +17,7 @@

# -- Project information -----------------------------------------------------

project = "M3TID"
project = "Measure, Maximize, Mature Threat-Informed Defense"
author = "Center for Threat-Informed Defense"
copyright_years = "2024"
prs_numbers = "CT0105"