diff --git a/404.html b/404.html new file mode 100644 index 000000000..2c1b76b4a --- /dev/null +++ b/404.html @@ -0,0 +1,40 @@ + + +
+As you can see from the picture below, the process for developing automated security tests starts with requirements documents like SRGs, STIGs, or CIS Benchmarks that are written in regular, human language and then implemented as code. We need that code to record test results in a standardized format so that we can easily export our security data somewhere people can use it to make decisions (like the Heimdall visualization app).
',12),m={href:"https://saf.mitre.org",target:"_blank",rel:"noopener noreferrer"},f=e("figure",null,[e("img",{src:i,alt:"The SAF Lifecycle",tabindex:"0",loading:"lazy"}),e("figcaption",null,"The SAF Lifecycle")],-1);function _(p,g){const a=r("ExternalLinkIcon");return o(),n("div",null,[d,e("p",null,[t("This challenge is what the "),e("a",m,[t("MITRE Security Automation Framework"),l(a)]),t(" or MITRE SAF was developed to simplify -- to make the journey from a Requirement Document to an automated test profile and back again a little easier to navigate.")]),f])}const x=s(h,[["render",_],["__file","02.html.vue"]]);export{x as default}; diff --git a/assets/02.html-H7TT4yNP.js b/assets/02.html-H7TT4yNP.js new file mode 100644 index 000000000..8b7cba9fc --- /dev/null +++ b/assets/02.html-H7TT4yNP.js @@ -0,0 +1 @@ +import{_ as o}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as s,o as n,c,d as a,e,b as h,f as t}from"./app-PAvzDPkc.js";const i={},d=t('The repository and profile are organized into two primary branches: main
and TBD
. The repository has a set of tags
representing iterative releases of the STIG from one Benchmark major version to the next. It also has a set of releases for fixes and updates to the profile between STIG Benchmark Releases.
main
branchThe main
branch contains the most recent code for the profile. It may include bugs and is typically aligned with the latest patch release for the profile. This branch is primarily used for development and testing workflows for the various testing targets. For production validation, use the latest stable patch release.
v{x}r{xx}
branchesReleases use Semantic Versioning (SemVer), aligning with the STIG Benchmark versioning system of Major Version and Release. The SemVer patch number is used for updates, bug fixes, and code changes between STIG Benchmark Releases for the given product. STIG Benchmarks use a Version and Release tagging pattern v{x}r{xx}
- like V1R12 - and we mirror that pattern in our SemVer releases.
We don't use a specific current
or latest
tag. The current
/latest
tag for the profile and repository will always be the latest major tag of the benchmark. For example, if v1.12.3
is the latest Benchmark release from the STIG author, then the tag v1.12
will point to the v1.12.3
release of the code.
To use the current main
, point directly to the GitHub repo.
Major tags point to the latest patch release of the benchmark. For example, v1.3
and v1.3.0
represent the first release of the Red Hat Enterprise Linux 8 STIG V1R3 Benchmark. The v1.12.xx
tag(s) would represent the V1R12 Benchmark releases as we find bugs, fixes, or general improvements to the testing profile. This tag will point to its v{x}r{xx}
counterpart.
The major release tag always points to the latest patch release of the profile.
For example, after releasing v1.12.0
, we will point v1.12
to that patch release: v1.12.0
. When an issue is found, we will fix, tag, and release v1.12.1
. We will then 'move' the v1.12
tag so that it points to tag v1.12.1
. This way, your pipelines can choose if they want to pin on a specific release of the InSpec profile or always run 'current'.
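The tag 'move' described above is just ordinary git tag management. As a sketch (the repository name and commit messages here are illustrative, not part of any real profile repo):

```shell
# Illustrative: how a major tag follows its latest patch release.
set -e
git init -q tag-demo
g() { git -C tag-demo -c user.name=demo -c user.email=demo@example.com "$@"; }
g commit -q --allow-empty -m "V1R12 initial release"
g tag v1.12.0
g tag -f v1.12 v1.12.0        # major tag starts at the first patch release
g commit -q --allow-empty -m "fix between Benchmark releases"
g tag v1.12.1
g tag -f v1.12 v1.12.1        # 'move' the major tag to the new patch
g rev-parse v1.12 v1.12.1     # both now resolve to the same commit
```

A pipeline that pins v1.12.1 stays frozen on that code; one that pins v1.12 automatically picks up the newest patch once the tag is moved.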
InSpec organizes its code into profiles
. A profile
is a set of automated tests that usually relates directly back to a Security Requirements Benchmark -- such as a CIS Benchmark or a Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) -- and provides an organized structure to articulate that set of requirements using tests in code.
Profiles have two (2) required elements:
inspec.yml
filecontrols
directoryand four (4) optional elements:
libraries
directoryfiles
directoryinputs.yml
fileREADME.md
fileWe will be going over each of these during our class.
$ tree nginx
+ nginx
+ └── profile
+ ├── README.md
+ ├── inputs.yml
+ ├── controls
+ │ ├── V-2230.rb
+ │ └── V-2232.rb
+ ├── files
+ │ └── services-and-ports.yml
+ ├── inspec.yml
+ └── libraries
+ └── nginx_helper.rb
+
control "V-13727" do
+ title "The worker_processes StartServers directive must be set properly."
+
+ desc "These requirements are set to mitigate the effects of several types of
+ denial of service attacks. Although there is some latitude concerning the
+ settings themselves, the requirements attempt to provide reasonable limits
+ for the protection of the web server. If necessary, these limits can be
+ adjusted to accommodate the operational requirement of a given system."
+
+ impact 0.5
+ tag "severity": "medium"
+ tag "gtitle": "WA000-WWA026"
+ tag "gid": "V-13727"
+ tag "rid": "SV-36645r2_rule"
+ tag "stig_id": "WA000-WWA026 A22"
+ tag "nist": ["CM-6", "Rev_4"]
+
+ tag "check": "To view the worker_processes directive value enter the
+ following command:
+ grep ""worker_processes"" on the nginx.conf file and any separate included
+ configuration files
+ If the value of ""worker_processes"" is not set to auto or explicitly set,
+ this is a finding:
+ worker_processes auto;
+ worker_processes defines the number of worker processes. The optimal value
+ depends on many factors including (but not limited to) the number of CPU
+ cores, the number of hard disk drives that store data, and load pattern. When
+ one is in doubt, setting it to the number of available CPU cores would be a
+ good start (the value “auto” will try to autodetect it)."
+
+ tag "fix": "Edit the configuration file and set the value of
+ ""worker_processes"" to the value of auto or a value of 1 or higher:
+ worker_processes auto;"
+
+ describe nginx_conf(NGINX_CONF_FILE).params['worker_processes'] do
+ it { should cmp [['auto']] }
+ end
+end
+
Remember that a profile
is a set of automated tests that usually relates directly back to a Security Requirements Benchmark.
Profiles have two (2) required elements:
inspec.yml
filecontrols
directoryand optional elements such as:
libraries
directoryfiles
directoryinputs.yml
fileREADME.md
fileInSpec can create the profile structure for you using the following command:
$ inspec init profile my_inspec_profile
+
This will give you the required files along with some optional files.
$ tree my_inspec_profile
+
+ my_inspec_profile
+ ├── README.md
+ ├── controls
+ │ └── example.rb
+ └── inspec.yml
+
Let's take a look at the default Ruby file in the controls
directory.
This example shows two tests. Both tests check for the existence of the /tmp
directory. The second test provides additional information about the test. Let's break down each component.
control
(line 9) is followed by the control's name. Each control in a profile has a unique name.impact
(line 10) measures the relative importance of the test and must be a value between 0.0 and 1.0.title
(line 11) defines the control's purpose.desc
(line 12) provides a more complete description of what the control checks for.describe
(lines 13 — 15) defines the test. Here, the test checks for the existence of the /tmp
directory.As with many test frameworks, InSpec code resembles natural language. Here's the format of a describe block.
describe < entity > do
+ it { < expectation > }
+end
+
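To see the subject-plus-expectation idea outside the DSL, here is a plain-Ruby analogue using only the standard library (this is ordinary Ruby, not InSpec code):

```ruby
require "pathname"

# The <entity> is the subject we examine; the <expectation> is its desired state.
subject = Pathname.new("/tmp")      # entity: a file-system path
expectation = subject.directory?    # expectation: it should be a directory
puts(expectation ? "pass" : "fail")
```

InSpec's matchers wrap exactly this kind of check in readable, English-like language.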
| inspec.yml | inputs.yml |
|---|---|
| Required | Optional |
| Should not be renamed | Can be renamed |
| Needs to be at the root of the profile | Can be anywhere |
| Automatically used during execution: `inspec exec profile1` | Needs to be passed in during execution: `inspec exec profile1 --input-file <path>` |
| Purpose is to define default input values and profile metadata | Purpose is to override default input values with parameters for the local environment |
| Defined by the author of the profile | Defined by the user of the profile |
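To make the table concrete, here is a minimal sketch of the two files (the profile name and input values are illustrative, not from a real profile):

```
# inspec.yml -- ships with the profile; defines metadata and default inputs
name: my_profile
inputs:
  - name: max_password_age
    type: Numeric
    value: 60            # default chosen by the profile author

# inputs.yml -- written by the profile user and passed at run time with:
#   inspec exec my_profile --input-file inputs.yml
# max_password_age: 30   # overrides the default for this environment
```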
Before we go further, let's discuss what we mean by "security guidance" and some of its characteristics that matter to us as security (and automation!) practitioners.
Security guidance is documentation that defines what constitutes a secure configuration for a software component or category of components. It includes organizational requirements for security, best practices, and instructions on how to fix problems. It answers the important question that all developers and engineers ask when they want to secure their software -- "What counts as 'secure' for my system in the first place?"
Most software projects will (or at least should) align themselves to a particular source for security guidance and follow it as a baseline that answers this question. For example, many commercial companies (and even some civilian government agencies) use the Center for Internet Security Benchmarks (CIS Benchmarks) as their baseline, while software deployed by the US Department of Defense is required to comply with the Defense Information Systems Agency's Security Technical Implementation Guides (STIGs), broadly speaking.
There are many different types of guidance documentation available to software developers. Software vendors often publish best practices guides, administration guides, or business requirements documents to instruct their userbase on how to best make use of the product. Security guidance is ultimately just another type of guidance document for effectively using a piece of software.
Other Guidance Sources
Have you used security guidance documents from other sources besides the ones mentioned here?
Security Requirements Are Functional Requirements!
Software developers have a historical tendency to consider security as a completely separate activity from the basic process of building a software product. In a DevSecOps environment, however, security is just another functional requirement of your code.
You cannot ship code if it does not meet a functional requirement. Likewise, you cannot ship code that does not meet a security requirement!
This class focuses heavily on STIGs, since Vulcan was originally created to address pain points in the authorship process for STIG documents (which we will describe in detail a little later). We assume that most of the students who engage with this content will be working on government projects that require securing systems to "STIG standard," or that they work for software vendors who need to write such guidance. However, the principles behind what makes a quality STIG can be applied to security guidance of all kinds, and we hope you can take the lessons here and apply them to whatever guidance you create for your software.
Many organizations that use popular security guidance documents as their baselines have their own specific organizational security policies which conflict with that baseline. For example, let's say that the STIG for the Red Hat 8 operating system specifies that users should have, at minimum, 15 characters in their passwords, but your company's security policy requires a minimum of 20.
Consequently, many government agencies use baseline security guidance as foundations to create their own tailored content for in-house use. In addition to Vulcan's usual workflow for creating new baselines, you can use it to ingest a published baseline document and conduct this tailoring process to create security guidance tailored to your organization.
',11),d={class:"hint-container tip"},p=e("p",{class:"hint-container-title"},"Automating Overlay Validation",-1),f={href:"https://mitre.github.io/saf-training/courses/beginner/10.html#profile-dependencies-overlays",target:"_blank",rel:"noopener noreferrer"},m=e("h2",{id:"_2-2-finding-security-guidance-documentation-baselines",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_2-2-finding-security-guidance-documentation-baselines","aria-hidden":"true"},"#"),t(" 2.2 Finding Security Guidance Documentation Baselines")],-1),y=e("p",null,"Commonly-used security guidance is often available on the open Internet.",-1),g={href:"https://www.cisecurity.org/cis-benchmarks",target:"_blank",rel:"noopener noreferrer"},b={href:"https://public.cyber.mil/stigs/downloads/",target:"_blank",rel:"noopener noreferrer"},w=e("ul",null,[e("li",null,"DISA distributes STIGs as a set of PDFs describing metadata like a changelog and cover sheets, where the underlying STIG itself is stored as an XML document.")],-1),_=e("p",null,'Your first question when planning for securing your software component should always be "is there already published guidance documentation available for it?"',-1),v=e("h3",{id:"_2-2-1-what-do-i-do-if-there-isn-t-already-published-guidance-documentation-available-for-it",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_2-2-1-what-do-i-do-if-there-isn-t-already-published-guidance-documentation-available-for-it","aria-hidden":"true"},"#"),t(" 2.2.1 What Do I Do If There Isn't Already Published Guidance Documentation Available For It?")],-1),S=e("p",null,[t("Similarly, if you need to secure a software component that "),e("em",null,"does not"),t(" have a published guidance document already, your best strategy is to find the next-closest guidance document and use it to inform the content you create. 
You can think of the process of writing security guidance as an open-book test; you should feel free to borrow the best ideas from other baselines!")],-1),I={href:"https://public.cyber.mil/stigs/faqs/#toggle-id-11",target:"_blank",rel:"noopener noreferrer"},k=o('Therefore, where no guidance exists, use the closest reasonable baseline.
Then use Vulcan to make some. Good news; you're already reading the instructions on how to do that.
One of the overall goals of the Security Automation Framework is to get people writing quality security automation content, not just any old hardening scripts and test suites.
In formal government assessment settings, you will need to prove that you are covering particular categories of security controls with your activities. You can only do that if you build your automation content around a well-built security guidance document that itself heavily references all of your upstream requirements.
Therefore, the Planning capability of the SAF -- dealing with the selection and creation of security baselines for automation -- is a critical component of the overall framework, even though it itself is not automated.
The Plan capability comes first in the list because every other capability needs to refer back to it!
',9);function q(T,x){const a=s("ExternalLinkIcon");return r(),c("div",null,[h,e("div",d,[p,e("p",null,[t("You can check out the "),e("a",f,[t("Beginner Security Automation Developer Class"),i(a)]),t(" for examples of how to write automated validation code with InSpec that is tailored from a baseline.")])]),m,y,e("ul",null,[e("li",null,[t("CIS publishes its popular "),e("a",g,[t("Benchmarks"),i(a)]),t(" for free with registration (in PDF form, other formats require a subscription to CIS's SecureSuite)")]),e("li",null,[t("DISA publishes all STIGs (and all the rest of its security documentation materials) for free on the "),e("a",b,[t("DOD Cyber Exchange"),i(a)]),t(" public web page. "),w])]),_,v,S,e("p",null,[t("In the case of STIGs, DISA's official guidance (as per their "),e("a",I,[t("FAQ"),i(a)]),t(") states to check for a STIG for an earlier version of the same software and modify it as necessary.")]),k])}const G=n(u,[["render",q],["__file","02.html.vue"]]);export{G as default}; diff --git a/assets/03.html-3gTCNJob.js b/assets/03.html-3gTCNJob.js new file mode 100644 index 000000000..a954cbc45 --- /dev/null +++ b/assets/03.html-3gTCNJob.js @@ -0,0 +1 @@ +const e=JSON.parse(`{"key":"v-0a3647e2","path":"/courses/user/03.html","title":"3. What's the SAF?","lang":"en-US","frontmatter":{"order":3,"next":"04.md","title":"3. What's the SAF?","author":"Aaron Lippold","headerDepth":3,"description":"3. SAF Scavenger Hunt Explore the SAF homepage (https://saf.mitre.org/) to find the answers to this scavenger hunt and familiarize yourself with the topics of this course. When ...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/user/03.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"3. What's the SAF?"}],["meta",{"property":"og:description","content":"3. 
SAF Scavenger Hunt Explore the SAF homepage (https://saf.mitre.org/) to find the answers to this scavenger hunt and familiarize yourself with the topics of this course. When ..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"3. What's the SAF?\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"3. SAF Scavenger Hunt","slug":"_3-saf-scavenger-hunt","link":"#_3-saf-scavenger-hunt","children":[]}],"git":{},"readingTime":{"minutes":1.46,"words":438},"filePathRelative":"courses/user/03.md","autoDesc":true}`);export{e as data}; diff --git a/assets/03.html-3kNnUOXI.js b/assets/03.html-3kNnUOXI.js new file mode 100644 index 000000000..22a4011af --- /dev/null +++ b/assets/03.html-3kNnUOXI.js @@ -0,0 +1,119 @@ +import{_ as u}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as l,o as d,c as m,d as n,e as s,b as a,w as e,f as c}from"./app-PAvzDPkc.js";const k="/saf-training/assets/NGINX_Heimdall_Report_View-X-NIfGbI.png",v={},b=n("h2",{id:"revisiting-the-nginx-web-server-inspec-profile",tabindex:"-1"},[n("a",{class:"header-anchor",href:"#revisiting-the-nginx-web-server-inspec-profile","aria-hidden":"true"},"#"),s(" Revisiting the NGINX Web Server InSpec Profile")],-1),g=c(`InSpec is a framework which is used to validate the security configuration of a certain target. In this case, we are interested in validating that an NGINX server complies with our requirements.
First let's find our nginx container id using the docker ps
command:
docker ps
+
Which will return something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+8bs80z6b5n9s redhat/ubi8 "/bin/bash" 2 weeks ago Up 1 hour redhat8
+8ba6b8av5n7s nginx:latest "/docker.…" 2 weeks ago Up 1 hour 80/tcp nginx
+
We can then use the name of our NGINX container, nginx
, to target the InSpec validation scans at that container.
InSpec profiles are a set of automated tests that relate back to a security requirements benchmark, so the controls are always motivated by the requirements.
http_ssl
stream_ssl
mail_ssl
/etc/nginx/nginx.conf
- should exist as a file.root
user and group.InSpec profiles consist of automated tests, that align to security requirements, written in ruby files inside the controls directory.
If you don't have my_nginx
profile, run the following command to initialize your InSpec profile.
inspec init profile my_nginx
+
Append the inputs
sections in your profile at my_nginx/inspec.yml
name: my_nginx
+title: InSpec Profile
+maintainer: The Authors
+copyright: The Authors
+copyright_email: you@example.com
+license: Apache-2.0
+summary: An InSpec Compliance Profile
+version: 0.1.0
+supports:
+ platform: os
+
+inputs:
+ - name: nginx_version
+ type: String
+ value: 1.10.3
+
+ - name: nginx_modules
+ type: Array
+ value:
+ - http_ssl
+ - stream_ssl
+ - mail_ssl
+
+ - name: admin_users
+ type: Array
+ value:
+ - admin
+
Create an inputs file in your profile at inputs-linux.yml
admin_users:
+ - admin
+ - root
+
Paste the following controls in your profile at my_nginx/controls/example.rb
control 'nginx-version' do
+ impact 1.0
+ title 'NGINX version'
+ desc 'The required version of NGINX should be installed.'
+ describe nginx do
+ its('version') { should cmp >= input('nginx_version') }
+ end
+end
+
+control 'nginx-modules' do
+ impact 1.0
+ title 'NGINX modules'
+ desc 'The required NGINX modules should be installed.'
+ required_modules = input('nginx_modules')
+ describe nginx do
+ required_modules.each do |required_module|
+ its('modules') { should include required_module }
+ end
+ end
+end
+
+control 'nginx-conf-file' do
+ impact 1.0
+ title 'NGINX configuration file'
+ desc 'The NGINX config file should exist.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_file }
+ end
+end
+
+control 'nginx-conf-perms' do
+ impact 1.0
+ title 'NGINX configuration permissions'
+  desc 'The NGINX config file should be owned by root, be writable only by the owner, and not be readable, writable, or executable by others.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_owned_by 'root' }
+ it { should be_grouped_into 'root' }
+ it { should_not be_readable.by('others') }
+ it { should_not be_writable.by('others') }
+ it { should_not be_executable.by('others') }
+ end
+end
+
+control 'nginx-shell-access' do
+ impact 1.0
+ title 'NGINX shell access'
+ desc 'The NGINX shell access should be restricted to admin users.'
+ non_admin_users = users.shells(/bash/).usernames
+ describe "Shell access for non-admin users" do
+ it "should be removed." do
+      failure_message = "These non-admin users should not have shell access: #{non_admin_users.join(", ")}"
+ expect(non_admin_users).to be_in(input('admin_users')), failure_message
+ end
+ end
+end
+
To run inspec exec
on the target, ensure that you are in the directory that contains the my_nginx
profile.
inspec exec my_nginx -t docker://nginx --input-file inputs-linux.yml --reporter cli json:my_nginx_results.json
+
Let's take a look at the default control file, controls/example.rb
.
title 'sample section'
+
+# you can also use plain tests
+describe file('/tmp') do
+ it { should be_directory }
+end
+
+# you add controls here
+control 'tmp-1.0' do # A unique ID for this control
+ impact 0.7 # The criticality, if this control fails.
+ title 'Create /tmp directory' # A human-readable title
+ desc 'An optional description...'
+ describe file('/tmp') do # The actual test
+ it { should be_directory }
+ end
+end
+
Tip for developing profiles
When creating new profiles, use the existing example file as a template.
This example shows two tests. Both tests check for the existence of the /tmp
directory. The second test provides additional information about the test. Let's break down each component.
control
(line 9) is followed by the control's name. Each control in a profile has a unique name.impact
(line 10) measures the relative importance of the test and must be a value between 0.0 and 1.0.title
(line 11) defines the control's purpose.desc
(line 12) provides a more complete description of what the control checks for.describe
(lines 13 — 15) defines the test. Here, the test checks for the existence of the /tmp
directory.In Ruby, the do
and end
keywords define a block
. An InSpec control always contains at least one describe
block. However, a control can contain many describe
blocks.
As with many test frameworks, InSpec code resembles natural language. Here's the format of a describe block.
describe <entity> do
+ it { <expectation> }
+end
+
An InSpec test has two main components: the subject to examine and the subject's expected state. Here, <entity>
is the subject you want to examine, for example, a package name, service, file, or network port. The <expectation>
specifies the desired result or expected state, for example, that a port should be open (or perhaps should not be open).
Let's take a closer look at the describe
block in the example.
describe file('/tmp') do
+ it { should be_directory }
+end
+
Because InSpec resembles human-readable language, you might read this test as "/tmp should be a directory." Let's break down each component.
file
Note
If you're familiar with Chef, you know that a resource configures one part of the system. InSpec resources are similar.
it
The it
statement validates one of your resource's features. A describe
block contains one or more it
statements. it
enables you to test the resource itself. You'll also see its
, which describes some feature of the resource, such as its mode or owner. You'll see examples of both it
and its
shortly.
it vs. its
Important! Just like in English grammar, pay attention to the difference between the thing (it) and the possessive word (its).
it
is used to describe an action or the expected behavior of the subject/resource in question.
e.g. it { should be_owned_by 'root' }
its
is used to specify the expectation(s) of an attribute of the subject/resource.
e.g. its("signal") { should eq "on" }
should
should
describes the expectation. should
asserts that the condition that follows should be true. Alternatively, should_not
asserts that the condition that follows should not be true. You'll see examples of both shortly.
be_directory
A and D are valid InSpec profiles!
TIP: inspec check
To see if you have a valid InSpec profile, you can run inspec check <path-to-inspec-profile-folder>
| Variable | Purpose | Default |
|---|---|---|
| `INSPEC_CONTROL` | Specifies which single control to run in the `bundle exec kitchen verify` phase; useful for testing and debugging a single requirement. | none |
| `KITCHEN_LOCAL_YAML` | Specifies the target testing environment you want to use to run and validate the profile. | none |
| `VANILLA_CONTAINER_IMAGE` | Specifies the Docker container image you consider 'not hardened' (used by `kitchen.container.yml`). | `registry.access.redhat.com/ubi8/ubi:8.9-1028` |
| `HARDENED_CONTAINER_IMAGE` | Specifies the Docker container image you consider 'hardened' (used by `kitchen.container.yml`). | `registry1.dso.mil/ironbank/redhat/ubi/ubi8` |
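Since these are plain environment variables, a single-control debugging run might look like the following sketch (the control ID here is hypothetical):

```shell
export INSPEC_CONTROL="SV-230221"                  # hypothetical control ID
export KITCHEN_LOCAL_YAML="kitchen.container.yml"
echo "verify will run only ${INSPEC_CONTROL} using ${KITCHEN_LOCAL_YAML}"
# bundle exec kitchen verify                       # would pick up both settings
```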
- Verify your Ruby installation with `ruby --version`.
- Verify InSpec with `bundle exec inspec --version`.
- Verify Test Kitchen with `bundle exec kitchen version`.
- Verify that `aws-cli` is correctly configured by running `aws s3 ls` (or your preferred test command for the AWS CLI).
- Verify Docker registry access by running `docker pull https://repo1.dso.mil/dsop/redhat/ubi/ubi8`.

The main pillars are Plan, Harden, Validate, Normalize, and Visualize.
The SAF helps teams plan what guidance will help them keep their software secure. It also provides libraries and tools for automatically hardening and validating software based on that guidance, normalizing other security data, and visualizing all of that information to properly inform teams of risk and vulnerabilities.
Nope!
SAF, the Security Automation Framework, is a Framework and uses a COLLECTION of tools, techniques, applications, and libraries to streamline security automation. Since teams operate in different environments with different components, not everyone's security journey will look the same.
Some notable tools within the Security Automation Framework are Vulcan, the SAF CLI, and Heimdall.
A Security Technical Implementation Guide (STIG) is a set of requirements imposed by the US Department of Defense and implementation instructions for those requirements that are specific to a particular software component. The components can be any piece of technology that needs a secure configuration -- operating systems, webservers, application runtimes, routers, and so on.
STIGs are published by the Defense Information Systems Agency (DISA), but they're usually written by software vendors, which naturally have the most domain knowledge about how to secure their products. DISA then peer reviews the vendor's draft content to ensure it meets its rigorous standards. We'll describe the process for working with DISA to formally publish a STIG later on.
STIGs are also expected to stay up-to-date alongside the software component they describe. STIGs must be updated by the authors and released any time there is a serious change in the software component the STIG describes. Complicated STIGs for widely-used and often-updated components may be updated multiple times a year.
Are STIGs Familiar?
Have you ever been required to configure an application or system to STIG-standard before?
STIGs are created based off of high-level, general guidance documents called Security Requirements Guides (SRGs), also published by DISA. SRGs describe DOD-selected security requirements for entire categories of software components, and all STIG requirements are essentially sets of instructions for how to get a particular component to comply with a general SRG (or even a set of SRGs, for complex systems). STIGs are instructions for security that can be followed even by people who are not experts in the technology in question.
How much STIG are we talking?
STIGs can include hundreds of individual requirements depending on the complexity of the system being configured. At the time of writing, the Windows Server 2019 STIG included 303 controls.
We need a way to track and manage all of these easily!
For example, there is an SRG that covers operating systems in general (the aptly-named "General Purpose Operating System Security Requirements Guide"). That piece of guidance is full of requirements for an operating system -- any operating system -- to be considered reasonably secure. There is a requirement in that SRG (SRG ID: SRG-OS-000021-GPOS-00005) which states that "The operating system must enforce the limit of three consecutive invalid logon attempts by a user during a 15-minute time period."
This requirement is saying that an attacker shouldn't be able to brute force a user's password by throwing a high number of guesses at the system. Simple enough, right?
However, this guidance isn't particularly useful unless we know how to implement it on a particular operating system. That's where the STIG comes in. The STIG for, say, Red Hat 8 ("Red Hat Enterprise Linux 8 STIG") has its own requirement for limiting consecutive logon attempts (Rule ID: SV-230334r627750_rule) that cites the relevant SRG IDs that it satisfies. That STIG rule tells us exactly how to configure Red Hat to satisfy this requirement, down to which configuration files we need to edit.
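For instance, the Red Hat Enterprise Linux 8 fix for this rule centers on pam_faillock settings. As an illustrative sketch (exact file locations and keys vary by release, so treat this as an assumption rather than the verbatim STIG fix text):

```
# /etc/security/faillock.conf (illustrative values matching the SRG)
deny = 3              # lock after three consecutive invalid logon attempts
fail_interval = 900   # ...counted within a 15-minute (900-second) window
```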
You can think of the process of STIG authorship as distilling the high-level, general requirements of an SRG into a checklist that anybody can follow to lock down their component.
STIGs are ideally created by a team of subject matter experts on a particular piece of software, all of whom work together to close the gap between the SRG and their shiny new STIG.
Published directives from the DOD's Chief Information Officer (DOD CIO) describe the overall Risk Management Framework for DOD Systems (DOD RMF). The DOD RMF requires all information systems across the DOD to be categorized according to how much risk they represent to the organization if compromised. It also requires system owners to select controls from the National Institute of Standards and Technology's (NIST) security control families.
',17),m={class:"hint-container note"},p=e("p",{class:"hint-container-title"},"NIST Control Families",-1),f={href:"https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final",target:"_blank",rel:"noopener noreferrer"},g=o('To speed up the process of control selection, DISA created and published the Control Correlation Identifiers (CCIs), which describe actions that can be taken to cover the NIST security control families. CCIs are a bridge between the extremely high-level policy documents that govern the whole DOD and the low-level documents (i.e. STIGs) that actually tell people how to implement that policy.
DISA also publishes the SRGs, which describe subsets of CCIs that apply to general categories of information systems. This means that individual system owners do not have to figure out on their own which control families need to be covered for their system; they can instead say "I am deploying a router, so I must cover the requirements selected in the Router SRG."
As described before, an SRG can then be tailored into STIGs that give security guidance for individual pieces of software. The full chain of requirements to implementation therefore looks like this:
The good news is that you, the STIG content author, don't have to worry about SRGs or control selections all that much; the whole point of all the good work that DISA has done is that most of these mappings have been done for you. You are responsible for the last leg of the journey -- you know your requirements from the SRG, and now you need to figure out how to implement them as a configuration baseline.
',6);function y(S,w){const s=n("ExternalLinkIcon");return a(),r("div",null,[d,e("div",m,[p,e("p",null,[t("You may be familiar with the control families from "),e("a",f,[t("NIST Special Publication 800-53"),l(s)]),t(", because almost all US Government agencies (and quite a few companies) use them as the authoritative list that defines what security controls are.")])]),g])}const I=i(u,[["render",y],["__file","03.html.vue"]]);export{I as default}; diff --git a/assets/03.html-yprIC4go.js b/assets/03.html-yprIC4go.js new file mode 100644 index 000000000..4be3693d1 --- /dev/null +++ b/assets/03.html-yprIC4go.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-7858d005","path":"/courses/profile-dev-test/03.html","title":"Environment Setup","lang":"en-US","frontmatter":{"order":3,"next":"04.md","title":"Environment Setup","author":"Aaron Lippold","description":"RVM, or another Ruby Management Tool; Ruby v3 or higher; Git; VS Code or another IDE; Docker (if you want to test hardened and non-hardened containers); AWS CLI; AWS Account; 1....","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/03.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Environment Setup"}],["meta",{"property":"og:description","content":"RVM, or another Ruby Management Tool; Ruby v3 or higher; Git; VS Code or another IDE; Docker (if you want to test hardened and non-hardened containers); AWS CLI; AWS Account; 1...."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Environment Setup\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron 
Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":1.56,"words":468},"filePathRelative":"courses/profile-dev-test/03.md","autoDesc":true}');export{e as data}; diff --git a/assets/04.html-7Ib1xBlQ.js b/assets/04.html-7Ib1xBlQ.js new file mode 100644 index 000000000..3f6315628 --- /dev/null +++ b/assets/04.html-7Ib1xBlQ.js @@ -0,0 +1 @@ +const t=JSON.parse(`{"key":"v-0beb2081","path":"/courses/user/04.html","title":"4. Getting Started - Plan","lang":"en-US","frontmatter":{"order":4,"next":"05.md","title":"4. Getting Started - Plan","author":"Aaron Lippold","headerDepth":3,"description":"4. Start with Planning The SAF's main pillars are Plan, Harden, Validate, Normalize, and Visualize. First, it is necessary to plan what components will be in your system and ide...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/user/04.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"4. Getting Started - Plan"}],["meta",{"property":"og:description","content":"4. Start with Planning The SAF's main pillars are Plan, Harden, Validate, Normalize, and Visualize. First, it is necessary to plan what components will be in your system and ide..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"4. Getting Started - Plan\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"4. 
Start with Planning","slug":"_4-start-with-planning","link":"#_4-start-with-planning","children":[{"level":3,"title":"4.1 Identify your stack of components for the system","slug":"_4-1-identify-your-stack-of-components-for-the-system","link":"#_4-1-identify-your-stack-of-components-for-the-system","children":[]},{"level":3,"title":"4.2 What is the guidance?","slug":"_4-2-what-is-the-guidance","link":"#_4-2-what-is-the-guidance","children":[]},{"level":3,"title":"4.3. Content libraries for software components","slug":"_4-3-content-libraries-for-software-components","link":"#_4-3-content-libraries-for-software-components","children":[]},{"level":3,"title":"4.4. What if there is no content for a software component?","slug":"_4-4-what-if-there-is-no-content-for-a-software-component","link":"#_4-4-what-if-there-is-no-content-for-a-software-component","children":[]}]}],"git":{},"readingTime":{"minutes":1.71,"words":513},"filePathRelative":"courses/user/04.md","autoDesc":true}`);export{t as data}; diff --git a/assets/04.html-DtdjMdcP.js b/assets/04.html-DtdjMdcP.js new file mode 100644 index 000000000..d4cdafca5 --- /dev/null +++ b/assets/04.html-DtdjMdcP.js @@ -0,0 +1 @@ +const e=JSON.parse(`{"key":"v-27518a08","path":"/courses/beginner/04.html","title":"4. How to Get Started - InSpec Commands & Docs","lang":"en-US","frontmatter":{"order":4,"next":"05.md","title":"4. How to Get Started - InSpec Commands & Docs","author":"Aaron Lippold","description":"InSpec Commands and Documentation Before we test our NGINX configuration, let's take a look at the InSpec commands and documentation we can use to write tests. How to Run InSpec...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/beginner/04.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"4. 
How to Get Started - InSpec Commands & Docs"}],["meta",{"property":"og:description","content":"InSpec Commands and Documentation Before we test our NGINX configuration, let's take a look at the InSpec commands and documentation we can use to write tests. How to Run InSpec..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"4. How to Get Started - InSpec Commands & Docs\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"InSpec Commands and Documentation","slug":"inspec-commands-and-documentation","link":"#inspec-commands-and-documentation","children":[{"level":3,"title":"How to Run InSpec","slug":"how-to-run-inspec","link":"#how-to-run-inspec","children":[]},{"level":3,"title":"How to Write InSpec","slug":"how-to-write-inspec","link":"#how-to-write-inspec","children":[]},{"level":3,"title":"The InSpec shell","slug":"the-inspec-shell","link":"#the-inspec-shell","children":[]},{"level":3,"title":"Entering the InSpec shell","slug":"entering-the-inspec-shell","link":"#entering-the-inspec-shell","children":[]},{"level":3,"title":"Using the InSpec Shell","slug":"using-the-inspec-shell","link":"#using-the-inspec-shell","children":[]},{"level":3,"title":"Exploring Resources","slug":"exploring-resources","link":"#exploring-resources","children":[]}]}],"git":{},"readingTime":{"minutes":7.37,"words":2212},"filePathRelative":"courses/beginner/04.md","autoDesc":true}`);export{e as data}; diff --git a/assets/04.html-QTDREr1e.js b/assets/04.html-QTDREr1e.js new file mode 100644 index 000000000..25117bc2e --- /dev/null +++ b/assets/04.html-QTDREr1e.js @@ -0,0 +1 @@ +import{_ as n}from"./plugin-vue_export-helper-x3n3nnut.js";import{r 
as s,o as a,c as l,d as e,e as r,b as t}from"./app-PAvzDPkc.js";const c={},d=e("h1",{id:"vulcan-resources",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#vulcan-resources","aria-hidden":"true"},"#"),r(" Vulcan Resources")],-1),i=e("h2",{id:"docs",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#docs","aria-hidden":"true"},"#"),r(" Docs")],-1),_={href:"https://saf.mitre.org/docs/vulcan-install",target:"_blank",rel:"noopener noreferrer"},h={href:"https://github.com/mitre/vulcan",target:"_blank",rel:"noopener noreferrer"},u={href:"https://github.com/orgs/mitre/projects/7",target:"_blank",rel:"noopener noreferrer"},f=e("h2",{id:"stig-resources",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#stig-resources","aria-hidden":"true"},"#"),r(" STIG resources")],-1),m=e("li",null,[e("a",{href:"../assets/downloads/U_Vendor_STIG_Process_Guide_V4R1_20220815.pdf"},"Vendor STIG Process Guide")],-1),p={href:"https://dl.dod.cyber.mil/wp-content/uploads/stigs/pdf/U_Vendor_STIG_Intent_Form.pdf",target:"_blank",rel:"noopener noreferrer"},g={href:"https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/vmware-stig-program-overview.pdf",target:"_blank",rel:"noopener noreferrer"};function v(V,b){const o=s("ExternalLinkIcon");return a(),l("div",null,[d,i,e("ol",null,[e("li",null,[e("a",_,[r("Vulcan full documentation"),t(o)])]),e("li",null,[e("a",h,[r("Vulcan GitHub"),t(o)]),r(" -- Feel free to leave us a feature request!")]),e("li",null,[e("a",u,[r("Vulcan Project roadmap"),t(o)])])]),f,e("ol",null,[m,e("li",null,[r("DISA's "),e("a",p,[r("Vendor STIG Intent Form"),t(o)]),r(". Used to formally start the Vendor STIG process.")]),e("li",null,[r("VMWare's "),e("a",g,[r("STIG Program Overview"),t(o)]),r(". 
A good primer on terms and process for STIGs.")])])])}const k=n(c,[["render",v],["__file","04.html.vue"]]);export{k as default}; diff --git a/assets/04.html-RCsMUVdJ.js b/assets/04.html-RCsMUVdJ.js new file mode 100644 index 000000000..0002bfbf9 --- /dev/null +++ b/assets/04.html-RCsMUVdJ.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-28c6a192","path":"/resources/04.html","title":"Vulcan Resources","lang":"en-US","frontmatter":{"index":true,"icon":"page","title":"Vulcan Resources","author":"Will Dower","headerDepth":3,"description":"Docs 1. Vulcan full documentation (https://saf.mitre.org/docs/vulcan-install) 2. Vulcan GitHub (https://github.com/mitre/vulcan) -- Feel free to leave us a feature request! 3. V...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/resources/04.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Vulcan Resources"}],["meta",{"property":"og:description","content":"Docs 1. Vulcan full documentation (https://saf.mitre.org/docs/vulcan-install) 2. Vulcan GitHub (https://github.com/mitre/vulcan) -- Feel free to leave us a feature request! 3. 
V..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Will Dower"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Vulcan Resources\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Will Dower\\"}]}"]]},"headers":[{"level":2,"title":"Docs","slug":"docs","link":"#docs","children":[]},{"level":2,"title":"STIG resources","slug":"stig-resources","link":"#stig-resources","children":[]}],"git":{},"readingTime":{"minutes":0.3,"words":89},"filePathRelative":"resources/04.md","autoDesc":true}');export{e as data}; diff --git a/assets/04.html-Ri1HP-KJ.js b/assets/04.html-Ri1HP-KJ.js new file mode 100644 index 000000000..10e328dd1 --- /dev/null +++ b/assets/04.html-Ri1HP-KJ.js @@ -0,0 +1 @@ +import{_ as i}from"./SAF_Site_Harden-697HRqN7.js";import{_ as s}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as l,c,a as h,d as e,e as t,b as n,f as o}from"./app-PAvzDPkc.js";const d="/saf-training/assets/SAF_Capabilities_Plan-1kgo-5aU.png",f="/saf-training/assets/SAF_Home-Fvv5vIy3.png",u="/saf-training/assets/SAF_Site_Validate-MMJEjefb.png",p={},m=o('The SAF's main pillars are Plan, Harden, Validate, Normalize, and Visualize. First, it is necessary to plan what components will be in your system and identify the security guidance available for those components.
Your software system is composed of multiple components. i.e., Cloud Services, Virtualization Platforms, Operating Systems, Databases, Application Logic, and Web Servers.
The first step of any assessment is identifying the components for the system you are assessing.
',6),g=o('There could be Security Technical Implementation Guides (STIGs), Security Requirements Guides (SRGs), Center for Internet Security (CIS) Benchmarks, or vendor guidance written for the components in your software stack. Being aware of these can help inform which profile to use. Additionally, note here any specific requirements for your organization that might differ from the specific published guidance. This will inform how to tailor the profiles later on.
As you saw in the previous section's SAF Site scavenger hunt, the SAF website hosts links, information, and tools to ease the security process. To get a better idea of what may be in your software stack and what kind of content is available for automated testing, you can peruse the SAF's hardening and validation content libraries when you are making a plan.
Go to the SAF site to peruse the hardening and validation libraries. As the security community develops more automation content, we update this site as a landing page for all of the content. The site will look like this:
Once you've set up the necessary tools, you're ready to run the profile. The testing environment is determined by Test Kitchen using environment variables.
There are four testing environments to choose from:
The specifics of each environment's configuration are detailed in the following sections.
For each of these examples, you need to update the KITCHEN_LOCAL_YAML
environment variable to point to the correct kitchen.<TEST-TARGET>.yaml
file. Ensure that any required supporting environment settings, environment variables, profiles, etc., are in place. See Environment Variables and Testing Target Environments for more information.
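For instance, to point Test Kitchen at a hypothetical EC2 target configuration (the file name here is an assumed example following the pattern above, not a file guaranteed to exist in your repository):

```shell
# Point Test Kitchen at one of the kitchen.<TEST-TARGET>.yaml files.
# "kitchen.ec2.yaml" is an assumed example name for illustration.
export KITCHEN_LOCAL_YAML=kitchen.ec2.yaml
echo "$KITCHEN_LOCAL_YAML"
```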
Test Kitchen has four major steps: create
, converge
, verify
, and destroy
. Use these stages to create, configure, run tests, and destroy your testing target. When starting your testing, it's useful to run each of these in turn to ensure your environment, Test Kitchen, and credentials are set up and working correctly.
create
:create
stage sets up your testing instance and prepares the necessary login credentials and other components so you can use your testing target.converge
:converge
stage runs the provisioner of the Test Kitchen suite - the configuration management code set up in the test suite. This could be any configuration management script, such as Ansible, Chef, Puppet, Terraform, Shell, etc., that you and your team use.verify
:verify
stage runs the actual InSpec profile against your testing target. Test Kitchen supports multiple testing frameworks, which are well documented on the project website.destroy
:destroy
stage tears down your test target - like an EC2 instance, Docker container, or Vagrant Box.You can also isolate which of the 'target suites' - either vanilla
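Assuming your environment variables and credentials are already configured, a typical first pass through the four stages might look like the following sketch (the `hardened` suite is used purely as an example):

```sh
# Illustrative full pass through the four stages against one suite
# (assumes KITCHEN_LOCAL_YAML and credentials are already in place)
bundle exec kitchen create hardened     # stand up the test instance
bundle exec kitchen converge hardened   # apply the provisioning/hardening code
bundle exec kitchen verify hardened     # run the InSpec profile against it
bundle exec kitchen destroy hardened    # tear the instance back down
```

Running the stages one at a time like this makes it easy to spot which step fails if your environment is not yet set up correctly.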
or hardened
in our case - to run by appending either hardened
or vanilla
to the end of your Test Kitchen command. For example, bundle exec kitchen verify
will run the Test Kitchen stages all the way through verify
on both the hardened
and vanilla
suites. However, if you say, bundle exec kitchen verify vanilla
, it will only run it on the vanilla
test target.
login
: Allows you to easily log in using the credentials created when you ran bundle exec kitchen create
.test
: Runs all the Test Kitchen stages starting with create through destroy to easily allow you to go through a full clean test run.Local resources are those that exist only in the profile in which they are developed. Local resources are put in the libraries
directory:
$ tree examples/profile
+examples/profile
+...
+├── libraries
+│ └── custom_resource.rb
+
Note that the libraries
directory is not created by default within a profile when we use inspec init
. We need to create the directory ourselves.
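A quick way to see this on disk, assuming the `examples/profile` layout shown above:

```shell
# inspec init does not create libraries/, so make it (and a resource file) by hand
mkdir -p examples/profile/libraries
touch examples/profile/libraries/custom_resource.rb
ls examples/profile/libraries
```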
Once you create and populate a custom resource Ruby file inside the libraries
directory, it can be utilized inside your local profile just like the core resources.
Resources may be added to profiles in the libraries folder:
$ tree examples/profile
+examples/profile
+...
+├── libraries
+│ └── gordon_config.rb
+
The smallest possible InSpec resource takes this form:
class Tiny < Inspec.resource(1)
+ name 'tiny'
+end
+
This is easy to write, but not particularly useful for testing.
Resources are written as a regular Ruby class, which inherits from the base inspec.resource
class. The number (1) specifies the version this resource plugin targets. As Chef InSpec evolves, this interface may change and may require a higher version.
In addition to the resource name, the following attributes can be configured:
name
- Identifier of the resource (required)desc
- Description of the resource (optional)example
- Example usage of the resource (optional)supports
- (Chef InSpec 2.0+) Platform restrictions of the resource (optional)The following methods are available to the resource:
inspec
- Contains a registry of all other resources to interact with the operating system or target in general.skip_resource
- A resource may call this method to indicate that requirements aren’t met. All tests that use this resource will be marked as skipped.The following example shows a full resource using attributes and methods to provide simple access to a configuration file:
class GordonConfig < Inspec.resource(1)
+ name 'gordon_config'
+
+ # Restrict to only run on the below platforms (if none were given, all OS's supported)
+ supports platform_family: 'fedora'
+ supports platform: 'centos', release: '6.9'
+ # Supports \`*\` for wildcard matcher in the release
+ supports platform: 'centos', release: '7.*'
+
+ desc '
+ Resource description ...
+ '
+
+ example '
+ describe gordon_config do
+ its("signal") { should eq "on" }
+ end
+ '
+
+ # Load the configuration file on initialization
+ def initialize(path = nil)
+ @path = path || '/etc/gordon.conf'
+ @params = SimpleConfig.new( read_content )
+ end
+
+ # Expose all parameters of the configuration file.
+ def method_missing(name)
+ @params[name]
+ end
+
+ private
+
+ def read_content
+ f = inspec.file(@path)
+ # Test if the path exists and that it's a file
+ if f.file?
+ # Retrieve the file's contents
+ f.content
+ else
+ # If the file doesn't exist, skip all tests that use gordon_config
+ raise Inspec::Exceptions::ResourceSkipped, "Can't read config at #{@path}"
+ end
+ end
+end
+
Let's break down each component of the resource.
The class is where the Ruby file is defined.
The name is how we will call upon this resource within our controls; in the example above, that would be gordon_config
.
Supports are used to define or restrict the Ruby resource to work in specific ways; in the example above, they restrict our class to specific platforms.
A simple description of the purpose of this resource.
A simple use case example. The example is usually a describe
block using the resource, given as a multi-line comment.
An initialize method is required if your resource needs to accept a parameter when called in a test (e.g. file('this/path/is/a/parameter')
)
These methods return data from the resource so that you can use it in tests.
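The method_missing trick in the example above is plain Ruby: any call without an explicit method definition falls through to a lookup in the parsed configuration. A minimal, self-contained sketch of the same pattern (a plain Hash stands in for InSpec's SimpleConfig parser, and DemoConfig is a made-up name for illustration):

```ruby
# Sketch of the method_missing pattern used by the gordon_config resource:
# unknown method calls are looked up as keys in the parsed config data.
class DemoConfig
  def initialize(params)
    @params = params
  end

  # Any method without an explicit definition falls through to here,
  # so `config.signal` returns @params['signal'].
  def method_missing(name, *args)
    @params[name.to_s]
  end

  def respond_to_missing?(name, include_private = false)
    @params.key?(name.to_s) || super
  end
end

config = DemoConfig.new('signal' => 'on')
puts config.signal # prints "on"
```

This is why the example control can write `its("signal") { should eq "on" }` without the resource defining a `signal` method anywhere.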
`,34);function g(y,w){const t=i("RouterLink"),a=i("ExternalLinkIcon");return r(),l("div",null,[u,h,m,s("p",null,[e("As you saw in the "),n(t,{to:"/courses/beginner/"},{default:c(()=>[e("Beginner class")]),_:1}),e(", when writing InSpec code, many core resources are available because they are included in the main InSpec code base.")]),s("ul",null,[s("li",null,[e("You can "),s("a",v,[e("explore the core InSpec resources"),n(a)]),e(" to existing resources.")]),s("li",null,[e("You can also "),s("a",k,[e("examine the source code"),n(a)]),e(" to see what's available. For example, you can see how "),b,e(" and other InSpec resources are implemented.")])]),f])}const I=o(d,[["render",g],["__file","04.html.vue"]]);export{I as default}; diff --git a/assets/04.html-m68ZwoFB.js b/assets/04.html-m68ZwoFB.js new file mode 100644 index 000000000..f759d38e0 --- /dev/null +++ b/assets/04.html-m68ZwoFB.js @@ -0,0 +1,235 @@ +import{_ as u}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as d,c as p,d as e,e as n,b as l,w as s,f as c}from"./app-PAvzDPkc.js";const m={},b=e("h2",{id:"inspec-commands-and-documentation",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#inspec-commands-and-documentation","aria-hidden":"true"},"#"),n(" InSpec Commands and Documentation")],-1),h=e("p",null,"Before we test our NGINX configuration, let's take a look at the InSpec commands and documentation we can use to write tests.",-1),v=e("h3",{id:"how-to-run-inspec",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#how-to-run-inspec","aria-hidden":"true"},"#"),n(" How to Run InSpec")],-1),k=e("code",null,"inspec exec",-1),g={href:"https://mitre.github.io/saf-training/courses/user/06.html",target:"_blank",rel:"noopener noreferrer"},_=c(`inspec exec WHERE_IS_THE_PROFILE -t WHAT_IS_THE_TARGET --more-flags EXTRA_STUFF --reporter WHAT_SHOULD_INSPEC_DO_WITH_THE_RESULTS
+
You see file
and other resources listed.
Earlier, we saw this describe
block:
describe file('/tmp') do
+ it { should be_directory }
+end
+
The InSpec shell understands the structure of blocks, which enables you to run multiline code. As an example, run the entire describe
block like this; the shell will execute the whole block and return the result.
In practice, you don't typically run controls interactively this way for day to day use, but it is a great way to test out your ideas, find bugs or validate your approach before running a scan in its entirety on a target of evaluation.
What is the difference between InSpec and Ruby?
InSpec is a domain-specific language (DSL) built on top of the Ruby programming language. For example, InSpec matchers are implemented as Ruby methods.
file
exampleLet's use the InSpec shell to explore some resources in InSpec. We will start with one of the most common elements on the system: a directory. In the InSpec Shell call the file.directory?
method.
file('/tmp').directory?
+
inspec> file('/tmp').directory?
+ => true
+
This will return true
, since /tmp
is a directory on the system and exists on your workstation container.
To make the tests easier to read, the InSpec language uses "syntactic sugar" to turn methods into English-like phrases. For example, the Ruby language contains boolean methods ending in ?
which evaluate to true
or false
(nil
is also treated as false). InSpec changes the syntax of these methods to include be_
before the method rather than ?
after the method to make it more readable. For example, to check if a directory exists, Ruby would traditionally use directory?
while InSpec uses be_directory
.
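The renaming rule itself can be sketched in a few lines of plain Ruby. This is a simplification for illustration only, not InSpec's real matcher code; `be_matcher_for` is a made-up helper name:

```ruby
# How the be_* sugar maps a Ruby predicate name to an InSpec matcher name:
# drop the trailing "?" and prefix "be_".
def be_matcher_for(predicate)
  "be_#{predicate.to_s.chomp('?')}"
end

puts File.directory?('/tmp')       # plain Ruby predicate style
puts be_matcher_for(:directory?)   # prints "be_directory", the InSpec phrasing
```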
nginx
exampleNow's a good time to define the requirements for our NGINX configuration. Let's say that you require:
1. NGINX version 1.10.3 or later.\n2. The following NGINX modules should be installed:\n * `http_ssl`\n * `stream_ssl`\n * `mail_ssl`\n3. The NGINX configuration file - `/etc/nginx/nginx.conf`- should exist as a file.\n4. The NGINX configuration file should:\n * be owned by the `root` user and group.\n * not be readable, writeable, or executable by others.\n5. The NGINX shell access should be restricted to admin users.\n
In the next section, we will start writing controls for my_nginx
profile.
Let's see what resources are available for nginx
.
Run help resources
a second time to identify InSpec's provided two built-in resources to support NGINX – nginx
and nginx_conf
.
As you can see we get false
- since nginx is not installed on your runner
.
We can instead run InSpec shell commands against the target that does have NGINX installed to see what results we find.
To do so, first start by exiting your InSpec shell session.
exit
+
Run docker ps
to see the running docker containers in your development lab environment that we can test:
To check whether the NGINX configuration file exists as a file, we want to test attributes of the file itself, so we use the file
resource.
Use the file
resource to check whether the NGINX configuration file is owned by root
and is not readable, writeable, or executable by others. You saw earlier how the file
resource provides the readable
, writeable
, and executable
methods. You would also see that the file
resource provides the owned_by
and grouped_into
methods.
Exit the InSpec shell session with the exit
command.
exit
+
Each STIG is a set of requirements and implementation guidance on how to meet them (which we will abbreviate as just a "STIG requirement"). Let's dive into the technical details that make up a STIG requirement.
Each requirement in a STIG will contain the following fields.
These fields are imported unchanged from the SRG, so we do not need to worry about them too much as STIG content authors.
These are the fields we will be responsible for.
Check and Fix
Much of your time writing STIGs will be researching how to fill out the Check and the Fix fields, since those comprise the bulk of the information in STIGs that is not copied from the SRG.
See DISA's Vendor STIG Process Guide section 4.1 for further details on these fields.
A piece of security guidance is not a formal STIG until it has been peer reviewed and published by DISA itself. Before publishing, your security guidance is considered STIG-ready content.
If all you are looking for is an Authority to Operate for the fancy system your team is making for your project, you may not need to undergo the formal process for peer reviewing and publishing your STIG-ready content through DISA. STIG-ready content, since it is tightly aligned to the requirements in its source SRG, is on its own an excellent foundation to use for security automation activities, such as building out automated hardening scripts and test suites.
If you are a software developer creating a product that you expect will be used by other projects within the DOD, it will likely be beneficial to you to formally publish a STIG for your product. Doing so will greatly lower the effort required for your software to be implemented by the Department -- you figure out security once, and no one else will have to reinvent the wheel.
',6),I=o('Vulcan exists because writing STIGs is very time-consuming for reasons that rarely have to do with actual security research.
STIG Authorship Challenges
Have you ever been part of an effort to write a STIG before?
Before Vulcan, vendors could expect to take anywhere from 18 months to 2 years to develop a STIG for a reasonably complex piece of software. An unacceptable amount of that time was locked up in document management activities -- simply keeping the author team all up-to-date with each other's work. STIG documents were often created using spreadsheets of requirements emailed back and forth between the authorship team.
Warning
If you've ever spent hours editing a document only to realize that the rest of your team was editing a completely separate version, take that feeling and multiply it by 300 requirements.
The MITRE SAF© team, acting in collaboration with VMWare (which maintains roughly four dozen STIGs for its software components, at time of writing) and DISA, built the Vulcan webapp to move the STIG authorship workflow into the browser.
Vulcan adds in systems for:
VMWare has reported that with experienced authors, Vulcan cut down the time to write a STIG down to a few weeks. It also makes the problem of tracking and managing content over time much easier.
Software developers tend to ask themselves this frequently.
Why bother creating STIGs in the first place if it takes this much effort? Even with Vulcan speeding things up and taking on many management functions, creating a STIG takes quite a bit of time. Furthermore, there is a time cost inherent to maintaining the STIG over time, because every major change to a software component requires an update to the STIG as per DOD policy. Why go through the headache?
Recall that if you want to use your software anywhere under the Department of Defense's umbrella (and even in many civilian agencies!) you are required to comply with the Security Requirements Guides that apply to your system in order to recieve your Authority to Operate.
Taking the long view, the easiest way to pass security assessments is to write up baseline security guidance early and stick to it. That is, you have to follow the SRG anyway, and STIGs are ultimately just checklists on how to make your software follow the SRG. You're just writing all the security documentation you'd have to keep around anyway into one place.
',15);function T(G,v){const n=a("ExternalLinkIcon"),r=a("Mermaid");return l(),h("div",null,[d,e("div",m,[p,f,g,e("ul",null,[e("li",null,[t("STIG Viewer: DISA's own application for STIG examination. You can download it fromo the "),e("a",y,[t("DOD Cyber Exchange"),i(n)]),t(" just like the STIGs themselves.")]),w,e("li",null,[e("a",S,[t("Heimdall"),i(n)]),t(" (coming soon!): The MITRE SAF team is working to implement all of STIG Viewer's functionality into the Heimdall application, so that policy documents and scan data can be examined by the same application.")])])]),b,i(r,{id:"mermaid-126",code:"eJxlUk1v2zAMvedXcNlhLTBjzUeTNIcVaz0XOWwrlmxFYfSg2EwsTBYzSZ7h/fpRruVm6U2PfCQfHxVF0cBJp3AJDwVqWBdUqRxWkJAphZJ/Eb40sN6s7qLvKPIGbkk71O56kJHeyf1yAOAKLLl+KywG9FMYKbYKrc8DHIwshWluSZFZwvBtMkomyXTY5pTU2CcW8/hzkgwHXtZOUZ0VwjjY3LRMmY/O0phYnEbMwRFkVB5UA7V0BcTfYjD4u5KG52tnr5/Ou6px+khVX5NTVnkCFFRDw4laKsVRFi7tU1cySTdEvFH+3mekA6GbWjQhPU2/er7Qv+wblpOTfuegFtzUUeBcpo9o22SIzNLhg5EOwWJW8aOBfSVzoTMEYXtdwknScPba8vNhaDT/z4biqCEbUlbav3jTLfflG/inwcoilC8TX9xZeJ2h8xUvFt6ji9RPFpmDeLX+9NxxLzXs2q/Rfgq+LGVo+/rRJL2vtkrawq90ROwJ02MC22+Aav52YYG7Tt+HG7QO7g1PlzygjWPf5DL9oXM0e3pWdkA0vOIfiXVPgSj66K9wgqfhgh2ehIN3eByud4LHHZ6HE3R4EVzs8Oiid6+LzE4qroLVJ/mZx363fsnQMmhsGa2Hg3/tXReD"}),I])}const x=s(u,[["render",T],["__file","04.html.vue"]]);export{x as default}; diff --git a/assets/04.html-xZbjtynr.js b/assets/04.html-xZbjtynr.js new file mode 100644 index 000000000..af5edf441 --- /dev/null +++ b/assets/04.html-xZbjtynr.js @@ -0,0 +1 @@ +const e=JSON.parse(`{"key":"v-7a0da8a4","path":"/courses/profile-dev-test/04.html","title":"Test your Test Environment","lang":"en-US","frontmatter":{"order":4,"next":"05.md","title":"Test your Test Environment","author":"Aaron Lippold","description":"Once you've set up the necessary tools, you're ready to run the profile. The testing environment is determined by Test Kitchen using environment variables. 
There are four testin...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/04.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Test your Test Environment"}],["meta",{"property":"og:description","content":"Once you've set up the necessary tools, you're ready to run the profile. The testing environment is determined by Test Kitchen using environment variables. There are four testin..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Test your Test Environment\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"Kitchen Stages","slug":"kitchen-stages","link":"#kitchen-stages","children":[]}],"git":{},"readingTime":{"minutes":1.76,"words":529},"filePathRelative":"courses/profile-dev-test/04.md","autoDesc":true}`);export{e as data}; diff --git a/assets/05.html-062HFELG.js b/assets/05.html-062HFELG.js new file mode 100644 index 000000000..909f4066a --- /dev/null +++ b/assets/05.html-062HFELG.js @@ -0,0 +1,231 @@ +import{_ as c}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as p,c as d,b as l,w as a,d as n,e as s,f as o}from"./app-PAvzDPkc.js";const u={},m=n("p",null,[s("Let's practice creating our own custom resource. Let's say we want to write tests that examine the current state of a local Git repository. 
We want to create a "),n("code",null,"git"),s(" resource to handle all of InSpec's interactions with the Git repo under the hood, so that we can focus on writing clean and easy-to-read profile code.")],-1),k=n("p",null,"Let's take a look at this InSpec video that walks through this example and then try it out ourselves.",-1),b=n("div",{class:"video-container"},[n("iframe",{width:"1028",height:"578",src:"https://www.youtube.com/embed/Xka2xT6Cngg?list=PLSZbtIlMt5rcbXOpMRucKzRMXR7HX7awy",title:"YouTube video player",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:""})],-1),v=n("h3",{id:"create-new-inspec-profile",tabindex:"-1"},[n("a",{class:"header-anchor",href:"#create-new-inspec-profile","aria-hidden":"true"},"#"),s(" Create new InSpec profile")],-1),g=n("p",null,"Let's start by creating a new profile:",-1),h=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("inspec init profile "),n("span",{class:"token function"},"git"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"})])],-1),f=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s(` ─────────────────────────── InSpec Code Generator ─────────────────────────── + +Creating new profile at /workspaces/saf-training-lab-environment/git + • Creating `),n("span",{class:"token function"},"file"),s(` inspec.yml + • Creating directory /workspaces/saf-training-lab-environment/git/controls + • Creating `),n("span",{class:"token function"},"file"),s(` controls/example.rb + • Creating `),n("span",{class:"token function"},"file"),s(` README.md 
+`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),w=o(`To write tests, we first need to know and have what we are testing! In your Codespaces environment, there is a git repository that we will test under the resources
folder. The git repository will be the test target, similar to how the Docker containers acted as test targets in previous sections. Unzip the target git repository using the following command:
unzip ./resources/git_test.zip
+
This will generate a git_test
repository which we will use for these examples.
Now let's write some controls and test that they run. You can put these controls in the example.rb
file generated in the controls
folder of your git
InSpec profile. These controls are written using the command
resource which is provided by InSpec. We will write a git
resource in this section to improve these tests. Note that you will need to put the full path of the .git
directory from your git_test
repository as the git_dir
value on line 4 of example.rb
. To get the full path of your current location in the terminal, use pwd
.
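For example, in the lab environment the full value can be built straight from pwd (a sketch; your actual path will differ if you unzipped the repository elsewhere):

```shell
# Build the git_dir value: the current directory (from pwd)
# plus the unzipped repository's .git folder.
echo "$(pwd)/git_test/.git"
```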
# encoding: utf-8
+# copyright: 2018, The Authors
+
+git_dir = "/workspaces/saf-training-lab-environment/git_test/.git"
+
+# The following branches should exist
+describe command("git --git-dir #{git_dir} branch") do
+ its('stdout') { should match /master/ }
+end
+
+describe command("git --git-dir #{git_dir} branch") do
+ its('stdout') { should match /testBranch/ }
+end
+
+# What is the current branch
+describe command("git --git-dir #{git_dir} branch") do
+ its('stdout') { should match /^\\* master/ }
+end
+
+# What is the latest commit
+describe command("git --git-dir #{git_dir} log -1 --pretty=format:'%h'") do
+ its('stdout') { should match /edc207f/ }
+end
+
+# What is the second to last commit
+describe command("git --git-dir #{git_dir} log --skip=1 -1 --pretty=format:'%h'") do
+ its('stdout') { should match /8c30bff/ }
+end
+
Our tests pass, but they all use the command
resource. It's not best practice to do this -- for one thing, it makes our tests more complicated and their output needlessly long.
But What If I Don't Care About The Tests Being Complicated And The Output Being Too Long?
Some test writers like to wrap their favorite bash commands in a command
block and call it a day.
However, best practice is to write clean and maintainable InSpec code even if you yourself have no trouble using the command
resource to do everything.
Recall that other developers and assessors need to be able to understand how your tests function. Nobody likes trying to debug someone else's profile that assumes that the operator knows exactly how the profile writer's favorite terminal commands work.
Let's rewrite these tests in a way that abstracts away the complexity of working with the git
command into a resource.
Let's rewrite the first test in our example file to make it more readable with a git
resource as follows:
# The following branches should exist
+describe git(git_dir) do
+ its('branches') { should include 'master' }
+end
+
Now let's run the profile.
`,7),C=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("inspec "),n("span",{class:"token builtin class-name"},"exec"),s(),n("span",{class:"token function"},"git"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"})])],-1),T=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[n("span",{class:"token punctuation"},"["),n("span",{class:"token number"},"2023"),s("-02-22T03:21:41+00:00"),n("span",{class:"token punctuation"},"]"),s(" ERROR: Failed to load profile git: Failed to load "),n("span",{class:"token builtin class-name"},"source"),s(),n("span",{class:"token keyword"},"for"),s(" controls/example.rb: undefined method "),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),s("git' "),n("span",{class:"token keyword"},"for"),s(),n("span",{class:"token comment"},"#We should get an error because the git method and resource are not defined yet. We should fix that.
Let's start by creating a new file called git.rb
in the libraries
directory. If you do not already have a libraries
directory, you can make one in the git
InSpec profile directory. The content of the file should look like this:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class Git < Inspec.resource(1)
+ name 'git'
+
+end
+
Setting Up a Resource Using InSpec Init
Instead of just creating the git.rb
file in the libraries
directory, you can use InSpec to assist you in creating a resource. Run inspec init resource <your-resource-name>
and follow the prompts to create the foundation and see examples for a resource.
Now run the profile again.
`,6),I=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("inspec "),n("span",{class:"token builtin class-name"},"exec"),s(),n("span",{class:"token function"},"git"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"})])],-1),S=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[n("span",{class:"token punctuation"},"["),n("span",{class:"token number"},"2023"),s("-02-22T03:25:57+00:00"),n("span",{class:"token punctuation"},"]"),s(" ERROR: Failed to load profile git: Failed to load "),n("span",{class:"token builtin class-name"},"source"),s(),n("span",{class:"token keyword"},"for"),s(" controls/example.rb: wrong number of arguments "),n("span",{class:"token punctuation"},"("),s("given "),n("span",{class:"token number"},"1"),s(", expected "),n("span",{class:"token number"},"0"),n("span",{class:"token punctuation"},")"),s(` + +Profile: InSpec Profile `),n("span",{class:"token punctuation"},"("),s("git"),n("span",{class:"token punctuation"},")"),s(` +Version: `),n("span",{class:"token number"},"0.1"),s(`.0 +Failure Message: Failed to load `),n("span",{class:"token builtin class-name"},"source"),s(),n("span",{class:"token keyword"},"for"),s(" controls/example.rb: wrong number of arguments "),n("span",{class:"token punctuation"},"("),s("given "),n("span",{class:"token number"},"1"),s(", expected "),n("span",{class:"token number"},"0"),n("span",{class:"token punctuation"},")"),s(` +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + No tests executed. 
+ +Test Summary: `),n("span",{class:"token number"},"0"),s(" successful, "),n("span",{class:"token number"},"0"),s(" failures, "),n("span",{class:"token number"},"0"),s(` skipped +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),q=o(`This time we get another error letting us know that we have a resource that has been given the incorrect number of arguments. This means we have given an additional parameter to this resource that we have not yet accepted.
Each resource will require an initialization method.
For our git.rb file let's add that initialization method:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class Git < Inspec.resource(1)
+ name 'git'
+
+ def initialize(path)
+ @path = path
+ end
+
+end
+
This is saving the path we are passing in from the control into an instance method called path
.
Now when we run the profile.
`,6),O=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("inspec "),n("span",{class:"token builtin class-name"},"exec"),s(),n("span",{class:"token function"},"git"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"})])],-1),N=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("Profile: InSpec Profile "),n("span",{class:"token punctuation"},"("),s("git"),n("span",{class:"token punctuation"},")"),s(` +Version: `),n("span",{class:"token number"},"0.1"),s(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + `),n("span",{class:"token function"},"git"),s(` + × branches + undefined method `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),s("branches' "),n("span",{class:"token keyword"},"for"),s(),n("span",{class:"token comment"},"#<#The test will run but we will get an error saying we do not have a branches
method. Remember that the other 4 tests are still passing because they are not yet using the git
resource, but are still relying on InSpec's command
resource.
Let's go back to our git.rb file to fix that by adding a branches
method:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class Git < Inspec.resource(1)
+ name 'git'
+
+ def initialize(path)
+ @path = path
+ end
+
+ def branches
+
+ end
+
+end
+
We have now defined the branches method. Let's see what the test output shows us.
`,4),R=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("inspec "),n("span",{class:"token builtin class-name"},"exec"),s(),n("span",{class:"token function"},"git"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"})])],-1),B=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("Profile: InSpec Profile "),n("span",{class:"token punctuation"},"("),s("git"),n("span",{class:"token punctuation"},")"),s(` +Version: `),n("span",{class:"token number"},"0.1"),s(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + `),n("span",{class:"token function"},"git"),s(` + × branches is expected to include `),n("span",{class:"token string"},'"master"'),s(` + expected nil to include `),n("span",{class:"token string"},'"master"'),s(", but it does not respond to "),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),s("include?"),n("span",{class:"token variable"},"`")]),s(` + Command: `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),n("span",{class:"token function"},"git"),s(" --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch"),n("span",{class:"token variable"},"`")]),s(` + ✔ stdout is expected to match /testBranch/ + Command: `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),n("span",{class:"token function"},"git"),s(" --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch"),n("span",{class:"token variable"},"`")]),s(` + ✔ stdout is expected to match /^`),n("span",{class:"token punctuation"},"\\"),s(`* master/ + Command: `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),n("span",{class:"token function"},"git"),s(" --git-dir /workspaces/saf-training-lab-environment/git_test/.git log "),n("span",{class:"token parameter 
variable"},"-1"),s(),n("span",{class:"token parameter variable"},"--pretty"),n("span",{class:"token operator"},"="),s("format:"),n("span",{class:"token string"},"'%h'"),n("span",{class:"token variable"},"`")]),s(` + ✔ stdout is expected to match /edc207f/ + Command: `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),n("span",{class:"token function"},"git"),s(" --git-dir /workspaces/saf-training-lab-environment/git_test/.git log "),n("span",{class:"token parameter variable"},"--skip"),n("span",{class:"token operator"},"="),n("span",{class:"token number"},"1"),s(),n("span",{class:"token parameter variable"},"-1"),s(),n("span",{class:"token parameter variable"},"--pretty"),n("span",{class:"token operator"},"="),s("format:"),n("span",{class:"token string"},"'%h'"),n("span",{class:"token variable"},"`")]),s(` + ✔ stdout is expected to match /8c30bff/ + +Test Summary: `),n("span",{class:"token number"},"4"),s(" successful, "),n("span",{class:"token number"},"1"),s(" failure, "),n("span",{class:"token number"},"0"),s(` skipped +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),L=o(`Now the error message says that the branches
method is returning a null value when it's expecting an array or something that is able to accept the include method invoked on it.
We can use the InSpec helper method which enables you to invoke any other inspec resource as seen below:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class Git < Inspec.resource(1)
+ name 'git'
+
+ def initialize(path)
+ @path = path
+ end
+
+ def branches
+ inspec.command("git --git-dir #{@path} branch").stdout
+ end
+
+end
+
We have borrowed the built-in command
resource to handle running Git's CLI commands.
Now we see that we get a passing test!
Now let's adjust our test to also check for our second branch that we created earlier as well as check our current branch:
# The following branches should exist
+describe git(git_dir) do
+ its('branches') { should include 'master' }
+ its('branches') { should include 'testBranch' }
+ its('current_branch') { should cmp 'master' }
+end
+
Let's head over to the git.rb file to create the current_branch
method we are invoking in the above test:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class Git < Inspec.resource(1)
+ name 'git'
+
+ def initialize(path)
+ @path = path
+ end
+
+ def branches
+ inspec.command("git --git-dir #{@path} branch").stdout
+ end
+
+ def current_branch
+ branch_name = inspec.command("git --git-dir #{@path} branch").stdout.strip.split("\\n").find do |name|
+ name.start_with?('*')
+ end
+ branch_name.gsub(/^\\*/,'').strip
+ end
+
+end
+
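If you want to convince yourself that the current_branch string handling is right, the same logic can be exercised in plain Ruby, outside of InSpec, against a sample of git branch output (the sample below is made up):

```ruby
# Sample stdout from `git branch`; git marks the current branch with '*'.
stdout = "  main\n* master\n  testBranch\n"

# Same parsing as the current_branch method: find the '*' line, strip the marker.
branch_name = stdout.strip.split("\n").find do |name|
  name.start_with?('*')
end
current = branch_name.gsub(/^\*/, '').strip

puts current # => master
```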
Now we can run the profile again.
`,3),z=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("inspec "),n("span",{class:"token builtin class-name"},"exec"),s(),n("span",{class:"token function"},"git"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"})])],-1),E=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("Profile: InSpec Profile "),n("span",{class:"token punctuation"},"("),s("git"),n("span",{class:"token punctuation"},")"),s(` +Version: `),n("span",{class:"token number"},"0.1"),s(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + `),n("span",{class:"token function"},"git"),s(` + ✔ branches is expected to include `),n("span",{class:"token string"},'"master"'),s(` + ✔ branches is expected to include `),n("span",{class:"token string"},'"testBranch"'),s(` + ✔ current_branch is expected to `),n("span",{class:"token function"},"cmp"),s(),n("span",{class:"token operator"},"=="),s(),n("span",{class:"token string"},'"master"'),s(` + Command: `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),n("span",{class:"token function"},"git"),s(" --git-dir /workspaces/saf-training-lab-environment/git_test/.git log "),n("span",{class:"token parameter variable"},"-1"),s(),n("span",{class:"token parameter variable"},"--pretty"),n("span",{class:"token operator"},"="),s("format:"),n("span",{class:"token string"},"'%h'"),n("span",{class:"token variable"},"`")]),s(` + ✔ stdout is expected to match /edc207f/ + Command: `),n("span",{class:"token variable"},[n("span",{class:"token variable"},"`"),n("span",{class:"token function"},"git"),s(" --git-dir /workspaces/saf-training-lab-environment/git_test/.git log "),n("span",{class:"token parameter variable"},"--skip"),n("span",{class:"token operator"},"="),n("span",{class:"token number"},"1"),s(),n("span",{class:"token parameter 
variable"},"-1"),s(),n("span",{class:"token parameter variable"},"--pretty"),n("span",{class:"token operator"},"="),s("format:"),n("span",{class:"token string"},"'%h'"),n("span",{class:"token variable"},"`")]),s(` + ✔ stdout is expected to match /8c30bff/ + +Test Summary: `),n("span",{class:"token number"},"7"),s(" successful, "),n("span",{class:"token number"},"0"),s(" failures, "),n("span",{class:"token number"},"0"),s(` skipped +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),F=n("p",null,"All the tests should pass!",-1),G=n("div",{class:"hint-container tip"},[n("p",{class:"hint-container-title"},"Exercise!"),n("p",null,"As a solo exercise, try to create a method in the git.rb file to check what the last commit is.")],-1);function M(X,U){const i=r("CodeTabs");return 
p(),d("div",null,[m,k,b,v,g,l(i,{id:"14",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[h]),tab1:a(({value:e,isActive:t})=>[f]),_:1}),w,l(i,{id:"36",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[y]),tab1:a(({value:e,isActive:t})=>[_]),_:1}),x,l(i,{id:"71",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[C]),tab1:a(({value:e,isActive:t})=>[T]),_:1}),A,l(i,{id:"97",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[I]),tab1:a(({value:e,isActive:t})=>[S]),_:1}),q,l(i,{id:"121",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[O]),tab1:a(({value:e,isActive:t})=>[N]),_:1}),P,l(i,{id:"139",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[R]),tab1:a(({value:e,isActive:t})=>[B]),_:1}),L,l(i,{id:"164",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[D]),tab1:a(({value:e,isActive:t})=>[V]),_:1}),W,l(i,{id:"179",data:[{id:"Command"},{id:"Output"}]},{title0:a(({value:e,isActive:t})=>[s("Command")]),title1:a(({value:e,isActive:t})=>[s("Output")]),tab0:a(({value:e,isActive:t})=>[z]),tab1:a(({value:e,isActive:t})=>[E]),_:1}),F,G])}const Y=c(u,[["render",M],["__file","05.html.vue"]]);export{Y as default}; diff --git a/assets/05.html-0lRQQ4nw.js 
b/assets/05.html-0lRQQ4nw.js new file mode 100644 index 000000000..c7953aad6 --- /dev/null +++ b/assets/05.html-0lRQQ4nw.js @@ -0,0 +1,163 @@ +import{_ as u}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as d,c as p,d as n,b as t,e as s,w as e,f as c}from"./app-PAvzDPkc.js";const m={},b=c(`Let's work through some example requirements to write InSpec controls.
We write InSpec controls to test some target based on security guidance. Here, let's verify that the NGINX instance had been configured to meet the following requirements:
1. NGINX version 1.10.3 or later.
+2. The following NGINX modules should be installed:
+ * \`http_ssl\`
+ * \`stream_ssl\`
+ * \`mail_ssl\`
+3. The NGINX configuration file - \`/etc/nginx/nginx.conf\`- should exist as a file.
+3. The NGINX configuration file - \`/etc/nginx/nginx.conf\` - should exist as a file.
+ * be owned by the \`root\` user and group.
+ * not be readable, writeable, or executable by others.
+5. The NGINX shell access should be restricted to admin users.
+
The first requirement is for the NGINX version to be 1.10.3 or later
.
We can check this using the InSpec cmp
matcher.
Replace the contents of my_nginx/controls/example.rb
with this:
control 'nginx-version' do
+ impact 1.0
+ title 'NGINX version'
+ desc 'The required version of NGINX should be installed.'
+ describe nginx do
+ its('version') { should cmp >= '1.10.3' }
+ end
+end
+
You see that the test passes.
The second requirement verifies that our required modules are installed.
Append your control file to add this describe block:
control 'nginx-modules' do
+ impact 1.0
+ title 'NGINX modules'
+ desc 'The required NGINX modules should be installed.'
+ describe nginx do
+ its('modules') { should include 'http_ssl' }
+ its('modules') { should include 'stream_ssl' }
+ its('modules') { should include 'mail_ssl' }
+ end
+end
+
The second control resembles the first; however, this version uses multiple its
statements and the nginx.modules
method. The nginx.modules
method returns a list; the built-in include
matcher verifies whether a value belongs to a given list.
Run inspec exec
on the target.
This time, both controls pass.
nginx_conf
fileThe third requirement verifies that the NGINX configuration file - /etc/nginx/nginx.conf
- exists as a file.
Append this describe block to your control file:
control 'nginx-conf-file' do
+ impact 1.0
+ title 'NGINX configuration file'
+ desc 'The NGINX config file should exist as a file.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_file }
+ end
+end
+
Run inspec exec
on the target.
nginx_conf
fileThe fourth requirement verifies that the NGINX configuration file, /etc/nginx/nginx.conf
:
Append your control file to add this describe block:
control 'nginx-conf-perms' do
+ impact 1.0
+ title 'NGINX configuration'
+ desc 'The NGINX config file should owned by root, be writable only by owner, and not writeable or and readable by others.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_owned_by 'root' }
+ it { should be_grouped_into 'root' }
+ it { should_not be_readable.by('others') }
+ it { should_not be_writable.by('others') }
+ it { should_not be_executable.by('others') }
+ end
+end
+
This time you see a failure. You discover that /etc/nginx/nginx.conf
is potentially readable by others. Because this control also has an impact of 1.0, your team may need to investigate further.
The last requirement checks whether NGINX shell access is provided to non-admin users. In this case, access to bash
needs to be restricted to admin users.
Append this describe block to your control file:
control 'nginx-shell-access' do
+ impact 1.0
+ title 'NGINX shell access'
+ desc 'The NGINX shell access should be restricted to admin users.'
+ describe users.shells(/bash/).usernames do
+ it { should be_in ['admin']}
+ end
+end
+
Run inspec exec
on the target.
bundle exec kitchen list
. You should see something like this: Instance Driver Provisioner Verifier Transport Last Action Last Error
+ vanilla-rhel-8 Ec2 AnsiblePlaybook Inspec Ssh Verified None
+ hardened-rhel-8 Ec2 AnsiblePlaybook Inspec Ssh Verified None
+
bundle exec kitchen create vanilla
.➜ redhat-enterprise-linux-8-stig-baseline git:(main*)bundle exec kitchen create vanilla
+-----> Starting Test Kitchen (v3.5.1)
+-----> Creating <vanilla-rhel-8>...
+ < OTHER OUTPUT >
+ Finished creating <vanilla-rhel-8> (0m0.00s).
+-----> Test Kitchen is finished. (0m1.21s)
+
bundle exec kitchen converge
.➜ redhat-enterprise-linux-8-stig-baseline git:(main*)bundle exec kitchen converge vanilla
+-----> Starting Test Kitchen (v3.5.1)
+ NOTICE - Installing needed packages
+ Updating Subscription Management repositories.
+ Unable to read consumer identity
+
+ This system is not registered with an entitlement server. You can use subscription-manager to register.
+
+ 39 files removed
+ < LOTS OF OTHER OUTPUT >
+ Downloading files from <vanilla-rhel-8>
+ Finished converging <vanilla-rhel-8> (0m21.36s).
+-----> Test Kitchen is finished. (1m13.52s)
+
bundle exec kitchen verify
. ➜ redhat-enterprise-linux-8-stig-baseline git:(main*)bundle exec kitchen verify vanilla
+ -----> Starting Test Kitchen (v3.5.1)
+ -----> Setting up <vanilla-rhel-8>...
+ Finished setting up <vanilla-rhel-8> (0m0.00s).
+ -----> Verifying <vanilla-rhel-8>...
+ Loaded redhat-enterprise-linux-8-stig-baseline
+ Could not determine patch status.
+ Profile: redhat-enterprise-linux-8-stig-baseline (redhat-enterprise-linux-8-stig-baseline)
+ Version: 1.12.0
+ Target: ssh://ec2-user@34.229.216.179:22
+ Target ID: 4c62a305-69eb-5ed6-9ee7-723cdc21c578
+
+ ✔ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
+ ✔ List of out-of-date packages is expected to be empty
+ Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
+ Test Summary: 1 successful, 0 failures, 0 skipped
+ Finished verifying <vanilla-rhel-8> (0m5.38s).
+ -----> Test Kitchen is finished. (0m6.62s)
+
For the purposes of this class, let's assume the role of a security engineer who has been tasked with writing STIG-ready content for the Red Hat Enterprise Linux 9 (RHEL9) operating system.
DISA has already published a RHEL9 STIG, so we will be able to compare our content to the real thing if we wish.
If you haven't yet registered an account with this instance, do so now.
After logging in, you will reach the Projects screen.
Vulcan categorizes security guidance content into Projects. Each project can include multiple Components, where a component is a single piece of security guidance (for instance, a single STIG document). A Project can contain multiple versions of the same component (for instance, multiple releases of the STIG for the same software).
We need a new Project as a workspace to write our STIG-ready content.
Click it and begin to fill out the details for our project. You can make the Title and Description of your project whatever you want, but be sure to set the Visibility of the project to "discoverable," because you'll want your colleagues to be able to peer review your work later.
Before we create a Component, though, let's configure Role-Based Access Control (RBAC).
In a new Project, you'll be the only member at first. You can add a new member with a Role of:
Read only access to the Project or Component
Edit, comment, and mark Controls as requiring review. Cannot sign off on or approve changes to a Control. Great for individual contributors.
Write and approve changes to a Control.
Full control of a Project or Component. Lock Controls, revert Controls, and manage members. You'll note that the Project's creator is automatically an Admin.
Adding Colleagues
If you have any colleagues taking the class with you, you may want to add them as a reviewer now (note that you can only add members to a project if they have registered to the Vulcan instance already).
Should I Be An Author Or A Reviewer?
Reviewers are able to approve requirements written by other members. Depending on how your team operates, you may want to have many authors with one final reviewer role, or you may want to have every member be a reviewer. It's up to you.
Only the Admin role can bypass the peer review process to lock (finalize) their own requirements. Try not to dole out the Admin role too often; it's best practice to force all requirements to undergo peer review.
After identifying software components for your environment and knowing what security guidance exists for those components, a great next step is validation, or in other words, testing.
WAIT!
But what about the "Harden" pillar? Why would we focus on testing that a software component is secure before we secure it?
Actually, starting with the tests, rather than with the changes to be tested, level-sets expectations: it shows the current state of the software while giving a clear understanding of the goal and a measurement of success.
In software development, this mindset is known as Test Driven Development.
The idea of Test Driven Development (in other words, having the code driven by the tests, and therefore by the requirements) helps the team confirm that the software does exactly what it is supposed to do - no more and no less.
This process starts with a FAILING test. Then, the minimal change required is made so that the test passes. Once the test passes, the code can be refactored to be cleaner and more readable. The cycle then returns to the top with a new failing test. As development continues, the full test suite should be re-run to confirm that everything still passes! These tests can be kept in an automated suite that validates the code base whenever it changes.
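As a toy illustration of that red-green-refactor cycle in Ruby (a hypothetical example, not part of the SAF content):

```ruby
# Red: before the method below existed, the assertion at the bottom
# failed with NoMethodError -- that is the failing test.
# Green: write the minimal implementation that satisfies the test.
def add(a, b)
  a + b
end

# Refactor: clean up while keeping the assertion green,
# then return to the top with the next failing test.
raise 'test failed' unless add(2, 3) == 5
puts 'test passed'
```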
The SAF team values this methodology and helps teams implement security compliance tests using InSpec so they can understand both the current state of a system and the goal state of a secured system, using automated tests to get this information more easily, more quickly, and more often.
To run InSpec, you must have:
./lab-setup.sh
script) Check out the Installation Tab for more information on installing InSpec in a different environment.You run InSpec from the command line. There are many different options for this command, but let's break down the simple formula based on the requirements above.
inspec exec WHERE_IS_THE_PROFILE -t WHAT_IS_THE_TARGET --more-flags EXTRA_STUFF --reporter WHAT_SHOULD_INSPEC_DO_WITH_THE_RESULTS
+
You need to start with inspec exec
so that your terminal knows what it is trying to do in the first place.
Then, you can give the location of the InSpec profile, in other words, the code for the tests themselves. If the InSpec profile is stored locally, you can write a path to that file location, such as /root/path/to/InSpecProfiles/nginx-profile
. If you want to access the profile directly from GitHub, you can enter the URL of the profile's repository, such as https://github.com/mitre/nginx-stigready-baseline
.
Next, you need to tell your computer what the target is. You add this information after the -t
flag. You could test against your local machine (which is less common), you could test a Virtual Machine, you could test a Docker container, or more. You could connect to that machine via SSH, WinRM, or more. We will talk more about these options later.
There are MANY more options that you can specify when running the InSpec command. The next most common one is specifying inputs for your profile, for example --input-file /path/to/inputs.yml
where you can add inputs that tailor the profile to your environment's needs. You can find more information on inputs in the Tailoring Inputs section.
And of course you probably want to see the results. You can specify where those results are displayed or saved based on what you enter after the --reporter
flag at the end of your command. For example, the following would print the results on the command line and also save them to a file (creating or overwriting it) at /path/to/results.json: --reporter cli json:/path/to/results.json
. If you do not add this information, the command will default to providing results on the command line, but it will not save those into a file unless you specify the --reporter
flag like the example.
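Once results are saved with the json reporter, the file can be post-processed with ordinary tooling. Here is a minimal Ruby sketch that tallies result statuses; the inlined JSON is a simplified stand-in for a real InSpec report, which contains many more fields:

```ruby
require 'json'

# Simplified stand-in for an InSpec JSON report (a real report file has
# many additional fields); control IDs here are illustrative only.
report = JSON.parse(<<~JSON)
  {
    "profiles": [
      { "controls": [
        { "id": "nginx-version",   "results": [ { "status": "passed" } ] },
        { "id": "nginx-conf-file", "results": [ { "status": "failed" } ] }
      ] }
    ]
  }
JSON

# Tally result statuses across all profiles and controls.
tally = Hash.new(0)
report['profiles'].each do |profile|
  profile['controls'].each do |control|
    control['results'].each { |result| tally[result['status']] += 1 }
  end
end

puts tally  # counts per status, e.g. passed/failed
```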
We will go more in depth on this example in the next two sections, but if you want a head start, you can give running InSpec a try by running this command in your Codespace terminal.
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx --reporter cli
+
In the above example, we are testing an NGINX server. We get the InSpec profile (all of the tests) from GitHub by stating https://github.com/mitre/nginx-stigready-baseline
. We use the NGINX target that is running via docker in our Codespace environment by stating docker://nginx
, we do not put any extra flags in this example, and lastly, we only report the results to the terminal (in other words, cli output). Later we will refine this command and talk through it in more detail.
Note: The first time you run InSpec, it will likely ask you to accept Chef's license like this:
+---------------------------------------------+
+ Chef License Acceptance
+
+Before you can continue, 1 product license
+must be accepted. View the license at
+https://www.chef.io/end-user-license-agreement/
+
+License that need accepting:
+ * Chef InSpec
+
+Do you accept the 1 product license (yes/no)?
+
+>
+
You can type yes
and press enter. This will only happen one time.
The -t
flag (or --target
flag in long form) specifies what target you want InSpec to scan. How you connect to that target is via a transport. Transports use standard ports and protocols. Some examples are SSH, WinRM, AWS SSM, Docker, and Kubernetes.
inspec exec https://github.com/mitre/nginx-stigready-baseline
+ -t docker://instance_id
+ --input-file <path_to_your_input_file/name_of_your_input_file.yml>
+ --reporter json:<path_to_your_output_file/name_of_your_output_file.json>
+
inspec exec https://github.com/mitre/nginx-stigready-baseline
+ -t ssh://Username:Password@IP
+ --input-file <path_to_your_input_file/name_of_your_input_file.yml>
+ --reporter json:<path_to_your_output_file/name_of_your_output_file.json>
+
Defaults
Note that if you omit one of these flags from the inspec exec command, InSpec falls back to default behavior.
Missing Flag | Default Behavior |
---|---|
No target (-t or --target) | Uses your local machine (where InSpec is running) as the target. |
No --reporter flag | Prints results to the terminal on the InSpec runner machine. |
Your my_nginx
profile is off to a great start. As your requirements evolve, you can add additional controls. You can also run this profile as often as you need to verify whether your systems remain in compliance.
Let's review the control file, example.rb
.
control 'nginx-version' do
+ impact 1.0
+ title 'NGINX version'
+ desc 'The required version of NGINX should be installed.'
+ describe nginx do
+ its('version') { should cmp >= '1.10.3' }
+ end
+end
+
+control 'nginx-modules' do
+ impact 1.0
+ title 'NGINX modules'
+ desc 'The required NGINX modules should be installed.'
+ describe nginx do
+ its('modules') { should include 'http_ssl' }
+ its('modules') { should include 'stream_ssl' }
+ its('modules') { should include 'mail_ssl' }
+ end
+end
+
+control 'nginx-conf-file' do
+ impact 1.0
+ title 'NGINX configuration file'
+ desc 'The NGINX config file should exist.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_file }
+ end
+end
+
+control 'nginx-conf-perms' do
+ impact 1.0
+ title 'NGINX configuration'
+  desc 'The NGINX config file should be owned by root, be writable only by the owner, and not be readable, writable, or executable by others.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_owned_by 'root' }
+ it { should be_grouped_into 'root' }
+ it { should_not be_readable.by('others') }
+ it { should_not be_writable.by('others') }
+ it { should_not be_executable.by('others') }
+ end
+end
+
+control 'nginx-shell-access' do
+ impact 1.0
+ title 'NGINX shell access'
+ desc 'The NGINX shell access should be restricted to admin users.'
+ describe users.shells(/bash/).usernames do
+ it { should be_in ['admin']}
+ end
+end
+
Although these rules do what you expect, imagine your control file contains dozens or hundreds of tests. As the data you check for, such as the version or which modules are installed, evolves, it can become tedious to locate and update your tests. You may also find that you repeat the same data across multiple control files.
One way to improve these tests is to use inputs
. Inputs
enable you to separate the logic of your tests from the data of your tests. Input files
are typically expressed as a YAML
file (files ending in .yaml
or .yml
).
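Under the hood, an input file is ordinary YAML. As a quick sketch (with hypothetical input names, and the file contents inlined as a string rather than read from disk), this is essentially how such a file parses in Ruby:

```ruby
require 'yaml'

# Inline stand-in for a standalone inputs file such as inputs.yml;
# the keys and values are hypothetical examples.
raw = <<~YAML
  nginx_version: 1.10.3
  admin_users:
    - admin
YAML

inputs = YAML.safe_load(raw)
p inputs['nginx_version']  # "1.10.3" -- parsed as a string, not a number
p inputs['admin_users']    # ["admin"]
```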
Profile Inputs
exist in your profile's main directory within the inspec.yml
for global inputs
to be used across all the controls in your profile.
Let's create the inspec.yml
file for our profile:
name: my_nginx
+title: InSpec Profile
+maintainer: The Authors
+copyright: The Authors
+copyright_email: you@example.com
+license: Apache-2.0
+summary: An InSpec Compliance Profile
+version: 0.1.0
+supports:
+ platform: os
+
+inputs:
+ - name: nginx_version
+ type: String
+ value: 1.10.3
+
To access an input, use the input keyword. You can use it anywhere in your control code.
For example:
control 'nginx-version' do
+ impact 1.0
+ title 'NGINX version'
+ desc 'The required version of NGINX should be installed.'
+ describe nginx do
+ its('version') { should cmp >= input('nginx_version') }
+ end
+end
+
Our next control requires specific modules to be installed, so we will represent them as an array input.
Example of adding an array object of servers:
`,14),b=n("div",{class:"language-yaml line-numbers-mode","data-ext":"yml"},[n("pre",{class:"language-yaml"},[n("code",null,[s(" "),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(` servers + `),n("span",{class:"token key atrule"},"type"),n("span",{class:"token punctuation"},":"),s(` Array + `),n("span",{class:"token key atrule"},"value"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token punctuation"},"-"),s(` server1 + `),n("span",{class:"token punctuation"},"-"),s(` server2 + `),n("span",{class:"token punctuation"},"-"),s(` server3 +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),y=n("div",{class:"language-yaml line-numbers-mode","data-ext":"yml"},[n("pre",{class:"language-yaml"},[n("code",null,[s(" "),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(` nginx_modules + `),n("span",{class:"token key atrule"},"type"),n("span",{class:"token punctuation"},":"),s(` Array + `),n("span",{class:"token key atrule"},"value"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token punctuation"},"-"),s(` http_ssl + `),n("span",{class:"token punctuation"},"-"),s(` stream_ssl + `),n("span",{class:"token punctuation"},"-"),s(` mail_ssl +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),g=n("div",{class:"language-yaml line-numbers-mode","data-ext":"yml"},[n("pre",{class:"language-yaml"},[n("code",null,[n("span",{class:"token key atrule"},"name"),n("span",{class:"token 
punctuation"},":"),s(` my_nginx +`),n("span",{class:"token key atrule"},"title"),n("span",{class:"token punctuation"},":"),s(` InSpec Profile +`),n("span",{class:"token key atrule"},"maintainer"),n("span",{class:"token punctuation"},":"),s(` The Authors +`),n("span",{class:"token key atrule"},"copyright"),n("span",{class:"token punctuation"},":"),s(` The Authors +`),n("span",{class:"token key atrule"},"copyright_email"),n("span",{class:"token punctuation"},":"),s(` you@example.com +`),n("span",{class:"token key atrule"},"license"),n("span",{class:"token punctuation"},":"),s(" Apache"),n("span",{class:"token punctuation"},"-"),n("span",{class:"token number"},"2.0"),s(` +`),n("span",{class:"token key atrule"},"summary"),n("span",{class:"token punctuation"},":"),s(` An InSpec Compliance Profile +`),n("span",{class:"token key atrule"},"version"),n("span",{class:"token punctuation"},":"),s(` 0.1.0 +`),n("span",{class:"token key atrule"},"supports"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"platform"),n("span",{class:"token punctuation"},":"),s(` os + +`),n("span",{class:"token key atrule"},"inputs"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(` nginx_version + `),n("span",{class:"token key atrule"},"type"),n("span",{class:"token punctuation"},":"),s(` String + `),n("span",{class:"token key atrule"},"value"),n("span",{class:"token punctuation"},":"),s(` 1.10.3 + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(` nginx_modules + `),n("span",{class:"token key atrule"},"type"),n("span",{class:"token punctuation"},":"),s(` Array + `),n("span",{class:"token key atrule"},"value"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token punctuation"},"-"),s(` http_ssl + 
`),n("span",{class:"token punctuation"},"-"),s(` stream_ssl + `),n("span",{class:"token punctuation"},"-"),s(` mail_ssl +`)])]),n("div",{class:"highlight-lines"},[n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("br"),n("div",{class:"highlight-line"}," "),n("div",{class:"highlight-line"}," "),n("div",{class:"highlight-line"}," "),n("div",{class:"highlight-line"}," "),n("div",{class:"highlight-line"}," "),n("div",{class:"highlight-line"}," ")]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),h=i(`Your control can be changed to look like this:
control 'nginx-modules' do
+ impact 1.0
+ title 'NGINX modules'
+ desc 'The required NGINX modules should be installed.'
+
+ nginx_modules = input('nginx_modules')
+
+ describe nginx do
+ nginx_modules.each do |current_module|
+ its('modules') { should include current_module }
+ end
+ end
+end
+
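To see why the loop in that control produces one test per module, here is a plain-Ruby sketch of the iteration (the generated strings are just illustrations of the expectations InSpec would build, not real InSpec internals):

```ruby
# Each element of the input array yields one generated expectation.
nginx_modules = ['http_ssl', 'stream_ssl', 'mail_ssl']

generated = nginx_modules.map do |current_module|
  "its('modules') { should include '#{current_module}' }"
end

puts generated.length  # one expectation per module
```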
Lastly, we can edit our inspec.yml
file to create a list of admin users:
Your fifth control can be changed to look like this:
control 'nginx-shell-access' do
+ impact 1.0
+ title 'NGINX shell access'
+ desc 'The NGINX shell access should be restricted to admin users.'
+ describe users.shells(/bash/).usernames do
+ it { should be_in input('admin_users')}
+ end
+end
+
To change your inputs for platform-specific cases, you can set up multiple input files.
For example, an NGINX web server could be run on a Windows or Linux machine, which may require different admin users for each context. The inputs can be tailored for each system. You can create the inputs-windows.yml
and inputs-linux.yml
files in your home directory.
Note
Another example: production and development environments may require different inputs.
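For instance, the two files might differ only in their list of admin users (the usernames below are hypothetical placeholders):

```yaml
# inputs-linux.yml
admin_users:
  - admin

# inputs-windows.yml  (a separate file; shown together here for comparison)
admin_users:
  - Administrator
```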
The following command runs the tests and applies the inputs specified, first, on the Linux system:
inspec exec ./my_nginx -t docker://nginx --input-file inputs-linux.yml
+
And, on our Windows systems:
inspec exec ./my_nginx -t docker://nginx --input-file inputs-windows.yml
+
bundle exec kitchen list
➜ redhat-enterprise-linux-8-stig-baseline git:(main*)bundle exec kitchen list
+Instance Driver Provisioner Verifier Transport Last Action Last Error
+vanilla-ubi8 Dokken Dummy Inspec Dokken <Not Created> <None>
+hardened-ubi8 Dokken Dummy Inspec Dokken <Not Created> <None>
+
bundle exec kitchen create vanilla
-----> Starting Test Kitchen (v3.5.1)
+-----> Creating <vanilla-ubi8>...
+ Creating kitchen sandbox at /Users/alippold/.dokken/kitchen_sandbox/de2da32d73-vanilla-ubi8
+ Creating verifier sandbox at /Users/alippold/.dokken/verifier_sandbox/de2da32d73-vanilla-ubi8
+ Building work image..
+ Creating container de2da32d73-vanilla-ubi8
+ Finished creating <vanilla-ubi8> (0m0.88s).
+-----> Test Kitchen is finished. (0m1.77s)
+
bundle exec kitchen converge vanilla
➜ redhat-enterprise-linux-8-stig-baseline git:(main*)bundle exec kitchen converge vanilla
+-----> Starting Test Kitchen (v3.5.1)
+-----> Converging <vanilla-ubi8>...
+ ...
+ Finished converging <vanilla-ubi8> (0m0.00s).
+-----> Test Kitchen is finished. (0m0.88s)
+
bundle exec kitchen verify vanilla
-----> Starting Test Kitchen (v3.5.1)
+-----> Verifying <vanilla-ubi8>...
+ Loaded redhat-enterprise-linux-8-stig-baseline
+
+Profile: redhat-enterprise-linux-8-stig-baseline (redhat-enterprise-linux-8-stig-baseline)
+Version: 1.12.0
+Target: docker://c4e89b7406dc0ebf8658fe90d6384d69885a7f07ab9bfbc91c85c64483868c44
+Target ID: da39a3ee-5e6b-5b0d-b255-bfef95601890
+
+ × SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date. (4 failed)
+...
+
+Profile Summary: 0 successful controls, 1 control failure, 0 controls skipped
+Test Summary: 0 successful, 4 failures, 0 skipped
+
The error below is just Test Kitchen telling you that not all of the controls in the profile passed.
>>>>>> ------Exception-------
+>>>>>> Class: Kitchen::ActionFailed
+>>>>>> Message: 1 actions failed.
+>>>>>> Verify failed on instance <vanilla-ubi8>. Please see .kitchen/logs/vanilla-ubi8.log for more details
+>>>>>> ----------------------
+>>>>>> Please see .kitchen/logs/kitchen.log for more details
+>>>>>> Also try running \`kitchen diagnose --all\` for configuration
+
./spec/results/
directory, named ./spec/results/ubi-8_*.
hardened
and vanilla
results to ensure your changes and updates "failed as expected, passed as expected, and covered your corner cases."We have our project created and have allowed access to everyone who needs it. Now let's create ourselves a Component.
The first thing we need to do when building a Component is determine what set of requirements applies to it.
A helpful question to keep in mind for this decision is "What is the purpose of the software you are securing? What is its role?" This will determine what guidance document we want to use as a foundation for our content.
Let's take a look at the options we have for a foundation.
You'll see options in the top navbar of Vulcan for "SRGs" and "STIGs." These links lead to the lists of security guidance documents already saved to Vulcan. We can use any of these as a template for our own content.
We have pre-loaded this Vulcan instance with a few SRGs to get us started. Since we're writing content for the RHEL9 operating system, we're going to want to use the General Purpose Operating System Security Requirements Guide.
DISA and SRG selection
Note that if you intend to formally publish your STIG, DISA will tell you which one to use based off the description of your software that you give them.
Application STIGs
Applications (as opposed to software like operating systems, webservers and routers) that undergo the STIG process all should be using the Application Security and Development STIG as a foundation document.
Remember that a STIG can itself be used as the foundation for a tailored security baseline document!
Let's create a new Component to track our RHEL9 work.
Here, you'll select the SRG we decided on earlier.
*Optional.
STIG ID Prefixes
If you intend to formally publish your STIG, DISA will eventually assign these for you. These are just a placeholder value for now to allow us to track requirements inside Vulcan itself.
Other Ways of Loading Components
Vulcan allows you to import Components as well as creating them brand-new. You are able to load from your own released Components in your Vulcan instance, or even from a spreadsheet.
Let's crack open what we just created.
The page should look something like this:
On the right-hand side of the Vulcan window, if we don't have a requirement selected, we can see metadata about the overall Component, including an edit history.
The left-hand side of the Vulcan window shows us the list of each requirement in the Component, and can be filtered by keyword, control status (which we will discuss in the next section) or review status. Note that each control is labeled with the STIG ID prefix that you gave this Component earlier. You can click on the requirement IDs in this view to see their contents.
When first created, a new Component's requirements will all be exact copies of the SRG or other underlying document we used as a foundation. Our job is to edit these controls to make them specific, actionable implementation guidance.
',35),h=[p];function u(d,m){return o(),n("div",null,h)}const y=t(c,[["render",u],["__file","06.html.vue"]]);export{y as default}; diff --git a/assets/06.html-lc3SD2e2.js b/assets/06.html-lc3SD2e2.js new file mode 100644 index 000000000..ca9f9ac63 --- /dev/null +++ b/assets/06.html-lc3SD2e2.js @@ -0,0 +1,255 @@ +import{_ as c}from"./TestDrivenDevelopment-M1Sg-EUL.js";import{_ as r}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as p,o as u,c as d,b as o,w as e,d as s,f as l,e as n}from"./app-PAvzDPkc.js";const k={},m=l(`Now let's try a more complicated example. Let's say we want to create a resource that can parse a docker-compose
file.
First, we need a test target! Check out the resources/docker-compose.yml
file in Codespaces for what we can test. It looks like this:
version: '3'
+services:
+ workstation:
+ container_name: workstation
+ image: learnchef/inspec_workstation
+ stdin_open: true
+ tty: true
+ links:
+ - target
+ volumes:
+ - .:/root
+ target:
+ image: learnchef/inspec_target
+ stdin_open: true
+ tty: true
+
We will continue writing our controls to check against this docker file:
`,6),v=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,`inspec init profile docker-workstations +`)]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"})])],-1),b=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n(` ─────────────────────────── InSpec Code Generator ─────────────────────────── + +Creating new profile at /workspaces/saf-training-lab-environment/docker-workstations + • Creating `),s("span",{class:"token function"},"file"),n(` inspec.yml + • Creating directory /workspaces/saf-training-lab-environment/docker-workstations/controls + • Creating `),s("span",{class:"token function"},"file"),n(` controls/example.rb + • Creating `),s("span",{class:"token function"},"file"),n(` README.md +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"})])],-1),g=l(`Conceptually, we want to write tests with this profile that will check different settings in a docker-compose file. If you are not sure what the InSpec code looks like for a particular test, start by writing what conceptually you want to achieve, then modify it to be correct syntax. We can do that with the idea of checking a setting in a docker-compose file, which we know is a YAML file, as such:
In the docker-workstations/controls/example.rb
file, write the control:
describe yaml('file_name') do
+ its('setting') { should_not eq 'value' }
+end
+
We test early and often. We know that the test we wrote is not complete, but we can see if we are on the right track. Remember that the command line output can help guide your development!
`,5),h=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("inspec "),s("span",{class:"token builtin class-name"},"exec"),n(` docker-workstations +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"})])],-1),f=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("Profile: InSpec Profile "),s("span",{class:"token punctuation"},"("),n("docker-workstations"),s("span",{class:"token punctuation"},")"),n(` +Version: `),s("span",{class:"token number"},"0.1"),n(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + YAML file_name + ↺ Can't `),s("span",{class:"token function"},"find"),n(` file: file_name + +Test Summary: `),s("span",{class:"token number"},"0"),n(" successful, "),s("span",{class:"token number"},"0"),n(" failures, "),s("span",{class:"token number"},"1"),n(` skipped +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"})])],-1),w=l(`We need to replace the file_name
above with the location of the docker-compose.yml
file. We also need to change the setting
to grab the tag we want to retrieve. Finally we need to change value
with the actual value as shown in the docker compose file. You can write multiple expectation statements in the describe block.
describe yaml('/path/to/docker-compose.yml') do
+ its(['services', 'workstation', 'image']) { should eq 'learnchef/inspec_workstation' }
+ its(['services', 'workstation', 'volumes']) { should cmp '.:/root' }
+end
+
Now if we test this control using the following command we should see all the tests pass.
`,3),y=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("inspec "),s("span",{class:"token builtin class-name"},"exec"),n(` docker-workstations +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"})])],-1),_=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("Profile: InSpec Profile "),s("span",{class:"token punctuation"},"("),n("docker-workstations"),s("span",{class:"token punctuation"},")"),n(` +Version: `),s("span",{class:"token number"},"0.1"),n(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + YAML /workspaces/saf-training-lab-environment/resources/docker-compose.yml + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"image"'),s("span",{class:"token punctuation"},"]"),n(" is expected to eq "),s("span",{class:"token string"},'"learnchef/inspec_workstation"'),n(` + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"volumes"'),s("span",{class:"token punctuation"},"]"),n(" is expected to "),s("span",{class:"token function"},"cmp"),n(),s("span",{class:"token operator"},"=="),n(),s("span",{class:"token string"},'".:/root"'),n(` + +Test Summary: `),s("span",{class:"token number"},"2"),n(" successful, "),s("span",{class:"token number"},"0"),n(" failures, "),s("span",{class:"token number"},"0"),n(` skipped 
+`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"})])],-1),x=l(`If you received an error above! - Concept Check
If you saw this as your output:
Profile: InSpec Profile (docker-workstations)
+Version: 0.1.0
+Target: local://
+Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce
+
+ YAML /path/to/docker-compose.yml
+ ↺ Can't find file: /path/to/docker-compose.yml
+
+Test Summary: 0 successful, 0 failures, 1 skipped
+
It is because you did not give YOUR path to the docker-compose file. You need to replace the path in your example.rb
file to be something like this:
describe yaml('/workspaces/saf-training-lab-environment/resources/docker-compose.yml') do
+ its(['services', 'workstation', 'image']) { should eq 'learnchef/inspec_workstation' }
+ its(['services', 'workstation', 'volumes']) { should cmp '.:/root' }
+end
+
Going back to the control, we will rewrite it using a resource that does not exist yet, called docker_compose_config, which will take a path as a parameter.
Remember the idea of Test Driven Development (TDD): the red-green-clean cycle. Because development is driven by the tests, you always know when you have succeeded while building something new. Before writing a solution, first write the test (which will fail - red) so that you know exactly what the expectation is and when you have met it. Then write the solution that makes the test pass (green). Finally, clean up the solution to make it readable and efficient!
In the libraries
directory of the profile we will make a docker_compose_config.rb
file, , the content of the file should look like this:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class DockerComposeConfig < Inspec.resource(1)
+
+ name 'docker_compose_config'
+
+end
+
We will get an error saying we gave it the wrong number of arguments: was given 1 but expected 0
. This is because every class in Ruby that has a parameter must have an initialize function to accept that parameter.
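This is ordinary Ruby behavior rather than anything InSpec-specific. A minimal standalone illustration (the class and names below are hypothetical):

```ruby
# A plain Ruby class whose constructor takes one parameter. Without this
# initialize method, Greeter.new('world') would raise
# "wrong number of arguments (given 1, expected 0)".
class Greeter
  def initialize(name)
    @name = name
  end

  def greet
    "hello, #{@name}"
  end
end

puts Greeter.new('world').greet  # hello, world
```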
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class DockerComposeConfig < Inspec.resource(1)
+
+ name 'docker_compose_config'
+
+ def initialize(path)
+ @path = path
+ end
+
+end
+
Now let's run the profile once more.
`,3),W=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("inspec "),s("span",{class:"token builtin class-name"},"exec"),n(` docker-workstations +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"})])],-1),z=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("Profile: InSpec Profile "),s("span",{class:"token punctuation"},"("),n("docker-workstations"),s("span",{class:"token punctuation"},")"),n(` +Version: `),s("span",{class:"token number"},"0.1"),n(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + YAML /workspaces/saf-training-lab-environment/resources/docker-compose.yml + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"image"'),s("span",{class:"token punctuation"},"]"),n(" is expected to eq "),s("span",{class:"token string"},'"learnchef/inspec_workstation"'),n(` + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"volumes"'),s("span",{class:"token punctuation"},"]"),n(" is expected to "),s("span",{class:"token function"},"cmp"),n(),s("span",{class:"token operator"},"=="),n(),s("span",{class:"token string"},'".:/root"'),n(` + docker_compose_config + × services.workstation.image + undefined method `),s("span",{class:"token variable"},[s("span",{class:"token variable"},"`"),n("services' "),s("span",{class:"token keyword"},"for"),n(),s("span",{class:"token comment"},"#<#This time the profile runs, but we get a message that the docker_compose_config
resource does not have the services
method. So let's define that method now:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class DockerComposeConfig < Inspec.resource(1)
+
+ name 'docker_compose_config'
+
+ def initialize(path)
+ @path = path
+ end
+
+ def services
+
+ end
+
+end
+
Start by just defining the services
method. Then, let's run the profile once more.
Now we get a different failure telling us that a nil
value was returned. So now we will implement the services method properly. We will use an already existing InSpec resource to parse the file at the given path.
# encoding: utf-8
+# copyright: 2019, The Authors
+
+class DockerComposeConfig < Inspec.resource(1)
+
+ name 'docker_compose_config'
+
+ def initialize(path)
+ @path = path
+ @yaml = inspec.yaml(path)
+ end
+
+ def services
+ @yaml['services']
+ end
+
+end
+
Now let's run the profile once more.
`,3),j=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("inspec "),s("span",{class:"token builtin class-name"},"exec"),n(` docker-workstations +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"})])],-1),B=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("Profile: InSpec Profile "),s("span",{class:"token punctuation"},"("),n("docker-workstations"),s("span",{class:"token punctuation"},")"),n(` +Version: `),s("span",{class:"token number"},"0.1"),n(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + YAML /workspaces/saf-training-lab-environment/resources/docker-compose.yml + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"image"'),s("span",{class:"token punctuation"},"]"),n(" is expected to eq "),s("span",{class:"token string"},'"learnchef/inspec_workstation"'),n(` + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"volumes"'),s("span",{class:"token punctuation"},"]"),n(" is expected to "),s("span",{class:"token function"},"cmp"),n(),s("span",{class:"token operator"},"=="),n(),s("span",{class:"token string"},'".:/root"'),n(` + docker_compose_config + × services.workstation.image + undefined method `),s("span",{class:"token variable"},[s("span",{class:"token variable"},"`"),n("workstation' "),s("span",{class:"token keyword"},"for"),n(),s("span",{class:"token operator"},"<"),n("Hash:0x0000000003abada"),s("span",{class:"token operator"},[s("span",{class:"token file-descriptor important"},"8"),n(">")]),n(` + × services.workstation.volumes + undefined method `),s("span",{class:"token 
variable"},"`")]),n("workstation' "),s("span",{class:"token keyword"},"for"),n(),s("span",{class:"token operator"},"<"),n("Hash:0x0000000003abada"),s("span",{class:"token operator"},[s("span",{class:"token file-descriptor important"},"8"),n(">")]),n(` + +Test Summary: `),s("span",{class:"token number"},"2"),n(" successful, "),s("span",{class:"token number"},"2"),n(" failures, "),s("span",{class:"token number"},"0"),n(` skipped +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"})])],-1),X=l(`You will notice that it parses correctly, but instead of our result we end up getting a hash. We need to convert the hash into an object that appears like other objects so that we may use our dot notation. So we will wrap our hash in a Ruby class called a Hashie::Mash
. This gives us a quick way to convert a hash into a Ruby object with a number of accessor methods attached to it. You will need to install the Hashie library by running gem install hashie
and require it in the resource file. The resource is written as follows:
# encoding: utf-8
+# copyright: 2019, The Authors
+
+require "hashie/mash"
+
+class DockerComposeConfig < Inspec.resource(1)
+
+ name 'docker_compose_config'
+
+ def initialize(path)
+ @path = path
+ @yaml = inspec.yaml(path)
+ end
+
+ def services
+ Hashie::Mash.new(@yaml['services'])
+ end
+
+end
+
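If you are curious how this dot-notation trick works under the hood, here is a minimal stdlib-only sketch. This is a stand-in for illustration, not the real Hashie::Mash, which does much more (deep merging, default values, indifferent access):

```ruby
# Minimal stdlib-only stand-in for Hashie::Mash, for illustration only.
# It recursively wraps nested Hashes so keys can be read with dot notation.
class MiniMash
  def initialize(hash)
    @hash = hash
  end

  # Unknown method calls are treated as hash lookups by key name.
  def method_missing(name, *args)
    return super unless @hash.key?(name.to_s)
    value = @hash[name.to_s]
    value.is_a?(Hash) ? MiniMash.new(value) : value
  end

  def respond_to_missing?(name, include_private = false)
    @hash.key?(name.to_s) || super
  end
end

config = MiniMash.new(
  "services" => { "workstation" => { "image" => "learnchef/inspec_workstation" } }
)
puts config.services.workstation.image
# => learnchef/inspec_workstation
```

The real Hashie::Mash does this wrapping for us, which is why `services.workstation.image` works in the resource above.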
Let's run the profile again.
`,3),U=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("inspec "),s("span",{class:"token builtin class-name"},"exec"),n(` docker-workstations +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"})])],-1),K=s("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[s("pre",{class:"language-bash"},[s("code",null,[n("Profile: InSpec Profile "),s("span",{class:"token punctuation"},"("),n("docker-workstations"),s("span",{class:"token punctuation"},")"),n(` +Version: `),s("span",{class:"token number"},"0.1"),n(`.0 +Target: local:// +Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce + + YAML /workspaces/saf-training-lab-environment/resources/docker-compose.yml + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"image"'),s("span",{class:"token punctuation"},"]"),n(" is expected to eq "),s("span",{class:"token string"},'"learnchef/inspec_workstation"'),n(` + ✔ `),s("span",{class:"token punctuation"},"["),s("span",{class:"token string"},'"services"'),n(", "),s("span",{class:"token string"},'"workstation"'),n(", "),s("span",{class:"token string"},'"volumes"'),s("span",{class:"token punctuation"},"]"),n(" is expected to "),s("span",{class:"token function"},"cmp"),n(),s("span",{class:"token operator"},"=="),n(),s("span",{class:"token string"},'".:/root"'),n(` + docker_compose_config + ✔ services.workstation.image is expected to eq `),s("span",{class:"token string"},'"learnchef/inspec_workstation"'),n(` + ✔ services.workstation.volumes is expected to `),s("span",{class:"token function"},"cmp"),n(),s("span",{class:"token operator"},"=="),n(),s("span",{class:"token string"},'".:/root"'),n(` + +Test Summary: `),s("span",{class:"token number"},"4"),n(" successful, "),s("span",{class:"token number"},"0"),n(" failures, 
"),s("span",{class:"token number"},"0"),n(` skipped +`)])]),s("div",{class:"line-numbers","aria-hidden":"true"},[s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"}),s("div",{class:"line-number"})])],-1),Z=s("p",null,"Everything passed!",-1),J=s("div",{class:"hint-container info"},[s("p",{class:"hint-container-title"},"Check your work"),s("p",null,"Check your work with the InSpec video below that walks through this docker resource example!")],-1),Q=s("div",{class:"video-container"},[s("iframe",{width:"1462",height:"762",src:"https://www.youtube.com/embed/9rbb2RWa9Oo?list=PLSZbtIlMt5rcbXOpMRucKzRMXR7HX7awy",title:"YouTube video player",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:""})],-1);function $(ss,ns){const i=p("CodeTabs");return u(),d("div",null,[m,o(i,{id:"16",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[v]),tab1:e(({value:a,isActive:t})=>[b]),_:1}),g,o(i,{id:"37",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[h]),tab1:e(({value:a,isActive:t})=>[f]),_:1}),w,o(i,{id:"52",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[y]),tab1:e(({value:a,isActive:t})=>[_]),_:1}),x,o(i,{id:"81",data:[{id:"Tests"},{id:"Generic Tests"}]},{title0:e(({value:a,isActive:t})=>[n("Tests")]),title1:e(({value:a,isActive:t})=>[n("Generic 
Tests")]),tab0:e(({value:a,isActive:t})=>[C]),tab1:e(({value:a,isActive:t})=>[A]),_:1}),T,o(i,{id:"92",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[I]),tab1:e(({value:a,isActive:t})=>[S]),_:1}),D,s("div",O,[R,P,o(i,{id:"111",data:[{id:"Command"},{id:"Options"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Options")]),tab0:e(({value:a,isActive:t})=>[q]),tab1:e(({value:a,isActive:t})=>[M]),_:1})]),N,o(i,{id:"123",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[Y]),tab1:e(({value:a,isActive:t})=>[V]),_:1}),F,o(i,{id:"138",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[W]),tab1:e(({value:a,isActive:t})=>[z]),_:1}),E,o(i,{id:"153",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[L]),tab1:e(({value:a,isActive:t})=>[H]),_:1}),G,o(i,{id:"168",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[j]),tab1:e(({value:a,isActive:t})=>[B]),_:1}),X,o(i,{id:"183",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[n("Command")]),title1:e(({value:a,isActive:t})=>[n("Output")]),tab0:e(({value:a,isActive:t})=>[U]),tab1:e(({value:a,isActive:t})=>[K]),_:1}),Z,J,Q])}const is=r(k,[["render",$],["__file","06.html.vue"]]);export{is as default}; diff --git a/assets/06.html-mS6M1fnL.js b/assets/06.html-mS6M1fnL.js new file mode 100644 index 000000000..80067258e --- /dev/null +++ b/assets/06.html-mS6M1fnL.js @@ -0,0 +1 @@ +const 
e=JSON.parse('{"key":"v-0f54d1bf","path":"/courses/user/06.html","title":"6. How to Run InSpec","lang":"en-US","frontmatter":{"order":6,"next":"07.md","title":"6. How to Run InSpec","author":"Aaron Lippold","headerDepth":3,"description":"6. How to Run InSpec In this section, we will talk about how to run InSpec. In Section 8 (./08.md), you will put this into practice! 6.1 Requirements To run InSpec, you must hav...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/user/06.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"6. How to Run InSpec"}],["meta",{"property":"og:description","content":"6. How to Run InSpec In this section, we will talk about how to run InSpec. In Section 8 (./08.md), you will put this into practice! 6.1 Requirements To run InSpec, you must hav..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"6. How to Run InSpec\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"6. 
How to Run InSpec","slug":"_6-how-to-run-inspec","link":"#_6-how-to-run-inspec","children":[{"level":3,"title":"6.1 Requirements","slug":"_6-1-requirements","link":"#_6-1-requirements","children":[]},{"level":3,"title":"6.2 The InSpec Command Formula","slug":"_6-2-the-inspec-command-formula","link":"#_6-2-the-inspec-command-formula","children":[]},{"level":3,"title":"6.3 How to Deploy InSpec","slug":"_6-3-how-to-deploy-inspec","link":"#_6-3-how-to-deploy-inspec","children":[]}]}],"git":{},"readingTime":{"minutes":3.46,"words":1039},"filePathRelative":"courses/user/06.md","autoDesc":true}');export{e as data}; diff --git a/assets/06.html-ubMevqRY.js b/assets/06.html-ubMevqRY.js new file mode 100644 index 000000000..94d8dec53 --- /dev/null +++ b/assets/06.html-ubMevqRY.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-2abb3b46","path":"/courses/beginner/06.html","title":"6. Inputs in InSpec","lang":"en-US","frontmatter":{"order":6,"next":"07.md","title":"6. Inputs in InSpec","author":"Aaron Lippold","description":"Refactoring the code to use Inputs Your my_nginx profile is off to a great start. As your requirements evolve, you can add additional controls. You can also run this profile as ...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/beginner/06.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"6. Inputs in InSpec"}],["meta",{"property":"og:description","content":"Refactoring the code to use Inputs Your my_nginx profile is off to a great start. As your requirements evolve, you can add additional controls. You can also run this profile as ..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"6. 
Inputs in InSpec\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"Refactoring the code to use Inputs","slug":"refactoring-the-code-to-use-inputs","link":"#refactoring-the-code-to-use-inputs","children":[]},{"level":2,"title":"Input File Example","slug":"input-file-example","link":"#input-file-example","children":[]}],"git":{},"readingTime":{"minutes":3.39,"words":1017},"filePathRelative":"courses/beginner/06.md","autoDesc":true}');export{e as data}; diff --git a/assets/07.html-AUiZ2lcd.js b/assets/07.html-AUiZ2lcd.js new file mode 100644 index 000000000..7a5cdc152 --- /dev/null +++ b/assets/07.html-AUiZ2lcd.js @@ -0,0 +1,54 @@ +import{_ as s}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as a,o,c as r,d as t,e,b as l,a as p,f as n}from"./app-PAvzDPkc.js";const c="/saf-training/assets/Codespaces_InputFile_NGINX-fuC7bzWS.png",u={},d=n('Every InSpec profile on the SAF site is written to comply with some security guidance. However, every team's environment may be just a little bit different. For example, the path to a file may be different in different environments, or the list of permitted users for a certain system may vary with the environment.
To accommodate for these kinds of differences, InSpec profiles utilize inputs. In the previous section, we ran the InSpec profile on the NGINX component without specifying any inputs. This means that it just used the defaults. Now, let's review these variables and decide which inputs we want to change for our environment.
Best Practice
It is best practice to always run profiles with inputs so that the profile is properly tailored to your environment.
inspec.yml
file)This profile uses InSpec Inputs to make the tests more flexible. You can provide inputs at runtime either via the CLI or via YAML files to help the profile work best in your deployment.
Caution
DO NOT change the inputs in the inspec.yml
file. This is where the variables and their defaults are defined.
DO create a separate file (often named inputs.yml
) or pass values via the command line to overwrite default values to tailor the profile.
The inputs
configured in the inspec.yml
file are the profile's input definitions and default values, not settings for the user to edit. Automated InSpec scans are frequently run from a script, inside a pipeline, or by some kind of task scheduler where the runner will often not have access to the inspec.yml
. However, those scripts or pipelines can easily pass an inputs file or command line arguments to modify the default values defined in the inspec.yml
file.
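As a sketch of both approaches, assuming a profile directory named `my_nginx` and an inputs file named `inputs.yml` (your paths and names may differ):

```shell
# Override the inspec.yml defaults with an inputs file:
inspec exec my_nginx --input-file inputs.yml

# Or override individual inputs directly on the command line:
inspec exec my_nginx --input nginx_owner=nginx uses_pki=false
```

Either way, the defaults in `inspec.yml` stay untouched; the overrides live alongside your deployment instead of inside the profile.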
To tailor the tested values to your deployment or to organizationally defined values, update the inputs.
',9),h={href:"https://docs.chef.io/inspec/inputs/",target:"_blank",rel:"noopener noreferrer"},f=n(`For the NGINX example, we are going to add the following inputs. Make a new file called inputs.yml
in your lab environment:
inputs.yml
file.---
+key_file_path: /etc/ssl/nginx-selfsigned.pem
+org_allowed_nginx_version: 1.23.1
+nginx_owner: "nginx"
+uses_pki: false
+sys_admin: ["root"]
+sys_admin_group: ["root"]
+
In your codespaces, it should look like this:
How do I find the values that should be in the input file?
Start by checking the README on the GitHub repository for that InSpec profile. Most of the profiles have a "Tailoring to Your Environment" section that leads you through what variables are available as inputs.
To determine the value itself, you should think about the environment, talk to your assessor, and explore the target to see if you can find the necessary information.
If the profile does not have a "Tailoring to Your Environment" section in their README, then you can reference the inspec.yml
file to see what inputs are defined and available and what their default values are. However, remember not to modify the inspec.yml
file itself.
What is the difference between tailoring an InSpec profile with inputs vs. overlays?
Inputs are meant to tailor the profile while still complying to the guidance document for which the profile is based.
Overlays are used in the case that the organization requirements differ from the security guidance. For example, if there are additional controls required or some controls not available for the organization's requirements.
In software development, the decision between making many small pull requests (micro PRs) or fewer, larger pull requests (massive PRs) often depends on the context. Both approaches have their benefits and challenges.
Micro PRs involve making frequent, small changes to the codebase. Each PR is focused on a single task or feature.
Benefits:
Challenges:
Massive PRs involve making larger, more comprehensive changes to the codebase. Each PR may encompass multiple tasks or features.
Benefits:
Challenges:
The choice between micro and massive PRs can significantly impact the workflows in our Patch Update
, Release Update
, and Major Version Update
.
In conclusion, the choice between micro and massive PRs depends on the specific needs and circumstances of your project. It's important to strike a balance that maximizes efficiency while minimizing risk, and fosters effective collaboration within your team.
',18);function g(f,p){const t=i("ExternalLinkIcon");return a(),n("div",null,[s("p",null,[e("This project follows the "),s("a",h,[e("GitFlow model"),o(t)]),e(" for managing the repository, accepting pull requests (PRs), and merging changes into the profile.")]),d])}const v=r(l,[["render",g],["__file","07.html.vue"]]);export{v as default}; diff --git a/assets/07.html-VaK3TpAa.js b/assets/07.html-VaK3TpAa.js new file mode 100644 index 000000000..3a5f8731b --- /dev/null +++ b/assets/07.html-VaK3TpAa.js @@ -0,0 +1 @@ +import{_ as i}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as a,o as n,c as o,b as s,f as e}from"./app-PAvzDPkc.js";const r="/saf-training/assets/edit_controls-MHnoPDv9.png",l="/saf-training/assets/selecting_controls-Pm5Uy1WO.png",c="/saf-training/assets/selected_control-HEYLnC55.png",h="/saf-training/assets/assigning_status-7fj2pa_U.png",u="/saf-training/assets/inherently_met_control-Rzkm4J9Q.png",d="/saf-training/assets/inherently_met_control_picking_status-KMCfe_FE.png",p="/saf-training/assets/justification-QYpJb_Oi.png",m="/saf-training/assets/saving_requirement-nk4uDv2L.png",f="/saf-training/assets/saving_requirement_comment-3OcjGmwr.png",g="/saf-training/assets/revision_history-7KQ-L5o4.png",y="/saf-training/assets/before_and_after-1suZerLp.png",b={},w=e('For your Component, you'll need to decide what requirements are appliable to your specific Component (hint: not all of them will be). Of the applicable requirements, you'll need to tailor them to give specific implementation guidance.
Controls vs. Requirements
You may note that Vulcan refers to the STIG requirements as "controls." A security control is an action taken by an organization in order to meet a security requirement.
STIGs are technically comprised of a set of requirements, but each requirement's main focus is describing a control to meet that requirement (i.e. the Check and Fix content).
You'll see a view of the requirement's text fields, like the vulnerability discussion, the check text, and the fix text.
Note how all of these text fields are:
We can't edit these text fields yet because we haven't yet told Vulcan if this requirement is even applicable to our Component. Let's fix that.
The process of tailoring SRG requirements into specific STIG controls first requires you to determine which of the following statuses applies to each requirement[1]:
Applicable – Configurable: The product requires configuration or the application of policy settings to achieve compliance.
Applicable – Inherently Meets: The product is compliant in its initial state and cannot be subsequently reconfigured to a noncompliant state.
Applicable – Does Not Meet: There are no technical means to achieve compliance.
Not Applicable: The requirement addresses a capability or use case that the product does not support.
If you select any status other than "Applicable - Configurable" for a requirement, you'll need to fill out a few fields explaining why you did so. We'll take a look at a requirement like that in a moment.
Based on the above definitions, we can use the following workflow to determine the right status.
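As a rough sketch, the decision workflow can be expressed as a small helper function. Note that this function and its parameter names are our own hypothetical illustration, not part of Vulcan:

```ruby
# Hypothetical sketch of the SRG applicability decision workflow.
# The parameter names are illustrative; they do not come from Vulcan.
def requirement_status(applies_to_product:, configurable:, meets_out_of_the_box:)
  # The requirement addresses a capability the product does not support.
  return "Not Applicable" unless applies_to_product
  # Compliance can be achieved through configuration or policy settings.
  return "Applicable - Configurable" if configurable
  # Otherwise the product either inherently meets the requirement,
  # or there are no technical means to achieve compliance.
  if meets_out_of_the_box
    "Applicable - Inherently Meets"
  else
    "Applicable - Does Not Meet"
  end
end

puts requirement_status(applies_to_product: true, configurable: true, meets_out_of_the_box: false)
# => Applicable - Configurable
```

Most SRG requirements will land in the "Applicable - Configurable" branch, but each one still has to be walked through the decision.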
',19),q=e('The requirement's title is "The operating system must audit all account creations."
This requirement does apply. RHEL9, like any other operating system, must have a functioning auditing system; no inherent aspect of RHEL would change this.
RHEL9, like all operating systems, has a built in auditing capability. The auditing capability is configurable (i.e. it is possible to have the system configured to either meet or not meet this requirement).
How do we know all this about the system?
If you are not familiar with the RHEL9 auditing system, don't worry; it's just an example we're using for the class. We promise we will not quiz you on how the auditd
service works.
If you have to develop STIG content for a project, it will concern a Component that you are familiar with enough to answer these questions (or are at least in a position to research).
We would consider this requirement Applicable - Configurable. The system is capable of complying with the SRG requirement, but only if properly configured.
Hint: Most SRG requirements wind up being applicable to Components. A handful may be either Not Applicable, Inherently Met or Inherently Not Met. We still have to check.
Note that once we select the status, the text fields become editable. Now we can tailor the general guidance from the SRG into specific guidance.
Before we do that, let's investigate a the Status field a bit more.
Our title here is "The operating system must obscure feedback of authentication information during the authentication process to protect the information from possible exploitation/use by unauthorized individuals."
Yes, this requirement still applies. Like most requirements, RHEL9 doesn't have any quirks that would make this requirement not apply.
However, you may know that RHEL (and all Linux OSes) obscure user passwords when they are entered either into the GUI or on the terminal. This behavior is baked into the RHEL source code -- there is no way for a user to configure the system to not do this.
As such, the status should be Applicable - Inherently Meets.
Notice that this time, several of the fields in the Vulcan editing window changed.
If a requirement is flagged as any of Not Applicable, Inherently Met, or Inherently Not Met, then we need to offer proof that this is the case, and give the end users guidance on how to mitigate the vulnerability if the requirement cannot be met.
If we select any of those statuses in Vulcan, we therefore get a different set of fields to complete. We can't describe them better than DISA can, so we will refer you to the Vendor STIG guide[2] for the definitions of the new fields.
Per Per the Vendor STIG Process Guide section 4.1.15 -
"The Mitigation offers a method for minimizing risk. Mitigations do not eliminate the need for the requirement.
The “Mitigation” field must be populated if the status of the requirement is Applicable – Does Not Meet.
After the mitigation, include a summary statement to address any impact to the overall risk associated with this requirement.
Example summary statements:
An “Applicable – Does Not Meet” vulnerability may be fully mitigated by the application of another STIG check or by the underlying operating system. In these instances, include a statement in the Mitigation as shown in the example below.
Examples of risk mitigated by other STIG requirements:
Per the Vendor STIG Process Guide section 4.1.16 -
"Populate this information for requirements that have a status of Applicable – Inherently Meets. The Artifact Description describes the artifacts or substantiating information that shows how the product inherently meets the requirement.
All self-certification claims must be accompanied by supporting vendor documentation, which taken as a whole, provides DISA with reasonable assurance that the particular requirement has been met.
This field provides citations to the documentary evidence that describe how each requirement is satisfied. Examples of artifacts include:
Note: Blogs and email messages are not sufficient documentation to support an Applicable – Inherently Meets status."
Per the Vendor STIG Process Guide section 4.1.17 -
"For requirements that have a status of Not Applicable:
For requirements that have a status of Applicable – Does Not Meet:
For requirements that have a status of Applicable – Inherently Meets
What about the Vendor Comments field?
You may note that picking any status in Vulcan opens the Vendor Comments field for editing.
That field is purely stored in Vulcan and will not be included in any exports. It exists for the content authors to leave comments on the process of writing the requirement.
For example, an author may add their references for a control's Check or Fix text in the Vendor Comment field, since a reviewer might like to know how they arrived at their conclusions.
We will not complete the Artifact field in RHEL-09-000045 because digging around in the RHEL9 source code is beyond the scope of this course.
RHEL9 automatically obfuscates user passwords in the graphical user interface and at the command line interface.
Like so:
Hit "Save Control" when finished.
You'll be asked to fill out a short description of the change you made. If you are familiar with a source code manager like Git, this process is analogous to adding a commit message when you add code to version control.
Once we save edits, we will see the Revision History on the right side of the screen automatically update:
Clicking the "Rule Was Updated" button will show a before and after view of what was changed by the edit.
You can even select changes to revert, again similar to how a source code manager lets you roll back changes.
Why would I revert changes?
Remember that writing requirements in Vulcan is intended to be done in groups. Authors can and do often disagree about how a requirement should be completed.
Having the ability to granularly revert edits -- and even just track what change was made, when, and by whom -- is an important part of a collaborative workflow.
Most of the time you won't need to use RSpec syntax to write a good test. But we want to show you a few neat tricks you can accomplish with RSpec.
We will write a few tests in this section to demonstrate the difference between InSpec's default syntax and RSpec syntax.
Let's pretend we have a new requirement for NGINX:
6. NGINX's /etc/nginx directory should not be empty.
+
(It's a bit of an odd requirement, but bear with us for the sake of this example.)
First, we'll try a test that does not use RSpec syntax to illustrate the problem we want to solve:
`,7),w=e("div",{class:"language-ruby line-numbers-mode","data-ext":"rb"},[e("pre",{class:"language-ruby"},[e("code",null,[s("control "),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'Requirement 6'")]),s(),e("span",{class:"token keyword"},"do"),s(` + impact `),e("span",{class:"token number"},"1.0"),s(` + title `),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'Checking that /etc/nginx does not return empty'")]),s(` + desc `),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'Let\\'s do this the ugly way.'")]),s(` + describe command`),e("span",{class:"token punctuation"},"("),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'ls -al'")]),e("span",{class:"token punctuation"},")"),e("span",{class:"token punctuation"},"."),s("stdout"),e("span",{class:"token punctuation"},"."),s("strip "),e("span",{class:"token keyword"},"do"),s(` + it `),e("span",{class:"token punctuation"},"{"),s(" should_not be_empty "),e("span",{class:"token punctuation"},"}"),s(` + `),e("span",{class:"token keyword"},"end"),s(` +`),e("span",{class:"token keyword"},"end"),s(` +`)])]),e("div",{class:"line-numbers","aria-hidden":"true"},[e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"})])],-1),x=e("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[e("pre",{class:"language-bash"},[e("code",null,[s(" ✔ Requirement "),e("span",{class:"token number"},"6"),s(": Checking that /etc/nginx does not "),e("span",{class:"token builtin class-name"},"return"),s(` empty + ✔ total `),e("span",{class:"token number"},"76"),s(` + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token 
number"},"20"),s(":21 "),e("span",{class:"token builtin class-name"},"."),s(` + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token number"},"20"),s(":21 "),e("span",{class:"token punctuation"},".."),s(` + -rwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"0"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token number"},"20"),s(`:21 .dockerenv + lrwxrwxrwx `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"7"),s(" Oct "),e("span",{class:"token number"},"30"),s(" 00:00 bin -"),e("span",{class:"token operator"},">"),s(` usr/bin + drwxr-xr-x `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Sep "),e("span",{class:"token number"},"29"),s(),e("span",{class:"token number"},"20"),s(`:04 boot + drwxr-xr-x `),e("span",{class:"token number"},"5"),s(" root root "),e("span",{class:"token number"},"360"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token number"},"20"),s(`:21 dev + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Nov "),e("span",{class:"token number"},"1"),s(` 05:12 docker-entrypoint.d + -rwxrwxr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"1620"),s(" Nov "),e("span",{class:"token number"},"1"),s(` 05:11 docker-entrypoint.sh + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token number"},"20"),s(`:21 etc + drwxr-xr-x `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Sep "),e("span",{class:"token number"},"29"),s(),e("span",{class:"token number"},"20"),s(`:04 home 
+ lrwxrwxrwx `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"7"),s(" Oct "),e("span",{class:"token number"},"30"),s(" 00:00 lib -"),e("span",{class:"token operator"},">"),s(` usr/lib + lrwxrwxrwx `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"9"),s(" Oct "),e("span",{class:"token number"},"30"),s(" 00:00 lib32 -"),e("span",{class:"token operator"},">"),s(` usr/lib32 + lrwxrwxrwx `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"9"),s(" Oct "),e("span",{class:"token number"},"30"),s(" 00:00 lib64 -"),e("span",{class:"token operator"},">"),s(` usr/lib64 + lrwxrwxrwx `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"10"),s(" Oct "),e("span",{class:"token number"},"30"),s(" 00:00 libx32 -"),e("span",{class:"token operator"},">"),s(` usr/libx32 + drwxr-xr-x `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 media + drwxr-xr-x `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 mnt + drwxr-xr-x `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 opt + dr-xr-xr-x `),e("span",{class:"token number"},"228"),s(" root root "),e("span",{class:"token number"},"0"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token number"},"20"),s(`:21 proc + drwx------ `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 root + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Nov "),e("span",{class:"token 
number"},"8"),s(),e("span",{class:"token number"},"20"),s(`:21 run + lrwxrwxrwx `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"8"),s(" Oct "),e("span",{class:"token number"},"30"),s(" 00:00 sbin -"),e("span",{class:"token operator"},">"),s(` usr/sbin + drwxr-xr-x `),e("span",{class:"token number"},"2"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 srv + dr-xr-xr-x `),e("span",{class:"token number"},"12"),s(" root root "),e("span",{class:"token number"},"0"),s(" Nov "),e("span",{class:"token number"},"8"),s(),e("span",{class:"token number"},"20"),s(`:21 sys + drwxrwxrwt `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Nov "),e("span",{class:"token number"},"1"),s(` 05:12 tmp + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 usr + drwxr-xr-x `),e("span",{class:"token number"},"1"),s(" root root "),e("span",{class:"token number"},"4096"),s(" Oct "),e("span",{class:"token number"},"30"),s(` 00:00 var is expected not to be empty 
+`)])]),e("div",{class:"line-numbers","aria-hidden":"true"},[e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"})])],-1),f=e("p",null,[s("Well. . . it "),e("em",null,"sort of"),s(" works.")],-1),y=e("p",null,`Notice how much output InSpec printed here to answer the simple question of "did this command return empty?" Imagine if we had done this on a directory with many files in it. We'd just be cluttering up the screen (and our report files).`,-1),_=e("div",{class:"hint-container warning"},[e("p",{class:"hint-container-title"},"Wait, couldn't we have just used the directory resource for this?"),e("p",null,[s(`Correct. 
That would have been a much better way of doing this, and illustrates the general principle of "don't use raw shell commands with the `),e("code",null,"command"),s(' resource unless you have to."')]),e("p",null,"We're just doing it this way for the example.")],-1),A={href:"https://relishapp.com/rspec/rspec-core/docs/subject/explicit-subject",target:"_blank",rel:"noopener noreferrer"},S=e("div",{class:"language-ruby line-numbers-mode","data-ext":"rb"},[e("pre",{class:"language-ruby"},[e("code",null,[s("control "),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'Requirement 6'")]),s(),e("span",{class:"token keyword"},"do"),s(` + impact `),e("span",{class:"token number"},"1.0"),s(` + title `),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'Checking that /etc/nginx does not return empty'")]),s(` + desc `),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'Let\\'s do this the concise way.'")]),s(` + describe `),e("span",{class:"token string-literal"},[e("span",{class:"token string"},'"The /etc/nginx directory"')]),s(),e("span",{class:"token keyword"},"do"),s(` + subject `),e("span",{class:"token punctuation"},"{"),s(" command"),e("span",{class:"token punctuation"},"("),e("span",{class:"token string-literal"},[e("span",{class:"token string"},"'ls -al'")]),e("span",{class:"token punctuation"},")"),e("span",{class:"token punctuation"},"."),s("stdout"),e("span",{class:"token punctuation"},"."),s("strip "),e("span",{class:"token punctuation"},"}"),s(` + it `),e("span",{class:"token punctuation"},"{"),s(" should_not be_empty "),e("span",{class:"token punctuation"},"}"),s(` + `),e("span",{class:"token keyword"},"end"),s(` +`),e("span",{class:"token keyword"},"end"),s(` 
+`)])]),e("div",{class:"line-numbers","aria-hidden":"true"},[e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"}),e("div",{class:"line-number"})])],-1),I=e("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[e("pre",{class:"language-bash"},[e("code",null,[s(" ✔ Requirement "),e("span",{class:"token number"},"6"),s(": Checking that /etc/nginx does not "),e("span",{class:"token builtin class-name"},"return"),s(` empty + ✔ The /etc/nginx directory is expected not to be empty +`)])]),e("div",{class:"line-numbers","aria-hidden":"true"},[e("div",{class:"line-number"}),e("div",{class:"line-number"})])],-1),O=r('Much better, right? We can override InSpec's default output to print a message that is actually useful.
Info
Another benefit to using subject
is preventing command output from being stored in the report.
should
vs. expect
syntaxUsers familiar with the RSpec testing framework may know that there are two ways to write test statements: should
and expect
. The RSpec community decided that expect
is the preferred syntax.
InSpec recommends the should
syntax as it tends to read more easily. However, there are times when the expect
syntax will communicate much more clearly to the end-user. InSpec will continue to support both methods of writing tests.
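The two styles express the same assertion. Here is a minimal InSpec/RSpec fragment comparing them side by side (a DSL sketch, not standalone-runnable outside a profile; the file path is just an illustration):

```ruby
describe file('/etc/nginx/nginx.conf') do
  # `should` syntax -- reads like a sentence
  it { should exist }

  # `expect` syntax -- the RSpec-preferred form of the same test
  it { expect(subject).to exist }
end
```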
Let's copy the describe
shown below directly into our example.rb
file (we don't need to wrap it in a control
block for this section). Consider this describe
block from your my_nginx
profile:
The failure_message
variable in the above describe
block is assigned a value by pure Ruby assignment. Remember how we said that, since InSpec is built on Ruby, any Ruby syntax will work inside an InSpec test? Ruby's string formatting syntax (the #{non_admin_users.join(", ")}
) can create a string that lists the users who fail the test by having shell access when they shouldn't.
Writing good failure messages
The trick to writing useful failure messages is to use Ruby to find the subset of all elements we are testing (here, the users) that actually fail the test. We don't need to print a statement for every array element we tested; we only need to print a statement that shows the elements that failed.
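The pattern above can be sketched in plain Ruby with hypothetical data (the user names and the admin list are assumptions for illustration): find only the offending subset, then interpolate it into the failure message.

```ruby
# All users found on the (hypothetical) system
all_users   = ['root', 'alice', 'svc_backup', 'legacy_app']
# Assumption: only these users are allowed shell access
admin_users = ['root', 'alice']

# Find just the subset that fails the test...
non_admin_users = all_users - admin_users

# ...and interpolate that subset into a targeted failure message
failure_message = "Users with unauthorized shell access: #{non_admin_users.join(', ')}"
puts failure_message
```

Printing only the offenders keeps both the screen output and the report file small, no matter how large the full user list is.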
Expect
syntax and Password HashesHere's another example -- we have an InSpec test that checks if passwords are SHA512 hashes.
As we said, when possible -- and especially when there is a high chance that a large set contains only a few offending items -- attempt to find only those items that fall outside our requirements. If there are none -- wonderful! We met our requirement.
bad_users = inspec.shadow.where { password != "*" && password != "!" && password !~ /\\$6\\$/ }.users # note that SHA512-encrypted passwords are marked by starting with '$6$' in /etc/shadow
+
+describe 'Password hashes in /etc/shadow' do
+ it 'should only contain SHA512 hashes' do
+ failure_message = "Users without SHA512 hashes: #{bad_users.join(', ')}"
+ expect(bad_users).to be_empty, failure_message
+ end
+end
+
The file
resource is perfect for looking at single files and their properties. However, it does not look at groups of files. To do that, we need to use multiple resources in concert.
Take a look at this example from a profile for use in AWS virtual machines. We use the command
resource to run the find
command and then use the file
resource to investigate each result. Using multiple resources together is one of the key values InSpec provides, allowing you to get at just the data you need when you need it.
command('find ~/* -type f -maxdepth 0 -xdev').stdout.split.each do |fname| # we need to be careful about using 'find' --
+ # there could be a LOT of output if we are not specific enough with the search!
+ describe file(fname) do
+ its('owner') { should cmp 'ec2-user' }
+ end
+end
+
Avoid Large Sets or 'Check Everyone at the Door' Approaches
For I/O-intensive (full-filesystem or global scans) or large-scale processes, be as specific as possible with your searches. Think about using 'negative logic' vs. 'positive logic' - "Find me all the items outside my target set" vs. "Look at each item in the set and ensure it has these properties".
This 'find the outsiders' vs. 'check everyone at the door' approach can really speed things along. Again, keep your data set as small as possible, and don't inspect any more than the requirements demand!
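The 'find the outsiders' idea can be sketched in plain Ruby (a hypothetical world-writable-file check, using a throwaway temp directory so the example is self-contained): rather than describing every file, we select only the files that violate the rule.

```ruby
require 'tmpdir'
require 'fileutils'

offenders = []
Dir.mktmpdir do |dir|
  good = File.join(dir, 'good.txt')
  bad  = File.join(dir, 'bad.txt')
  FileUtils.touch([good, bad])
  File.chmod(0o644, good)  # normal permissions
  File.chmod(0o666, bad)   # world-writable -- a finding

  # Negative logic: keep only the files that violate the rule
  offenders = Dir.glob(File.join(dir, '*'))
                 .select { |f| (File.stat(f).mode & 0o002) != 0 }
                 .map { |f| File.basename(f) }
end
puts offenders.join(', ')  # => bad.txt
```

The result set we have to report on (and iterate over) is only as large as the number of violations, not the number of files scanned.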
Try writing your own resources and think about how you could implement them in a profile!
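A custom resource is a Ruby class placed in a profile's libraries/ directory. As a starting point, here is a skeleton sketch (an InSpec DSL fragment, not standalone-runnable; the git_repo name and its current_branch property are illustrative, not part of the course material):

```ruby
class GitRepo < Inspec.resource(1)
  name 'git_repo'
  desc 'Inspect a local git repository'
  example "describe git_repo('/path/to/repo') do
             its('current_branch') { should eq 'main' }
           end"

  def initialize(path)
    @path = path
  end

  # Each property is just a method; InSpec wires it up for `its(...)` syntax
  def current_branch
    inspec.command("git -C #{@path} rev-parse --abbrev-ref HEAD").stdout.strip
  end
end
```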
Suggested Resources to start on (Simple):
id
retrieves container idimage
retrieves image namerepo
retrieves the repotag
retrieves the tagports
retrieves the portscommand
retrieves commandbranches
checks if branch existscurrent_branch
retrieves current branchlast_commit
retrieves last commit from loggit_log
retrieve log of all commitstag
retrieve tag for repoSuggested Resources to start on (Medium):
owner
tests if the owner of the file matches the specified value.group
tests if the group to which a file belongs matches the specified value.size
tests if a file’s size matches, is greater than, or is less than the specified value.contents
tests if contents in the file match the value specified in a regular expression.path
retrieves path to fileowner
tests if the owner of the file matches the specified value.group
tests if the group to which a file belongs matches the specified value.size
tests if a file’s size matches, is greater than, or is less than the specified value.contents
tests if contents in the file match the value specified in a regular expression.path
retrieves path to directoryexist
tests if the named user existsgid
tests the group identifiergroup
tests the group to which the user belongsgroups
tests two (or more) groups to which the user belongshome
tests the home directory path for the usermaxdays
tests the maximum number of days between password changesmindays
tests the minimum number of days between password changesshell
tests the path to the default shell for the useruid
tests the user identifierwarndays
tests the number of days a user is warned before a password must be changeddaemon
daemon returns a string containing the daemon that is allowed in the rule.client_list
client_list returns a 2d string array where each entry contains the clients specified for the rule.options
options returns a 2d string array where each entry contains any options specified for the rule.Suggested Resources to start on (Hard):
users
A list of strings, representing the usernames matched by the filterpasswords
A list of strings, representing the encrypted password strings for entries matched by the where filter. Each string may not be an encrypted password, but rather a * or similar which indicates that direct logins are not allowed.last_changes
A list of integers, indicating the number of days since Jan 1 1970 since the password for each matching entry was changed.min_days
A list of integers reflecting the minimum number of days a password must exist, before it may be changed, for the users that matched the filter.max_days
A list of integers reflecting the maximum number of days after which the password must be changed for each user matching the filter.warn_days
A list of integers reflecting the number of days a user is warned about an expiring password for each user matching the filter.inactive_days
A list of integers reflecting the number of days a user must be inactive before the user account is disabled for each user matching the filter.expiry_dates
A list of integers reflecting the number of days since Jan 1 1970 that a user account has been disabled, for each user matching the filter. Value is nil if the account has not expired.count
The count property tests the number of records that the filter matched.device_name
is the name associated with the device.mount_point
is the directory at which the file system is configured to be mounted.file_system_type
is the type of file system of the device or partition.mount_options
is the options for the device or partition.dump_options
is a number used by dump to decide if a file system should be backed up.file_system_options
is a number that specifies the order the file system should be checked.parse_conf
parse the conf filefetch_connectors
retrieves keys port
, protocol
, timeout
, redirect
, sslprotocol
, scheme
, sslenable
, clientauth
, secure
inspec exec my_nginx -t docker://nginx --reporter cli json:baseline_output.json
+
When using InSpec in practice, most users aggregate report files from multiple systems over time, so we recommend that you generate reports whose filenames specify when they were created:
inspec exec my_nginx --reporter json:nginx-$(date +"%Y-%m-%d-%H-%M-%S").json
+
Here we add a shell command substitution
(the $(date +"%Y-%m-%d-%H-%M-%S")
) to our filename when we invoke inspec exec
. Now we can run tests multiple times with the same command and get a different filename each time.
Caution
Note that if you save InSpec results to a file (such as with the json
reporter), and then re-run the same command, you will overwrite the original contents of that file with the more recent results. Be sure that all of your reports have unique names.
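The unique-filename idea is the same in any language. The course invokes it via shell command substitution; this plain-Ruby sketch just shows the formatting step, with a fixed time so the example is repeatable (in practice you would use Time.now):

```ruby
fixed_time  = Time.utc(2023, 11, 1, 2, 41, 29)  # stand-in for Time.now
report_name = "nginx-#{fixed_time.strftime('%Y-%m-%d-%H-%M-%S')}.json"
puts report_name  # => nginx-2023-11-01-02-41-29.json
```

A timestamp down to the second makes filename collisions between runs effectively impossible.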
inspec exec my_nginx --reporter cli json:tmp/output.json
+
inspec exec my_nginx --reporter junit2:tmp/junit.xml html:www/index.html
+
inspec exec my_nginx --reporter json junit2:tmp/junit.xml | tee out.json
+
InSpec also lets you capture all of these reporter options in a configuration file:
{
+ "reporter": {
+ "cli": {
+ "stdout": true
+ },
+ "json": {
+ "file": "tmp/output.json",
+ "stdout": false
+ }
+ }
+}
+
The following are supported reporters:
InSpec includes the --enhanced-outcomes
flag to enrich the output format slightly if more detail is needed.
When this flag is passed, the control level status outcomes of the profile execution are Passed
, Failed
, Not Applicable (N/A)
, Not Reviewed (N/R)
, or Error (ERR)
.
So far, we have been executing InSpec profiles that we have written ourselves and saved to the local machine. InSpec also gives you the ability to execute a profile that lives on the other end of an HTTP/S link or a .git
link.
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx
+
Wait, what if I can't publish to GitHub?
Not everyone can open source their code, or make it available on the open Internet. Your organization or environment may be more suited to using a private code repository (e.g. GitLab or BitBucket) to store profiles. InSpec supports passing authentication tokens as part of profile locations:
inspec exec https://API_TOKEN@gitlab.supersecret.com/profiles/inspec_baseline.git
+
Let's try running an already-complete profile and generating a report.
The following command will run the SAF Validation Library's NGINX baseline profile from MITRE's GitHub, and use the reporter to output a JSON file. You will need this JSON file for the next section, where we'll load our results into Heimdall:
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx --reporter cli json:nginx-full-baseline-$(date +"%Y-%m-%d-%H-%M-%S").json
+
https://github.com/mitre/nginx-stigready-baseline
to specify the profile.-t docker://nginx
.--reporter cli json:./results/nginx_vanilla_results.json
--input-file inputs.yml
To execute this command to run the GitHub profile on your target system, run this inspec exec
command.
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx --input-file inputs.yml --reporter cli json:./results/nginx_vanilla_results.json
+
Enter the command from the previous step in your terminal and press enter. It will take a moment to start running.
You should see output similar to that below. The whole profile should execute in only a couple of minutes.
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx --input-file inputs.yml --reporter cli json:./results/nginx_vanilla_results.json
+[2023-11-01T02:41:29+00:00] WARN: URL target https://github.com/mitre/nginx-stigready-baseline transformed to https://github.com/mitre/nginx-stigready-baseline/archive/master.tar.gz. Consider using the git fetcher
+ ...
+ × is expected not to be nil
+ expected: not nil
+ got: nil
+ ↺ This test is NA because the ssl_client_certificate directive has not been configured.
+ ↺ V-56029: The NGINX web server must augment re-creation to a stable and known
+ baseline.
+ ↺ This test requires a Manual Review: Interview the SA and ask for documentation on the
+ disaster recovery methods for the NGINX web server in the event of the necessity for rollback.
+ ↺ V-56031: The NGINX web server must encrypt user identifiers and passwords.
+ ↺ This check is NA because NGINX does not manage authentication.
+ ✔ V-56033: The web server must install security-relevant software updates within
+ the configured time period directed by an authoritative source (e.g., IAVM,
+ CTOs, DTMs, and STIGs).
+ ✔ NGINX version v1.25.3 installed is not more then one patch level behind v1.25.2 is expected to cmp >= "1.25.2"
+ ✔ NGINX version v1.25.3 installed is greater then or equal to the organization approved version v1.23.1 is expected to cmp >= "1.23.1"
+ ✔ V-56035: The NGINX web server must display a default hosted application web page, not
+ a directory listing, when a requested web page cannot be found.
+ ✔ The root directory /usr/share/nginx/html should include the default index.html file.
+ ↺ V-61353: The web server must remove all export ciphers to protect the
+ confidentiality and integrity of transmitted information. (2 skipped)
+ ↺ This test is NA because the ssl_prefer_server_ciphers directive has not been configured.
+ ↺ This test is NA because the ssl_ciphers directive has not been configured.
+
+
+Profile Summary: 27 successful controls, 26 control failures, 36 controls skipped
+Test Summary: 137 successful, 91 failures, 55 skipped
+
You see that many of the tests pass, while others fail and may require investigation.
`,10),_=e("h4",{id:"_8-3-2-results-saved-to-a-file",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_8-3-2-results-saved-to-a-file","aria-hidden":"true"},"#"),n(" 8.3.2 Results saved to a file")],-1),w=e("p",null,[n("You should also see your results in a JSON file located in "),e("code",null,"/results"),n(" folder with the name that you specified in the command, for example "),e("code",null,"nginx_vanilla_results.json"),n('. If you are using the lab environment GitHub codespaces, you should see it on the top left of your screen under files. Right click that file and click "Download" so that you have this file locally for the next steps.'),e("br"),e("img",{src:p,alt:"The Results Folder",loading:"lazy"})],-1),j={class:"hint-container details"},I=a(`InSpec allows you to output your test results to one or more reporters. You can configure the reporter(s) using either the --json-config option or the --reporter option. While you can configure multiple reporters to write to different files, only one reporter can output to the screen(stdout).
$ inspec exec /root/my_nginx -t ssh://TARGET_USERNAME:TARGET_PASSWORD@TARGET_IP --reporter cli json:baseline_output.json
+
You can specify one or more reporters using the --reporter cli flag. You can also specify an output file by appending a path separated by a colon.
Output json to screen.
inspec exec /root/my_nginx --reporter json
+or
+inspec exec /root/my_nginx --reporter json:-
+
Output yaml to screen
inspec exec /root/my_nginx --reporter yaml
+or
+inspec exec /root/my_nginx --reporter yaml:-
+
Output cli to screen and write json to a file.
inspec exec /root/my_nginx --reporter cli json:/tmp/output.json
Output nothing to screen and write junit and html to a file.
inspec exec /root/my_nginx --reporter junit:/tmp/junit.xml html:www/index.html
Output json to screen and write to a file. Write junit to a file.
inspec exec /root/my_nginx --reporter json junit:/tmp/junit.xml | tee out.json
If you wish to pass the profiles directly after specifying the reporters, you will need to use the end-of-options flag --.
inspec exec --reporter json junit:/tmp/junit.xml -- profile1 profile2
Output cli to screen and write json to a file.
{
+ "reporter": {
+ "cli": {
+ "stdout": true
+ },
+ "json": {
+ "file": "/tmp/output.json",
+ "stdout": false
+ }
+ }
+}
+
The following are the currently supported reporters:
Let's go back to the requirement from earlier that we said was "Applicable - Configurable." It's time to fill it out completely.
The Check and Fix fields are the ones that actually tell the user:
As such, these fields represent the bulk of what you will need to research and modify when constructing your security guidance.
DISA requires that STIG authors use very specific language for these sections. Again, we will leverage the official guidance[1] for instructions.
Per the Vendor STIG Process Guide section 4.1.11 -
"The Check is used to provide specific instruction on how to validate product configuration settings. It must include any information and procedures necessary for validating the configured value.
The Check should also state:
If the vendor is leveraging third-party tools to satisfy a requirement, identify in the Check the product and the specific steps to check compliance.
If the product is expected to be compatible with a number of third-party tools, include in the Check general instructions that would enable a systems administrator with reasonable familiarity with the third-party tool to perform the necessary procedure.
For example, if the requirement is to block certain TCP ports on a firewall, a general instruction to this effect may suffice.
"
"Write the Check so the user can easily follow the steps to assess and determine compliance.
Check text example:
If Bluetooth connectivity is required to facilitate use of approved external devices, this is not applicable.
+
+To determine if any hardware components for Bluetooth are loaded in the system, run the following command:
+
+# sudo kextstat | grep -i Bluetooth
+
+If a result is returned, this is a finding.
+
+In some cases, determining when an item is NOT a finding might be appropriate.
+
Check text example:
If the "xyz" parameter is set to "5", this is not a finding.
+
+When using a command to inspect the status of a host, listing example output can be helpful. The output must comply with STIG requirements unless an example of a failure is needed and is clearly explained.
+
Check text example:
Find the file systems that contain the directories being exported with the following command:
+
+# cat /etc/fstab | grep nfs
+UUID=e06097bb-cfcd-437b-9e4d-a691f5662a7d /store nfs rw,nosuid 0 0
+
+If a file system found in "/etc/fstab" refers to NFS and does not have the "nosuid" option set, this is a finding.
+
"
The Finding Statement
We want to call out the Finding statement as particularly important. STIG content must be very clear on when exactly a misconfiguration becomes a finding, or non-compliance with the requirement.
Recall that a STIG is intended to be something that can be followed by someone who is not an expert in the system at hand; recall also that we eventually want to automate these checks, so clear finding statements make things easier for us as well!
Per the Vendor STIG Process Guide section 4.1.13 -
"The Fix is used to provide specific instructions on how to configure the product to comply with the requirement.
After steps in the Fix text are implemented, the resulting system state should be the same no matter how many times the instructions are followed."
"When writing the Fix content, the vendor must include all steps needed to configure the product to comply with the requirement.
Let's go back and try this for requirement RHEL-09-000003. Right now, the requirement is only populated by the original SRG text. We need to tailor this to RHEL9.
Remember that STIG writing is an open-book test. We encourage authors to go back and take a look at how other authors filled out their requirements for similar systems. In fact, the best place to look for reference is usually in a prior major version of the same software. That is, the best place to start for security guidance for RHEL9 is to see what they did for other RHEL versions!
Luckily, Vulcan has access to every STIG and SRG you have uploaded to the instance for cross-referencing.
You'll see a view of every requirement Vulcan can find in its content library that also refers back to the same SRG ID.
You can filter and search through this library for keywords if you like, or even restrict the results to only show content your team has written inside this Vulcan instance's Components. For now, though, we are likely interested only in the published STIGs.
Warning
The real, published RHEL9 STIG is uploaded to this Vulcan instance. For the purposes of this exercise, though, we will use an earlier version of RHEL.
Great! Now we have a Check and Fix field that actually have content. Note also that this content is already following STIG syntax; the commands are very direct, and the line on what counts as a finding is clearly drawn.
Is It Always This Easy?
Prior STIGs are always an excellent starting point, but new STIG content does require research and testing to ensure that guidance from the prior STIGs is still accurate for our current Component.
The Original SRG content
If you scroll down in the requirement window, you can expand out the original SRG content that this STIG requirement was sourced from. This can be useful to reference if you want to make sure your Check and Fix are still addressing the SRG requirement.
Sections 4.1.11 and 4.1.13 of the "Vendor STIG Process", Version 4 Release 1. ↩︎
Software developers create pipelines for the same reason that factory designers create assembly lines. They break processes into logical units and make them repeatable, consistent, and automated.
Pipelines also enable several paradigms in modern DevSecOps development, including continuous integration (CI) and continuous delivery (CD).
Continuous Integration (CI) is the practice of requiring all changes to a codebase to pass a test suite before they are committed. CI is implemented on a codebase to make sure that any time a bug is introduced to a codebase, it is caught and corrected as soon as someone tries to commit it, instead of months or years later in operations when it is much more difficult to fix.
Continuous Delivery is the practice of automatically delivering software (such as, for example, by pushing code to live deployment) once it passes a test suite. This is a core practice of DevSecOps -- code should be developed incrementally and small units of functionality should be delivered as soon as they are complete and pass all tests.
A fully mature DevSecOps pipeline will implement both strategies. Note that CI and CD both presuppose that you have a high-quality, easy-to-use test suite available. We will create our demo pipeline using an InSpec profile as our test suite.
Have you used pipeline orchestration software other than that mentioned here?
Most of the general concepts discussed in this portion of the class will be covered by any pipeline orchestration tool, though each will likely have different terminology for individual features.
Let's learn how to build pipelines by taking on the role of a developer who needs to create a pipeline for a hardened NGINX container image. We can borrow the InSpec profile we've already written for our container to make sure that any time we update the container image, we do not accidentally break any security controls.
We need to:
Real-world pipelines are often used this way in a gold image pipeline, which runs on a defined frequency to continuously deliver a secure, updated machine image that can be used as a base for further applications. In this use case, we would take the hardened and validated image we produced as part of the pipeline and save it as our new gold image. This way, developers can grab a "known-good" image to host their applications without having to configure or keep it up to date themselves.
Pipelines are conceptually broken down into a series of individual tasks. The tasks we need to complete in our pipeline include:
',9),x=e("li",null,"Prep - configure our runner for later work (in our case, make sure InSpec is installed and ready to go)",-1),C=e("li",null,[t("Lint - make sure code passes style requirements (in our case, "),e("code",null,"inspec check ."),t(")")],-1),A=e("li",null,"Deploy the test suite (in our case, an NGINX container we want to use as a test system)",-1),S=e("li",null,"Validate - check configuration (in our case, run InSpec against our test system and generate a report)",-1),G=e("li",null,"Verify - confirm if the validation run passed our expectations (in our case, use the SAF CLI to check that the Validation report met our threshold)",-1),D=e("li",null,"Do something with results - e.g. publish our image if it met our expectations and passed the tests",-1),H=e("p",null,[t("GitHub Actions organizes the tasks inside a pipeline into "),e("strong",null,"jobs"),t(". A given pipeline can trigger multiple jobs, but for our sample gold image pipeline we really only need one for storing all of our tasks.")],-1),N=e("h3",{id:"runners",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#runners","aria-hidden":"true"},"#"),t(" Runners")],-1),L=e("p",null,[t("Pipeline orchestrators all have some system for selecting a "),e("strong",null,"runner"),t(" node that will be assigned to handle the tasks we define for the pipeline. Runners are any system -- containers or full virtual machines in a cloud environment -- that handle the actual task execution for a pipeline.")],-1),P={href:"https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners",target:"_blank",rel:"noopener noreferrer"},W=e("p",null,"In the next sections we will create a GitHub Action workflow to handle these jobs for us. 
We will commit the workflow file to our repository and watch it work!",-1);function V(T,j){const o=a("RouterLink"),n=a("ExternalLinkIcon");return c(),u("div",null,[p,f,m,e("p",null,[t("If you have taken the "),i(o,{to:"/courses/user/"},{default:s(()=>[t("SAF User")]),_:1}),t(" class, you will be familiar with many of the activities we will be doing as part of the sample pipeline in the next few sections, including using Ansible to harden a test image, validating it with InSpec, and using the SAF CLI to assess our results. We will be bundling all those activities together into a pipeline workflow file so that we can automate them.")]),g,e("p",null,[t("We will be building our sample pipeline using "),e("a",b,[t("GitHub Actions"),i(n)]),t(", the pipeline orchestration tool that is built into GitHub. We are using this feature because it is free to use unless we exceed usage limits, and because we can write up a pipeline workflow file right from our GitHub Codespaces lab environment.")]),w,e("ul",null,[e("li",null,[t("GitLab's GitLab "),e("a",_,[t("CI/CD"),i(n)])]),e("li",null,[e("a",y,[t("DroneCI"),i(n)])]),e("li",null,[t("Atlassian's "),e("a",k,[t("BitBucket Pipelines"),i(n)])]),e("li",null,[e("a",v,[t("Jenkins"),i(n)])])]),I,e("ul",null,[x,C,A,e("li",null,[t("Harden the test suite (we will use Ansible like we do in the "),i(o,{to:"/courses/user/10.html"},{default:s(()=>[t("SAF User class")]),_:1}),t(")")]),S,G,D]),H,N,L,e("p",null,[t("In the case of GitHub actions, when we trigger a pipeline, GitHub by default sends the jobs to its cloud environment to hosted runner nodes. The operating system of the runner for a particular job can be specified in the workflow file. 
See the "),e("a",P,[t("docs"),i(n)]),t(" for details.")]),W])}const O=l(d,[["render",V],["__file","08.html.vue"]]);export{O as default}; diff --git a/assets/08.html-svY7cBD7.js b/assets/08.html-svY7cBD7.js new file mode 100644 index 000000000..a1f006cf3 --- /dev/null +++ b/assets/08.html-svY7cBD7.js @@ -0,0 +1 @@ +import{_ as e}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as a,c as r,f as o}from"./app-PAvzDPkc.js";const t={},c=o('When planning your team's approach, remember that a Security Benchmark is only considered 'complete and valid' when all requirements for that specific Release or Major Version are met. This differs from traditional software projects where features and capabilities can be incrementally added.
A Security Benchmark and its corresponding InSpec Profile are only applicable within the context of a specific 'Release' of that Benchmark.
The choice between a micro
or massive
approach depends more on your team's work style preference.
Regardless of the approach, the final release of the Benchmark will be the same. The deciding factors for its readiness for release are the expected thresholds, hardening, and validation results.
out of scope
for a BenchmarkBenchmarks do not accommodate 'incremental requirements'. Therefore, your team should always work off a fork of the last release. If there is a 'main' or 'development' branch in your profile, it should be considered as a candidate for merging into the next patch or update release.
',8),n=[c];function i(s,d){return a(),r("div",null,n)}const l=e(t,[["render",i],["__file","08.html.vue"]]);export{l as default}; diff --git a/assets/09.html-1G4jKolf.js b/assets/09.html-1G4jKolf.js new file mode 100644 index 000000000..4ead1559b --- /dev/null +++ b/assets/09.html-1G4jKolf.js @@ -0,0 +1,7 @@ +import{_ as e}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as s,c as n,f as a}from"./app-PAvzDPkc.js";const t={},r=a(`When updating the profile, you'll be making one of three types of changes:
v1.12.4
to v1.12.5
. We aim to release these updates on a regular schedule, either weekly, bi-weekly, or monthly.The STIGs and CIS Benchmarks are scoped within the Major Version of the software products they represent.
Updates or amendments to a Benchmark's requirements are tracked within the 'Releases' of the Benchmark.
As we mentioned in the previous section, there is no concept of 'back-patching'; it is a 'forward-only' process.
Each requirement is indexed from its source SRG document, aligned to a CCI, and then given a unique Rule ID
and STIG ID
in the respective XCCDF Benchmark document.
Here is an example of various indices you may recognize:
tag gtitle: 'SRG-OS-000480-GPOS-00227'
+tag gid: 'V-230221'
+tag rid: 'SV-230221r858734_rule'
+tag stig_id: 'RHEL-08-010000'
+tag fix_id: 'F-32865r567410_fix'
+tag cci: ['CCI-000366']
+
Now that we have written up some Check and Fix text, it's time to use one of Vulcan's other features -- the InSpec code pane.
We've already used Vulcan to generate a document that has most of the metadata in place that we would need to properly label an automated validation test. Now we can go the last mile or so to a complete security test.
You'll see a code editing window directly in Vulcan. What we can do now is write in the test code we want to use for testing the check we just wrote.
',7),f={class:"hint-container warning"},g=e("p",{class:"hint-container-title"},"Wait, what if I have no idea how to write InSpec code?!",-1),w=e("p",null,"But if you don't have the time for those, don't sweat it; we just want you to know that this is something you can do with Vulcan.",-1),_=c(`That code is as follows:
audit_command = '/etc/passwd'
+
+if virtualization.system.eql?('docker')
+ impact 0.0
+ describe 'Control not applicable - audit config must be done on the host' do
+ skip 'Control not applicable - audit config must be done on the host'
+ end
+else
+ describe 'Command' do
+ it "#{audit_command} is audited properly" do
+ audit_rule = auditd.file(audit_command)
+ expect(audit_rule).to exist
+ expect(audit_rule.key).to cmp 'identity'
+ expect(audit_rule.permissions.flatten).to include('w', 'a')
+ end
+ end
+end
+
Note
If you have taken the SAF User class, you have used inspec exec
to run code that looks like the above against a target system.
Save the requirement.
Now check the "InSpec Control (Read-Only)" tab. It has used the contents of the other two tabs to assemble a completed InSpec control from your requirement, including the complete context of your STIG control as metadata tags in the test.
We can export this content and start using it immediately if we wish (we'll discuss how in a later section).
Vulcan includes the ability to write InSpec control code right alongside original guidance because we need a tight binding between the human-readable guidance and the machine-readable automation code.
',11),b={href:"https://saf-cli.mitre.org/#xccdf-benchmark-to-inspec-stub",target:"_blank",rel:"noopener noreferrer"},y=e("p",null,[n("You can think of this process as recording the pedigree of your tests into the code, so that you don't lose it as your code moves down the pipeline, and so that you "),e("em",null,"always know why you are running a check"),n(".")],-1),v=e("p",null,"Furthermore, another reason we added the InSpec control editing window is because in most cases, you are writing security guidance because you want to write security validation code! Recall that the whole point of Vulcan is to help us define the security posture target so that we can automate reaching it!",-1),I=e("div",{class:"hint-container note"},[e("p",{class:"hint-container-title"},"Do I need to use InSpec for my ATO process?"),e("p",null,"DOD does not and will not require teams to use any one particular security validation tool."),e("p",null,"MITRE SAF favors InSpec because it favors our use cases nicely, but there are many different security tools on the market, some of which are better suited to particular tasks.")],-1),S=e("hr",{class:"footnotes-sep"},null,-1),x={class:"footnotes"},C={class:"footnotes-list"},V={id:"footnote1",class:"footnote-item"},q={href:"https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline",target:"_blank",rel:"noopener noreferrer"},T={href:"https://saf.mitre.org/libs/validate",target:"_blank",rel:"noopener noreferrer"},E=e("a",{href:"#footnote-ref1",class:"footnote-backref"},"↩︎",-1);function N(A,B){const a=o("RouterLink"),s=o("ExternalLinkIcon");return r(),p("div",null,[k,e("div",f,[g,e("p",null,[n("Great news, we have an in-depth "),t(a,{to:"/courses/beginner/"},{default:i(()=>[n("training class")]),_:1}),n(" on how to do this ("),t(a,{to:"/courses/advanced/"},{default:i(()=>[n("two of them")]),_:1}),n(", actually).")]),w]),_,e("p",null,[n("Using Vulcan will ensure that all of your guidance is included in your test code as metadata (you can also do 
this by creating a profile stub with the "),e("a",b,[n("SAF CLI"),t(s)]),n(".)")]),y,v,I,S,e("section",x,[e("ol",C,[e("li",V,[e("p",null,[n("See the full profile code "),e("a",q,[n("here"),t(s)]),n(". Or see many more examples of InSpec profiles at "),e("a",T,[n("https://saf.mitre.org/libs/validate"),t(s)]),n(". "),E])])])])])}const L=l(m,[["render",N],["__file","09.html.vue"]]);export{L as default}; diff --git a/assets/09.html-7jlh-VKk.js b/assets/09.html-7jlh-VKk.js new file mode 100644 index 000000000..7ac497f77 --- /dev/null +++ b/assets/09.html-7jlh-VKk.js @@ -0,0 +1,7 @@ +import{_ as i}from"./Heimdall_Load-a_OCE2Lf.js";import{_ as o}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as s,o as l,c as n,d as e,e as t,b as r,f as d}from"./app-PAvzDPkc.js";const u="/saf-training/assets/SAF_Capabilities_Visualize-syiJ9qWZ.png",c="/saf-training/assets/Heimdall_NGINX_Vanilla_With_Inputs-fih6JUyh.png",h="/saf-training/assets/Heimdall_Filter_Failure-3zncmq1J.png",m="/saf-training/assets/Heimdall_TreeMap_Failures-bQaGeLB0.png",f="/saf-training/assets/Heimdall_V-41670_ResultsDetails-SaI3gEeI.png",p="/saf-training/assets/Heimdall_V-41670_ResultsDetails_Code-1jGKfuJ4.png",g="/saf-training/assets/Heimdall_NGINX_Inputs-dC9Mvg7o.png",_={},y=e("h2",{id:"_9-visualize-mitre-heimdall",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_9-visualize-mitre-heimdall","aria-hidden":"true"},"#"),t(" 9. Visualize - MITRE Heimdall")],-1),v=e("p",null,"Now we want to SEE our results in a more meaningful way!",-1),b=e("figure",null,[e("img",{src:u,alt:"The Visualize Capability",tabindex:"0",loading:"lazy"}),e("figcaption",null,"The Visualize Capability")],-1),w={href:"https://heimdall-lite.mitre.org/",target:"_blank",rel:"noopener noreferrer"},x=d('Click on the button Upload
and navigate to your json output file that you saved from your previous step and select that file then click open.
This will allow you to view the InSpec results in the Heimdall viewer.
Your visualization should look similar to the following:
Heimdall can allow you to see a lot of different information based on the available data. See if you can find the following information from your uploaded results!
In this example, the overall compliance is 46.77%. As you can see, the compliance formula is written in the GUI. This is the number of passed controls divided by the total passed, failed, not reviewed, and errors. Not Applicable controls are not included in the overall compliance total.
You can interact with many different icons to filter for the specific results you want to see. For example, you can filter on the status, you can filter from the the severity level, you can filter from the search bar at the top, or you can filter using the tree map as some ways to drill down on a particular category or control. Here is a view of filtering with for failures.
How can expand different sections of the NIST SP 800-53 Control Tree Map to see your coverage based on the NIST controls. In this image, the filter on failed controls is still applied, but you can clear that filter to see the overall tree map for your system.
As you continue to scroll down, you can see the Results View Data. You can expand the results for a given control to see the individual subcontrols or subtests that were run to test the requirement. If any subtests fail, the control overall will be recorded as a failed control. In this case, we can see that a subtest looking at the permission of the /var/log/nginx
file was more permissive than it should be to meet the security requirements.
You can click on the "Details" or "Code" tab to see more information about how to check or fix a particular control. In this case, the fix text is written as follows:
Fix:
+
+To protect the integrity of the data that is being captured in the
+ log files, ensure that only the members of the Auditors group, Administrators,
+ and the user assigned to run the web server software is granted permissions to
+ read the log files.
+
You can view file information by clicking "File Info" on the top of the application where the file name is listed. This can show you things like the platform that this scan was completed on, how long it took, the date of the scan, and more. If you click on the "Inputs" tab, this will show what values were used for different variables in the profile's automated tests. This will show the inputs that we specified in the inputs file, and the default values for any variables that we did not put in the inputs file.
Let's create a GitHub Action workflow to define our pipeline.
Pipeline orchestration tools are usually configured in a predefined workflow file, which defines a set of tasks and the order they should run in. Workflow files live in the .github
folder for GitHub Actions (the equivalent is the gitlab-ci
file for GitLab CI, for example).
Let's create a new file to store our workflow.
mkdir .github
+mkdir .github/workflows
+touch .github/workflows/pipeline.yml
+
Neither command has output, but you should see a new file if you examine your .github
directory:
Open that file up for editing.
For reference, this is the complete workflow file we will end up with at the end of the class:
name: Demo Security Validation Gold Image Pipeline
+
+on:
+ push:
+ branches: [ main, pipeline ] # trigger this action on any push to main branch
+
+jobs:
+ gold-image:
+ name: Gold Image NGINX
+ runs-on: ubuntu-20.04
+ env:
+ CHEF_LICENSE: accept # so that we can use InSpec without manually accepting the license
+ PROFILE: my_nginx # path to our profile
+ steps:
+ - name: PREP - Update runner # updating all dependencies is always a good start
+ run: sudo apt-get update
+ - name: PREP - Install InSpec executable
+ run: curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P inspec -v 5
+
+ - name: PREP - Check out this repository # because that's where our profile is!
+ uses: actions/checkout@v3
+
+ - name: LINT - Run InSpec Check # double-check that we don't have any serious issues in our profile code
+ run: inspec check $PROFILE
+
+ - name: DEPLOY - Run a Docker container from nginx
+ run: docker run -dit --name nginx nginx:latest
+
+ - name: DEPLOY - Install Python for our nginx container
+ run: |
+ docker exec nginx apt-get update -y
+ docker exec nginx apt-get install -y python3
+
+ - name: HARDEN - Fetch Ansible role
+ run: |
+ git clone --branch docker https://github.com/mitre/ansible-nginx-stigready-hardening.git || true
+ chmod 755 ansible-nginx-stigready-hardening
+
+ - name: HARDEN - Fetch Ansible requirements
+ run: ansible-galaxy install -r ansible-nginx-stigready-hardening/requirements.yml
+
+ - name: HARDEN - Run Ansible hardening
+ run: ansible-playbook --inventory=nginx, --connection=docker ansible-nginx-stigready-hardening/hardening-playbook.yml
+
+ - name: VALIDATE - Run InSpec
+      continue-on-error: true # we don't want to stop if our InSpec run finds failures, we want to continue and record the result
+ run: |
+ inspec exec $PROFILE \\
+ --input-file=$PROFILE/inputs.yml \\
+ --target docker://nginx \\
+ --reporter cli json:results/pipeline_run.json
+
+ - name: VALIDATE - Save Test Result JSON # save our results to the pipeline artifacts, even if the InSpec run found failing tests
+ uses: actions/upload-artifact@v3
+ with:
+ path: results/pipeline_run.json
+
+ - name: VERIFY - Display our results summary
+ uses: mitre/saf_action@v1
+ with:
+ command_string: "view summary -i results/pipeline_run.json"
+
+ - name: VERIFY - Ensure the scan meets our results threshold
+ uses: mitre/saf_action@v1 # check if the pipeline passes our defined threshold
+ with:
+ command_string: "validate threshold -i results/pipeline_run.json -F threshold.yml"
+
This is a bit much all in one bite, so let's construct this full pipeline piece by piece.
Pipeline orchestrators need you to define some set of events that should trigger the pipeline to run. The first thing we want to define in a new pipeline is what triggers it.
In our case, we want this pipeline to be a continuous integration pipeline, which should trigger every time we push code to the repository. Other options include "trigger this pipeline when a pull request is opened on a branch," or "trigger this pipeline when someone opens an issue on our repository," or even "trigger this pipeline when I hit the manual trigger button."
Saving Files vs. Pushing Code
In all class content so far, we have been taking advantage of Codespaces' autosave feature. We have been saving our many edits to our profiles locally.
Pushing code, by contrast, means taking your saved code and officially adding it to your base repository's committed codebase, making it a permanent change. Codespaces won't do that automatically.
Let's give our pipeline a name and add a workflow trigger. Add the following into the pipeline.yml
file:
name: Demo Security Validation Gold Image Pipeline
+
+on:
+ push:
+ branches: [main] # trigger this action on any push to main branch
+
YAML Syntax
We will be heavily editing pipeline.yml
throughout this part of the class. Recall that YAML files like this are whitespace-delimited. If you hit confusing errors when we run these pipelines, always be sure to double-check your code lines up with the examples.
Why Is `[main]` in brackets?
The branches
 attribute in a workflow file can accept an array of branches that should trigger the pipeline when they receive a commit. We are only concerned with main
at present, so we wind up with '[main]
'.
Next, we need to define some kind of task to complete when the event triggers.
First, we'll define a job
, the logical group for our tasks. In our pipeline.yml
file, add:
gold-image
is an arbitrary name we gave this job. It would be more useful if we were running more than one.name
is a simple title for this job.runs-on
 declares what operating system we want our runner node to be. We picked Ubuntu (and we suggest you do too, to make sure the rest of the workflow commands work correctly).env
declares environment variables for use by any step of this job. We will go ahead and set a few variables for running InSpec later on: CHEF_LICENSE
 will automatically accept the license prompt when you run InSpec the first time so that we don't hang waiting for input!PROFILE
is set to the path of the InSpec profile we will use to test. This will make it easier to refer to the profile multiple times and still make it easy to swap out.Now that we have our job metadata in place, let's add an actual task for the runner to complete, which GitHub Actions refer to as steps -- a quick update on our runner node's dependencies (this shouldn't be strictly necessary, but it's always good to practice good dependency hygiene!). In our pipeline.yml
file, add:
Once we push our code, you can go to another tab in your browser, load up your personal code repository for the class content that you forked earlier, and check out the Actions tab to see your pipeline executing.
Note the little green checkmark next to your pipeline run. This indicates that the pipeline has finished running. You may also see a yellow circle to indicate that the pipeline has not completed yet, or a red X mark to indicate an error, depending on the status of your pipeline when you examine it.
If we click on the card for our pipeline run, we get more detail:
You can see some info on the triggered run, including a card showing the job that we defined earlier. Clicking it gives us a view of the step we've worked into our pipeline -- we can even see the stdout (terminal output) of running that step on the runner.
Congratulations, you've run a pipeline! Now we just need to make it do something useful for us.
It's up to you.
Some orchestration tools let you run pipelines locally, and in a real repo, you'd probably want to do this on a branch other than the main
one to keep it clean. But in practice it has been the authors' experience that everyone winds up simply creating dozens of commits to the repo to trigger the pipeline and watch for the next spot where it breaks. There's nothing wrong with doing this.
For example, consider how many failed pipelines the author had while designing the test pipeline for this class, and how many of them involve fixing simple typos...
pipeline.yml
after adding a job"}],"tab-id":"shell"},{title0:s(({value:a,isActive:t})=>[e("Adding a Job")]),title1:s(({value:a,isActive:t})=>[E,e(" after adding a job")]),tab0:s(({value:a,isActive:t})=>[A]),tab1:s(({value:a,isActive:t})=>[P]),_:1},8,["data"]),C,i(l,{id:"131",data:[{id:"Adding a Step"},{id:"pipeline.yml
after adding a step"}],"tab-id":"shell"},{title0:s(({value:a,isActive:t})=>[e("Adding a Step")]),title1:s(({value:a,isActive:t})=>[S,e(" after adding a step")]),tab0:s(({value:a,isActive:t})=>[N]),tab1:s(({value:a,isActive:t})=>[L]),_:1},8,["data"]),R,T,i(l,{id:"144",data:[{id:"Committing And Pushing Code"},{id:"Output of Pushing Code"}],"tab-id":"shell"},{title0:s(({value:a,isActive:t})=>[e("Committing And Pushing Code")]),title1:s(({value:a,isActive:t})=>[e("Output of Pushing Code")]),tab0:s(({value:a,isActive:t})=>[j]),tab1:s(({value:a,isActive:t})=>[F]),_:1}),O])}const V=u(v,[["render",G],["__file","09.html.vue"]]);export{V as default};
diff --git a/assets/09.html-TNAq4b_z.js b/assets/09.html-TNAq4b_z.js
new file mode 100644
index 000000000..2bbc128e9
--- /dev/null
+++ b/assets/09.html-TNAq4b_z.js
@@ -0,0 +1 @@
+const e=JSON.parse('{"key":"v-14735b9c","path":"/courses/user/09.html","title":"9. Visualize Results - Heimdall","lang":"en-US","frontmatter":{"order":9,"next":"10.md","title":"9. Visualize Results - Heimdall","author":"Aaron Lippold","headerDepth":3,"description":"9. Visualize - MITRE Heimdall Now we want to SEE our results in a more meaningful way! The Visualize Capability Navigate to the our online version of the Heimdall application, t...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/user/09.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"9. Visualize Results - Heimdall"}],["meta",{"property":"og:description","content":"9. Visualize - MITRE Heimdall Now we want to SEE our results in a more meaningful way! The Visualize Capability Navigate to the our online version of the Heimdall application, t..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"9. Visualize Results - Heimdall\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"9. 
Visualize - MITRE Heimdall","slug":"_9-visualize-mitre-heimdall","link":"#_9-visualize-mitre-heimdall","children":[{"level":3,"title":"9.1 Upload Results","slug":"_9-1-upload-results","link":"#_9-1-upload-results","children":[]},{"level":3,"title":"9.2 Visualize Results","slug":"_9-2-visualize-results","link":"#_9-2-visualize-results","children":[]},{"level":3,"title":"9.3 Explore Heimdall","slug":"_9-3-explore-heimdall","link":"#_9-3-explore-heimdall","children":[]}]}],"git":{},"readingTime":{"minutes":2.25,"words":674},"filePathRelative":"courses/user/09.md","autoDesc":true}');export{e as data};
diff --git a/assets/09.html-TYobHnIk.js b/assets/09.html-TYobHnIk.js
new file mode 100644
index 000000000..4a4666b2c
--- /dev/null
+++ b/assets/09.html-TYobHnIk.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-3ac158f3","path":"/courses/advanced/09.html","title":"9. GitHub Actions","lang":"en-US","frontmatter":{"order":9,"next":"10.md","title":"9. GitHub Actions","author":"Will Dower","headerDepth":3,"description":"GitHub Actions Let's create a GitHub Action workflow to define our pipeline. The Workflow file Pipeline orchestration tools are usually configured in a predefined workflow file,...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/advanced/09.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"9. GitHub Actions"}],["meta",{"property":"og:description","content":"GitHub Actions Let's create a GitHub Action workflow to define our pipeline. The Workflow file Pipeline orchestration tools are usually configured in a predefined workflow file,..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Will Dower"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"9. 
GitHub Actions\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Will Dower\\"}]}"]]},"headers":[{"level":2,"title":"GitHub Actions","slug":"github-actions","link":"#github-actions","children":[{"level":3,"title":"The Workflow file","slug":"the-workflow-file","link":"#the-workflow-file","children":[]},{"level":3,"title":"Workflow File - Complete Example","slug":"workflow-file-complete-example","link":"#workflow-file-complete-example","children":[]},{"level":3,"title":"Workflow Triggers","slug":"workflow-triggers","link":"#workflow-triggers","children":[]},{"level":3,"title":"Our First Step","slug":"our-first-step","link":"#our-first-step","children":[]},{"level":3,"title":"The Next Step","slug":"the-next-step","link":"#the-next-step","children":[]}]}],"git":{},"readingTime":{"minutes":5.98,"words":1793},"filePathRelative":"courses/advanced/09.md","autoDesc":true}`);export{e as data};
diff --git a/assets/09.html-Ti7khI6G.js b/assets/09.html-Ti7khI6G.js
new file mode 100644
index 000000000..856211d02
--- /dev/null
+++ b/assets/09.html-Ti7khI6G.js
@@ -0,0 +1 @@
+const t=JSON.parse(`{"key":"v-7c844629","path":"/courses/guidance/09.html","title":"9. Automated InSpec Testing","lang":"en-US","frontmatter":{"order":9,"next":"10.md","title":"9. Automated InSpec Testing","author":"Sumaa Sayed","headerDepth":3,"description":"9.1 Automated Validation Tests Now that we have written up some Check and Fix text, it's time to use one of Vulcan's other features -- the InSpec code pane. We've already used V...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/guidance/09.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"9. Automated InSpec Testing"}],["meta",{"property":"og:description","content":"9.1 Automated Validation Tests Now that we have written up some Check and Fix text, it's time to use one of Vulcan's other features -- the InSpec code pane. We've already used V..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:image","content":"https://mitre.github.io/saf-training/saf-training/"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"name":"twitter:card","content":"summary_large_image"}],["meta",{"name":"twitter:image:alt","content":"9. Automated InSpec Testing"}],["meta",{"property":"article:author","content":"Sumaa Sayed"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"9. 
Automated InSpec Testing\\",\\"image\\":[\\"https://mitre.github.io/saf-training/saf-training/\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Sumaa Sayed\\"}]}"]]},"headers":[{"level":2,"title":"9.1 Automated Validation Tests","slug":"_9-1-automated-validation-tests","link":"#_9-1-automated-validation-tests","children":[{"level":3,"title":"9.1.1 The InSpec Control Body","slug":"_9-1-1-the-inspec-control-body","link":"#_9-1-1-the-inspec-control-body","children":[]},{"level":3,"title":"9.1.2 Why am I writing test code inside Vulcan?","slug":"_9-1-2-why-am-i-writing-test-code-inside-vulcan","link":"#_9-1-2-why-am-i-writing-test-code-inside-vulcan","children":[]}]}],"git":{},"readingTime":{"minutes":2.21,"words":663},"filePathRelative":"courses/guidance/09.md","autoDesc":true}`);export{t as data};
diff --git a/assets/09.html-pI5nuU8N.js b/assets/09.html-pI5nuU8N.js
new file mode 100644
index 000000000..46e9c90b4
--- /dev/null
+++ b/assets/09.html-pI5nuU8N.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-2fd9c523","path":"/courses/beginner/09.html","title":"9. Viewing and Analyzing Results","lang":"en-US","frontmatter":{"order":9,"next":"10.md","title":"9. Viewing and Analyzing Results","author":"Aaron Lippold","headerDepth":3,"description":"Viewing and Analyzing Results We discussed using reporters in the last section to capture InSpec's output in convenient JSON files. JSON reports like these are a transport mediu...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/beginner/09.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"9. Viewing and Analyzing Results"}],["meta",{"property":"og:description","content":"Viewing and Analyzing Results We discussed using reporters in the last section to capture InSpec's output in convenient JSON files. JSON reports like these are a transport mediu..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"9. Viewing and Analyzing Results\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"Viewing and Analyzing Results","slug":"viewing-and-analyzing-results","link":"#viewing-and-analyzing-results","children":[{"level":3,"title":"Heimdall","slug":"heimdall","link":"#heimdall","children":[]},{"level":3,"title":"Heimdall Lite","slug":"heimdall-lite","link":"#heimdall-lite","children":[]}]}],"git":{},"readingTime":{"minutes":1.3,"words":390},"filePathRelative":"courses/beginner/09.md","autoDesc":true}`);export{e as data};
diff --git a/assets/09.html-qiGyptHL.js b/assets/09.html-qiGyptHL.js
new file mode 100644
index 000000000..90eef9dba
--- /dev/null
+++ b/assets/09.html-qiGyptHL.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-fad43882","path":"/courses/profile-dev-test/09.html","title":"Types of Profile Updates","lang":"en-US","frontmatter":{"order":9,"next":"10.md","title":"Types of Profile Updates","author":"Aaron Lippold","description":"When updating the profile, you'll be making one of three types of changes: 1. Patch Update: These frequent updates cover missing corner cases of testing for one or more benchmar...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/09.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Types of Profile Updates"}],["meta",{"property":"og:description","content":"When updating the profile, you'll be making one of three types of changes: 1. Patch Update: These frequent updates cover missing corner cases of testing for one or more benchmar..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Types of Profile Updates\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":3,"title":"Scope of the Update Patterns","slug":"scope-of-the-update-patterns","link":"#scope-of-the-update-patterns","children":[]}],"git":{},"readingTime":{"minutes":0.98,"words":293},"filePathRelative":"courses/profile-dev-test/09.md","autoDesc":true}`);export{e as data};
diff --git a/assets/10.html-2qHElRRB.js b/assets/10.html-2qHElRRB.js
new file mode 100644
index 000000000..965e215d8
--- /dev/null
+++ b/assets/10.html-2qHElRRB.js
@@ -0,0 +1,95 @@
+import{_ as d}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as c,o as p,c as u,b as o,w as s,d as n,e,f as i}from"./app-PAvzDPkc.js";const m={},v=i(`In addition to its own controls, an InSpec profile can leverage controls from one or more other InSpec profiles.
When a profile depends on controls from other profiles, it can be referred to as an "overlay" or "wrapper" profile. We'll use the term overlay profile in this section.
An overlay can include all, select specific, skip some, or modify controls it uses from the profiles it is depending on.
overlay=>operation: my_nginx_overlay
+e=>end: my_nginx
+
+e->overlay
+
To recap, here are the controls that are in the my_nginx
profile:
control 'nginx-version' do
+ impact 1.0
+ title 'NGINX version'
+ desc 'The required version of NGINX should be installed.'
+ describe nginx do
+ its('version') { should cmp >= input('nginx_version') }
+ end
+end
+
+control 'nginx-modules' do
+ impact 1.0
+ title 'NGINX modules'
+ desc 'The required NGINX modules should be installed.'
+ required_modules = input('nginx_modules')
+ describe nginx do
+ required_modules.each do |required_module|
+ its('modules') { should include required_module }
+ end
+ end
+end
+
+control 'nginx-conf-file' do
+ impact 1.0
+ title 'NGINX configuration file'
+ desc 'The NGINX config file should exist.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_file }
+ end
+end
+
+control 'nginx-conf-perms' do
+ impact 1.0
+ title 'NGINX configuration permissions'
+ desc 'The NGINX config file should be owned by root, be writable only by owner, and not be readable, writable, or executable by others.'
+ describe file('/etc/nginx/nginx.conf') do
+ it { should be_owned_by 'root' }
+ it { should be_grouped_into 'root' }
+ it { should_not be_readable.by('others') }
+ it { should_not be_writable.by('others') }
+ it { should_not be_executable.by('others') }
+ end
+end
+
+control 'nginx-shell-access' do
+ impact 1.0
+ title 'NGINX shell access'
+ desc 'The NGINX shell access should be restricted to admin users.'
+ describe users.shells(/bash/).usernames do
+ it { should be_in input('admin_users') }
+ end
+end
+
inspec init profile my_nginx_overlay
+
The terminal output should look like the following:
Create new profile at /<pwd>/my_nginx_overlay
+ * Create directory controls
+ * Create file controls/example.rb
+ * Create file inspec.yml
+ * Create file README.md
+
In this example, we will rename the example.rb
to overlay.rb
to avoid confusion.
tree my_nginx_overlay
+
Which should look like:
my_nginx_overlay
+ ├── README.md
+ ├── controls
+ │ └── overlay.rb
+ └── inspec.yml
+
+ 1 directory, 3 files
+
For a profile to use controls from another profile, the dependency needs to be included in the depends
section of the overlay's inspec.yml
file. For example, you can develop my_nginx_overlay
that uses controls from the my_nginx
profile. In this case, the depends
section of inspec.yml
of my_nginx_overlay
should list the name and location of my_nginx
. One way of declaring the dependency is:
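The original page renders this declaration as a code block; as a sketch, the depends section of the overlay's inspec.yml could look like the following (the relative path is an assumption for profiles that sit side by side on disk):

```yaml
# inspec.yml of my_nginx_overlay
name: my_nginx_overlay
title: InSpec overlay profile wrapping my_nginx
version: 0.1.0
depends:
  - name: my_nginx
    path: ../my_nginx   # assumed local location; git, url, or supermarket sources also work
```

InSpec accepts several source types in a depends entry, so the dependency does not have to live on the local filesystem.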
After defining the dependency in the inspec.yml
of my_nginx_overlay
, controls from my_nginx
are available to be used in the overlay. By using include_controls <profile>
in the overlay.rb
of the overlay profile, all controls from the named profile will be executed every time the overlay is executed. Below you can see an example of an overlay.rb
file in the controls
folder of the overlay.
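The overlay.rb example referred to here boils down to a single line of InSpec DSL:

```ruby
# controls/overlay.rb in my_nginx_overlay
include_controls 'my_nginx'   # pull in every control from the dependency profile
```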
In the example above, every time my_nginx_overlay
profile is executed, all the controls from my_nginx
profile are also executed. Therefore, the following controls would be executed for my_nginx_overlay
:
Controls | Executed |
---|---|
nginx-version | ✓ |
nginx-modules | ✓ |
nginx-conf-file | ✓ |
nginx-conf-perms | ✓ |
nginx-shell-access | ✓ |
What if you don't want to run one of the controls from the included profile? Luckily, it is not necessary to maintain a slightly-modified copy of the included profile just to delete a control. The skip_control
command tells InSpec not to run a particular control.
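A sketch of skip_control in use, matching the execution table that follows (every my_nginx control runs except nginx-conf-perms):

```ruby
# controls/overlay.rb -- include everything, then opt out of one control
include_controls 'my_nginx' do
  skip_control 'nginx-conf-perms'
end
```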
In the above example, all controls from my_nginx
profile will be executed, except for control nginx-conf-perms
, every time my_nginx_overlay
is executed. Therefore, the following controls will be executed for my_nginx_overlay
:
Controls | Executed |
---|---|
nginx-version | ✓ |
nginx-modules | ✓ |
nginx-conf-file | ✓ |
nginx-conf-perms | ✗ |
nginx-shell-access | ✓ |
If there are only a handful of controls that should be executed from an included profile, it’s not necessary to skip all the unneeded controls, or worse, copy/paste those controls bit-for-bit into your profile[1]. Instead, use the require_controls
command.
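A sketch of require_controls matching the execution table that follows (only the two named controls run):

```ruby
# controls/overlay.rb -- cherry-pick specific controls from the dependency
require_controls 'my_nginx' do
  control 'nginx-version'
  control 'nginx-modules'
end
```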
Whenever my_nginx_overlay
is executed, it will run only the controls from my_nginx
that are specified in the require_controls
block. In this case, the following controls would be executed:
Controls | Executed |
---|---|
nginx-version | ✓ |
nginx-modules | ✓ |
nginx-conf-file | ✗ |
nginx-conf-perms | ✗ |
nginx-shell-access | ✗ |
Controls nginx-conf-file
, nginx-conf-perms
, and nginx-shell-access
would not be executed, just as if they were manually skipped. This method of including specific controls ensures only the controls specified are executed.
Warning
If new controls are added to a later version of my_nginx
, they would not be executed unless explicitly required in this scenario.
Let’s say a particular control from an included profile should still run, but the impact level set in the control isn’t appropriate. When a control is included or required, it can also be modified!
',6),C=n("div",{class:"language-ruby line-numbers-mode","data-ext":"rb"},[n("pre",{class:"language-ruby"},[n("code",null,[e("include_controls "),n("span",{class:"token string-literal"},[n("span",{class:"token string"},"'my_nginx'")]),e(),n("span",{class:"token keyword"},"do"),e(` + control `),n("span",{class:"token string-literal"},[n("span",{class:"token string"},"'nginx-modules'")]),e(),n("span",{class:"token keyword"},"do"),e(` + impact `),n("span",{class:"token number"},"0.5"),e(` + `),n("span",{class:"token keyword"},"end"),e(` +`),n("span",{class:"token keyword"},"end"),e(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),S=n("div",{class:"language-ruby line-numbers-mode","data-ext":"rb"},[n("pre",{class:"language-ruby"},[n("code",null,[e("require_controls "),n("span",{class:"token string-literal"},[n("span",{class:"token string"},"'my_nginx'")]),e(),n("span",{class:"token keyword"},"do"),e(` + control `),n("span",{class:"token string-literal"},[n("span",{class:"token string"},"'nginx-modules'")]),e(),n("span",{class:"token keyword"},"do"),e(` + impact `),n("span",{class:"token number"},"0.5"),e(` + `),n("span",{class:"token keyword"},"end"),e(` + control `),n("span",{class:"token string-literal"},[n("span",{class:"token string"},"'nginx-conf-file'")]),e(` +`),n("span",{class:"token keyword"},"end"),e(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),T=i('In the above example, all included or required controls from my_nginx
profile are executed. However, should control nginx-modules
fail, it will be raised with an impact of 0.5
instead of the originally-intended impact of 1.0
.
Note
Any fields that you do not explicitly modify in an included control will not be changed from the baseline.
Therefore, you can import a control and only override a single field like the impact
while leaving the actual control code and the rest of the metadata tags untouched.
Let's poke around a few more examples of inheritance.
',4),E={href:"https://github.com/mitre/helloworld-web-baseline",target:"_blank",rel:"noopener noreferrer"},W={href:"https://github.com/mitre/sample-rhel8-overlay",target:"_blank",rel:"noopener noreferrer"},D={href:"https://github.com/mitre/sample-mysql-overlay",target:"_blank",rel:"noopener noreferrer"},G={href:"https://github.com/mitre/aws-rds-oracle-database-19c-cis-baseline",target:"_blank",rel:"noopener noreferrer"},X={class:"hint-container note"},M=n("p",{class:"hint-container-title"},"Cloud environment overlays",-1),O={href:"https://saf.mitre.org/libs/validation",target:"_blank",rel:"noopener noreferrer"},z=n("code",null,"ec2-user",-1),L=i('Copying and pasting controls from a profile, instead of creating an overlay, can lead to important updates not being reflected. Overlays keep the profile changes in sync as they pull the latest updates. ↩︎
We now have inspec exec
and the my_nginx
profile available in our pipeline. Now we need the image we're going to harden.
Luckily, the Ubuntu runner we are using already has the Docker Engine installed, so we can deploy a container easily. We will deploy the same container image we have been using in this class so far. We will also name it nginx
to keep things consistent, but recall that this container is running on a GitHub cloud runner, not inside your codespace like your local containers we've been using for prior classwork.
We'll also need to make sure that our test target has Python installed, since that's how Ansible will connect to it later to harden it.
(You didn't have to do that for your local NGINX container because the build-lab.sh
script did all that config for you.)
You may notice that the step that runs InSpec sets an attribute called continue-on-error
to true
. We'll discuss why we do that in the next section.
Where are we in the directory structure right now?!
Remember that we used the checkout
action earlier, so the pipeline is currently running inside the root of our repo as it exists on the runner system. That's why we can refer to files in this repo by local paths (like the profile repo itself, and the results
subdirectory).
We used the --reporter json
flag when we invoked InSpec, so we should now have a report file sitting on the runner. We want to be able to access that file -- both so that we can read it ourselves, and so that we can do some later processing on it in later jobs if we want to.
That's why we used upload-artifact
, another extremely common Action. This one makes whatever file or files you pass it available for download through the browser when we examine the pipeline run later, and also makes those files available to later jobs even if they take place on different runners in this workflow (by default, any files created by a runner do not persist when the workflow ends).
Let's do some brainstorming -- are there any other steps you'd like to insert into the pipeline? What else do you want to know about the profile or do with it?
pipeline.yml
after adding more steps"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Adding More Steps")]),title1:e(({value:a,isActive:t})=>[g,s(" after adding more steps")]),tab0:e(({value:a,isActive:t})=>[f]),tab1:e(({value:a,isActive:t})=>[_]),_:1},8,["data"]),n("p",null,[s("The first new step installs the InSpec executable using the install instructions for Ubuntu as given "),n("a",w,[s("here"),l(o)]),s(". Remember that GitHub gives us a brand-new runner node every time we execute the pipeline; if we don't install it and it isn't on the pre-installed software list, it won't be available!")]),x,n("p",null,[s('The next step ("PREP - Check out this repository") is our first one to use an Action. Actions are pre-packaged pipeline steps published to the '),n("a",I,[s("GitHub Marketplace"),l(o)]),s(". Any project or developer can publish an Action to the Marketplace as part of the GitHub Actions ecosystem. Most other orchestration tools for pipelines have a similar plugin system.")]),P,n("p",null,[s("This Action in particular is one of the most common -- "),n("a",A,[E,l(o)]),s(". If called with no other attributes attached to it, it simply checks out and changes directory into the repository where the workflow file lives to the runner that is currently executing the workflow. We need to do this to make sure we have access to InSpec profile you created earlier!")]),R,S,L,N,l(c,{id:"47",data:[{id:"Adding Lint Step"},{id:"pipeline.yml
after adding lint step"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Adding Lint Step")]),title1:e(({value:a,isActive:t})=>[D,s(" after adding lint step")]),tab0:e(({value:a,isActive:t})=>[C]),tab1:e(({value:a,isActive:t})=>[F]),_:1},8,["data"]),O,l(c,{id:"70",data:[{id:"Adding Deploy Steps"},{id:"pipeline.yml
after adding deploy steps"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Adding Deploy Steps")]),title1:e(({value:a,isActive:t})=>[H,s(" after adding deploy steps")]),tab0:e(({value:a,isActive:t})=>[G]),tab1:e(({value:a,isActive:t})=>[T]),_:1},8,["data"]),j,V,W,n("p",null,[s("In our case, we're going to borrow an open-source Ansible role for NGINX that is part of the "),n("a",Y,[s("SAF Hardening Library"),l(o)]),s(". If you took the "),l(u,{to:"/courses/user/"},{default:e(()=>[s("SAF User Class")]),_:1}),s(", you might recognize this role as what you ran manually during the "),l(u,{to:"/courses/user/10.html"},{default:e(()=>[s("Hardening")]),_:1}),s(" section of that class. Again, we are borrowing some of the steps from the lab setup script and executing them against our runner system, for convenience.")]),U,l(c,{id:"95",data:[{id:"Adding Harden Steps"},{id:"pipeline.yml
after adding hardening steps"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Adding Harden Steps")]),title1:e(({value:a,isActive:t})=>[$,s(" after adding hardening steps")]),tab0:e(({value:a,isActive:t})=>[q]),tab1:e(({value:a,isActive:t})=>[M]),_:1},8,["data"]),X,B,z,l(c,{id:"112",data:[{id:"Adding Validate Steps"},{id:"pipeline.yml
after adding validate steps"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Adding Validate Steps")]),title1:e(({value:a,isActive:t})=>[J,s(" after adding validate steps")]),tab0:e(({value:a,isActive:t})=>[K]),tab1:e(({value:a,isActive:t})=>[Q]),_:1},8,["data"]),Z])}const ln=r(m,[["render",nn],["__file","10.html.vue"]]);export{ln as default};
diff --git a/assets/10.html-UQrjWLRc.js b/assets/10.html-UQrjWLRc.js
new file mode 100644
index 000000000..b5fbe0f4f
--- /dev/null
+++ b/assets/10.html-UQrjWLRc.js
@@ -0,0 +1 @@
+import{_ as t}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as i,c as r,f as o}from"./app-PAvzDPkc.js";const a="/saf-training/assets/review_status-9yKCc7j9.png",n="/saf-training/assets/filling_out_request_for_review-JJUnYRBj.png",s="/saf-training/assets/r_and_c-hpGDg1Bl.png",e="/saf-training/assets/review_status_filter-0cOnnA6s.png",u={},l=o('With that, we've done two full requirements -- one "Applicable - Configurable" and one "Applicable - Inherently Meets." There are only 189 or so more requirements to go!
In a real project of this size, you will be part of a team of authors each taking a subset of these requirements to complete. However, in a project of any size, you will also be peer reviewing the content your colleagues write to ensure quality standards are met.
Let's flag our completed RHEL-09-000003 requirement as Ready for Review.
Note that we, as the primary author, are not able to approve our own requirement; the option is grayed out. All we can do is mark the requirement for review (or, as an admin, we can simply lock the control against further edits).
Note that the "Reviews & Comments" section on the right hand side of the Component view has updated.
Note also that the control is locked from further editing now. We can reverse this using the Review Status menu if we want.
If you want to conduct a peer review on another author's requirements, you can do so by filtering the requirements using the filter bar on the left side of the Component view.
If you enter a control that has been marked for review, and you are at least the role of Reviewer on the project, you will be able to:
Finally! We get to secure the software. After starting with a plan, then seeing the requirements and current state through validation and visualization, let's harden the component and revalidate it after the changes.
You could peruse this GitHub repository, including the README and inputs to find out more information, but for this training, we have put any preparation needed for running this hardening content into a pre-script.
Just like we saw some requirements for running an InSpec scan, there are also some requirements to run the hardening script on the NGINX container in your Codespaces. We are going to run another setup script for those.
In your Codespace terminal from your main workspace directory, run the following commands:
source ./install-nginxHardeningTools.sh
+
This command will make sure that the NGINX docker container has the required software dependencies such as Python to run the Ansible hardening content. This script also downloads the hardening content locally. Unlike the InSpec scan, we will run the hardening content from a local folder rather than from GitHub. Therefore, you should see the ansible-nginx-stigready-hardening
folder in your files when the script completes.
You should see the following results from the hardening script. If you run this hardening content multiple times, the numbers in the results may be different because the starting configuration will be different and the script will not have to change the same numbers of settings.
Note
Make sure you are in the ansible content's directory before running the following command. You can run the commandcd ansible-nginx-stigready-hardening
to enter the directory. That means your current working directory path will look something like /workspaces/saf-training-lab-environment/ansible-nginx-stigready-hardening
with variation if you named your repository differently in the lab setup.
Run this command:
ansible-playbook -i hosts.yml hardening-playbook.yml
+
To see the following results:
ansible-playbook -i hosts.yml hardening-playbook.yml
+[WARNING]: Ansible is being run in a world writable directory (/workspaces/saf-training-lab-environment/ansible-nginx-stigready-hardening), ignoring
+it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-
+dir
+
+PLAY [all] ******************************************************************************************************************************************
+
+TASK [Gathering Facts] ******************************************************************************************************************************
+ok: [docker]
+
+TASK [../ansible-nginx-stigready-hardening : Ensure the "/usr/sbin/nginx" binary is not worldwide read- or writeable] *******************************
+changed: [docker]
+
+...
+
+TASK [../ansible-nginx-stigready-hardening : Generate a '/C=US/O=U.S. Government/OU=DoD/CN=DoD' self-signed ssl certificate and key] ****************
+changed: [docker]
+
+TASK [../ansible-nginx-stigready-hardening : Ensure the private key is only readable by 'root'] *****************************************************
+changed: [docker]
+
+TASK [../ansible-nginx-stigready-hardening : Ensure the crt should only be readable by 'root'] ******************************************************
+changed: [docker]
+
+TASK [../ansible-nginx-stigready-hardening : Post Task] *********************************************************************************************
+changed: [docker]
+
+PLAY RECAP ******************************************************************************************************************************************
+docker : ok=38 changed=35 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+
You and your team might be wondering what 'done' means for a security control in your profile. Here are a few things to consider:
The 'is it done' litmus test is not solely determined by a perfect InSpec control or describe and expect blocks. It also heavily relies on you, the security automation engineer. Your experience, understanding of the platform you're working on, and the processes that you and your team have collectively agreed upon are all vital components.
Trust your established expected test outcomes, the guidance document, and the CI/CD testing framework. They will help you know that, to the best of your ability, you have captured the spirit of the testing required by the Benchmark.
We consider a control effectively tested when:
only_if
block vs 'if/else' logic when possible to ensure that the control is as clear, direct, and maintainable as possible from a coding perspective.'Passing as expected' is the most straightforward concept as it directly corresponds to the test conditions. When the test asserts a condition, it validates that assertion and reports it to the end user in a clear and concise manner.
We strive to ensure that when we report a 'pass', we do so in a language that is direct, simple, and easy to understand.
For example:
✔ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
+ ✔ All system security patches and updates are up to date and have been applied
+
Passes as Expected
also encompasses:
Fails as Expected
'Failing as expected' is a less straightforward concept as it doesn't always directly correspond to the test conditions. When the test asserts a condition and it fails, the reason for that failure should be communicated to the end user in a clear and concise manner.
However, as we all know, a test may fail for more than one reason. Sometimes, the reason for failure might be connected to human error, conditions on the system such as extra lines, files, or housekeeping on the system that was not done, etc. All these factors may need to be accounted for in your tests and perhaps captured in your output and 'reasons' for failure.
This is where the above 'best practices' come into play. You don't just test in optional 'pass' and 'fail' conditions only, but also 'dirty things up' a bit and make sure that your 'failure' cases are robust enough to handle the real world and semi-perfect conditions.
For example:
✔ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
+ x The following packages have security patches and need to be updated:
+ - package 1
+ - package 2
+ - package 3
+ - package 4
+
Fails as Expected also encompasses:
Communicates Effectively
Clear communication from your testing suite may require you to use a combination of approaches, but the extra time and effort is well worth it.
Here are some methods you can employ and things to consider:
expect vs describe statements in cases where you have multi-part or multi-phase test cases.
describe statements into multiple layers so that the final output to the end user is clear and concise.
Let's discuss what we can do with our components after we finish the requirement writing and reviewing.
We have a few ways of exporting the content from Vulcan once it is written.
Notice on the right-hand side that the overall project summary now reflects that we made edits.
From left to right, those buttons will:
A component can only be released if every requirement is locked. Requirements are only locked from further editing when they have undergone peer review. Therefore, releasing a component should only happen when all authorship is complete.
Importing a Released Component
Once a Component is released, it can be imported into a different Vulcan project as a building block.
One of our purposes with Vulcan is to avoid duplicating effort wherever possible; if the guidannce is already written, we want to be able to access it!
Unlike formal release, the Component does not need to be locked to be exported (in any format). You are not required to keep your editing workflow inside Vulcan (though we do, of course, recommend keeping your workflow inside Vulcan where you can).
Note
This means, for example, that we can export our InSpec profile regularly to test our code against test systems throughout the authorship process.
We have spent quite a bit of time discussing how to use Vulcan to make initial authorship easier. However, Vulcan also has features intended to make the maintenace of your guidance documentation easier.
This is why we have a Duplicate Component button available on the Component card, for example.
We will not be releasing the Component in this class because that would require us to at least have made a Status Determination for each requirement. We can, however, create a Duplicate of the component. Let's do that now.
Why are we duplicating the Component?
In a complex system with many software components, it may make sense to create multiple Components in one project, all of which have a shared source SRG.
For this example, we are duplicating the Component to mimic the release process.
This time, we want to mark this Component as Version 1, Release 2.
Let's open up the component and edit another one of the controls. Any will do.
Save the edit, just like you would in the first release.
Go back to the Project page for RHEL9 and click the Diff Viewer tab. You'll get a menu asking you which two Components you want to compare.
Note
Duplicating a Component automatically generates empty InSpec stubs inside the new component, which is why the Diff viewer believes that every requirement has a change.
How do I use this?
The Diff Viewer enables you to do several things.
For one, you can compare releases to tell at a glance what has changed in a piece of guidance, which traditionally would require tracking everything in a changelog manually.
For another, we can easily compare the Components we make with published STIGs.
At this point, we have gone over most of the processes you would use in Vulcan to develop your own security guidance content. To close us off, let's review the process for formally publishing a STIG.
',37),d=[h];function u(f,m){return t(),o("div",null,d)}const y=e(c,[["render",u],["__file","11.html.vue"]]);export{y as default}; diff --git a/assets/11.html-BW22Z0ue.js b/assets/11.html-BW22Z0ue.js new file mode 100644 index 000000000..7d08b332a --- /dev/null +++ b/assets/11.html-BW22Z0ue.js @@ -0,0 +1,172 @@ +import{_ as d}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as u,o as m,c as k,d as n,e as s,b as i,w as e,f as l}from"./app-PAvzDPkc.js";const r="/saf-training/assets/the_completed_pipeline_run-Wpx-ZqJy.png",h="/saf-training/assets/summary_data-nKGFtr64.png",b={},v=l('At this point we have a much more mature workflow file. We have one more activity we need to do -- verification, or checking that the output of our validation run met our expectations.
Note that "meeting our expectations" does not automatically mean that there are no failing tests. In many real-world use cases, security tests fail, but the software is still considered worth the risk to deploy because of mitigations for that risk, or perhaps the requirement is inapplicable due to the details of the deployment. With that said, we still want to run our tests to make sure we are continually collecting data; we just don't want our pipeline to halt if it finds a test that we were always expecting to fail.
By default, the InSpec executable returns a code 100 if any tests in a profile run fail. Pipeline orchestrators, like most software, interpret any non-zero return code as a serious failure, and will halt the pipeline run accordingly unless we explicitly tell it to ignore errors. This is why the "VALIDATE - Run InSpec" step has the continue-on-error: true
attribute specified.
Our goal is to complete our InSpec scan, collect the result as a report file, and then parse that file to determine if we met our own threshold of security. We can do this with the SAF CLI.
You can get more information on a specific topic by running:
saf [TOPIC] -h
+
Let's add two steps to our pipeline to use the SAF CLI to understand our InSpec scan results before we verify them against a threshold.
`,4),S=n("code",null,"pipeline.yml",-1),C=n("div",{class:"language-yaml line-numbers-mode","data-ext":"yml"},[n("pre",{class:"language-yaml"},[n("code",null,[n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" VERIFY "),n("span",{class:"token punctuation"},"-"),s(` Display our results summary + `),n("span",{class:"token key atrule"},"uses"),n("span",{class:"token punctuation"},":"),s(` mitre/saf_action@v1 + `),n("span",{class:"token key atrule"},"with"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"command_string"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token string"},'"view summary -i results/pipeline_run.json"'),s(` + +`),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" VERIFY "),n("span",{class:"token punctuation"},"-"),s(` Ensure the scan meets our results threshold + `),n("span",{class:"token key atrule"},"uses"),n("span",{class:"token punctuation"},":"),s(" mitre/saf_action@v1 "),n("span",{class:"token comment"},"# check if the pipeline passes our defined threshold"),s(` + `),n("span",{class:"token key atrule"},"with"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"command_string"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token string"},'"validate threshold -i results/pipeline_run.json -F threshold.yml"'),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),T=n("div",{class:"language-yaml 
line-numbers-mode","data-ext":"yml"},[n("pre",{class:"language-yaml"},[n("code",null,[n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(` Demo Security Validation Gold Image Pipeline + +`),n("span",{class:"token key atrule"},"on"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"push"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"branches"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token punctuation"},"["),s("main"),n("span",{class:"token punctuation"},"]"),s(),n("span",{class:"token comment"},"# trigger this action on any push to main branch"),s(` + +`),n("span",{class:"token key atrule"},"jobs"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"gold-image"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(` Gold Image NGINX + `),n("span",{class:"token key atrule"},"runs-on"),n("span",{class:"token punctuation"},":"),s(" ubuntu"),n("span",{class:"token punctuation"},"-"),n("span",{class:"token number"},"20.04"),s(` + `),n("span",{class:"token key atrule"},"env"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"CHEF_LICENSE"),n("span",{class:"token punctuation"},":"),s(" accept "),n("span",{class:"token comment"},"# so that we can use InSpec without manually accepting the license"),s(` + `),n("span",{class:"token key atrule"},"PROFILE"),n("span",{class:"token punctuation"},":"),s(" my_nginx "),n("span",{class:"token comment"},"# path to our profile"),s(` + `),n("span",{class:"token key atrule"},"steps"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" PREP "),n("span",{class:"token punctuation"},"-"),s(" Update runner 
"),n("span",{class:"token comment"},"# updating all dependencies is always a good start"),s(` + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(" sudo apt"),n("span",{class:"token punctuation"},"-"),s(`get update + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" PREP "),n("span",{class:"token punctuation"},"-"),s(` Install InSpec executable + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(" curl https"),n("span",{class:"token punctuation"},":"),s("//omnitruck.chef.io/install.sh "),n("span",{class:"token punctuation"},"|"),s(" sudo bash "),n("span",{class:"token punctuation"},"-"),s("s "),n("span",{class:"token punctuation"},"-"),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token punctuation"},"-"),s("P inspec "),n("span",{class:"token punctuation"},"-"),s(`v 5 + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" PREP "),n("span",{class:"token punctuation"},"-"),s(" Check out this repository "),n("span",{class:"token comment"},"# because that's where our profile is!"),s(` + `),n("span",{class:"token key atrule"},"uses"),n("span",{class:"token punctuation"},":"),s(` actions/checkout@v3 + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" LINT "),n("span",{class:"token punctuation"},"-"),s(" Run InSpec Check "),n("span",{class:"token comment"},"# double-check that we don't have any serious issues in our profile code"),s(` + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(` inspec check $PROFILE + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" DEPLOY 
"),n("span",{class:"token punctuation"},"-"),s(` Run a Docker container from nginx + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(" docker run "),n("span",{class:"token punctuation"},"-"),s("dit "),n("span",{class:"token punctuation"},"-"),n("span",{class:"token punctuation"},"-"),s("name nginx nginx"),n("span",{class:"token punctuation"},":"),s(`latest + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" DEPLOY "),n("span",{class:"token punctuation"},"-"),s(` Install Python for our nginx container + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token punctuation"},"|"),n("span",{class:"token scalar string"},` + docker exec nginx apt-get update -y + docker exec nginx apt-get install -y python3`),s(` + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" HARDEN "),n("span",{class:"token punctuation"},"-"),s(` Fetch Ansible role + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token punctuation"},"|"),n("span",{class:"token scalar string"},` + git clone --branch docker https://github.com/mitre/ansible-nginx-stigready-hardening.git || true + chmod 755 ansible-nginx-stigready-hardening`),s(` + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" HARDEN "),n("span",{class:"token punctuation"},"-"),s(` Fetch Ansible requirements + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(" ansible"),n("span",{class:"token punctuation"},"-"),s("galaxy install "),n("span",{class:"token punctuation"},"-"),s("r ansible"),n("span",{class:"token punctuation"},"-"),s("nginx"),n("span",{class:"token 
punctuation"},"-"),s("stigready"),n("span",{class:"token punctuation"},"-"),s(`hardening/requirements.yml + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" HARDEN "),n("span",{class:"token punctuation"},"-"),s(` Run Ansible hardening + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(" ansible"),n("span",{class:"token punctuation"},"-"),s("playbook "),n("span",{class:"token punctuation"},"-"),n("span",{class:"token punctuation"},"-"),s("inventory=nginx"),n("span",{class:"token punctuation"},","),s(),n("span",{class:"token punctuation"},"-"),n("span",{class:"token punctuation"},"-"),s("connection=docker ansible"),n("span",{class:"token punctuation"},"-"),s("nginx"),n("span",{class:"token punctuation"},"-"),s("stigready"),n("span",{class:"token punctuation"},"-"),s("hardening/hardening"),n("span",{class:"token punctuation"},"-"),s(`playbook.yml + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" VALIDATE "),n("span",{class:"token punctuation"},"-"),s(` Run InSpec + `),n("span",{class:"token key atrule"},"continue-on-error"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token boolean important"},"true"),s(" "),n("span",{class:"token comment"},"# we don't want to stop if our InSpec run finds failures, we want to continue and record the result"),s(` + `),n("span",{class:"token key atrule"},"run"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token punctuation"},"|"),n("span",{class:"token scalar string"},` + inspec exec $PROFILE \\ + --input-file=$PROFILE/inputs.yml \\ + --target docker://nginx \\ + --reporter cli json:results/pipeline_run.json`),s(` + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" VALIDATE 
"),n("span",{class:"token punctuation"},"-"),s(" Save Test Result JSON "),n("span",{class:"token comment"},"# save our results to the pipeline artifacts, even if the InSpec run found failing tests"),s(` + `),n("span",{class:"token key atrule"},"uses"),n("span",{class:"token punctuation"},":"),s(" actions/upload"),n("span",{class:"token punctuation"},"-"),s(`artifact@v3 + `),n("span",{class:"token key atrule"},"with"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"path"),n("span",{class:"token punctuation"},":"),s(` results/pipeline_run.json + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" VERIFY "),n("span",{class:"token punctuation"},"-"),s(` Display our results summary + `),n("span",{class:"token key atrule"},"uses"),n("span",{class:"token punctuation"},":"),s(` mitre/saf_action@v1 + `),n("span",{class:"token key atrule"},"with"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"command_string"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token string"},'"view summary -i results/pipeline_run.json"'),s(` + + `),n("span",{class:"token punctuation"},"-"),s(),n("span",{class:"token key atrule"},"name"),n("span",{class:"token punctuation"},":"),s(" VERIFY "),n("span",{class:"token punctuation"},"-"),s(` Ensure the scan meets our results threshold + `),n("span",{class:"token key atrule"},"uses"),n("span",{class:"token punctuation"},":"),s(" mitre/saf_action@v1 "),n("span",{class:"token comment"},"# check if the pipeline passes our defined threshold"),s(` + `),n("span",{class:"token key atrule"},"with"),n("span",{class:"token punctuation"},":"),s(` + `),n("span",{class:"token key atrule"},"command_string"),n("span",{class:"token punctuation"},":"),s(),n("span",{class:"token string"},'"validate threshold -i results/pipeline_run.json -F threshold.yml"'),s(` 
+`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"l
ine-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),F=n("p",null,"A few things to note here:",-1),L={href:"https://github.com/mitre/saf_action",target:"_blank",rel:"noopener noreferrer"},E=n("li",null,[s("We added the "),n("code",null,"summary"),s(" step because it will print us a concise summary inside the pipeline job view itself. That command takes one file argument: the results file we want to summarize.")],-1),R=n("li",null,[s("The "),n("code",null,"validate threshold"),s(" command, however, needs two files -- one is our report file as usual, and the other is a "),n("strong",null,"threshold file"),s(".")],-1),P=l(`Threshold files are what we use to define what "passing" means for our pipeline, since, as we said earlier, it's more complicated than failing the pipeline on a failed test.
Consider the following sample threshold file:
# threshold.yml file
+compliance:
+ min: 80
+passed:
+ total:
+ min: 1
+failed:
+ total:
+ max: 2
+
This file specifies that we require a minimum of 80% of the tests to pass. We also specify that at least one of them should pass, and that at most two of them can fail.
`,5),V={class:"hint-container info"},N=n("p",{class:"hint-container-title"},"Threshold Files Options",-1),O={href:"https://github.com/mitre/saf/wiki/Validation-with-Thresholds",target:"_blank",rel:"noopener noreferrer"},D=n("p",null,[n("em",null,"NOTE: You can name the threshold file something else or put it in a different location. We specify the name and location only for convenience.")],-1),j=l(`This is a sample pipeline, so we are not too worried about being very stringent. For now, let's settle for running the pipeline with no errors (that is, as long as each test runs, we do not care if it passed or failed, but a source code error should still fail the pipeline).
Create a new file called threshold.yml
in the main directory to specify the threshold for acceptable test results:
error:
+ total:
+ max: 0
+
How could we change this threshold file to ensure that the pipeline run will fail?
And with that, we have a complete pipeline file. Let's commit our changes and see what happens.
`,5),q=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[n("span",{class:"token function"},"git"),s(),n("span",{class:"token function"},"add"),s(` .github +`),n("span",{class:"token function"},"git"),s(" commit "),n("span",{class:"token parameter variable"},"-s"),s(),n("span",{class:"token parameter variable"},"-m"),s(),n("span",{class:"token string"},'"finishing the pipeline"'),s(` +`),n("span",{class:"token function"},"git"),s(` push origin main +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),H=n("div",{class:"language-bash line-numbers-mode","data-ext":"sh"},[n("pre",{class:"language-bash"},[n("code",null,[s("$"),n("span",{class:"token operator"},">"),s(),n("span",{class:"token function"},"git"),s(),n("span",{class:"token function"},"add"),s(),n("span",{class:"token builtin class-name"},"."),s(` +$`),n("span",{class:"token operator"},">"),s(),n("span",{class:"token function"},"git"),s(" commit "),n("span",{class:"token parameter variable"},"-s"),s(),n("span",{class:"token parameter variable"},"-m"),s(),n("span",{class:"token string"},'"finishing the pipeline"'),s(` +`),n("span",{class:"token punctuation"},"["),s("main e796abd"),n("span",{class:"token punctuation"},"]"),s(` finishing the pipeline + `),n("span",{class:"token number"},"2"),s(" files changed, "),n("span",{class:"token number"},"14"),s(" insertions"),n("span",{class:"token punctuation"},"("),s("+"),n("span",{class:"token punctuation"},")"),s(", "),n("span",{class:"token number"},"1"),s(" deletion"),n("span",{class:"token punctuation"},"("),s("-"),n("span",{class:"token punctuation"},")"),s(` + create mode `),n("span",{class:"token number"},"100644"),s(` threshold.yml +$`),n("span",{class:"token operator"},">"),s(),n("span",{class:"token function"},"git"),s(` push origin main +Enumerating objects: `),n("span",{class:"token 
number"},"10"),s(`, done. +Counting objects: `),n("span",{class:"token number"},"100"),s("% "),n("span",{class:"token punctuation"},"("),n("span",{class:"token number"},"10"),s("/10"),n("span",{class:"token punctuation"},")"),s(`, done. +Delta compression using up to `),n("span",{class:"token number"},"2"),s(` threads +Compressing objects: `),n("span",{class:"token number"},"100"),s("% "),n("span",{class:"token punctuation"},"("),n("span",{class:"token number"},"3"),s("/3"),n("span",{class:"token punctuation"},")"),s(`, done. +Writing objects: `),n("span",{class:"token number"},"100"),s("% "),n("span",{class:"token punctuation"},"("),n("span",{class:"token number"},"6"),s("/6"),n("span",{class:"token punctuation"},")"),s(", "),n("span",{class:"token number"},"720"),s(" bytes "),n("span",{class:"token operator"},"|"),s(),n("span",{class:"token number"},"720.00"),s(` KiB/s, done. +Total `),n("span",{class:"token number"},"6"),s(),n("span",{class:"token punctuation"},"("),s("delta "),n("span",{class:"token number"},"2"),n("span",{class:"token punctuation"},")"),s(", reused "),n("span",{class:"token number"},"1"),s(),n("span",{class:"token punctuation"},"("),s("delta "),n("span",{class:"token number"},"0"),n("span",{class:"token punctuation"},")"),s(", pack-reused "),n("span",{class:"token number"},"0"),s(` +remote: Resolving deltas: `),n("span",{class:"token number"},"100"),s("% "),n("span",{class:"token punctuation"},"("),n("span",{class:"token number"},"2"),s("/2"),n("span",{class:"token punctuation"},")"),s(", completed with "),n("span",{class:"token number"},"2"),s(),n("span",{class:"token builtin class-name"},"local"),s(` objects. 
+To https://github.com/wdower/saf-training-lab-environment + c4d9c67`),n("span",{class:"token punctuation"},".."),s("e796abd main -"),n("span",{class:"token operator"},">"),s(` main +$`),n("span",{class:"token operator"},">"),s(` +`)])]),n("div",{class:"line-numbers","aria-hidden":"true"},[n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"}),n("div",{class:"line-number"})])],-1),W=l('Let's hop back to our browser and take a look at the output:
There we go! All validation tests passed!
Note in the SAF CLI Summary step, we get a simple YAML output summary of the InSpec scan:
We see five critical tests (remember how we set them all to impact 1.0
?) passing, and no failures:
- profileName: my_nginx
+ resultSets:
+ - pipeline_run.json
+ compliance: 100
+ passed:
+ critical: 5
+ high: 0
+ medium: 0
+ low: 0
+ total: 5
+ failed:
+ critical: 0
+ high: 0
+ medium: 0
+ low: 0
+ total: 0
+ skipped:
+ critical: 0
+ high: 0
+ medium: 0
+ low: 0
+ total: 0
+ error:
+ critical: 0
+ high: 0
+ medium: 0
+ low: 0
+ total: 0
+ no_impact:
+ none: 0
+ total: 0
+
Note also that our test report is avaiable as an artifact from the overall pipeline run summary view now:
From here, we can download that file and drop it off in somehting like Heimdall or feed into some other security process at our leisure (or we can add a pipeline step to do that for us!).
In a real use case, if our pipeline passed, we would next save our bonafide hardened image to a secure registry where it could be distributed to developers. If the pipeline did not pass, we would have already collected data describing why, in the form of InSpec scan reports that we save as artifacts.
',11);function G(M,Y){const o=u("ExternalLinkIcon"),p=u("RouterLink"),c=u("CodeTabs");return m(),k("div",null,[v,n("p",null,[s("The "),n("a",f,[s("SAF CLI"),i(o)]),s(' is one the tool that the SAF supports to help automate security validation. It is our "kitchen-sink" utility for pipelines. If you took the '),i(p,{to:"/courses/user/"},{default:e(()=>[s("SAF User Class")]),_:1}),s(", you are already familiar with the SAF CLI's "),i(p,{to:"/courses/user/12.html"},{default:e(()=>[s("attestation")]),_:1}),s(" function.")]),y,g,n("p",null,[s("Some SAF CLI capabilities are listed in this diagram, but you can see all of them on the "),n("a",w,[s("SAF CLI documentation"),i(o)]),s(".")]),_,i(c,{id:"33",data:[{id:"Command"},{id:"Output"}]},{title0:e(({value:a,isActive:t})=>[s("Command")]),title1:e(({value:a,isActive:t})=>[s("Output")]),tab0:e(({value:a,isActive:t})=>[x]),tab1:e(({value:a,isActive:t})=>[A]),_:1}),I,i(c,{id:"51",data:[{id:"Adding Verify Steps"},{id:"pipeline.yml
after adding verify steps"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Adding Verify Steps")]),title1:e(({value:a,isActive:t})=>[S,s(" after adding verify steps")]),tab0:e(({value:a,isActive:t})=>[C]),tab1:e(({value:a,isActive:t})=>[T]),_:1},8,["data"]),F,n("ul",null,[n("li",null,[s("Both steps are using the "),n("a",L,[s("SAF CLI GitHub Action"),i(o)]),s(". This way, we don't need to install it directly on the runner; we can just pass in the command string.")]),E,R]),P,n("div",V,[N,n("p",null,[s("To make more specific or detailed thresholds, check out "),n("a",O,[s("this documentation on generating threshold files"),i(o)]),s(".")]),D]),j,i(c,{id:"112",data:[{id:"Committing And Pushing Code"},{id:"Output of Pushing Code"}],"tab-id":"shell"},{title0:e(({value:a,isActive:t})=>[s("Committing And Pushing Code")]),title1:e(({value:a,isActive:t})=>[s("Output of Pushing Code")]),tab0:e(({value:a,isActive:t})=>[q]),tab1:e(({value:a,isActive:t})=>[H]),_:1}),W])}const B=d(b,[["render",G],["__file","11.html.vue"]]);export{B as default};
diff --git a/assets/11.html-KxdFJYc_.js b/assets/11.html-KxdFJYc_.js
new file mode 100644
index 000000000..085542af7
--- /dev/null
+++ b/assets/11.html-KxdFJYc_.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-57193b6c","path":"/courses/beginner/11.html","title":"11. From STIG to Profile","lang":"en-US","frontmatter":{"order":11,"next":"12.md","title":"11. From STIG to Profile","author":"Aaron Lippold","headerDepth":3,"description":"From STIG to Profile You have seen in some of our examples in this class that a robust profile's controls will include a large number of metadata tags: InSpec control with many ...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/beginner/11.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"11. From STIG to Profile"}],["meta",{"property":"og:description","content":"From STIG to Profile You have seen in some of our examples in this class that a robust profile's controls will include a large number of metadata tags: InSpec control with many ..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"11. From STIG to Profile\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"From STIG to Profile","slug":"from-stig-to-profile","link":"#from-stig-to-profile","children":[{"level":3,"title":"How to Get the Pre-made Profile","slug":"how-to-get-the-pre-made-profile","link":"#how-to-get-the-pre-made-profile","children":[]},{"level":3,"title":"Example 'Stub' Control SV-230502","slug":"example-stub-control-sv-230502","link":"#example-stub-control-sv-230502","children":[]}]}],"git":{},"readingTime":{"minutes":10.71,"words":3213},"filePathRelative":"courses/beginner/11.md","autoDesc":true}`);export{e as data};
diff --git a/assets/11.html-QqaRwBGl.js b/assets/11.html-QqaRwBGl.js
new file mode 100644
index 000000000..4f1e451bd
--- /dev/null
+++ b/assets/11.html-QqaRwBGl.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-3bb2d1e5","path":"/courses/user/11.html","title":"11. Comparing Results","lang":"en-US","frontmatter":{"order":11,"next":"12.md","title":"11. Comparing Results","author":"Emily Rodriguez","headerDepth":3,"description":"11. Comparing Results 11.1 Validate the software after hardening Now that we have hardened the software, we need to run InSpec again to see the results. Let's change directories...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/user/11.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"11. Comparing Results"}],["meta",{"property":"og:description","content":"11. Comparing Results 11.1 Validate the software after hardening Now that we have hardened the software, we need to run InSpec again to see the results. Let's change directories..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Emily Rodriguez"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"11. Comparing Results\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Emily Rodriguez\\"}]}"]]},"headers":[{"level":2,"title":"11. 
Comparing Results","slug":"_11-comparing-results","link":"#_11-comparing-results","children":[{"level":3,"title":"11.1 Validate the software after hardening","slug":"_11-1-validate-the-software-after-hardening","link":"#_11-1-validate-the-software-after-hardening","children":[]},{"level":3,"title":"11.2 CLI Results","slug":"_11-2-cli-results","link":"#_11-2-cli-results","children":[]},{"level":3,"title":"11.3 Download the Results File","slug":"_11-3-download-the-results-file","link":"#_11-3-download-the-results-file","children":[]},{"level":3,"title":"11.4 Visualize the Results in Heimdall","slug":"_11-4-visualize-the-results-in-heimdall","link":"#_11-4-visualize-the-results-in-heimdall","children":[]}]}],"git":{},"readingTime":{"minutes":4.22,"words":1265},"filePathRelative":"courses/user/11.md","autoDesc":true}`);export{e as data};
diff --git a/assets/11.html-kkXZOg1S.js b/assets/11.html-kkXZOg1S.js
new file mode 100644
index 000000000..e0d585d30
--- /dev/null
+++ b/assets/11.html-kkXZOg1S.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-b878871c","path":"/courses/guidance/11.html","title":"11. Exporting Your Content","lang":"en-US","frontmatter":{"order":11,"next":"12.md","title":"11. Exporting Your Content","author":"Will Dower","headerDepth":3,"description":"11.1 Exporting Your Content At this point, we have tailored two requirements from the original SRG into STIG-ready content, and discussed how to peer review them. Let's discuss ...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/guidance/11.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"11. Exporting Your Content"}],["meta",{"property":"og:description","content":"11.1 Exporting Your Content At this point, we have tailored two requirements from the original SRG into STIG-ready content, and discussed how to peer review them. Let's discuss ..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:image","content":"https://mitre.github.io/saf-training/saf-training/"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"name":"twitter:card","content":"summary_large_image"}],["meta",{"name":"twitter:image:alt","content":"11. Exporting Your Content"}],["meta",{"property":"article:author","content":"Will Dower"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"11. 
Exporting Your Content\\",\\"image\\":[\\"https://mitre.github.io/saf-training/saf-training/\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Will Dower\\"}]}"]]},"headers":[{"level":2,"title":"11.1 Exporting Your Content","slug":"_11-1-exporting-your-content","link":"#_11-1-exporting-your-content","children":[]},{"level":2,"title":"11.2 Exporting the Guidance Document","slug":"_11-2-exporting-the-guidance-document","link":"#_11-2-exporting-the-guidance-document","children":[{"level":3,"title":"11.2.1 Notes on Releasing","slug":"_11-2-1-notes-on-releasing","link":"#_11-2-1-notes-on-releasing","children":[]},{"level":3,"title":"11.2.2 Notes on Exporting","slug":"_11-2-2-notes-on-exporting","link":"#_11-2-2-notes-on-exporting","children":[]}]},{"level":2,"title":"11.3 Diff Viewer","slug":"_11-3-diff-viewer","link":"#_11-3-diff-viewer","children":[{"level":3,"title":"11.3.1 Creating a Different Component","slug":"_11-3-1-creating-a-different-component","link":"#_11-3-1-creating-a-different-component","children":[]}]}],"git":{},"readingTime":{"minutes":2.78,"words":834},"filePathRelative":"courses/guidance/11.md","autoDesc":true}`);export{e as data};
diff --git a/assets/11.html-nXdLMfTA.js b/assets/11.html-nXdLMfTA.js
new file mode 100644
index 000000000..121115f12
--- /dev/null
+++ b/assets/11.html-nXdLMfTA.js
@@ -0,0 +1,65 @@
+import{_ as t}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as o,o as l,c as r,d as e,e as n,b as a,f as i}from"./app-PAvzDPkc.js";const d="/saf-training/assets/Codespaces_Download_Harden_Results-Hq3hWz77.png",c="/saf-training/assets/Heimdall_Select_Menu-S9wQZY4w.png",p="/saf-training/assets/Heimdall_Click_ComparisonView-XvsXjHbd.png",u={},h=i(`Now that we have hardened the software, we need to run InSpec again to see the results.
Let's change directories to get back to the root directory.
cd /workspaces/saf-training-lab-environment/
+
Now, rerun the InSpec scan with a different file name by changing the --reporter
to have a new file name indicating that these are the hardened results like this: --reporter cli json:./results/nginx_hardened_results.json
.
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx --reporter cli json:./results/nginx_hardened_results.json --input-file inputs.yml
+
After running the command, you should see different results than when we ran the vanilla InSpec scan.
inspec exec https://github.com/mitre/nginx-stigready-baseline -t docker://nginx --reporter cli json:./results/nginx_hardened_results.json --input-file inputs.yml
+[2022-09-26T12:33:00+00:00] WARN: URL target https://github.com/mitre/nginx-stigready-baseline transformed to https://github.com/mitre/nginx-stigready-baseline/archive/master.tar.gz. Consider using the git fetcher
+...
+ ↺ V-56019: An NGINX web server utilizing mobile code must meet DoD-defined mobile code
+ requirements.
+ ↺ This check is NA because NGINX does not implement mobile code.
+ ↺ V-56021: The NGINX web server must invalidate session identifiers upon hosted
+ application user logout or other session termination.
+ ↺ This test requires a Manual Review: Verify it invalidates session identifiers when a
+ session is terminated by reviewing the NGINX documentation.
+ ↺ V-56025: Cookies exchanged between the NGINX web server and client, such as session
+ cookies, must have security settings that disallow cookie access outside the
+ originating web server and hosted application.
+ ↺ This check is NA because the proxy_cookie_path directive is not configured.
+ ✔ V-56027: The web server must only accept client certificates issued by DoD PKI
+ or DoD-approved PKI Certification Authorities (CAs).
+ ✔ [["/etc/ssl/nginx-selfsigned.pem"]] is expected not to be nil
+ ✔ x509_certificate /etc/ssl/nginx-selfsigned.pem is expected not to be nil
+ ✔ x509_certificate /etc/ssl/nginx-selfsigned.pem subject.C is expected to cmp == "US"
+ ✔ x509_certificate /etc/ssl/nginx-selfsigned.pem subject.O is expected to cmp == "U.S. Government"
+ ✔ DoD is expected to be in "DoD" and "ECA"
+ ↺ V-56029: The NGINX web server must augment re-creation to a stable and known
+ baseline.
+ ↺ This test requires a Manual Review: Interview the SA and ask for documentation on the
+ disaster recovery methods for the NGINX web server in the event of the necessity for rollback.
+ ↺ V-56031: The NGINX web server must encrypt user identifiers and passwords.
+ ↺ This check is NA because NGINX does not manage authentication.
+ ✔ V-56033: The web server must install security-relevant software updates within
+ the configured time period directed by an authoritative source (e.g., IAVM,
+ CTOs, DTMs, and STIGs).
+ ✔ NGINX version v1.23.1 installed is not more then one patch level behind v1.23.0 is expected to cmp >= "1.23.0"
+ ✔ NGINX version v1.23.1 installed is greater then or equal to the organization approved version v1.23.1 is expected to cmp >= "1.23.1"
+ ✔ V-56035: The NGINX web server must display a default hosted application web page, not
+ a directory listing, when a requested web page cannot be found.
+ ✔ The root directory /usr/share/nginx/html should include the default index.html file.
+ ✔ V-61353: The web server must remove all export ciphers to protect the
+ confidentiality and integrity of transmitted information.
+ ✔ The ssl_prefer_server_cipher should be set to on.
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+ ✔ Each cipher found in configuration should be included in the list of ciphers approved to encrypt data
+
+
+Profile Summary: 62 successful controls, 3 control failures, 24 controls skipped
+Test Summary: 303 successful, 3 failures, 24 skipped
+
As you did before, download the results file.
Take some time to explore the hardened results. Filter through different statuses, check out the alignment to NIST 800-53 controls, and more!
Another valuable view for monitoring changes and showing results is the comparison view.
You could use the "Heimdall with backend" version of the application and upload security results at regular intervals to track changes over time. Two graphs show compliance over time and the number of failed tests by severity over time.
',6);function N(T,E){const s=o("ExternalLinkIcon");return l(),r("div",null,[h,e("p",null,[n("If you reopened "),e("a",v,[n("Heimdall"),a(s)]),n(" to upload your "),m,n(", then there will only be one file loaded. However, if you uploaded the results to the same instance of Heimdall that you had open before, you will now see two sets of results - your vanilla results and hardened results. You can click the menu on the top left to see what files are loaded and select only those you wish to see. In this case, only select the hardened results so we can look more at those.")]),b,e("details",f,[k,e("p",null,[n("Throughout this class, we are using the "),e("a",g,[n("Heimdall-lite"),a(s)]),n(" version of the Heimdall application. However, many organizations choose to deploy Heimdall with a backend (you can see a demo version "),e("a",w,[n("here"),a(s)]),n("), in other words, with a server to store data. This requires more setup than just opening up Heimdall-lite in the webpage; however, you can:")]),y,e("p",null,[n("You can find out more details on the difference between the two versions of this application in the "),e("a",_,[n("Heimdall README"),a(s)]),n(".")])]),x])}const H=t(u,[["render",N],["__file","11.html.vue"]]);export{H as default}; diff --git a/assets/11.html-quYRa6mK.js b/assets/11.html-quYRa6mK.js new file mode 100644 index 000000000..3a97243fb --- /dev/null +++ b/assets/11.html-quYRa6mK.js @@ -0,0 +1 @@ +import{_ as e}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as t,c as n,f as a}from"./app-PAvzDPkc.js";const i={},r=a("When updating Benchmark Profiles, adhere to these key principles to maintain alignment with the original Guidance Documents:
Maintain Version Integrity: Never merge new requirements into older benchmark branches. This will create a 'mixed baseline' that doesn't align with any specific guidance document. Benchmarks, STIGs, and Guidance Documents form a 'proper subset' - they should be treated as 'all or nothing'. Mixing requirements from different versions can invalidate the concept of 'testing to a known benchmark'.
Benchmarks are a Complete Set of Requirements: A Security Benchmark is 'complete and valid' only when all requirements for a specific Release or Major Version are met. Unlike traditional software projects, features and capabilities cannot be incrementally added. A Security Benchmark and its corresponding InSpec Profile are valid only within the scope of a specific 'Release' of that Benchmark.
Release Readiness Is Predefined: A Benchmark is considered 'ready for release' when it meets the expected thresholds, hardening, and validation results. Don't be overwhelmed by the multitude of changes across the files. Instead, focus on the specific requirement you are working on. Understand its expected failure and success states on each of the target testing platforms. This approach prevents you from being overwhelmed and provides solid pivot points as you work through the implementation of the automated tests for each requirement and its 'contexts'.
Use Vendor-Managed Standard Releases: When setting up a test suite, prioritize using vendor-managed standard releases for software installations and baseline configurations. This should be the starting point for both 'vanilla' and 'hardening' workflows. This approach ensures that your initial and ongoing testing, hardening, and validation closely mirror the real-world usage scenarios of your end-users.
By adhering to these principles, you ensure that your updates to Benchmark Profiles are consistent, accurate, and aligned with the original guidance documents.
",3),s=[r];function o(l,c){return t(),n("div",null,s)}const p=e(i,[["render",o],["__file","11.html.vue"]]);export{p as default}; diff --git a/assets/11.html-rnOvxXZY.js b/assets/11.html-rnOvxXZY.js new file mode 100644 index 000000000..f23a01fcf --- /dev/null +++ b/assets/11.html-rnOvxXZY.js @@ -0,0 +1,560 @@ +import{_ as u}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as o,o as p,c as d,d as s,e as n,b as t,w as e,f as c}from"./app-PAvzDPkc.js";const m="/saf-training/assets/Download_STIG_Viewer-zAPfTvLY.png",h="/saf-training/assets/Download_STIG-N5yFp_SQ.png",v={},k=s("h2",{id:"from-stig-to-profile",tabindex:"-1"},[s("a",{class:"header-anchor",href:"#from-stig-to-profile","aria-hidden":"true"},"#"),n(" From STIG to Profile")],-1),S=s("p",null,"You have seen in some of our examples in this class that a robust profile's controls will include a large number of metadata tags:",-1),V={class:"hint-container details"},g=s("summary",null,"InSpec control with many STIG-related tags",-1),f={href:"https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/blob/main/controls/SV-204392.rb",target:"_blank",rel:"noopener noreferrer"},_=c(`control 'SV-204392' do
+ title 'The Red Hat Enterprise Linux operating system must be configured so that the file permissions, ownership,
+ and group membership of system files and commands match the vendor values.'
+ desc 'Discretionary access control is weakened if a user or group has access permissions to system files and
+ directories greater than the default.'
+ desc 'check', ...
+ desc 'fix', ...
+ impact 0.7
+ tag legacy: ['V-71849', 'SV-86473']
+ tag severity: 'high'
+ tag gtitle: 'SRG-OS-000257-GPOS-00098'
+ tag satisfies: ['SRG-OS-000257-GPOS-00098', 'SRG-OS-000278-GPOS-00108']
+ tag gid: 'V-204392'
+ tag rid: 'SV-204392r880752_rule'
+ tag stig_id: 'RHEL-07-010010'
+ tag fix_id: 'F-36302r880751_fix'
+ tag cci: ['CCI-001494', 'CCI-001496', 'CCI-002165', 'CCI-002235']
+ tag nist: ['AU-9', 'AU-9 (3)', 'AC-3 (4)', 'AC-6 (10)']
+ tag subsystems: ['permissions', 'package', 'rpm']
+ tag 'host'
+ tag 'container'
+
+ describe the_actual_test do # the actual describe block appears on line 54 of this control!
+ ...
+ end
+end
+
(The RHEL8 STIG is at version 1, release 5 at time of writing, but may have been updated by the time you downloaded. This will not affect how we use the STIG in this class.)
Timesaver Ahead!
We already converted the XCCDF STIG Benchmark into a starter profile using the saf generate xccdf_benchmark2inspec_stub
command using the correct flags, mapping file and other options. In a moment we will show you how to grab our pre-made profile that we generated with the SAF CLI.
wget
"},{id:"Output"}],"tab-id":"shell"},{title0:e(({value:l,isActive:a})=>[n("Fetching the pre-made profile with "),z]),title1:e(({value:l,isActive:a})=>[n("Output")]),tab0:e(({value:l,isActive:a})=>[B]),tab1:e(({value:l,isActive:a})=>[$]),_:1}),U,t(i,{id:"75",data:[{id:"Uncompressing the profile"},{id:"Output"}],"tab-id":"shell"},{title0:e(({value:l,isActive:a})=>[n("Uncompressing the profile")]),title1:e(({value:l,isActive:a})=>[n("Output")]),tab0:e(({value:l,isActive:a})=>[M]),tab1:e(({value:l,isActive:a})=>[W]),_:1}),K,Y,t(i,{id:"89",data:[{id:"Stub Generated InSpec Control"},{id:"Completed InSpec Control"}],"tab-id":"shell"},{title0:e(({value:l,isActive:a})=>[n("Stub Generated InSpec Control")]),title1:e(({value:l,isActive:a})=>[n("Completed InSpec Control")]),tab0:e(({value:l,isActive:a})=>[j]),tab1:e(({value:l,isActive:a})=>[J]),_:1}),Q,s("div",Z,[ss,s("p",null,[n("From the real "),s("a",ns,[n("MITRE SAF RHEL8 InSpec profile"),t(r)]),n(". Note that the control accounts for a few more edge cases than what we've done in this class, but it's still recognizably just a bunch of "),es,n(" and "),ls,n(" wrapped in "),as,n(" blocks.")])]),ts,s("div",rs,[is,s("p",null,[n("For more background on STIGs, see the "),t(b,{to:"/courses/guidance/03.html"},{default:e(()=>[n("SAF Guidance content")]),_:1}),n(".")])])])}const ds=u(v,[["render",os],["__file","11.html.vue"]]);export{ds as default};
diff --git a/assets/11.html-skD8zwiy.js b/assets/11.html-skD8zwiy.js
new file mode 100644
index 000000000..e561429d0
--- /dev/null
+++ b/assets/11.html-skD8zwiy.js
@@ -0,0 +1 @@
+const e=JSON.parse('{"key":"v-6200cf3c","path":"/courses/advanced/11.html","title":"11. Verifying Results With The SAF CLI","lang":"en-US","frontmatter":{"order":11,"next":"12.md","title":"11. Verifying Results With The SAF CLI","author":"Will Dower","headerDepth":3,"description":"Verification At this point we have a much more mature workflow file. We have one more activity we need to do -- verification, or checking that the output of our validation run m...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/advanced/11.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"11. Verifying Results With The SAF CLI"}],["meta",{"property":"og:description","content":"Verification At this point we have a much more mature workflow file. We have one more activity we need to do -- verification, or checking that the output of our validation run m..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Will Dower"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"11. Verifying Results With The SAF CLI\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Will Dower\\"}]}"]]},"headers":[{"level":2,"title":"Verification","slug":"verification","link":"#verification","children":[{"level":3,"title":"The SAF CLI","slug":"the-saf-cli","link":"#the-saf-cli","children":[]},{"level":3,"title":"Updating the Workflow File","slug":"updating-the-workflow-file","link":"#updating-the-workflow-file","children":[]}]}],"git":{},"readingTime":{"minutes":5.87,"words":1760},"filePathRelative":"courses/advanced/11.md","autoDesc":true}');export{e as data};
diff --git a/assets/11.html-uDyMJ-ud.js b/assets/11.html-uDyMJ-ud.js
new file mode 100644
index 000000000..710c34b8f
--- /dev/null
+++ b/assets/11.html-uDyMJ-ud.js
@@ -0,0 +1 @@
+const e=JSON.parse('{"key":"v-ac554bf0","path":"/courses/profile-dev-test/11.html","title":"Rules of the Road","lang":"en-US","frontmatter":{"order":11,"next":"12.md","title":"Rules of the Road","author":"Aaron Lippold","description":"When updating Benchmark Profiles, adhere to these key principles to maintain alignment with the original Guidance Documents: 1. Maintain Version Integrity: Never Merge new requi...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/11.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Rules of the Road"}],["meta",{"property":"og:description","content":"When updating Benchmark Profiles, adhere to these key principles to maintain alignment with the original Guidance Documents: 1. Maintain Version Integrity: Never Merge new requi..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Rules of the Road\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":1.06,"words":319},"filePathRelative":"courses/profile-dev-test/11.md","autoDesc":true}');export{e as data};
diff --git a/assets/12.html-1CYTXiAL.js b/assets/12.html-1CYTXiAL.js
new file mode 100644
index 000000000..1f6070a51
--- /dev/null
+++ b/assets/12.html-1CYTXiAL.js
@@ -0,0 +1 @@
+import{_ as a}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as i,c,d as e,e as t,b as o,w as h}from"./app-PAvzDPkc.js";const u={},l=e("h2",{id:"next-steps",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#next-steps","aria-hidden":"true"},"#"),t(" Next Steps")],-1),d=e("h3",{id:"take-the-class-survey",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#take-the-class-survey","aria-hidden":"true"},"#"),t(" Take the Class Survey")],-1),f={href:"https://forms.office.com/g/W2xtcV2frW",target:"_blank",rel:"noopener noreferrer"},p=e("h3",{id:"save-your-work-on-github",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#save-your-work-on-github","aria-hidden":"true"},"#"),t(" Save your work on GitHub")],-1),_={href:"https://education.github.com/git-cheat-sheet-education.pdf",target:"_blank",rel:"noopener noreferrer"},m={href:"https://learngitbranching.js.org/",target:"_blank",rel:"noopener noreferrer"},g=e("h3",{id:"reference-other-class-content",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#reference-other-class-content","aria-hidden":"true"},"#"),t(" Reference other class content")],-1),b=e("p",null,"This class is one of a set of security automation content offered by the MITRE SAF(c) team. 
If you found this content interesting and you want to learn more, we encourage you to go back to the User Class or Beginner Security Automation Developer Class (shown in the table of contents on the left).",-1),y=e("h3",{id:"check-out-the-rest-of-mitre-saf-c-s-content",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#check-out-the-rest-of-mitre-saf-c-s-content","aria-hidden":"true"},"#"),t(" Check Out the Rest of MITRE SAF(c)'s Content")],-1),k={href:"https://saf.mitre.org",target:"_blank",rel:"noopener noreferrer"},v=e("h3",{id:"contact-us",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#contact-us","aria-hidden":"true"},"#"),t(" Contact Us")],-1),w=e("p",null,[t("The MITRE SAF(c) team can be contacted at "),e("a",{href:"mailto:saf@groups.mitre.org"},"saf@groups.mitre.org"),t(". We support U.S. government sponsors in developing new tools for the Framework and in implementing the existing ones in DevSecOps pipelines. If you have a question about how you can use any of the content you saw in this class in your own environment, we'd be happy to help.")],-1);function x(S,I){const n=r("ExternalLinkIcon"),s=r("RouterLink");return i(),c("div",null,[l,d,e("p",null,[t("Take our brief "),e("a",f,[t("survey"),o(n)]),t(" to give feedback to fuel class improvement.")]),p,e("p",null,[t("If you want to save your work in your remote repository in GitHub, you need to use Git commands. You can reference a "),e("a",_,[t("Git cheat sheet"),o(n)]),t(" or check out "),e("a",m,[t("this Git tutorial"),o(n)]),t(".")]),g,b,y,e("p",null,[t("MITRE SAF(c) is a large collection of tools and techniques for security automation in addition to those discussed in this class. You can find utilities and libraries to support any step of the software development lifecycle by browsing our offerings at "),e("a",k,[t("saf.mitre.org"),o(n)]),t(". Note that everything offered by MITRE SAF(c) is open-source and available to use free of charge. 
You can also reference all of the resources listed from the class on the "),o(s,{to:"/resources/"},{default:h(()=>[t("Resources Page")]),_:1})]),v,w])}const C=a(u,[["render",x],["__file","12.html.vue"]]);export{C as default};
diff --git a/assets/12.html-1aA7mi6b.js b/assets/12.html-1aA7mi6b.js
new file mode 100644
index 000000000..d1fd1f758
--- /dev/null
+++ b/assets/12.html-1aA7mi6b.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-58ce140b","path":"/courses/beginner/12.html","title":"12. Put it in Practice!","lang":"en-US","frontmatter":{"order":12,"next":"13.md","title":"12. Put it in Practice!","author":"Aaron Lippold","headerDepth":3,"description":"Getting Started on the RHEL8 Baseline Let's practice writing a few 'real' controls using a security guidance document. The Steps to write an InSpec Control 1. Read the Control -...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/beginner/12.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"12. Put it in Practice!"}],["meta",{"property":"og:description","content":"Getting Started on the RHEL8 Baseline Let's practice writing a few 'real' controls using a security guidance document. The Steps to write an InSpec Control 1. Read the Control -..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"12. 
Put it in Practice!\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"Getting Started on the RHEL8 Baseline","slug":"getting-started-on-the-rhel8-baseline","link":"#getting-started-on-the-rhel8-baseline","children":[{"level":3,"title":"Example Control Using login_defs Resource:","slug":"example-control-using-login-defs-resource","link":"#example-control-using-login-defs-resource","children":[]},{"level":3,"title":"Controls We Will Demonstrate","slug":"controls-we-will-demonstrate","link":"#controls-we-will-demonstrate","children":[]},{"level":3,"title":"Suggested Level 1 Controls","slug":"suggested-level-1-controls","link":"#suggested-level-1-controls","children":[]},{"level":3,"title":"Suggested Level 2 Controls","slug":"suggested-level-2-controls","link":"#suggested-level-2-controls","children":[]},{"level":3,"title":"Suggested InSpec Resources to Review","slug":"suggested-inspec-resources-to-review","link":"#suggested-inspec-resources-to-review","children":[]}]},{"level":2,"title":"Completed RHEL8 Profile for Reference","slug":"completed-rhel8-profile-for-reference","link":"#completed-rhel8-profile-for-reference","children":[]}],"git":{},"readingTime":{"minutes":3.69,"words":1107},"filePathRelative":"courses/beginner/12.md","autoDesc":true}`);export{e as data};
diff --git a/assets/12.html-9oj58LSE.js b/assets/12.html-9oj58LSE.js
new file mode 100644
index 000000000..e65b22275
--- /dev/null
+++ b/assets/12.html-9oj58LSE.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-b50ed5de","path":"/courses/guidance/12.html","title":"12. Publishing a STIG","lang":"en-US","frontmatter":{"order":12,"next":"13.md","title":"12. Publishing a STIG","author":"Will Dower","headerDepth":3,"description":"12.1 Notes on Formally Publishing a STIG The STIG Process Most of this section is informed by DISA's own published guidance for the Vendor STIG Process, as well as the experienc...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/guidance/12.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"12. Publishing a STIG"}],["meta",{"property":"og:description","content":"12.1 Notes on Formally Publishing a STIG The STIG Process Most of this section is informed by DISA's own published guidance for the Vendor STIG Process, as well as the experienc..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Will Dower"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"12. 
Publishing a STIG\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Will Dower\\"}]}"]]},"headers":[{"level":2,"title":"12.1 Notes on Formally Publishing a STIG","slug":"_12-1-notes-on-formally-publishing-a-stig","link":"#_12-1-notes-on-formally-publishing-a-stig","children":[]},{"level":2,"title":"12.2 Starting the Process","slug":"_12-2-starting-the-process","link":"#_12-2-starting-the-process","children":[{"level":3,"title":"12.2.1 Writing Style","slug":"_12-2-1-writing-style","link":"#_12-2-1-writing-style","children":[]}]},{"level":2,"title":"12.3 Stages of STIG Development","slug":"_12-3-stages-of-stig-development","link":"#_12-3-stages-of-stig-development","children":[{"level":3,"title":"12.3.1 Stage 1 STIG Development (The First Ten Requirements)","slug":"_12-3-1-stage-1-stig-development-the-first-ten-requirements","link":"#_12-3-1-stage-1-stig-development-the-first-ten-requirements","children":[]},{"level":3,"title":"12.3.2 Stage 2 STIG Development","slug":"_12-3-2-stage-2-stig-development","link":"#_12-3-2-stage-2-stig-development","children":[]},{"level":3,"title":"12.3.3 Stage 3 STIG Development","slug":"_12-3-3-stage-3-stig-development","link":"#_12-3-3-stage-3-stig-development","children":[]},{"level":3,"title":"12.3.4 Stage 4 STIG Development","slug":"_12-3-4-stage-4-stig-development","link":"#_12-3-4-stage-4-stig-development","children":[]},{"level":3,"title":"12.3.5 STIG Validation","slug":"_12-3-5-stig-validation","link":"#_12-3-5-stig-validation","children":[]},{"level":3,"title":"12.3.6 Review and Approval","slug":"_12-3-6-review-and-approval","link":"#_12-3-6-review-and-approval","children":[]}]}],"git":{},"readingTime":{"minutes":2.47,"words":740},"filePathRelative":"courses/guidance/12.md","autoDesc":true}`);export{e as data};
diff --git a/assets/12.html-SN14diUW.js b/assets/12.html-SN14diUW.js
new file mode 100644
index 000000000..5be1f8b64
--- /dev/null
+++ b/assets/12.html-SN14diUW.js
@@ -0,0 +1 @@
+const e=JSON.parse(`{"key":"v-63b5a7db","path":"/courses/advanced/12.html","title":"12. Next Steps","lang":"en-US","frontmatter":{"order":12,"title":"12. Next Steps","author":"Emily","headerDepth":3,"description":"Next Steps Take the Class Survey Take our brief survey (https://forms.office.com/g/W2xtcV2frW) to give feedback to fuel class improvement. Save your work on GitHub If you want t...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/advanced/12.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"12. Next Steps"}],["meta",{"property":"og:description","content":"Next Steps Take the Class Survey Take our brief survey (https://forms.office.com/g/W2xtcV2frW) to give feedback to fuel class improvement. Save your work on GitHub If you want t..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Emily"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"12. 
Next Steps\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Emily\\"}]}"]]},"headers":[{"level":2,"title":"Next Steps","slug":"next-steps","link":"#next-steps","children":[{"level":3,"title":"Take the Class Survey","slug":"take-the-class-survey","link":"#take-the-class-survey","children":[]},{"level":3,"title":"Save your work on GitHub","slug":"save-your-work-on-github","link":"#save-your-work-on-github","children":[]},{"level":3,"title":"Reference other class content","slug":"reference-other-class-content","link":"#reference-other-class-content","children":[]},{"level":3,"title":"Check Out the Rest of MITRE SAF(c)'s Content","slug":"check-out-the-rest-of-mitre-saf-c-s-content","link":"#check-out-the-rest-of-mitre-saf-c-s-content","children":[]},{"level":3,"title":"Contact Us","slug":"contact-us","link":"#contact-us","children":[]}]}],"git":{},"readingTime":{"minutes":0.96,"words":289},"filePathRelative":"courses/advanced/12.md","autoDesc":true}`);export{e as data};
diff --git a/assets/12.html-UW4blz-R.js b/assets/12.html-UW4blz-R.js
new file mode 100644
index 000000000..9a01c4cee
--- /dev/null
+++ b/assets/12.html-UW4blz-R.js
@@ -0,0 +1 @@
+import{_ as s}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as i,c as a,d as e,e as t,b as n,f as c}from"./app-PAvzDPkc.js";const d={},l=e("p",null,"A patch update involves making minor changes to a profile to fix issues or improve functionality. Here's a step-by-step guide:",-1),u=e("strong",null,"Report the Issue:",-1),h={href:"https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/issues",target:"_blank",rel:"noopener noreferrer"},p=c("tagged
patch release you're targeting for the update.inspec.yml
inputs, thresholds, etc. Don't worry about the InSpec version in the inspec.yml
- the release process handles that.vanilla
and hardened
variants of the known bad
and known good
states of the AWS EC2
and Docker
test targets. Also, test your controls outside perfect conditions to ensure they handle non-optimal target environments. Verify that your update considers the container
, virtual machine
, and 1U machine
testing context of applicability.bundle exec rake lint
and bundle exec rake lint:autocorrect
commands from the test suite to lint your updates.Fixes #ISSUE
in your commit messages to automatically close the issue when your PR is merged.Most of this section is informed by DISA's own published guidance for the Vendor STIG Process, as well as the experiences of external teams and Vulcan stakeholders who have undergone the STIG creation process.
We recommend that you review the official Vendor STIG Process guide (see Resources for a copy) if you want to undergo the process.
Using Vulcan for Artifact Management
DISA will require you to provide your draft content for review in Excel format.
That is to say, DISA's process is completely separate from Vulcan, and they will not need access to it.
Luckily, Vulcan can both export STIG-ready content to Excel format for DISA to ingest, and load reviewed content from DISA as a separate Component for easy comparison.
First and foremost - reach out early!
DISA created the Vendor STIG Process to ensure that the content produced by the vendor community is up to DOD standard. As such, DISA prefers to meet with the STIG-ready content team before the content is written to discuss the characteristics of the software component the team is trying to write guidance for.
DISA will also provide guidance on which SRG (or set of SRGs) should be selected as a foundation.
Do I need to be the actual vendor to publish my content?
Not necessarily. DISA certainly expects that most people who look to formally publish STIG content will be the vendor that created a particular software component, but this is not required.
If you expect that the content you have created for a component for one project would be:
a) useful to the wider security community, or
b) useful to you personally on a later project
then reach out to DISA to formally publish it.
DISA's documentation on the STIG process[1] breaks it down into four development stages after the initial SRG selection, punctuated by frequent updates to and review by the agency. After the external author team finishes STIG development, there are a few more internal reviews at DISA before the final decision is made to publish.
The author team will first fill out a total of 10 of the requirements in the STIG document, where the 10 requirements are a mix of all statuses (Applicable – Configurable, Applicable – Inherently Meets, Applicable – Does Not Meet, and Not Applicable).
If this initial round of requirements is written satisfactorily, the author team can continue work on the STIG content for 30 days before the next work-in-progress review from DISA. The agency may give further feedback on areas to improve at this point before continuing.
The author team can continue writing STIG content for another 30 days (a total of 60 days after the initial decision to proceed) before the next round of work-in-progress review from DISA. The agency may give further feedback on areas to improve at this point before continuing.
After another 30 days (90 days total from the initial decision to proceed) the author team should submit a completed initial draft of the STIG to DISA for a full validation of the content.
Once the full draft is submitted, DISA will validate the contents of the STIG Check and Fix instructions by implementing them against a test system (the author team, if they work for the vendor of a not-yet-released product, may need to work to ensure that DISA can access a test system).
At this point, DISA personnel write up a formal report to the DISA Authorizing Official and confirm one last time that the STIG conforms to the style guide. Content that passes this final review is now officially a STIG and can be published to the DOD Cyber Exchange.
Section 3 of the "Vendor STIG Process", Version 4 Release 1. See Resources. ↩︎
desc "check", "Verify all local interactive users on RHEL 8 are assigned a home directory upon creation with
+the following command:
+
+$ sudo grep -i create_home /etc/login.defs
+...
+
Remember the matchers
Here, the login_defs resource shows examples using the includes
and eq
matchers. In this case, we use eq
because we are looking for only one result from the command, not an array of items.
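The same distinction is easy to see in plain Ruby. The sketch below uses ordinary Ruby methods rather than the InSpec DSL, and the cipher names and login.defs value are illustrative assumptions, not values from the baseline.

```ruby
# Sketch of the two matcher behaviors in plain Ruby (not the InSpec DSL).
# Cipher names and the CREATE_HOME value below are illustrative assumptions.
approved_ciphers = ['ECDHE-RSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES128-GCM-SHA256']

# 'include'-style check: is one discovered value a member of an approved list?
found_cipher = 'ECDHE-RSA-AES256-GCM-SHA384'
puts approved_ciphers.include?(found_cipher)   # => true (membership in an array)

# 'eq'-style check: one setting, one expected value -- exact equality
create_home = 'yes'                            # e.g. CREATE_HOME from login.defs
puts create_home == 'yes'                      # => true (single result, not a list)
```

In short: reach for an include-style matcher when the command returns a list, and an eq-style matcher when it returns exactly one value.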
inspec exec rhel8-baseline-stubs -t docker://redhat8
+
Control | Resource Used |
---|---|
SV-230324 | login_defs resource |
SV-230250 | directory resource |
SV-230243 | directory looping & file resource |
SV-230505 | non applicable use case & package resource |
Control | Resource Used |
---|---|
SV-230383 | login_defs resource |
SV-230249 | directory resource |
SV-230471 | directory looping & file resource |
SV-230241 | non applicable use case & package resource |
Control | Resource Used |
---|---|
SV-230281 | parse config file |
SV-230365 | login_defs resource |
SV-230264 | file content |
Key Elements in this Profile
impact 0
for NA & Container Aware Controlscontainer aware
, andfail fast
approach to testing execution.Wait, does this mean that I can cheat on all of these exercises by looking up all the real controls?!
Yes. Feel free. We suggest you at least try thinking through how you'd write this test code without the real baseline, though.
What about controls that cannot be automated and require manual review? You may have noticed that Heimdall displays controls in 4 statuses: Passed
, Failed
, Not Applicable
, and Not Reviewed
.
Controls may be Not Reviewed
for multiple reasons. One major reason is that the control requires manual review. You can explore the details of the Not Reviewed
controls to find out more.
Look at the hardened results again in Heimdall. Go back to the menu in the top left to toggle off "Comparison View" and select the hardened results.
Scroll down to see the details and learn why the controls were not reviewed.
You can see that for various reasons, many of these controls require manual review. If someone does that manual review, how can we show that in the data?
Here is an example of an attestation that we can create using the saf attest create command:
saf attest create -o ./results/manual_attestation_results.json
+Enter a control ID or enter 'q' to exit: V-40792
+Attestation explanation: Verified that the server-side session management is configured correctly.
+Frequency (1d/3d/1wk/2wk/1m/3m/6m/1y/1.5y/custom): 3m
+Enter status ((p)assed/(f)ailed): p
+Updated By: Emily Rodriguez
+Enter a control ID or enter 'q' to exit:
+
Now, go through and add more attestations of the Not Reviewed results. You can decide if they should pass or fail as if you hypothetically did check these controls manually. Type q
when you are done.
Use the -h
flag to learn about applying attestations.
Apply the attestation like this:
saf attest apply -i ./results/nginx_hardened_results.json ./results/manual_attestation_results.json -o ./results/nginx_hardened_with_manual_attestations.json
+
As we have done before,
nginx_hardened_with_manual_attestations.json
file.In the example, a few manual attestations were completed, some of which were recorded as passing and some as failing. You may have chosen to do your manual attestations differently and have different metrics.
You can look at the details to find the attestation information captured. Expand the details for each control to view this data.
',9);function j(M,z){const c=r("ExternalLinkIcon"),o=r("CodeTabs");return p(),d("div",null,[f,e("p",null,[a("You have already seen the InSpec profiles and the Heimdall application that the SAF provides. Another feature of the SAF is the SAF CLI. This is a command line utility tool that helps with various steps in the security automation process. You can see all of the SAF CLI's capability "),e("a",g,[a("here"),i(c)]),a(", but we will look more at how we can use it to add manual attestation data to our overall results.")]),_,y,i(o,{id:"39",data:[{id:"Command"},{id:"Output"}]},{title0:s(({value:n,isActive:t})=>[a("Command")]),title1:s(({value:n,isActive:t})=>[a("Output")]),tab0:s(({value:n,isActive:t})=>[w]),tab1:s(({value:n,isActive:t})=>[A]),_:1}),x,i(o,{id:"50",data:[{id:"Command"},{id:"Output"}]},{title0:s(({value:n,isActive:t})=>[a("Command")]),title1:s(({value:n,isActive:t})=>[a("Output")]),tab0:s(({value:n,isActive:t})=>[C]),tab1:s(({value:n,isActive:t})=>[S]),_:1}),I,i(o,{id:"61",data:[{id:"Command"},{id:"Output"}]},{title0:s(({value:n,isActive:t})=>[a("Command")]),title1:s(({value:n,isActive:t})=>[a("Output")]),tab0:s(({value:n,isActive:t})=>[N]),tab1:s(({value:n,isActive:t})=>[F]),_:1}),q,D,E,i(o,{id:"78",data:[{id:"Command"},{id:"Output"}]},{title0:s(({value:n,isActive:t})=>[a("Command")]),title1:s(({value:n,isActive:t})=>[a("Output")]),tab0:s(({value:n,isActive:t})=>[O]),tab1:s(({value:n,isActive:t})=>[H]),_:1}),L,i(o,{id:"111",data:[{id:"Command"},{id:"Output"}]},{title0:s(({value:n,isActive:t})=>[a("Command")]),title1:s(({value:n,isActive:t})=>[a("Output")]),tab0:s(({value:n,isActive:t})=>[R]),tab1:s(({value:n,isActive:t})=>[T]),_:1}),V])}const P=u(k,[["render",j],["__file","12.html.vue"]]);export{P as default}; diff --git a/assets/13.html-CbzjgJYT.js b/assets/13.html-CbzjgJYT.js new file mode 100644 index 000000000..a65bdb68a --- /dev/null +++ b/assets/13.html-CbzjgJYT.js @@ -0,0 +1 @@ +import{_ as 
e}from"./plugin-vue_export-helper-x3n3nnut.js";import{o,c as t,f as n}from"./app-PAvzDPkc.js";const c={},r=n('A Release Update
involves creating a new branch, v#{x}R#{x+1}
, from the current main or latest patch release branch. The saf generate delta
workflow is then run, which updates the metadata of the controls
, inspec.yml
, README.md
, and other profile elements, while preserving the describe
and ruby code logic
. This workflow is detailed in the InSpec Delta section. After the initial commit of the new release branch, follow these steps to keep your work organized:
control ids
in the updated benchmark. This can be in CSV, Markdown Table, or in the PR overview information section. This helps track completed and pending work. PRs off the v#{x}r#{x+1}
can also be linked in the table, especially if using a micro
vs massive
PR approach.hardening
content (ansible, puppet, chef, hardened docker images, hardened vagrant boxes) to meet new requirements. Ensure the CI/CD process still functions with the updated elements, preferably on the PR as well.titles
and other labels to reflect the updated release number of the Benchmark.check text
or fix text
are likely to require inspec code changes
. If the check text
and fix text
of a control remain unchanged, it's likely only a cosmetic update, with no change in the security requirement or validation code.A Major Version Update
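The branch-and-delta setup described above can be sketched as follows. The branch name and file names are hypothetical, and the `saf generate delta` flags should be verified against `saf generate delta -h` for your CLI version:

```shell
# Start the release branch from the latest patch release of main
git checkout main && git pull
git checkout -b v1R13

# Run the Delta workflow against the new benchmark's XCCDF file
# (paths are examples only)
saf generate delta -J profile_summary.json -X U_New_Benchmark_xccdf.xml -o .

# Initial commit of the new release branch
git commit -am "Initial delta for V1R13"
```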
involves transitioning to a new STIG Benchmark, which introduces a new Rule ID index. This process is more complex than a Release Update
due to the need for aligning old requirements (Rule IDs) with the new ones.
For example, when transitioning from Red Hat Enterprise Linux 8 V1R12 to Red Hat Enterprise Linux 9 V1R1, the InSpec tests must be fuzzy matched
. This involves using common identifiers such as SRG ID
, CCIs
, and, if necessary, the title
and descriptions
.
This is crucial when a single requirement from the old benchmark is split into multiple requirements in the new benchmark, although this is usually a rare occurrence.
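The fuzzy matching idea can be illustrated with a short sketch. The Rule IDs, CCIs, and titles below are invented; real matching would draw on the SRG IDs, CCIs, titles, and descriptions from the two benchmarks' XCCDF files:

```python
from difflib import SequenceMatcher

# Hypothetical control stubs from an "old" benchmark
old_controls = [
    {"rule_id": "SV-230221", "cci": "CCI-000366",
     "title": "The operating system must be a vendor-supported release."},
    {"rule_id": "SV-230280", "cci": "CCI-001230",
     "title": "The operating system must implement address space layout randomization."},
]
# Hypothetical control from the "new" benchmark
new_control = {"rule_id": "SV-257777", "cci": "CCI-000366",
               "title": "RHEL 9 must be a vendor-supported release."}

def best_match(new, olds):
    # Prefer an exact CCI match; otherwise fall back to title similarity.
    exact = [o for o in olds if o["cci"] == new["cci"]]
    if exact:
        return exact[0]
    return max(olds, key=lambda o: SequenceMatcher(None, o["title"], new["title"]).ratio())

print(new_control["rule_id"], "<-", best_match(new_control, old_controls)["rule_id"])
```

A real alignment pass would also flag low-similarity matches for human review, which is where the one-to-many splits mentioned above get caught.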
",3),h={href:"https://vulcan.mitre.org",target:"_blank",rel:"noopener noreferrer"},u=e("code",null,"Delta",-1),p=e("p",null,[n("The good news is that "),e("strong",null,"these improvements are within reach"),n(". We can leverage the existing work from "),e("code",null,"Vulcan"),n(" and hopefully soon incorporate these improvements into the SAF "),e("code",null,"Delta"),n(" tool as a direct function.")],-1),m=e("p",null,"Once the 'old controls' and 'new controls' are aligned across 'Rule IDs', you can migrate the InSpec / Ruby code into their respective places.",-1),_=e("p",null,[n("Then, you follow the same setup, CI/CD organization, and control update process as in the "),e("code",null,"Release Update"),n(" process and hopfully finding that the actual InSpec code from the previous benchmark is very close to the needed InSpec code for the same 'requirement' in the new Benchmark.")],-1);function f(g,v){const o=s("ExternalLinkIcon");return r(),a("div",null,[d,e("p",null,[n("We use a similar process in our "),e("a",h,[n("MITRE Vulcan"),i(o)]),n(" to align 'Related Controls' in your Vulcan project to existing published STIG documents. 
However, the "),u,n(" tool currently requires manual intervention, and improvements are needed to automate this process.")]),p,m,_])}const x=t(l,[["render",f],["__file","14.html.vue"]]);export{x as default}; diff --git a/assets/14.html-XM-PcHav.js b/assets/14.html-XM-PcHav.js new file mode 100644 index 000000000..83cec2c38 --- /dev/null +++ b/assets/14.html-XM-PcHav.js @@ -0,0 +1 @@ +import{_ as o}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as s,c as n,d as t,e,b as l,f as a}from"./app-PAvzDPkc.js";const c="/saf-training/assets/SAF_Capabilities_Normalize-g0EjxQYS.png",h="/saf-training/assets/Heimdall_Samples-m15bvKmA.png",d="/saf-training/assets/Heimdall_Samples_Select-X7tVD4Jb.png",p="/saf-training/assets/Heimdall_TreeMap_Fuller-Bwbdr8fo.png",u="/saf-training/assets/Heimdall_MultiResults2-UId-VJn7.png",f="/saf-training/assets/Heimdall_MultiResults-OJWUYftF.png",m="/saf-training/assets/Heimdall_Export_Menu-zGjHDWhI.png",g={},_=a('Remember the "Normalize" pillar? We skipped over it when we were doing InSpec validation because InSpec results are automatically in HDF (or Heimdall Data Format).
However, other tools provide useful security data that is not inherently in HDF. So, to make a full picture of security, we have converters to convert third party data to HDF and HDF back into other forms.
However, you Heimdall can also auto-convert uploaded files in compatible formats, giving you another way to convert data and look at the whole picture at one time.
Test this out by adding sample files of other data in Heimdall.
Choose some sample data to add to the full security of a theoretical software stack.
As you add all of this data into one view, you can see how the NIST 800-53 controls are more filled out as more items are covered by different types of security scans.
In this big picture view, you can see the whole security posture and filter down, for example, on high failures across all scans. Your results may look different than these pictures depending on what you have loaded in Heimdall
And in the results details, you can see what file - in other words what scan or part of the system, is causing the problem.
This is a two-way street! There are other places security data needs to be - maybe in Splunk, eMASS, AWS Security Hub, or even just in an easy, high level diagram to show your boss. Because of this, Heimdall can also export data into different forms using the "Export" button in the top right. Try out some of these forms on your results!
',16);function v(w,x){const i=r("ExternalLinkIcon");return s(),n("div",null,[_,t("p",null,[e("The SAF CLI has utilies to convert files from one output to another. Take a look at the ever-growing list of compatible file types at the "),t("a",y,[e("SAF CLI README"),l(i)]),e(".")]),b])}const T=o(g,[["render",v],["__file","14.html.vue"]]);export{T as default}; diff --git a/assets/14.html-ZXBlwruU.js b/assets/14.html-ZXBlwruU.js new file mode 100644 index 000000000..fc597ba2f --- /dev/null +++ b/assets/14.html-ZXBlwruU.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-a2183836","path":"/courses/profile-dev-test/14.html","title":"Creating a `Major Version Update`","lang":"en-US","frontmatter":{"order":14,"next":"15.md","title":"Creating a `Major Version Update`","author":"Aaron Lippold","description":"A Major Version Update involves transitioning to a new STIG Benchmark, which introduces a new Rule ID index. This process is more complex than a Release Update due to the need f...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/14.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Creating a `Major Version Update`"}],["meta",{"property":"og:description","content":"A Major Version Update involves transitioning to a new STIG Benchmark, which introduces a new Rule ID index. 
This process is more complex than a Release Update due to the need f..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Creating a `Major Version Update`\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":0.91,"words":274},"filePathRelative":"courses/profile-dev-test/14.md","autoDesc":true}');export{e as data}; diff --git a/assets/15.html-08iC1CW-.js b/assets/15.html-08iC1CW-.js new file mode 100644 index 000000000..ae25feb82 --- /dev/null +++ b/assets/15.html-08iC1CW-.js @@ -0,0 +1,2 @@ +import{_ as r}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as i,o as a,c,d as e,e as t,b as o,a as d,f as l}from"./app-PAvzDPkc.js";const h={},f={href:"http://kitchen.ci",target:"_blank",rel:"noopener noreferrer"},u=l('Test Kitchen's workflow involves building out suites and platforms using its drivers and provisioners. It follows a create, converge, verify, and destroy cycle:
In our testing workflow, we have defined four test suites to test different deployment patterns in two configurations - vanilla
and hardened
.
vanilla
: This represents a completely stock installation of the testing target, as provided by the product vendor, with no configuration updates beyond what is 'shipped' by the vendor. Apart from the standard Test Kitchen initialization, the system is considered 'stock'.hardened
: This configuration is set up using the driver
section of the Test Kitchen suite and is executed during the converge
phase. The hardened
configuration represents the final target configuration state
of our test instance, adhering to the recommended configuration of the Benchmark we are working on. For example, it aligns as closely as possible with the Red Hat Enterprise Linux V1R12 recommendations.The following Ruby gems are required to install private Supermarket using the supermarket-omnibus-cookbook:
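In a kitchen.yml, the two suites described above might be sketched like this. The playbook names and paths are illustrative, not the repository's actual configuration:

```yaml
# Illustrative only -- suite names mirror the text above, paths are examples
suites:
  - name: vanilla
    provisioner:
      playbook: spec/ansible/vanilla-playbook.yml   # stock vendor configuration
  - name: hardened
    provisioner:
      playbook: spec/ansible/hardened-playbook.yml  # applies the STIG hardening role
```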
These should be accessible from your Gem mirror.
An install script is used to install Chef Infra Client when bootstrapping a new node. It simply pulls the Chef Infra Client package from your artifact store, and then installs it. For example, on Debian-based Linux systems, it would look similar to this:
#!/bin/bash
+
+cd /tmp/
+wget http://packages.example.com/chef_13.2.20-1_amd64.deb
+dpkg -i chef_13.2.20-1_amd64.deb
+
The install script should be accessible from your artifact store.
`,8);function q(N,G){const t=i("ExternalLinkIcon");return s(),a("div",null,[u,e("div",h,[d,e("p",null,[n("For more information on how to install InSpec on an airgapped system use the "),e("a",m,[n("chef instructions"),o(t)]),n(" as guidance")])]),f,_,e("ol",null,[g,b,k,x,e("li",null,[n("You have an artifact store for file downloads. At a minimum, it should have the following packages available: "),e("ol",null,[y,v,w,e("li",null,[n("An "),e("a",j,[n("install script"),o(t)]),n(" for Chef Infra Client")])])])]),T,S,R,e("ul",null,[e("li",null,[e("a",A,[n("supermarket-omnibus-cookbook"),o(t)])]),e("li",null,[e("a",C,[n("chef-ingredient"),o(t)])]),e("li",null,[e("a",E,[n("hostsfile"),o(t)])])]),I,l(` ## 14. Viewing and Analyzing Results + +InSpec allows you to output your test results to one or more reporters. You can configure the reporter(s) using either the --json-config option or the --reporter option. While you can configure multiple reporters to write to different files, only one reporter can output to the screen(stdout). + +\`\`\` +$ inspec exec /root/my_nginx -t ssh://TARGET_USERNAME:TARGET_PASSWORD@TARGET_IP --reporter cli json:baseline_output.json +\`\`\` + +### 14.1. Syntax + +You can specify one or more reporters using the --reporter cli flag. You can also specify a output by appending a path separated by a colon. + +Output json to screen. + +\`\`\` +inspec exec /root/my_nginx --reporter json +or +inspec exec /root/my_nginx --reporter json:- +\`\`\` + +Output yaml to screen + +\`\`\` +inspec exec /root/my_nginx --reporter yaml +or +inspec exec /root/my_nginx --reporter yaml:- +\`\`\` + +Output cli to screen and write json to a file. + +\`inspec exec /root/my_nginx --reporter cli json:/tmp/output.json\` + +Output nothing to screen and write junit and html to a file. + +\`inspec exec /root/my_nginx --reporter junit:/tmp/junit.xml html:www/index.html\` + +Output json to screen and write to a file. Write junit to a file. 
+ +\`inspec exec /root/my_nginx --reporter json junit:/tmp/junit.xml | tee out.json\` + +If you wish to pass the profiles directly after specifying the reporters you will need to use the end of options flag --. + +\`inspec exec --reporter json junit:/tmp/junit.xml -- profile1 profile2\` + +Output cli to screen and write json to a file. + +\`\`\`json +{ + "reporter": { + "cli": { + "stdout": true + }, + "json": { + "file": "/tmp/output.json", + "stdout": false + } + } +} +\`\`\` + +### 14.2. Supported Reporters + +The following are the current supported reporters: + +- cli +- json +- json-min +- yaml +- documentation +- junit +- progress +- json-rspec +- html + +You can read more about [InSpec Reporters](https://www.inspec.io/docs/reference/reporters/) on the documentation page. + +### 14.3. Putting it all together + +The following command will run the nginx baseline profile from github and use the reporter to output a json, you will need this for the next step loading it into heimdall: + +\`$ inspec exec https://github.com/dev-sec/nginx-baseline -t ssh://TARGET_USERNAME:TARGET_PASSWORD@TARGET_IP --reporter cli json:baseline_output.json\` `)])}const D=r(p,[["render",q],["__file","15.html.vue"]]);export{D as default}; diff --git a/assets/16.html-PdB6Rg-R.js b/assets/16.html-PdB6Rg-R.js new file mode 100644 index 000000000..4a058a482 --- /dev/null +++ b/assets/16.html-PdB6Rg-R.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-9b44d5ba","path":"/courses/profile-dev-test/16.html","title":"Test Kitchen - Create","lang":"en-US","frontmatter":{"order":16,"next":"17.md","title":"Test Kitchen - Create","author":"Aaron Lippold","index":true,"description":"The create stage in Test Kitchen sets up testing environments. 
It uses standard and patched images from AWS and Red Hat, including AMI EC2 images, Docker containers, and Vagrant...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/16.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Test Kitchen - Create"}],["meta",{"property":"og:description","content":"The create stage in Test Kitchen sets up testing environments. It uses standard and patched images from AWS and Red Hat, including AMI EC2 images, Docker containers, and Vagrant..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Test Kitchen - Create\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":0.42,"words":126},"filePathRelative":"courses/profile-dev-test/16.md","autoDesc":true}');export{e as data}; diff --git a/assets/16.html-c-vzbbVY.js b/assets/16.html-c-vzbbVY.js new file mode 100644 index 000000000..ee09e0d44 --- /dev/null +++ b/assets/16.html-c-vzbbVY.js @@ -0,0 +1 @@ +import{_ as s}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as a,o as c,c as r,d as e,e as t,b as o}from"./app-PAvzDPkc.js";const i={},l=e("h1",{id:"test-kitchen-create-stage",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#test-kitchen-create-stage","aria-hidden":"true"},"#"),t(" Test Kitchen Create Stage")],-1),h=e("p",null,[t("The "),e("code",null,"create"),t(" stage in Test Kitchen sets up testing environments. 
It uses standard and patched images from AWS and Red Hat, including AMI EC2 images, Docker containers, and Vagrant boxes.")],-1),d=e("p",null,"Test Kitchen automatically fetches the latest images from sources like Amazon Marketplace, DockerHub, Vagrant Marketplace, and Bento Hub. You can customize this to use different images, private repositories (like Platform One's Iron Bank), or local images.",-1),_={href:"https://kitchen.ci",target:"_blank",rel:"noopener noreferrer"},u=e("code",null,"kitchen-ec2",-1),m=e("code",null,"kitchen-vagrant",-1),p=e("code",null,"kitchen-sync",-1),f={href:"https://github.com/inspec/kitchen-inspec",target:"_blank",rel:"noopener noreferrer"},k=e("code",null,"kitchen-inspec",-1);function g(b,x){const n=a("ExternalLinkIcon");return c(),r("div",null,[l,h,d,e("p",null,[t("For more details on how Test Kitchen manages images, visit the "),e("a",_,[t("Test Kitchen website"),o(n)]),t(". You can also refer to the GitHub documentation for the "),u,t(", "),m,t(", "),p,t(", and "),e("a",f,[k,o(n)]),t(" project on GitHub.")])])}const B=s(i,[["render",g],["__file","16.html.vue"]]);export{B as default}; diff --git a/assets/16.html-eRejMn1V.js b/assets/16.html-eRejMn1V.js new file mode 100644 index 000000000..7787fe282 --- /dev/null +++ b/assets/16.html-eRejMn1V.js @@ -0,0 +1 @@ +const e=JSON.parse(`{"key":"v-443b0d00","path":"/courses/user/16.html","title":"16. Next Steps","lang":"en-US","frontmatter":{"order":16,"title":"16. Next Steps","author":"Emily","headerDepth":3,"description":"16. Next Steps 16.1 Take the Class Survey Take our 9 question SAF User Class survey (https://forms.office.com/g/UxNr3nhtcm) to give feedback to fuel class improvement. 16.2 Take...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/user/16.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"16. 
Next Steps"}],["meta",{"property":"og:description","content":"16. Next Steps 16.1 Take the Class Survey Take our 9 question SAF User Class survey (https://forms.office.com/g/UxNr3nhtcm) to give feedback to fuel class improvement. 16.2 Take..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Emily"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"16. Next Steps\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Emily\\"}]}"]]},"headers":[{"level":2,"title":"16. Next Steps","slug":"_16-next-steps","link":"#_16-next-steps","children":[{"level":3,"title":"16.1 Take the Class Survey","slug":"_16-1-take-the-class-survey","link":"#_16-1-take-the-class-survey","children":[]},{"level":3,"title":"16.2 Take the Beginner Security Automation Developer Class","slug":"_16-2-take-the-beginner-security-automation-developer-class","link":"#_16-2-take-the-beginner-security-automation-developer-class","children":[]},{"level":3,"title":"16.3 Check Out the Rest of MITRE SAF(c)'s Content","slug":"_16-3-check-out-the-rest-of-mitre-saf-c-s-content","link":"#_16-3-check-out-the-rest-of-mitre-saf-c-s-content","children":[]},{"level":3,"title":"16.4 Contact Us","slug":"_16-4-contact-us","link":"#_16-4-contact-us","children":[]}]}],"git":{},"readingTime":{"minutes":0.9,"words":269},"filePathRelative":"courses/user/16.md","autoDesc":true}`);export{e as data}; diff --git a/assets/16.html-pDGZD3f7.js b/assets/16.html-pDGZD3f7.js new file mode 100644 index 000000000..ab11331c6 --- /dev/null +++ b/assets/16.html-pDGZD3f7.js @@ -0,0 +1 @@ +import{_ as r}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as s,o as i,c,d as e,e as t,b as o,w as h}from"./app-PAvzDPkc.js";const 
l={},u=e("h2",{id:"_16-next-steps",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_16-next-steps","aria-hidden":"true"},"#"),t(" 16. Next Steps")],-1),d=e("h3",{id:"_16-1-take-the-class-survey",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_16-1-take-the-class-survey","aria-hidden":"true"},"#"),t(" 16.1 Take the Class Survey")],-1),f={href:"https://forms.office.com/g/UxNr3nhtcm",target:"_blank",rel:"noopener noreferrer"},_=e("h3",{id:"_16-2-take-the-beginner-security-automation-developer-class",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_16-2-take-the-beginner-security-automation-developer-class","aria-hidden":"true"},"#"),t(" 16.2 Take the Beginner Security Automation Developer Class")],-1),p=e("p",null,"This class is one of a set of security automation content offered by the MITRE SAF(c) team. If you found this content interesting and you want to learn more about writing InSpec profiles of your own, we encourage you to check out the Beginner Class (shown in the table of contents on the left). You'll be writing your own automated validation tests in no time!",-1),m=e("h3",{id:"_16-3-check-out-the-rest-of-mitre-saf-c-s-content",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_16-3-check-out-the-rest-of-mitre-saf-c-s-content","aria-hidden":"true"},"#"),t(" 16.3 Check Out the Rest of MITRE SAF(c)'s Content")],-1),g={href:"https://saf.mitre.org",target:"_blank",rel:"noopener noreferrer"},y=e("h3",{id:"_16-4-contact-us",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#_16-4-contact-us","aria-hidden":"true"},"#"),t(" 16.4 Contact Us")],-1),b=e("p",null,[t("The MITRE SAF(c) team can be contacted at "),e("a",{href:"mailto:saf@groups.mitre.org"},"saf@groups.mitre.org"),t(". We support U.S. government sponsors in developing new tools for the Framework and in implementing the existing ones in DevSecOps pipelines. 
If you have a question about how you can use any of the content you saw in this class in your own environment, we'd be happy to help.")],-1);function k(v,w){const n=s("ExternalLinkIcon"),a=s("RouterLink");return i(),c("div",null,[u,d,e("p",null,[t("Take our 9 question SAF User Class "),e("a",f,[t("survey"),o(n)]),t(" to give feedback to fuel class improvement.")]),_,p,m,e("p",null,[t("MITRE SAF(c) is a large collection of tools and techniques for security automation in addition to those discussed in this class. You can find utilities and libraries to support any step of the software development lifecycle by browsing our offerings at "),e("a",g,[t("saf.mitre.org"),o(n)]),t(". Note that everything offered by MITRE SAF(c) is open-source and available to use free of charge. You can also reference all of the resources listed from the class on the "),o(a,{to:"/resources/"},{default:h(()=>[t("Resources Page")]),_:1})]),y,b])}const T=r(l,[["render",k],["__file","16.html.vue"]]);export{T as default}; diff --git a/assets/17.html-KmjmQR7T.js b/assets/17.html-KmjmQR7T.js new file mode 100644 index 000000000..4d2ab3023 --- /dev/null +++ b/assets/17.html-KmjmQR7T.js @@ -0,0 +1 @@ +import{_ as n}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as t,o as r,c as s,d as i,e,b as d,w as c,f as a}from"./app-PAvzDPkc.js";const l={},h=a('The converge
stage uses Ansible Playbooks from the Ansible Lockdown project to apply hardening configurations, specifically the RHEL8-STIG playbook, and RedHat managed containers.
For EC2 and Vagrant, we use 'wrapper playbooks' for the 'vanilla' and 'hardened' suites.
requirements.txt
, and Ansible Roles.Some tasks in the hardening role were disabled for automated testing, but this doesn't significantly impact our security posture. We can still meet our validation and thresholds.
For more on using these playbooks, running Ansible, or modifying the playbooks, roles, and tasks, see the Ansible Project Website.
',7),p=a('We use RedHat vendor images for both the vanilla
and hardened
containers.
vanilla
: This container uses the registry.access.redhat.com/ubi8/ubi:8.9-1028
image from RedHat's community repositories.hardened
: This container uses the registry1.dso.mil/ironbank/redhat/ubi/ubi8
image from Red Hat's Platform One Iron Bank project.The Iron Bank UBI8 image is regularly patched, updated, and hardened according to STIG requirements.
',4);function u(g,m){const o=t("RouterLink");return r(),s("div",null,[h,i("p",null,[e("Find these roles and 'wrapper playbooks' in the "),d(o,{to:"/courses/profile-dev-test/spec/"},{default:c(()=>[e("spec/")]),_:1}),e(" directory.")]),p])}const v=n(l,[["render",u],["__file","17.html.vue"]]);export{v as default}; diff --git a/assets/17.html-q1XWgL_N.js b/assets/17.html-q1XWgL_N.js new file mode 100644 index 000000000..6be9d81d7 --- /dev/null +++ b/assets/17.html-q1XWgL_N.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-97db247c","path":"/courses/profile-dev-test/17.html","title":"Test Kitchen - Converge","lang":"en-US","frontmatter":{"order":17,"next":"16.md","title":"Test Kitchen - Converge","author":"Aaron Lippold","index":true,"description":"The converge stage uses Ansible Playbooks from the Ansible Lockdown project to apply hardening configurations, specifically the RHEL8-STIG playbook, and RedHat managed container...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/17.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Test Kitchen - Converge"}],["meta",{"property":"og:description","content":"The converge stage uses Ansible Playbooks from the Ansible Lockdown project to apply hardening configurations, specifically the RHEL8-STIG playbook, and RedHat managed container..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Test Kitchen - Converge\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"EC2 and Vagrant 
Converge","slug":"ec2-and-vagrant-converge","link":"#ec2-and-vagrant-converge","children":[]},{"level":2,"title":"Container Converge","slug":"container-converge","link":"#container-converge","children":[]}],"git":{},"readingTime":{"minutes":0.7,"words":211},"filePathRelative":"courses/profile-dev-test/17.md","autoDesc":true}');export{e as data}; diff --git a/assets/18.html-6inXR1OS.js b/assets/18.html-6inXR1OS.js new file mode 100644 index 000000000..937203f9d --- /dev/null +++ b/assets/18.html-6inXR1OS.js @@ -0,0 +1 @@ +import{_ as e}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as t,c as a,f as i}from"./app-PAvzDPkc.js";const o={},s=i('The verify
stage uses the kitchen-inspec
verifier from Test Kitchen to run the profile against the test targets.
For this stage, the profile receives a set of tailored input
YAML files. These files adjust the testing for each target, ensuring accurate validation against the expected state and minimizing false results.
There are also specific threshold
files for each target environment platform (EC2, container, and Vagrant) in both the vanilla
and hardened
suites.
The following sections provide a detailed breakdown of these files, their structure, and the workflow organization.
',5),r=[s];function n(c,d){return t(),a("div",null,r)}const f=e(o,[["render",n],["__file","18.html.vue"]]);export{f as default}; diff --git a/assets/18.html-tlb9_esG.js b/assets/18.html-tlb9_esG.js new file mode 100644 index 000000000..45bffb5ab --- /dev/null +++ b/assets/18.html-tlb9_esG.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-9471733e","path":"/courses/profile-dev-test/18.html","title":"Test Kitchen - Validate","lang":"en-US","frontmatter":{"order":18,"next":"19.md","title":"Test Kitchen - Validate","author":"Aaron Lippold","index":true,"description":"The verify stage uses the kitchen-inspec verifier from Test Kitchen to run the profile against the test targets. For this stage, the profile receives a set of tailored input YAM...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/18.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Test Kitchen - Validate"}],["meta",{"property":"og:description","content":"The verify stage uses the kitchen-inspec verifier from Test Kitchen to run the profile against the test targets. 
For this stage, the profile receives a set of tailored input YAM..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Test Kitchen - Validate\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":0.35,"words":106},"filePathRelative":"courses/profile-dev-test/18.md","autoDesc":true}');export{e as data}; diff --git a/assets/19.html-1B95xKZW.js b/assets/19.html-1B95xKZW.js new file mode 100644 index 000000000..6a186a22c --- /dev/null +++ b/assets/19.html-1B95xKZW.js @@ -0,0 +1 @@ +import{_ as e}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as t,c as s,f as o}from"./app-PAvzDPkc.js";const n={},a=o('The destroy
stage terminates the EC2 instances, Vagrant boxes, or containers that Test Kitchen created for testing.
Occasionally, the destroy
stage may encounter issues if the hosting platforms have altered the state of the provisioned instance during your writing, testing, or debugging sessions. If you face any problems with the destroy
stage or any other Test Kitchen commands, verify the following:
Sometimes, the solution can be as simple as checking if the instance is still active.
',5),i=[a];function r(c,h){return t(),s("div",null,i)}const g=e(n,[["render",r],["__file","19.html.vue"]]);export{g as default}; diff --git a/assets/19.html-2LyglPur.js b/assets/19.html-2LyglPur.js new file mode 100644 index 000000000..6f0abe2ee --- /dev/null +++ b/assets/19.html-2LyglPur.js @@ -0,0 +1 @@ +const t=JSON.parse('{"key":"v-9107c200","path":"/courses/profile-dev-test/19.html","title":"Test Kitchen - Destroy","lang":"en-US","frontmatter":{"order":19,"next":"20.md","title":"Test Kitchen - Destroy","author":"Aaron Lippold","description":"The destroy stage terminates the EC2 instances, Vagrant boxes, or containers that Test Kitchen created for testing. Occasionally, the destroy stage may encounter issues if the h...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/19.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Test Kitchen - Destroy"}],["meta",{"property":"og:description","content":"The destroy stage terminates the EC2 instances, Vagrant boxes, or containers that Test Kitchen created for testing. 
Occasionally, the destroy stage may encounter issues if the h..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Test Kitchen - Destroy\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":0.38,"words":113},"filePathRelative":"courses/profile-dev-test/19.md","autoDesc":true}');export{t as data}; diff --git a/assets/20.html-RbtoI_3n.js b/assets/20.html-RbtoI_3n.js new file mode 100644 index 000000000..834449b15 --- /dev/null +++ b/assets/20.html-RbtoI_3n.js @@ -0,0 +1 @@ +import{_ as n}from"./plugin-vue_export-helper-x3n3nnut.js";import{r,o as c,c as i,d as e,e as t,b as a}from"./app-PAvzDPkc.js";const s={},h=e("h1",{id:"the-kitchen-directory",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#the-kitchen-directory","aria-hidden":"true"},"#"),t(" The "),e("code",null,".kitchen/"),t(" Directory")],-1),l={href:"/.kitchen/",target:"_blank",rel:"noopener noreferrer"},d=e("code",null,".kitchen/",-1),_=e("a",{href:"#311-locating-test-target-login-details"},"Finding Your Test Target Login Details",-1),f=e("code",null,".kitchen/",-1);function u(k,m){const o=r("ExternalLinkIcon");return c(),i("div",null,[h,e("p",null,[t("The "),e("a",l,[d,a(o)]),t(" directory contains the state file for Test Kitchen, which is automatically generated when you first run Test Kitchen. 
Refer to the "),_,t(" section to see how you can use the "),f,t(" directory.")])])}const y=n(s,[["render",u],["__file","20.html.vue"]]);export{y as default}; diff --git a/assets/20.html-fvwmUOh6.js b/assets/20.html-fvwmUOh6.js new file mode 100644 index 000000000..a8c90713a --- /dev/null +++ b/assets/20.html-fvwmUOh6.js @@ -0,0 +1 @@ +const t=JSON.parse('{"key":"v-45f286ac","path":"/courses/profile-dev-test/20.html","title":"Test Kitchen - .kitchen/ directory","lang":"en-US","frontmatter":{"order":20,"next":"21.md","title":"Test Kitchen - .kitchen/ directory","author":"Aaron Lippold","description":"The .kitchen/ (/.kitchen/) directory contains the state file for Test Kitchen, which is automatically generated when you first run Test Kitchen. Refer to the Finding Your Test T...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/20.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Test Kitchen - .kitchen/ directory"}],["meta",{"property":"og:description","content":"The .kitchen/ (/.kitchen/) directory contains the state file for Test Kitchen, which is automatically generated when you first run Test Kitchen. 
Refer to the Finding Your Test T..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Test Kitchen - .kitchen/ directory\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[],"git":{},"readingTime":{"minutes":0.2,"words":61},"filePathRelative":"courses/profile-dev-test/20.md","autoDesc":true}');export{t as data}; diff --git a/assets/21.html-4DDoeaAC.js b/assets/21.html-4DDoeaAC.js new file mode 100644 index 000000000..6882fa3e8 --- /dev/null +++ b/assets/21.html-4DDoeaAC.js @@ -0,0 +1,49 @@ +import{_ as n}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as s,c as e,f as a}from"./app-PAvzDPkc.js";const t={},i=a(`kitchen.yml
FileThe kitchen.yml
file is the primary configuration file for Test Kitchen. It outlines the shared configuration for all your testing environments, platforms, and the testing framework to be used.
Each of the subsequent kitchen files will inherit the shared settings from this file automatically and merge them with the settings in the child kitchen file.
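The inheritance can be sketched as a recursive hash merge in plain Ruby (a simplified illustration; Test Kitchen's actual merge logic is more involved than this):

```ruby
# Child kitchen files overlay their settings onto kitchen.yml's shared
# settings; nested hashes are merged, scalar values are overridden.
def deep_merge(base, child)
  base.merge(child) do |_key, old_val, new_val|
    if old_val.is_a?(Hash) && new_val.is_a?(Hash)
      deep_merge(old_val, new_val)
    else
      new_val
    end
  end
end

shared = { 'verifier' => { 'name' => 'inspec', 'sudo' => true } }
child  = { 'verifier' => { 'sudo' => false } }
merged = deep_merge(shared, child)
puts merged.inspect
```

Here `sudo` is overridden by the child file while `name` is inherited from the shared configuration.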
kitchen.yml
file---
+verifier:
+ name: inspec
+ sudo: true
+ reporter:
+ - cli
+ - json:spec/results/%{platform}_%{suite}.json
+ inspec_tests:
+ - name: RedHat 8 STIG v1r12
+ path: .
+ input_files:
+ - kitchen.inputs.yml
+ <% if ENV['INSPEC_CONTROL'] %>
+ controls:
+ - "<%= ENV['INSPEC_CONTROL'] %>"
+ <% end %>
+ load_plugins: true
+
+suites:
+ - name: vanilla
+ provisioner:
+ playbook: spec/ansible/roles/ansible-role-rhel-vanilla.yml
+ - name: hardened
+ provisioner:
+ playbook: spec/ansible/roles/ansible-role-rhel-hardened.yml
+
kitchen.yml
file:verifier:
+ name: inspec
+ sudo: true
+ reporter:
+ - cli
+ - json:spec/results/%{platform}_%{suite}.json
+ inspec_tests:
+ - name: RedHat 8 STIG v1r12
+ path: .
+ input_files:
+ - kitchen.inputs.yml
+ <% if ENV['INSPEC_CONTROL'] %>
+ controls:
+ - "<%= ENV['INSPEC_CONTROL'] %>"
+ <% end %>
+ load_plugins: true
+
This first section configures the verifier, which is the tool that checks if your system is in the desired state. Here, it's using InSpec.
sudo: true
means that InSpec will run with sudo privileges.reporter
specifies the formats in which the test results will be reported. Here, it's set to report in the command-line interface (cli
) and in a JSON file (json:spec/results/%{platform}_%{suite}.json
).inspec_tests
specifies the InSpec profiles to run. Here, it's running the "RedHat 8 STIG v1r12" profile located in the current directory (path: .
).input_files
specifies files that contain input variables for the InSpec profile. Here, it's using the kitchen.inputs.yml
file.controls
section is dynamically set based on the INSPEC_CONTROL
environment variable. If the variable is set, only the specified control will be run.load_plugins: true
means that InSpec will load any available plugins.suites:
+ - name: vanilla
+ provisioner:
+ playbook: spec/ansible/roles/ansible-role-rhel-vanilla.yml
+ - name: hardened
+ provisioner:
+ playbook: spec/ansible/roles/ansible-role-rhel-hardened.yml
+
This section defines the test suites. Each suite represents a different configuration to test.
name
and a provisioner
.provisioner
section specifies the Ansible playbook to use for the suite. Here, it's using the ansible-role-rhel-vanilla.yml
playbook for the "vanilla" suite and the ansible-role-rhel-hardened.yml
playbook for the "hardened" suite.kitchen.yml
INSPEC_CONTROL
: This variable allows you to specify a single control to run during the bundle exec kitchen verify
phase. This is particularly useful for testing or debugging a specific requirement.The workflow of Test Kitchen involves the following steps:
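The way that variable flows into the profile run can be seen by rendering the ERB guard by hand. A minimal sketch, using a trimmed stand-in for the real verifier block and the example control ID used elsewhere in this course:

```ruby
require 'erb'
require 'yaml'

# Trimmed stand-in for the verifier block in kitchen.yml: the controls
# list only appears when INSPEC_CONTROL is set in the environment.
template = <<~TMPL
  verifier:
    name: inspec
  <% if ENV['INSPEC_CONTROL'] %>
    controls:
      - "<%= ENV['INSPEC_CONTROL'] %>"
  <% end %>
TMPL

ENV['INSPEC_CONTROL'] = 'SV-230221'
config = YAML.safe_load(ERB.new(template).result)
puts config['verifier']['controls'].inspect
```

With the variable unset, the rendered YAML has no `controls` key and InSpec runs the whole profile.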
kitchen.ec2.yml
FileThe kitchen.ec2.yml
file is instrumental in setting up our testing targets within the AWS environment. It outlines the configuration details for these targets, including their VPC assignments and the specific settings for each VPC.
This file leverages the AWS CLI and AWS Credentials
configured as described in the previous Required Software section.
Alternatively, if you've set up AWS Environment Variables, the file will use those for AWS interactions.
kitchen.ec2.yml
file---
+platforms:
+ - name: rhel-8
+
+driver:
+ name: ec2
+ metadata_options:
+ http_tokens: required
+ http_put_response_hop_limit: 1
+ instance_metadata_tags: enabled
+ instance_type: m5.large
+ associate_public_ip: true
+ interface: public
+ skip_cost_warning: true
+ privileged: true
+ tags:
+ CreatedBy: test-kitchen
+
+provisioner:
+ name: ansible_playbook
+ hosts: all
+ require_chef_for_busser: false
+ require_ruby_for_busser: false
+ ansible_binary_path: /usr/local/bin
+ require_pip3: true
+ ansible_verbose: true
+ roles_path: spec/ansible/roles
+ galaxy_ignore_certs: true
+ requirements_path: spec/ansible/roles/requirements.yml
+ ansible_extra_flags: <%= ENV['ANSIBLE_EXTRA_FLAGS'] %>
+
+lifecycle:
+ pre_converge:
+ - remote: |
+ echo "NOTICE - Installing needed packages"
+ sudo dnf -y clean all
+ sudo dnf -y install --nogpgcheck bc bind-utils redhat-lsb-core vim
+ echo "updating system packages"
+ sudo dnf -y update --nogpgcheck --nobest
+ sudo dnf -y distro-sync
+ echo "NOTICE - Updating the ec2-user to keep sudo working"
+ sudo chage -d $(( $( date +%s ) / 86400 )) ec2-user
+ echo "NOTICE - updating ec2-user sudo config"
+ sudo chmod 600 /etc/sudoers && sudo sed -i'' "/ec2-user/d" /etc/sudoers && sudo chmod 400 /etc/sudoers
+
+transport:
+ name: ssh
+ max_ssh_sessions: 2
+
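One non-obvious line in the `pre_converge` hook above is the `chage` call: `chage -d` also accepts the last-password-change date as a number of days since the Unix epoch, which the hook computes from the current Unix time. A quick sketch of the arithmetic:

```shell
# chage -d wants the last-password-change date; expressing it as
# days since the epoch lets the hook compute "today" portably.
days_since_epoch=$(( $(date +%s) / 86400 ))
echo "$days_since_epoch"
# Sanity check: any current date is well past 1 Jan 2020 (day 18262).
[ "$days_since_epoch" -gt 18262 ] && echo "recent"
```

Setting the date to "today" keeps the `ec2-user` password from being treated as expired, which would otherwise break `sudo` during long test sessions.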
kitchen.ec2.yml
fileplatforms:
+ - name: rhel-8
+
This section defines the platforms on which your tests will run. In this case, it's Red Hat Enterprise Linux 8.
driver:
+ name: ec2
+ ...
+
This section configures the driver, which is responsible for creating and managing the instances. Here, it's set to use Amazon EC2 instances. The various options configure the EC2 instances, such as instance type (m5.large
), whether to associate a public IP address (associate_public_ip: true
), and various metadata options.
provisioner:
+ name: ansible_playbook
+ ...
+
This section configures the provisioner, which is the tool that brings your system to the desired state. Here, it's using Ansible playbooks. The various options configure how Ansible is run, such as the path to the Ansible binary (ansible_binary_path: /usr/local/bin
), whether to require pip3 (require_pip3: true
), and the path to the roles and requirements files.
lifecycle:
+ pre_converge:
+ - remote: |
+ ...
+
This section defines lifecycle hooks, which are commands that run at certain points in the Test Kitchen run. Here, it's running a series of commands before the converge phase (i.e., before applying the infrastructure code). These commands install necessary packages, update system packages, and update the ec2-user
configuration.
transport:
+ name: ssh
+ max_ssh_sessions: 2
+
This section configures the transport, which is the method Test Kitchen uses to communicate with the instance. Here, it's using SSH and allowing a maximum of 2 SSH sessions.
The workflow of Test Kitchen involves the following steps:
pre_converge
lifecycle hook.verifier
section.driver
section.The transport
is used in all these steps to communicate with the instance.
This approach allows for the evaluation of existing containers, even those created by other workflows. It can be leveraged to build a generalized workflow for validating any container against our Benchmark requirements, providing a comprehensive assessment of its security posture.
kitchen.container.yml
file---
+# see: https://kitchen.ci/docs/drivers/dokken/
+
+provisioner:
+ name: dummy
+
+driver:
+ name: dokken
+ pull_platform_image: false
+
+transport:
+ name: dokken
+
+platforms:
+ - name: ubi8
+
+suites:
+ - name: vanilla
+ driver:
+ image: <%= ENV['VANILLA_CONTAINER_IMAGE'] || "registry.access.redhat.com/ubi8/ubi:8.9-1028" %>
+ verifier:
+ input_files:
+ - container.vanilla.inputs.yml
+ - name: hardened
+ driver:
+ image: <%= ENV['HARDENED_CONTAINER_IMAGE'] || "registry1.dso.mil/ironbank/redhat/ubi/ubi8" %>
+ verifier:
+ input_files:
+ - container.hardened.inputs.yml
+ # creds_file: './creds.json'
+
kitchen.container.yml
file:provisioner:
+ name: dummy
+
This section configures the provisioner, which is the tool that brings your system to the desired state. Here, it's using a dummy provisioner, which means no provisioning will be done.
driver:
+ name: dokken
+ pull_platform_image: false
+
This section configures the driver, which is responsible for creating and managing the instances. Here, it's set to use the Dokken driver, which is designed for running tests in Docker containers. The pull_platform_image: false
option means that it won't automatically pull the Docker image for the platform; it will use the image specified in the suite.
transport:
+ name: dokken
+
This section configures the transport, which is the method Test Kitchen uses to communicate with the instance. Here, it's using the Dokken transport, which communicates with the Docker container.
platforms:
+ - name: ubi8
+
This section defines the platforms on which your tests will run. In this case, it's UBI 8 (Red Hat's Universal Base Image 8).
suites:
+ - name: vanilla
+ driver:
+ image: <%= ENV['VANILLA_CONTAINER_IMAGE'] || "registry.access.redhat.com/ubi8/ubi:8.9-1028" %>
+ verifier:
+ input_files:
+ - container.vanilla.inputs.yml
+ - name: hardened
+ driver:
+ image: <%= ENV['HARDENED_CONTAINER_IMAGE'] || "registry1.dso.mil/ironbank/redhat/ubi/ubi8" %>
+ verifier:
+ input_files:
+ - container.hardened.inputs.yml
+
This section defines the test suites. Each suite represents a different configuration to test.
name
, a driver
, and a verifier
.driver
section specifies the Docker image to use for the suite. It's dynamically set based on the VANILLA_CONTAINER_IMAGE
or HARDENED_CONTAINER_IMAGE
environment variable, with a default value if the variable is not set.verifier
section specifies files that contain input variables for the InSpec profile.The workflow of Test Kitchen involves the following steps:
verifier
section.driver
section.The transport
is used in all these steps to communicate with the container.
kitchen.container.yml
The kitchen.container.yml
file uses the following environment variables to select the containers used during its hardened
and vanilla
testing runs. You can test any container using these environment variables, even though standard defaults are set.
VANILLA_CONTAINER_IMAGE
: This variable specifies the Docker container image considered 'not hardened'. registry.access.redhat.com/ubi8/ubi:8.9-1028
HARDENED_CONTAINER_IMAGE
: This variable specifies the Docker container image considered 'hardened'. registry1.dso.mil/ironbank/redhat/ubi/ubi8
lint-profile.yml
This action checks out the repository, installs Ruby and InSpec, then runs bundle exec inspec check .
to validate the structure and syntax of the InSpec profile and its Ruby code.
verify-ec2.yml
This action performs the following steps:
vanilla
and hardened
test suites.threshold.yml
files for each test suite (hardened
and vanilla
).verify-container.yml
This action performs similar steps to verify-ec2.yml
, but with some differences:
verify-vagrant.yml.example
This action is similar to the verify-ec2
workflow, but instead of using a remote AWS EC2 instance in a VPC, it uses a local Vagrant virtual machine as the test target. The user can configure whether to upload the results to our Heimdall Demo server or not by modifing the Github Action.
Before running Delta, it's beneficial to format the profile to match the format Delta will use. This minimizes changes to only those necessary based on the guidance update. Follow these steps:
gem list cookstyle
. Create a .rubocop.yml
file with the provided example settings or modify these settings via the command line. Run cookstyle -a ./controls
and any tests you have for your profile.AllCops:
+ Exclude:
+ - "libraries/**/*"
+
+Layout/LineLength:
+ Max: 1000
+ AllowURI: true
+ IgnoreCopDirectives: true
+
+Naming/FileName:
+ Enabled: false
+
+Metrics/BlockLength:
+ Max: 400
+
+Lint/ConstantDefinitionInBlock:
+ Enabled: false
+
+# Required for Profiles as it can introduce profile errors
+Style/NumericPredicate:
+ Enabled: false
+
+Style/WordArray:
+ Description: "Use %w or %W for an array of words. (https://rubystyle.guide#percent-w)"
+ Enabled: false
+
+Style/RedundantPercentQ:
+ Enabled: true
+
+Style/NestedParenthesizedCalls:
+ Enabled: false
+
+Style/TrailingCommaInHashLiteral:
+ Description: "https://docs.rubocop.org/rubocop/cops_style.html#styletrailingcommainhashliteral"
+ Enabled: true
+ EnforcedStyleForMultiline: no_comma
+
+Style/TrailingCommaInArrayLiteral:
+ Enabled: true
+ EnforcedStyleForMultiline: no_comma
+
+Style/BlockDelimiters:
+ Enabled: false
+
+Lint/AmbiguousBlockAssociation:
+ Enabled: false
+
saf generate update_controls4delta
to check and update the control IDs with the provided XCCDF guidance. This process checks if the new guidance changes the control numbers and updates them if necessary. This minimizes the Delta output content and improves the visualization of the modifications provided by the Delta process.The SAF InSpec Delta workflow typically involves two phases, preformatting
and delta
.
Before starting, ensure you have the latest SAF-CLI, the InSpec Profile JSON file, and the updated guidance file.
saf generate update_controls4delta
command. This prepares the profile for the Delta process.saf generate delta [arguments]
to start the Delta process.For more information on these commands, refer to the following documentation:
',7),g={href:"https://saf-cli.mitre.org/#delta-supporting-options",target:"_blank",rel:"noopener noreferrer"},_={href:"https://saf-cli.mitre.org/#delta",target:"_blank",rel:"noopener noreferrer"},b=a('Delta focuses on specific modifications, migrating the changes from the XCCDF Benchmark Rules to the Profile's controls and updating the 'metadata' of each of those controls in the control ID
, title
, default desc
, check text
, and fix text
, between the XCCDF Benchmark Rules and the Profile Controls.
It also adjusts the tags
and introduces a ref
between the impact
and tags
.
Delta does not modify the Ruby/InSpec code within the control, leaving it intact. Instead, it updates the 'control metadata' using the information from the supplied XCCDF guidance document. This applies to 'matched controls' between the XCCDF Guidance Document and the InSpec profile.
Test Kitchen stores the current host details of your provisioned test targets in the .kitchen/
directory. Here, you'll find a yml
file containing your target's hostname
, ip address
, host details
, and login credentials, which could be an ssh pem key
or another type of credential.
.kitchen
+├── .kitchen/hardened-container.yml
+├── .kitchen/hardened-rhel-8.pem
+├── .kitchen/hardened-rhel-8.yml
+├── .kitchen/logs
+├── .kitchen/vanilla-container.yml
+├── .kitchen/vanilla-rhel-8.pem
+├── .kitchen/vanilla-rhel-8.yml
+└── .kitchen/vanilla-ubi8.yml
+
If your test target reboots or updates its network information, you don't need to execute bundle exec kitchen destroy. Instead, update the corresponding .kitchen/#{suite}-#{target}.yml file with the updated information. This will ensure that your kitchen login, kitchen validate, and other kitchen commands function correctly, as they'll be connecting to the correct location instead of using outdated data.
Since we're using the free-tier for our AWS testing resources instead of a dedicated host, your test targets might shut down or 'reboot in the background' if you stop interacting with them, halt them, put them in a stop state, or leave them overnight. To regain access, edit the .kitchen/#{suite}-#{target}.yml file. As mentioned above, there's no need to recreate your testing targets if you can simply point Test Kitchen to the correct IP address.
pry
and pry-byebug
for Debugging ControlsWhen developing InSpec controls, it's beneficial to use the kitchen-test
suite, the INSPEC_CONTROL
environment variable, and pry
or pry-byebug
. This combination allows you to quickly debug, update, and experiment with your fixes in the context of the InSpec code, without having to run the full test suite.
pry
and pry-byebug
are powerful tools for debugging Ruby code, including InSpec controls. Here's how you can use them:
require 'pry'
or require 'pry-byebug'
at the top of your control file.binding.pry
at the point in your code where you want to start debugging.binding.pry
line, and you can inspect variables, step through the code, and more.!Pro Tip!
binding.pry
lines when you're done debugging, or your profile won't lint cleanly down the road.
The inspec shell
command allows you to test your full control update on your test target directly. To do this, you'll need to retrieve the IP address and SSH PEM key for your target instance from the Test Kitchen .kitchen
directory. For more details on this, refer to the Finding Your Test Target Login Details section.
Once you have your IP address and SSH PEM key (for AWS target instances), or the container ID (for Docker test instances), you can use the following commands:
bundle exec inspec shell -i #{pem-key} -t ssh://ec2-user@#{ipaddress} --sudo
bundle exec inspec shell -t docker://#{container-id}
kitchen login
for Easy Test Review and ModificationThe kitchen login
command provides an easy way to review and modify your test target. This tool is particularly useful for introducing test cases, exploring corner cases, and validating both positive and negative test scenarios.
The Department of Defense (DOD) has continually updated its databases that track rules and Security Technical Implementation Guides (STIGs) that house those rules.
Initially, the system was known as the Vulnerability Management System (VMS).
In the STIGs, you might come across data elements that are remnants from these iterations. These include Group Title
(gid or gtitle), Vulnerability ID
(VulnID), Rule ID
(rule_id), STIG ID
(stig_id), and others.
A significant change was the shift from using STIG ID
to Rule ID
in many security scanning tools. This change occurred because the Vulnerability Management System used the STIG_ID as the primary index for the requirements in each Benchmark in VMS.
However, when DISA updated the Vendor STIG Processes and replaced the VMS, they decided to migrate the primary ID from the STIG ID to the Rule ID, tracking changes in the Rules as described above.
Examples of tools that still use either fully or in part the 'STIG ID' vs the 'Rule ID' as a primary index are: the DISA STIG Viewer, Nessus Audit Scans, and Open SCAP client.
While these elements might seem confusing, understanding their historical context is essential.
In our modern profiles, some data from the XCCDF Benchmarks still exist in the document but are not used or rendered in the modern InSpec Profiles. However, in some of the older profiles, you may see many of these data elements as tags
in the profile. The intention was to ensure easy and lossless conversion between XCCDF Benchmark and HDF Profile.
It was later realized that since the structure of these data elements was 'static', they could be easily reintroduced when converting back to an XCCDF Benchmark. Therefore, rendering them in the profile was deemed unnecessary.
',12),r=[i];function o(d,c){return t(),n("div",null,r)}const u=e(s,[["render",o],["__file","28.html.vue"]]);export{u as default}; diff --git a/assets/28.html-_tB2aore.js b/assets/28.html-_tB2aore.js new file mode 100644 index 000000000..e1b41aa39 --- /dev/null +++ b/assets/28.html-_tB2aore.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-2aa4fcbc","path":"/courses/profile-dev-test/28.html","title":"Background & Definitions","lang":"en-US","frontmatter":{"order":28,"next":"29.md","title":"Background & Definitions","author":"Aaron Lippold","description":"Background Evolution of STIGs and Security Benchmarks The Department of Defense (DOD) has continually updated its databases that track rules and Security Technical Implementatio...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/profile-dev-test/28.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Background & Definitions"}],["meta",{"property":"og:description","content":"Background Evolution of STIGs and Security Benchmarks The Department of Defense (DOD) has continually updated its databases that track rules and Security Technical Implementatio..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Background & Definitions\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"Background","slug":"background","link":"#background","children":[{"level":3,"title":"Evolution of STIGs and Security 
Benchmarks","slug":"evolution-of-stigs-and-security-benchmarks","link":"#evolution-of-stigs-and-security-benchmarks","children":[]}]}],"git":{},"readingTime":{"minutes":1.02,"words":306},"filePathRelative":"courses/profile-dev-test/28.md","autoDesc":true}');export{e as data}; diff --git a/assets/29.html-9ri5Q3n2.js b/assets/29.html-9ri5Q3n2.js new file mode 100644 index 000000000..6f74d7e88 --- /dev/null +++ b/assets/29.html-9ri5Q3n2.js @@ -0,0 +1 @@ +import{_ as s}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as n,o as i,c as a,d as e,e as t,b as o,f as l}from"./app-PAvzDPkc.js";const c={},d=e("h1",{id:"terms-definitions",tabindex:"-1"},[e("a",{class:"header-anchor",href:"#terms-definitions","aria-hidden":"true"},"#"),t(" Terms & Definitions")],-1),h=e("li",null,[e("strong",null,"Baseline"),t(": This refers to a set of relevant security controls, such as NIST 800-53 controls or Center for Internet Security Controls. These controls offer high-level security best practices, grouped into common areas of concern.")],-1),u=e("li",null,[e("strong",null,"Benchmark"),t(": This is a set of security controls tailored to a specific type of application or product. These controls are typically categorized into 'high', 'medium', and 'low' levels based on Confidentiality, Integrity, and Availability (C.I.A).")],-1),p={href:"https://public.cyber.mil/stigs/cci/",target:"_blank",rel:"noopener noreferrer"},m=l("v1.12.4
to v1.12.5
.r
in the string - ('SV-230221) and (r858734_rule)'. The first part remains unique within the major version of a Benchmark document, while the latter part of the string is updated each time the 'Rule' is updated 'release to release' of the Benchmark. For example: 'SV-230221r858734_rule'.You might have noticed that many InSpec resources have a "plural" version. For example, user
has a users
counterpart, and package
has packages
.
Plural resources examine platform objects in bulk.
For example,
Plural resources usually include functions to query the set of objects it represents by an attribute, like so:
describe users.where(uid: 0).entries do
+ it { should eq ['root'] }
+ its('uids') { should eq [1234] }
+ its('gids') { should eq [1234] }
+end
+
This test queries all users to confirm that the only one with a uid of zero is the root user.
Plural InSpec resources are created by leveraging Ruby's FilterTable module to capture system data. Let's dig into how FilterTable works so that you can write your own plural resources.
FilterTable is intended to help you author plural resources with stucture data. You declare a number of columns of data, attach them to a FilterTable object, and then write methods that the FilterTable can call to populate those columns. You can also define custom matchers that make sense for whatever data you are modeling (to go alongside the standard InSpec matchers like be_in
,include
, and cmp
). You wind up with a queryable structure:
+inspec> etc_hosts.entries
+=>
+[#<struct ip_address="127.0.0.1", primary_name="localhost", all_host_names=["localhost", "localhost.localdomain", "localhost4", "localhost4.localdomain4"]>,
+ #<struct ip_address="::1", primary_name="localhost6", all_host_names=["localhost6", "localhost6.localdomain6"]>,
+ #<struct ip_address="127.0.0.1", primary_name="test1.org", all_host_names=["test1.org"]>,
+ #<struct ip_address="127.0.0.1", primary_name="test2.org", all_host_names=["test2.org"]>,
+ #<struct ip_address="127.0.0.1", primary_name="test3.org", all_host_names=["test3.org"]>,
+ #<struct ip_address="127.0.0.1", primary_name="test4.org", all_host_names=["test4.org"]>]
+
+
In theory, yes - that would be used to implement different data fetching / caching strategies. It is a very advanced usage, and no core resources currently do this, as far as we know.
Let's take a look at the structure of a resource that leverages FilterTable. We will write a dummy resource that models a small group of students. Our resource will describe each student's name, grade, and the toys they have. Usually, a resource will include some methods that reach out the system under test to populate the FilterTable with real system data, but for now we're just going to hard-code in some dummy data.
inspec init profile filtertable-test
+
libraries
directory as filter.rb
.Tips
You can also use inspec init resource <your-resource-name>
to create the template for your resource. When following the prompts, you can choose "plural" to create the template for a plural resource.
require 'inspec/utils/filter'
+
+class Filtertable < Inspec.resource(1)
+
+ name "filtertable"
+ supports platform: "linux"
+
+ filter_table = FilterTable.create
+
+ filter_table.register_column(:name, field: :name)
+ filter_table.register_column(:grade, field: :grade)
+ filter_table.register_column(:toys, field: :toys)
+
+ filter_table.register_custom_matcher(:has_bike?) { |filter_table| filter_table.toys.flatten.include?('bike') }
+ filter_table.register_custom_matcher(:has_middle_schooler?) { |filter_table| filter_table.grade.uniq.any?{ |grade| grade >= 6} }
+
+ filter_table.register_custom_property(:bike_count) { |filter_table| filter_table.toys.flatten.count('bike') }
+ filter_table.register_custom_property(:middle_schooler_count) { |filter_table| filter_table.where{ grade >= 6 }.count }
+
+ filter_table.install_filter_methods_on_resource(self, :fetch_data)
+
+ def fetch_data
+ # This method should return an array of hashes - the raw data. We'll hardcode it here.
+ [
+ { name: "Sarah", grade: 7, toys: ['car','train','bike']},
+ { name: "John", grade: 4, toys: ['top','bike']},
+ { name: "Donny", grade: 5, toys: ['train','nintendo']},
+ { name: "Susan", grade: 7, toys: ['car','gameboy','bike']},
+ ]
+ end
+end
+
Now we've got a nice blob of code in a resource file. Let's load this resource in the InSpec shell and see what we can do with it.
Invoking the InSpec shell with inspec shell
will give you access to all the core InSpec resources by default, but InSpec does not automatically know about your locally defined resources unless you point them out. If you're testing a local resource, use the --depends
flag and pass in the profile directory that your resource lives in.
inspec shell --depends /path/to/profile/root/
+
FilterTables organize their data into columns. Your resource will declare a number of columns using the register_column
method.
Once you declare the columns that you want in your FilterTable (name
, grade
, and toys
in our example), you need to insert some data into them using the install_filter_methods_on_resource
method. That method takes two args -- self
and a data structure that is an array of hashes. The array of hashes will be matched up to the columns you defined using the hashes' keys. For our example we hard-coded this data structure, which is returned by the fetch_data
method.
After we define our FilterTable's columns, we can also define custom matchers just like we do in singluar resources using register_custom_matcher
. That function takes a block as an argument that defines a boolean expression that tells InSpec when that matcher should return true
. Note that the matcher's logic can get pretty complicated -- that's why we're shoving all of it into a resource so we can avoid having to write complicated tests.
has_bike?
describe filtertable.where( name: "Donny" ) do
+ it { should have_bike }
+end
+
+Profile: inspec-shell
+Version: (not specified)
+
+ filtertable with name == "Donny"
+ × should have bike
+ expected #has_bike? to return true, got false
+
+Test Summary: 0 successful, 1 failure, 0 skipped
+
describe filtertable.where( name: "Sarah" ) do
+ it { should have_bike }
+end
+
+Profile: inspec-shell
+Version: (not specified)
+
+ filtertable with name == "Sarah"
+ ✔ should have bike
+
+Test Summary: 1 successful, 0 failures, 0 skipped
+
+
In the simplest examples, we filter the table down to a single student using where
(more on where
in a minute) and invoke a matcher that checks if that student has a bike
in their list of toys. We can write matchers with whatever logic we like. For example, has_bike
checks whether all of the students in the table under test have a bike, while has_middle_schooler
checks whether any student in the table under test is in the 7th grade or higher.
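The all-versus-any distinction is easy to see in plain Ruby. This sketch (no InSpec required; the roster data is illustrative) mirrors the logic those two matchers would implement:

```ruby
# Illustrative roster; grade 7 or higher counts as a middle schooler here.
ROSTER = [
  { name: "Sarah", grade: 7, toys: [:bike, :kite] },
  { name: "John",  grade: 8, toys: [:bike] },
  { name: "Mary",  grade: 6, toys: [:bike, :car] },
  { name: "Donny", grade: 5, toys: [:yoyo] },
].freeze

# has_bike?: every row in the (filtered) table must have a bike
def has_bike?(rows)
  rows.all? { |row| row[:toys].include?(:bike) }
end

# has_middle_schooler?: at least one row must be grade 7 or higher
def has_middle_schooler?(rows)
  rows.any? { |row| row[:grade] >= 7 }
end

has_bike?(ROSTER.select { |r| r[:name] == "Donny" }) # => false
has_bike?(ROSTER.select { |r| r[:name] == "Sarah" }) # => true
has_middle_schooler?(ROSTER)                         # => true
```

Inside a real resource, the same blocks would be passed to register_custom_matcher, and the rows would come from the FilterTable rather than a hand-built array.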
has_middle_schooler?
describe filtertable.where { name =~ /Sarah|John/ } do
+ it { should have_middle_schooler }
+end
+
+Profile: inspec-shell
+Version: (not specified)
+Target ID:
+
+ filtertable with name =~ /Sarah|John/
+ ✔ is expected to have middle schooler
+
+Test Summary: 1 successful, 0 failures, 0 skipped
+
+
We can also declare custom properties for our resource, using whatever logic we like, just like we did for our custom matchers. Properties can be referred to with its
syntax in an InSpec test.
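A property is just a computed value over the table's rows. Sketched in plain Ruby (the data is illustrative, chosen to be consistent with the example results in this section):

```ruby
STUDENT_ROWS = [
  { name: "Sarah", grade: 7, toys: [:bike, :kite] },
  { name: "John",  grade: 8, toys: [:bike] },
  { name: "Mary",  grade: 6, toys: [:bike, :car] },
  { name: "Donny", grade: 5, toys: [:yoyo] },
].freeze

# bike_count: how many rows have a bike in their toys list
def bike_count(rows)
  rows.count { |row| row[:toys].include?(:bike) }
end

# middle_schooler_count: how many rows are grade 7 or higher
def middle_schooler_count(rows)
  rows.count { |row| row[:grade] >= 7 }
end

bike_count(STUDENT_ROWS)            # => 3
middle_schooler_count(STUDENT_ROWS) # => 2
```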
bike_count
describe filtertable do
+ its('bike_count') { should eq 3 }
+end
+
+Profile: inspec-shell
+Version: (not specified)
+Target ID:
+
+ filtertable
+ ✔ bike_count is expected to eq 3
+
+Test Summary: 1 successful, 0 failures, 0 skipped
+
middle_schooler_count
describe filtertable do
+ its('middle_schooler_count') { should eq 4 }
+end
+
+Profile: inspec-shell
+Version: (not specified)
+Target ID:
+
+ filtertable
+ × middle_schooler_count is expected to eq 4
+
+ expected: 4
+ got: 2
+
+ (compared using ==)
+
+
+Test Summary: 0 successful, 1 failure, 0 skipped
+
+
To get a better feel for how FilterTable works, we suggest you add a few extra features to the sample given above.
Then write some tests to see how your new matchers and properties work.
When you create a new FilterTable, these methods are installed automatically: where
, entries
, raw_data
, count
, and exist?
. Each is useful both for writing tests directly and for building custom matchers and properties inside the resource code.
where
methodYou may have already noticed that a bunch of our example tests are using the where
method on the FilterTable object. This method returns a new FilterTable object created from the rows of the original table that match the query provided to where
. If you have experience with relational databases, think of it like the WHERE
clause in a SQL query. This method is extremely flexible; we give some examples below.
where
as a method with no block, passing hash params with keys you know are in the raw data, it will fetch the raw data, then filter row-wise and return the resulting Table. describe things.where(color: 'red') do
+ its('count') { should cmp 2 }
+ end
+
+ # Regexes
+ describe things.where(color: /^re/) do
+ its('count') { should cmp 2 }
+ end
+
+ # It eventually falls out to '===' comparison
+ # Here, range membership 1..2
+ describe things.where(thing_id: (1..2)) do
+ its('count') { should cmp 2 }
+ end
+
+ # irregular rows are supported
+ # Only one row has the :tackiness key, with value 'very'.
+ describe things.where(tackiness: 'very') do
+ its('count') { should cmp 1 }
+ end
+
+
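The "falls out to `===`" behavior can be sketched in plain Ruby: each hash param is compared against the row's value with the case-equality operator, which is why literals, Regexps, and Ranges all work as filters (the `things` rows below are illustrative; a real FilterTable does more bookkeeping than this):

```ruby
ROWS = [
  { thing_id: 1, color: :red },
  { thing_id: 2, color: :red, tackiness: "very" },
  { thing_id: 3, color: :blue },
].freeze

# Keep a row when every condition's value === the row's field value.
def where_like(rows, conditions)
  rows.select do |row|
    conditions.all? { |field, expected| expected === row[field] }
  end
end

where_like(ROWS, color: :red).count       # => 2
where_like(ROWS, color: /^re/).count      # => 2 (Regexp#=== matches Symbols)
where_like(ROWS, thing_id: (1..2)).count  # => 2 (Range membership)
where_like(ROWS, tackiness: "very").count # => 1 (irregular rows are fine)
```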
where
method with blocksYou can also call the where
method with a block. The block is executed row-wise. If it returns truthy, the row is included in the results. Each field declared with the register_column
configuration method is available as a data accessor.
+ # You can have any logic you want in the block
+ describe things.where { true } do
+ its('count') { should cmp 3 }
+ end
+
+ # You can access any fields you declared using \`register_column\`
+ describe things.where { thing_id > 2 } do
+ its('count') { should cmp 1 }
+ end
+
where
calls and Tables without re-fetching raw dataThe first time where
is called, the data fetcher method is called. where
performs filtration on the raw data table. It then constructs a new FilterTable::Table
, directly passing in the filtered raw data; this is then the return value from where
.
# This only calls fetch_data once
+ describe things.where(color: :red).where { thing_id > 2 } do
+ its('count') { should cmp 1 }
+ end
+
Some other methods return a Table object, and they may be chained without a re-fetch as well.
entries
methodThe other register_filter_method
call enables a pre-defined method, entries
. entries
is much simpler than where
- in fact, its behavior is unrelated. It returns an encapsulated version of the raw data - a plain array, containing Structs as row-entries. Each struct has an attribute for each time you called register_column
.
Importantly, note that the return value of entries
is not the resource, nor the Table - in other words, you cannot chain it. However, you can call entries
on any Table.
If you call entries
without chaining it after where
, it will trigger the call to the data-fetching method.
+ # Access the entries array
+ describe things.entries do
+ # This is Array#count, not the resource's \`count\` method
+ its('count') { should cmp 3}
+ end
+
+ # Access the entries array after chaining off of where
+ describe things.where(color: :red).entries do
+ # This is Array#count, not the resource's or table's \`count\` method
+ its('count') { should cmp 2}
+ end
+
+ # You can access the struct elements as a method, as a hash keyed on symbol, or as a hash keyed on string
+ describe things.entries.first.color do
+ it { should cmp :red }
+ end
+ describe things.entries.first[:color] do
+ it { should cmp :red }
+ end
+ describe things.entries.first['color'] do
+ it { should cmp :red }
+ end
+
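The Struct wrapping that `entries` performs can be sketched in plain Ruby: a Struct member is reachable as a method, as a symbol key, or as a string key, which is what the three access styles above rely on (the `Thing` struct and its rows are illustrative stand-ins):

```ruby
# Illustrative stand-in for rows with registered columns :thing_id and :color.
Thing = Struct.new(:thing_id, :color, keyword_init: true)

entries = [
  { thing_id: 1, color: :red },
  { thing_id: 2, color: :red },
  { thing_id: 3, color: :blue },
].map { |row| Thing.new(**row) }

entries.count          # => 3 (plain Array#count)
entries.first.color    # => :red (method access)
entries.first[:color]  # => :red (hash-style, symbol key)
entries.first["color"] # => :red (hash-style, string key)
```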
exist?
matcherThis register_custom_matcher
call:
filter_table_config.register_custom_matcher(:exist?) { |filter_table| !filter_table.entries.empty? }
+
causes a new method to be defined on both the resource class and the Table class. The body of the method is taken from the block that is provided. When the method is called, it will receive the FilterTable::Table
instance as its first parameter. (It may also accept a second param, but that doesn't make sense for this method - see thing_ids).
As when you are implementing matchers on a singular resource, the only thing that distinguishes this as a matcher is the fact that it ends in ?
.
# Bare call on the matcher (called as a method on the resource)
+ describe things do
+ it { should exist }
+ end
+
+ # Chained on where (called as a method on the Table)
+ describe things.where(color: :red) do
+ it { should exist }
+ end
+
count
propertyThis register_custom_property
call:
filter_table_config.register_custom_property(:count) { |filter_table| filter_table.entries.count }
+
causes a new method to be defined on both the resource class and the Table class. As with exist?
, the body is taken from the block.
# Bare call on the property (called as a method on the resource)
+ describe things do
+ its('count') { should cmp 3 }
+ end
+
+ # Chained on where (called as a method on the Table)
+ describe things.where(color: :red) do
+ its('count') { should cmp 2 }
+ end
+
raw_data
methodUnlike entries
, which wraps each row in a Struct and omits undeclared fields, raw_data
simply returns the actual raw data array-of-hashes. It is not dup
'd, so modifying it modifies the resource's underlying data. People do use this in the wild, even though it returns a rougher data structure.
tacky_things = things.where(color: :blue).raw_data.select { |row| row[:tackiness] }
+ tacky_things.map { |row| row[:thing_id] }.each do |thing_id|
+ # Use to audit a singular Thing
+ describe thing(thing_id) do
+ it { should_not be_paisley }
+ end
+ end
+
FilterTable is a very flexible and powerful class that works well when designing plural resources. As always, if you need to write a plural resource, we encourage you to examine existing resources in the InSpec source code to see how other developers have implemented it. Some good examples include:
`,52),m={href:"https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/firewalld.rb",target:"_blank",rel:"noopener noreferrer"},h={href:"https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/users.rb",target:"_blank",rel:"noopener noreferrer"},b={href:"https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/shadow.rb",target:"_blank",rel:"noopener noreferrer"};function v(g,y){const a=l("ExternalLinkIcon");return p(),i("div",null,[r,s("p",null,[n("As we mentioned earlier, a real InSpec resource will include methods that will populate the resource with real system data. Take a look at the "),s("a",u,[n("Firewalld resource"),e(a)]),n(" for an example of a resource that does this -- note the resource is ultimately invoking a shell command ("),d,n(") to populate its FilterTable. There are plenty of other InSpec resources using FilterTable that you can find in the source code if you are interested in more examples.")]),k,s("ul",null,[s("li",null,[s("a",m,[n("FirewallD"),e(a)])]),s("li",null,[s("a",h,[n("Users"),e(a)])]),s("li",null,[s("a",b,[n("Shadow"),e(a)])])])])}const _=o(c,[["render",v],["__file","Appendix A - Writing Plural Resources.html.vue"]]);export{_ as default}; diff --git a/assets/Appendix A - Writing Plural Resources.html-yPzso6d-.js b/assets/Appendix A - Writing Plural Resources.html-yPzso6d-.js new file mode 100644 index 000000000..c2cefdc02 --- /dev/null +++ b/assets/Appendix A - Writing Plural Resources.html-yPzso6d-.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-32f5f052","path":"/courses/advanced/Appendix%20A%20-%20Writing%20Plural%20Resources.html","title":"Appendix A - Writing Plural Resources","lang":"en-US","frontmatter":{"order":14,"title":"Appendix A - Writing Plural Resources","author":"Aaron Lippold","headerDepth":3,"description":"10. 
Plural Resources You might have noticed that many InSpec resources have a \\"plural\\" version. For example, user has a users counterpart, and package has packages. Plural resou...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/advanced/Appendix%20A%20-%20Writing%20Plural%20Resources.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Appendix A - Writing Plural Resources"}],["meta",{"property":"og:description","content":"10. Plural Resources You might have noticed that many InSpec resources have a \\"plural\\" version. For example, user has a users counterpart, and package has packages. Plural resou..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Appendix A - Writing Plural Resources\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":2,"title":"10. Plural Resources","slug":"_10-plural-resources","link":"#_10-plural-resources","children":[{"level":3,"title":"10.1. Using FilterTable to write a Plural Resource","slug":"_10-1-using-filtertable-to-write-a-plural-resource","link":"#_10-1-using-filtertable-to-write-a-plural-resource","children":[]},{"level":3,"title":"10.2. FilterTable Hands-On","slug":"_10-2-filtertable-hands-on","link":"#_10-2-filtertable-hands-on","children":[]},{"level":3,"title":"10.3. 
Predefined Methods for FilterTable","slug":"_10-3-predefined-methods-for-filtertable","link":"#_10-3-predefined-methods-for-filtertable","children":[]},{"level":3,"title":"10.4 FilterTable Examples","slug":"_10-4-filtertable-examples","link":"#_10-4-filtertable-examples","children":[]}]}],"git":{},"readingTime":{"minutes":7.71,"words":2312},"filePathRelative":"courses/advanced/Appendix A - Writing Plural Resources.md","autoDesc":true}');export{e as data}; diff --git a/assets/Appendix B - Resource Examples.html-NvdExdec.js b/assets/Appendix B - Resource Examples.html-NvdExdec.js new file mode 100644 index 000000000..637bb4dd3 --- /dev/null +++ b/assets/Appendix B - Resource Examples.html-NvdExdec.js @@ -0,0 +1,486 @@ +import{_ as p}from"./plugin-vue_export-helper-x3n3nnut.js";import{r as o,o as i,c as l,d as n,e as s,b as e,f as t}from"./app-PAvzDPkc.js";const c={},r=t(`As an example we will go through a few custom resources that were built and approved.
---
+title: About the ip6tables Resource
+platform: linux
+---
+
+# ip6tables
+
+Use the \`ip6tables\` Chef InSpec audit resource to test rules that are defined in \`ip6tables\`, which maintains tables of IP packet filtering rules for IPv6. There may be more than one table. Each table contains one (or more) chains (both built-in and custom). A chain is a list of rules that match packets. When the rule matches, the rule defines what target to assign to the packet.
+
+<br>
+
+## Availability
+
+### Installation
+
+This resource is distributed along with Chef InSpec itself. You can use it automatically.
+
+### Version
+
+This resource first became available in v4.6.9 of InSpec.
+
+## Syntax
+
+A \`ip6tables\` resource block declares tests for rules in IP tables:
+
+ describe ip6tables(rule:'name', table:'name', chain: 'name') do
+ it { should have_rule('RULE') }
+ end
+
+where
+
+* \`ip6tables()\` may specify any combination of \`rule\`, \`table\`, or \`chain\`
+* \`rule:'name'\` is the name of a rule that matches a set of packets
+* \`table:'name'\` is the packet matching table against which the test is run
+* \`chain: 'name'\` is the name of a user-defined chain or one of \`ACCEPT\`, \`DROP\`, \`QUEUE\`, or \`RETURN\`
+* \`have_rule('RULE')\` tests that rule in the ip6tables list. This must match the entire line taken from \`ip6tables -S CHAIN\`.
+
+<br>
+
+## Examples
+
+The following examples show how to use this Chef InSpec audit resource.
+
+### Test if the INPUT chain is in default ACCEPT mode
+
+ describe ip6tables do
+ it { should have_rule('-P INPUT ACCEPT') }
+ end
+
+### Test if the INPUT chain from the mangle table is in ACCEPT mode
+
+ describe ip6tables(table:'mangle', chain: 'INPUT') do
+ it { should have_rule('-P INPUT ACCEPT') }
+ end
+
+### Test if there is a rule allowing Postgres (5432/TCP) traffic
+
+ describe ip6tables do
+ it { should have_rule('-A INPUT -p tcp -m tcp -m multiport --dports 5432 -m comment --comment "postgres" -j ACCEPT') }
+ end
+
+Note that the rule specification must exactly match what's in the output of \`ip6tables -S INPUT\`, which will depend on how you've built your rules.
+
+<br>
+
+## Matchers
+
+For a full list of available matchers, please visit our [matchers page](https://www.inspec.io/docs/reference/matchers/).
+
+### have_rule
+
+The \`have_rule\` matcher tests the named rule against the information in the \`ip6tables\` file:
+
+ it { should have_rule('RULE') }
+
require "inspec/resources/iis_site"
+require "inspec/resources/inetd_conf"
+require "inspec/resources/interface"
+require "inspec/resources/ip6tables"
+require "inspec/resources/iptables"
+require "inspec/resources/kernel_module"
+require "inspec/resources/kernel_parameter"
+
require "inspec/resources/command"
+
+# Usage:
+# describe ip6tables do
+# it { should have_rule('-P INPUT ACCEPT') }
+# end
+#
+# The following serverspec syntax is not implemented:
+# describe ip6tables do
+# it { should have_rule('-P INPUT ACCEPT').with_table('mangle').with_chain('INPUT') }
+# end
+# Please use the new syntax:
+# describe ip6tables(table:'mangle', chain: 'input') do
+# it { should have_rule('-P INPUT ACCEPT') }
+# end
+#
+# Note: Docker containers normally do not have ip6tables installed
+#
+# @see http://ipset.netfilter.org/ip6tables.man.html
+# @see http://ipset.netfilter.org/ip6tables.man.html
+module Inspec::Resources
+ class Ip6Tables < Inspec.resource(1)
+ name "ip6tables"
+ supports platform: "linux"
+ desc "Use the ip6tables InSpec audit resource to test rules that are defined in ip6tables, which maintains tables of IP packet filtering rules. There may be more than one table. Each table contains one (or more) chains (both built-in and custom). A chain is a list of rules that match packets. When the rule matches, the rule defines what target to assign to the packet."
+ example <<~EXAMPLE
+ describe ip6tables do
+ it { should have_rule('-P INPUT ACCEPT') }
+ end
+ EXAMPLE
+
+ def initialize(params = {})
+ @table = params[:table]
+ @chain = params[:chain]
+
+ # we're done if we are on linux
+ return if inspec.os.linux?
+
+ # ensures, all calls are aborted for non-supported os
+ @ip6tables_cache = []
+ skip_resource "The \`ip6tables\` resource is not supported on your OS yet."
+ end
+
+ def has_rule?(rule = nil, _table = nil, _chain = nil)
+ # checks if the rule is part of the ruleset
+ # for now, we expect an exact match
+ retrieve_rules.any? { |line| line.casecmp(rule) == 0 }
+ end
+
+ def retrieve_rules
+ return @ip6tables_cache if defined?(@ip6tables_cache)
+
+ # construct ip6tables command to read all rules
+ bin = find_ip6tables_or_error
+ table_cmd = "-t #{@table}" if @table
+ ip6tables_cmd = format("%s %s -S %s", bin, table_cmd, @chain).strip
+
+ cmd = inspec.command(ip6tables_cmd)
+ return [] if cmd.exit_status.to_i != 0
+
+ # split rules, returns array of rules
+ @ip6tables_cache = cmd.stdout.split("\\n").map(&:strip)
+ end
+
+ def to_s
+ format("Ip6tables %s %s", @table && "table: #{@table}", @chain && "chain: #{@chain}").strip
+ end
+
+ private
+
+ def find_ip6tables_or_error
+ %w{/usr/sbin/ip6tables /sbin/ip6tables ip6tables}.each do |cmd|
+ return cmd if inspec.command(cmd).exist?
+ end
+
+ raise Inspec::Exceptions::ResourceFailed, "Could not find \`ip6tables\`"
+ end
+ end
+end
+
case os[:family]
+when 'ubuntu', 'fedora', 'debian', 'suse'
+ describe ip6tables do
+ it { should have_rule('-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW -m comment --comment "http v6 on 80" -j ACCEPT') }
+ it { should_not have_rule('-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT') }
+
+ # single-word comments have their quotes dropped
+ it { should have_rule('-A derby-cognos-web-v6 -p tcp -m tcp --dport 80 -m comment --comment derby-cognos-web-v6 -j ACCEPT') }
+ end
+when 'redhat', 'centos'
+ describe ip6tables do
+ it { should have_rule('-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW -m comment --comment "http v6 on 80" -j ACCEPT') }
+ it { should_not have_rule('-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT') }
+ end
+
+ describe ip6tables do
+ it { should have_rule('-A derby-cognos-web-v6 -p tcp -m tcp --dport 80 -m comment --comment "derby-cognos-web-v6" -j ACCEPT') }
+ end if os[:release] == 6
+
+ describe ip6tables do
+ it { should have_rule('-A derby-cognos-web-v6 -p tcp -m tcp --dport 80 -m comment --comment derby-cognos-web-v6 -j ACCEPT') }
+ end if os[:release] == 7
+end
+
require "helper"
+require "inspec/resource"
+require "inspec/resources/ip6tables"
+
+describe "Inspec::Resources::Ip6tables" do
+
+ # ubuntu 14.04
+ it "verify ip6tables on ubuntu" do
+ resource = MockLoader.new(:ubuntu1404).load_resource("ip6tables")
+ _(resource.has_rule?("-P OUTPUT ACCEPT")).must_equal true
+ _(resource.has_rule?("-P OUTPUT DROP")).must_equal false
+ end
+
+ it "verify ip6tables with comments on ubuntu" do
+ resource = MockLoader.new(:ubuntu1404).load_resource("ip6tables")
+ _(resource.has_rule?('-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW -m comment --comment "http-v6 like its 1990" -j ACCEPT')).must_equal true
+ end
+
+ it "verify ip6tables on windows" do
+ resource = MockLoader.new(:windows).load_resource("ip6tables")
+ _(resource.has_rule?("-P OUTPUT ACCEPT")).must_equal false
+ _(resource.has_rule?("-P OUTPUT DROP")).must_equal false
+ end
+
+ # undefined
+ it "verify ip6tables on unsupported os" do
+ resource = MockLoader.new(:undefined).load_resource("ip6tables")
+ _(resource.has_rule?("-P OUTPUT ACCEPT")).must_equal false
+ _(resource.has_rule?("-P OUTPUT DROP")).must_equal false
+ end
+
+end
+
---
+title: The Nginx Resource
+---
+
+# nginx
+
+Use the \`nginx\` InSpec audit resource to test the fields and validity of nginx.
+
+The \`nginx\` resource extracts and exposes data reported by the command \`nginx -V\`.
+
+## Syntax
+
+An \`nginx\` InSpec audit resource block extracts configuration settings that should be tested:
+
+ describe nginx do
+ its('attribute') { should eq 'value' }
+ end
+
+ describe nginx('path to nginx') do
+ its('attribute') { should eq 'value' }
+ end
+
+where
+
+* \`'attribute'\` is a configuration parsed from result of the command 'nginx -V'
+* \`'value'\` is the value that is expected of the attribute
+
+## Supported Properties
+
+* 'compiler_info', 'error_log_path', 'http_client_body_temp_path', 'http_fastcgi_temp_path', 'http_log_path', 'http_proxy_temp_path', 'http_scgi_temp_path', 'http_uwsgi_temp_path', 'lock_path', 'modules', 'modules_path', 'openssl_version', 'prefix', 'sbin_path', 'service', 'support_info', 'version'
+
+## Property Examples and Return Types
+
+### version(String)
+
+\`version\` returns a string of the version of the running nginx instance
+
+ describe nginx do
+ its('version') { should eq '1.12.0' }
+ end
+
+### modules(String)
+
+\`modules\` returns an array of the modules in the running nginx instance
+
+ describe nginx do
+ its('modules') { should include 'my_module' }
+ end
+
+### openssl_version(Hash)
+
+\`openssl_version\` returns a hash with 'version' and 'date' as keys
+
+ describe nginx do
+ its('openssl_version.date') { should eq '11 Feb 2013' }
+ end
+
+### compiler_info(Hash)
+
+\`compiler_info\` returns a hash with 'compiler', 'version', and 'date' as keys
+
+ describe nginx do
+ its('compiler_info.compiler') { should eq 'gcc' }
+ end
+
+### support_info(String)
+
+\`support_info\` returns a string containing supported protocols
+
+ describe nginx do
+ its('support_info') { should match /TLS/ }
+ end
+
require 'resources/mysql'
+require 'resources/mysql_conf'
+require 'resources/mysql_session'
+require 'resources/nginx'
+require 'resources/nginx_conf'
+require 'resources/npm'
+require 'resources/ntp_conf'
+
# encoding: utf-8
+# author: Aaron Lippold, lippold@gmail.com
+# author: Rony Xavier, rx294@gmail.com
+
+require 'pathname'
+require 'hashie/mash'
+
+module Inspec::Resources
+ class Nginx < Inspec.resource(1)
+ name 'nginx'
+ desc 'Use the nginx InSpec audit resource to test information about your NGINX instance.'
+ example "
+ describe nginx do
+ its('conf_path') { should cmp '/etc/nginx/nginx.conf' }
+ end
+ describe nginx('/etc/sbin/') do
+ its('version') { should be >= '1.0.0' }
+ end
+ describe nginx do
+ its('modules') { should include 'my_module' }
+ end
+ "
+ attr_reader :params, :bin_dir
+
+ def initialize(nginx_path = '/usr/sbin/nginx')
+ return skip_resource 'The \`nginx\` resource is not yet available on your OS.' if inspec.os.windows?
+ return skip_resource 'The \`nginx\` binary not found in the path provided.' unless inspec.command(nginx_path).exist?
+
+ cmd = inspec.command("#{nginx_path} -V 2>&1")
+ if !cmd.exit_status.zero?
+ return skip_resource 'Error using the command nginx -V'
+ end
+ @data = cmd.stdout
+ @params = {}
+ read_content
+ end
+
+ %w{compiler_info error_log_path http_client_body_temp_path http_fastcgi_temp_path http_log_path http_proxy_temp_path http_scgi_temp_path http_uwsgi_temp_path lock_path modules_path openssl_version prefix sbin_path service support_info version}.each do |property|
+ define_method(property.to_sym) do
+ @params[property.to_sym]
+ end
+ end
+
+ def openssl_version
+ result = @data.scan(/built with OpenSSL\\s(\\S+)\\s(\\d+\\s\\S+\\s\\d{4})/).flatten
+ Hashie::Mash.new({ 'version' => result[0], 'date' => result[1] })
+ end
+
+ def compiler_info
+ result = @data.scan(/built by (\\S+)\\s(\\S+)\\s(\\S+)/).flatten
+ Hashie::Mash.new({ 'compiler' => result[0], 'version' => result[1], 'date' => result[2] })
+ end
+
+ def support_info
+ support_info = @data.scan(/(.*\\S+) support enabled/).flatten
+ support_info.empty? ? nil : support_info.join(' ')
+ end
+
+ def modules
+ @data.scan(/--with-(\\S+)_module/).flatten
+ end
+
+ def to_s
+ 'Nginx Environment'
+ end
+
+ private
+
+ def read_content
+ parse_config
+ parse_path
+ parse_http_path
+ end
+
+ def parse_config
+ @params[:prefix] = @data.scan(/--prefix=(\\S+)\\s/).flatten.first
+ @params[:service] = 'nginx'
+ @params[:version] = @data.scan(%r{nginx version: nginx\\/(\\S+)\\s}).flatten.first
+ end
+
+ def parse_path
+ @params[:sbin_path] = @data.scan(/--sbin-path=(\\S+)\\s/).flatten.first
+ @params[:modules_path] = @data.scan(/--modules-path=(\\S+)\\s/).flatten.first
+ @params[:error_log_path] = @data.scan(/--error-log-path=(\\S+)\\s/).flatten.first
+ @params[:http_log_path] = @data.scan(/--http-log-path=(\\S+)\\s/).flatten.first
+ @params[:lock_path] = @data.scan(/--lock-path=(\\S+)\\s/).flatten.first
+ end
+
+ def parse_http_path
+ @params[:http_client_body_temp_path] = @data.scan(/--http-client-body-temp-path=(\\S+)\\s/).flatten.first
+ @params[:http_proxy_temp_path] = @data.scan(/--http-proxy-temp-path=(\\S+)\\s/).flatten.first
+ @params[:http_fastcgi_temp_path] = @data.scan(/--http-fastcgi-temp-path=(\\S+)\\s/).flatten.first
+ @params[:http_uwsgi_temp_path] = @data.scan(/--http-uwsgi-temp-path=(\\S+)\\s/).flatten.first
+ @params[:http_scgi_temp_path] = @data.scan(/--http-scgi-temp-path=(\\S+)\\s/).flatten.first
+ end
+ end
+end
+
# encoding: utf-8
+# author: Aaron Lippold, lippold@gmail.com
+# author: Rony Xavier, rx294@nyu.edu
+
+require 'helper'
+require 'inspec/resource'
+
+describe 'Inspec::Resources::Nginx' do
+ describe 'NGINX Methods' do
+ it 'Verify nginx parsing \`support_info\` - \`TLS SNI\`' do
+ resource = load_resource('nginx')
+ _(resource.support_info).must_match 'TLS SNI'
+ end
+ it 'Verify nginx parsing \`openssl_version\` - \`1.0.1e-fips/11 Feb 2013\`' do
+ resource = load_resource('nginx')
+ _(resource.openssl_version.date).must_match '11 Feb 2013'
+ _(resource.openssl_version.version).must_match '1.0.1e-fips'
+ end
+ it 'Verify nginx parsing \`compiler_info\` - \`gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)\`' do
+ resource = load_resource('nginx')
+ _(resource.compiler_info.compiler).must_match 'gcc'
+ _(resource.compiler_info.version).must_match '4.8.5'
+ _(resource.compiler_info.date).must_match '20150623'
+ end
+ it 'Verify nginx parsing \`version\` - 1.12.0' do
+ resource = load_resource('nginx')
+ _(resource.version).must_match '1.12.0'
+ end
+ it 'Verify nginx_module parsing with custom path\`version\` - 1.12.0' do
+ resource = load_resource('nginx','/usr/sbin/nginx')
+ _(resource.version).must_match '1.12.0'
+ end
+ it 'Verify nginx_module parsing with a broken custom path\`version\` - 1.12.0' do
+ resource = load_resource('nginx','/usr/sbin/nginx')
+ _(resource.version).must_match '1.12.0'
+ end
+ it 'Verify nginx parsing \`service\` - \`nginx\`' do
+ resource = load_resource('nginx')
+ _(resource.service).must_match 'nginx'
+ end
+ it 'Verify nginx parsing \`modules\` - \`nginx\`' do
+ resource = load_resource('nginx')
+ _(resource.modules).must_include 'http_addition'
+ end
+ it 'Verify nginx parsing \`prefix\` - \`/etc/nginx\`' do
+ resource = load_resource('nginx')
+ _(resource.prefix).must_match '/etc/nginx'
+ end
+ it 'Verify nginx parsing \`sbin_path\` - \`/usr/sbin/nginx\`' do
+ resource = load_resource('nginx')
+ _(resource.sbin_path).must_match '/usr/sbin/nginx'
+ end
+ it 'Verify nginx parsing \`modules_path\` - \`/usr/lib64/nginx/modules\`' do
+ resource = load_resource('nginx')
+ _(resource.modules_path).must_match '/usr/lib64/nginx/modules'
+ end
+ it 'Verify nginx parsing \`error_log_path\` - \`/var/log/nginx/error.log\`' do
+ resource = load_resource('nginx')
+ _(resource.error_log_path).must_match '/var/log/nginx/error.log'
+ end
+ it 'Verify nginx parsing \`error_log_path\` - \`/var/log/nginx/access.log\`' do
+ resource = load_resource('nginx')
+ _(resource.http_log_path).must_match '/var/log/nginx/access.log'
+ end
+ it 'Verify nginx parsing \`lock_path\` - \`/var/run/nginx.lock\`' do
+ resource = load_resource('nginx')
+ _(resource.lock_path).must_match '/var/run/nginx.lock'
+ end
+ it 'Verify nginx parsing \`http_client_body_temp_path\` - \`/var/cache/nginx/client_temp\`' do
+ resource = load_resource('nginx')
+ _(resource.http_client_body_temp_path).must_match '/var/cache/nginx/client_temp'
+ end
+ it 'Verify nginx parsing \`http_proxy_temp_path\` - \`/var/cache/nginx/proxy_temp\`' do
+ resource = load_resource('nginx')
+ _(resource.http_proxy_temp_path).must_match '/var/cache/nginx/proxy_temp'
+ end
+ it 'Verify nginx parsing \`http_fastcgi_temp_path\` - \`/var/cache/nginx/fastcgi_temp\`' do
+ resource = load_resource('nginx')
+ _(resource.http_fastcgi_temp_path).must_match '/var/cache/nginx/fastcgi_temp'
+ end
+ it 'Verify nginx parsing \`http_uwsgi_temp_path\` - \`/var/cache/nginx/uwsgi_temp\`' do
+ resource = load_resource('nginx')
+ _(resource.http_uwsgi_temp_path).must_match '/var/cache/nginx/uwsgi_temp'
+ end
+ it 'Verify nginx parsing \`http_scgi_temp_path\` - \`/var/cache/nginx/scgi_temp\`' do
+ resource = load_resource('nginx')
+ _(resource.http_scgi_temp_path).must_match '/var/cache/nginx/scgi_temp'
+ end
+ it 'Verify nginx parsing \`http_scgi_temp_path\` - \`/var/cache/nginx/scgi_temp\`' do
+ resource = load_resource('nginx')
+ _(resource.http_scgi_temp_path).must_match '/var/cache/nginx/scgi_temp'
+ end
+ end
+end
+
The top-level directory of the InSpec source code looks like this:
$ tree inspec -L 1 -d
+inspec
+├── contrib
+├── docs
+├── etc
+├── examples
+├── habitat
+├── inspec-bin
+├── kitchen
+├── lib
+├── omnibus
+├── support
+├── tasks
+├── test
+└── www
+
+13 directories
+
The 3 key directories we need to focus on here are the docs/
directory, the lib/
directory and finally the test/
directory. When developing a resource for upstream InSpec, you must:
The resource contents
When creating this resource.rb file or in this scenario the file.rb
, it would be developed and written the same exact way if you had put it in the libraries directory for a local resource. If you already developed the resource for local use, but want to push it to upstream, you can copy and paste the file directly to the following location:
$ tree -L 1 lib/inspec/resources/
+lib/inspec/resources/
+...
+├── file.rb
+...
+
+0 directories, 104 files
+
This is the helper file you need to adjust for the file resource:
$ tree -L 1 lib/inspec/
+lib/inspec/
+...
+├── resources.rb
+...
+
+10 directories, 47 files
+
The resource helper
When adding this line of code, be sure to place the resources in alphabetical order as shown in the example below.
In the resources.rb
file you would add the following line:
require "inspec/resources/etc_hosts"
+require "inspec/resources/file"
+require "inspec/resources/filesystem"
+
Next you would need to write out your unit and integration tests:
$ tree test/integration/default/controls/
+test/integration/default/controls/
+...
+├── file_spec.rb
+...
+
+0 directories, 42 files
+
$ tree test/unit/resources/
+test/unit/resources/
+...
+├── file_test.rb
+...
+
+0 directories, 145 files
+
$ tree docs/resources/
+docs/resources/
+...
+├── file.md.erb
+...
+
+0 directories, 156 files
+
This pipeline is intended to validate that the RHEL7 InSpec profile itself functions correctly. We're not too concerned with whether our "hardened" box is actually hardened; we just want to know if InSpec is assessing it correctly.
Why Vanilla and Hardened?
Having two test suites, where one is hardened and one is not, can be useful for seeing the differences between how a profile behaves on different types of systems.
It also has the added bonus of simultaneously validating that whatever tool we use for hardening is working correctly.
Modularity in Automation
We will demonstrate the automation process through this example, but note that the orchestration tools, configuration management tools, and targets can be swapped out for different use cases while following the same automation flow and security automation framework.
name: EC2 Testing Matrix
+
+on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+jobs:
+ my-job:
+ name: Validate my profile
+ runs-on: ubuntu-latest
+ env:
+ CHEF_LICENSE: accept-silent
+ KITCHEN_LOCAL_YAML: kitchen.ec2.yml
+ LC_ALL: "en_US.UTF-8"
+ strategy:
+ matrix:
+ suite: ['vanilla', 'hardened']
+ fail-fast: false
+ steps:
+ - name: add needed packages
+ run: sudo apt-get install -y jq
+ - name: Configure AWS credentials
+ env:
+ AWS_SG_ID: ${{ secrets.SAF_AWS_SG_ID }}
+ AWS_SUBNET_ID: ${{ secrets.SAF_AWS_SUBNET_ID }}
+ uses: aws-actions/configure-aws-credentials@v1
+ with:
+ aws-access-key-id: ${{ secrets.SAF_AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.SAF_AWS_SECRET_ACCESS_KEY }}
+ aws-region: us-east-1
+ - name: Check out repository
+ uses: actions/checkout@v2
+ - name: Clone full repository so we can push
+ run: git fetch --prune --unshallow
+ - name: Setup Ruby
+ uses: ruby/setup-ruby@v1
+ with:
+ ruby-version: '2.7'
+ - name: Disable ri and rdoc
+ run: 'echo "gem: --no-ri --no-rdoc" >> ~/.gemrc'
+ - run: bundle install
+ - name: Regenerate current \`profile.json\`
+ run: |
+ bundle exec inspec json . | jq . > profile.json
+ - name: Lint the Inspec profile
+ run: bundle exec inspec check .
+ - name: Run kitchen test
+ run: bundle exec kitchen test --destroy=always ${{ matrix.suite }}-rhel-7 || true
+ - name: Display our ${{ matrix.suite }} results summary
+ uses: mitre/saf_action@v1
+ with:
+ command_string: 'view summary -i spec/results/ec2_rhel-7_\${{ matrix.suite }}.json'
+ - name: Ensure the scan meets our ${{ matrix.suite }} results threshold
+ uses: mitre/saf_action@v1
+ with:
+ command_string: 'validate threshold -i spec/results/ec2_rhel-7_\${{ matrix.suite }}.json -F \${{ matrix.suite }}.threshold.yml'
+ - name: Save Test Result JSON
+ uses: actions/upload-artifact@v2
+ with:
+ path: spec/results/
+
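The workflow above points Test Kitchen at kitchen.ec2.yml (via KITCHEN_LOCAL_YAML) and selects the vanilla-rhel-7 and hardened-rhel-7 instances from the matrix. A heavily trimmed, hypothetical kitchen.ec2.yml showing how those suite and platform names line up follows; the driver options, hardening provisioner, and every value here are illustrative, not taken from the real profile repository:

```yaml
driver:
  name: ec2                 # assumes the kitchen-ec2 driver gem is installed
  region: us-east-1

provisioner:
  name: ansible_playbook    # hypothetical; the hardening tool is project-specific

verifier:
  name: inspec              # run this profile against each created instance

platforms:
  - name: rhel-7

suites:
  - name: vanilla           # no hardening applied
  - name: hardened          # hardening scripts applied before verification
```

Kitchen names each instance `suite-platform`, which is why the workflow can address `${{ matrix.suite }}-rhel-7` directly.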
The two machines are then tested by running an InSpec profile against each. The results are summarized and validated against a threshold, allowing the pipeline to pass or fail automatically based on whether the results meet that threshold. The SAF CLI handles both the summary and the validation steps.
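The threshold files referenced in the workflow (vanilla.threshold.yml, hardened.threshold.yml) encode those pass/fail criteria for `saf validate threshold`. A hypothetical hardened threshold file might look like the following; the specific numbers are illustrative and should be tuned to your profile:

```yaml
# Illustrative threshold file for `saf validate threshold` (values are examples)
compliance:
  min: 80          # overall compliance score must be at least 80%
failed:
  critical:
    max: 0         # no failed critical-severity controls allowed
  high:
    max: 0         # no failed high-severity controls allowed
error:
  total:
    max: 0         # no controls may error out
```

If any criterion is not met, the CLI exits nonzero and the pipeline step fails.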
`,6),b={class:"hint-container tip"},y=n("p",{class:"hint-container-title"},"Use Examples to Help Automate",-1),_={href:"https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/",target:"_blank",rel:"noopener noreferrer"},f={href:"https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/actions",target:"_blank",rel:"noopener noreferrer"},g=n("h3",{id:"_5-2-pipeline-example-with-manual-attestations",tabindex:"-1"},[n("a",{class:"header-anchor",href:"#_5-2-pipeline-example-with-manual-attestations","aria-hidden":"true"},"#"),s(" 5.2. Pipeline Example with Manual Attestations")],-1),w=n("p",null,"You could also add manual attestations with the SAF CLI to the pipeline to combine applicable manual attestations to the automated test results to determine a more full and accurate look at the overall security posture. SAF supports the validation of all controls including both automatable controls and manual attestation of those controls that cannot be automated. This expands the SAF’s coverage across interview, examination, and policy controls.",-1),x=n("p",null,"In a general sense we can use the SAF CLI to manage security data in the pipeline, supporting activities for managing POA&Ms.",-1),S=n("figure",null,[n("img",{src:k,alt:"The CI Pipeline - Attestation",tabindex:"0",loading:"lazy"}),n("figcaption",null,"The CI Pipeline - Attestation")],-1);function A(C,E){const e=t("ExternalLinkIcon"),p=t("RouterLink");return o(),l("div",null,[m,n("p",null,[s("Below is a "),n("a",v,[s("RedHat 7 example"),a(e)]),s(" of an automated pipeline that creates and configures two machines with the RedHat 7 operating system - one of which is set up as a vanilla configuration, and one of which is hardened using hardening scripts run by the Chef configuration management tool called kitchen.")]),h,n("div",b,[y,n("p",null,[s("To get more information on setting up the whole automation pipeline for your use case, use examples, such as the "),n("a",_,[s("RedHat 7 
repository"),a(e)]),s(". You can view results of the workflows in the "),n("a",f,[s("Actions tab"),a(e)]),s(".")])]),g,w,x,n("p",null,[s("To practice doing manual attestations, take a look at the "),a(p,{to:"/courses/user/12.html"},{default:c(()=>[s("User Class")]),_:1}),s(".")]),S])}const L=i(d,[["render",A],["__file","Appendix D - Example Pipeline for Validating an InSpec Profile.html.vue"]]);export{L as default}; diff --git a/assets/Appendix D - Example Pipeline for Validating an InSpec Profile.html-osNKVVHI.js b/assets/Appendix D - Example Pipeline for Validating an InSpec Profile.html-osNKVVHI.js new file mode 100644 index 000000000..2a119c86b --- /dev/null +++ b/assets/Appendix D - Example Pipeline for Validating an InSpec Profile.html-osNKVVHI.js @@ -0,0 +1 @@ +const e=JSON.parse('{"key":"v-4faaa59d","path":"/courses/advanced/Appendix%20D%20-%20Example%20Pipeline%20for%20Validating%20an%20InSpec%20Profile.html","title":"Appendix D - Example Pipeline for Validating an InSpec Profile","lang":"en-US","frontmatter":{"order":17,"title":"Appendix D - Example Pipeline for Validating an InSpec Profile","author":"Aaron Lippold","headerDepth":3,"description":"RHEL7 Pipeline example Below is a RedHat 7 example (https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/blob/master/.github/workflows/verify-ec2.yml) of an automate...","head":[["meta",{"property":"og:url","content":"https://mitre.github.io/saf-training/saf-training/courses/advanced/Appendix%20D%20-%20Example%20Pipeline%20for%20Validating%20an%20InSpec%20Profile.html"}],["meta",{"property":"og:site_name","content":"MITRE SAF Training"}],["meta",{"property":"og:title","content":"Appendix D - Example Pipeline for Validating an InSpec Profile"}],["meta",{"property":"og:description","content":"RHEL7 Pipeline example Below is a RedHat 7 example (https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/blob/master/.github/workflows/verify-ec2.yml) of an 
automate..."}],["meta",{"property":"og:type","content":"article"}],["meta",{"property":"og:locale","content":"en-US"}],["meta",{"property":"article:author","content":"Aaron Lippold"}],["script",{"type":"application/ld+json"},"{\\"@context\\":\\"https://schema.org\\",\\"@type\\":\\"Article\\",\\"headline\\":\\"Appendix D - Example Pipeline for Validating an InSpec Profile\\",\\"image\\":[\\"\\"],\\"dateModified\\":null,\\"author\\":[{\\"@type\\":\\"Person\\",\\"name\\":\\"Aaron Lippold\\"}]}"]]},"headers":[{"level":3,"title":"RHEL7 Pipeline example","slug":"rhel7-pipeline-example","link":"#rhel7-pipeline-example","children":[]},{"level":3,"title":"5.2. Pipeline Example with Manual Attestations","slug":"_5-2-pipeline-example-with-manual-attestations","link":"#_5-2-pipeline-example-with-manual-attestations","children":[]}],"git":{},"readingTime":{"minutes":2.17,"words":651},"filePathRelative":"courses/advanced/Appendix D - Example Pipeline for Validating an InSpec Profile.md","autoDesc":true}');export{e as data}; diff --git a/assets/Appendix E - More Resource Examples.html-9JW9f5Xc.js b/assets/Appendix E - More Resource Examples.html-9JW9f5Xc.js new file mode 100644 index 000000000..45c68d84d --- /dev/null +++ b/assets/Appendix E - More Resource Examples.html-9JW9f5Xc.js @@ -0,0 +1,421 @@ +import{_ as s}from"./plugin-vue_export-helper-x3n3nnut.js";import{o as n,c as a,f as e}from"./app-PAvzDPkc.js";const t={},p=e(`# copyright: 2015, Vulcano Security GmbH
+
+require "shellwords"
+require "inspec/utils/parser"
+
+module Inspec::Resources
+ module FilePermissionsSelector
+ def select_file_perms_style(os)
+ if os.unix?
+ UnixFilePermissions.new(inspec)
+ elsif os.windows?
+ WindowsFilePermissions.new(inspec)
+ end
+ end
+ end
+
+ # TODO: rename file_resource.rb
+ class FileResource < Inspec.resource(1)
+ include FilePermissionsSelector
+ include LinuxMountParser
+
+ name "file"
+ supports platform: "unix"
+ supports platform: "windows"
+ desc "Use the file InSpec audit resource to test all system file types, including files, directories, symbolic links, named pipes, sockets, character devices, block devices, and doors."
+ example <<~EXAMPLE
+ describe file('path') do
+ it { should exist }
+ it { should be_file }
+ it { should be_readable }
+ it { should be_writable }
+ it { should be_executable.by_user('root') }
+ it { should be_owned_by 'root' }
+ its('mode') { should cmp '0644' }
+ end
+ EXAMPLE
+
+ attr_reader :file, :mount_options
+ def initialize(path)
+ # select permissions style
+ @perms_provider = select_file_perms_style(inspec.os)
+ @file = inspec.backend.file(path)
+ end
+
+ %w{
+ type exist? file? block_device? character_device? socket? directory?
+ symlink? pipe? mode mode? owner owned_by? group grouped_into?
+ link_path shallow_link_path linked_to? mtime size selinux_label immutable?
+ product_version file_version version? md5sum sha256sum
+ path basename source source_path uid gid
+ }.each do |m|
+ define_method m do |*args|
+ file.send(m, *args)
+ end
+ end
+
+ def content
+ res = file.content
+ return nil if res.nil?
+
+ res.force_encoding("utf-8")
+ end
+
+ def contain(*_)
+ raise "Contain is not supported. Please use standard RSpec matchers."
+ end
+
+ def readable?(by_usergroup, by_specific_user)
+ return false unless exist?
+ return skip_resource "\`readable?\` is not supported on your OS yet." if @perms_provider.nil?
+
+ file_permission_granted?("read", by_usergroup, by_specific_user)
+ end
+
+ def writable?(by_usergroup, by_specific_user)
+ return false unless exist?
+ return skip_resource "\`writable?\` is not supported on your OS yet." if @perms_provider.nil?
+
+ file_permission_granted?("write", by_usergroup, by_specific_user)
+ end
+
+ def executable?(by_usergroup, by_specific_user)
+ return false unless exist?
+ return skip_resource "\`executable?\` is not supported on your OS yet." if @perms_provider.nil?
+
+ file_permission_granted?("execute", by_usergroup, by_specific_user)
+ end
+
+ def allowed?(permission, opts = {})
+ return false unless exist?
+ return skip_resource "\`allowed?\` is not supported on your OS yet." if @perms_provider.nil?
+
+ file_permission_granted?(permission, opts[:by], opts[:by_user])
+ end
+
+ def mounted?(expected_options = nil, identical = false)
+ mounted = file.mounted
+
+ # return if no additional parameters have been provided
+ return file.mounted? if expected_options.nil?
+
+ # deprecation warning, this functionality will be removed in future version
+ Inspec.deprecate(:file_resource_be_mounted_matchers, "The file resource \`be_mounted.with\` and \`be_mounted.only_with\` matchers are deprecated. Please use the \`mount\` resource instead")
+
+ # we cannot read mount data on non-Linux systems
+ return nil unless inspec.os.linux?
+
+ # parse content if we are on linux
+ @mount_options ||= parse_mount_options(mounted.stdout, true)
+
+ if identical
+ # check if the options should be identical
+ @mount_options == expected_options
+ else
+ # otherwise compare the selected values
+ @mount_options.contains(expected_options)
+ end
+ end
+
+ def suid
+ (mode & 04000) > 0
+ end
+
+ alias setuid? suid
+
+ def sgid
+ (mode & 02000) > 0
+ end
+
+ alias setgid? sgid
+
+ def sticky
+ (mode & 01000) > 0
+ end
+
+ alias sticky? sticky
+
+ def more_permissive_than?(max_mode = nil)
+      raise Inspec::Exceptions::ResourceFailed, "The file " + file.path + " doesn't seem to exist" unless exist?
+      raise ArgumentError, "You must provide a value for the \`maximum allowable permission\` for the file." if max_mode.nil?
+      raise ArgumentError, "You must provide the \`maximum permission target\` as a \`String\`, you provided: " + max_mode.class.to_s unless max_mode.is_a?(String)
+      raise ArgumentError, "The value of the \`maximum permission target\` should be a valid file mode in 4-digit octal format: for example, \`0644\` or \`0777\`" unless /(0)?([0-7])([0-7])([0-7])/.match?(max_mode)
+
+      # Using the file's mode and a few bit-wise calculations we can ensure a
+      # file is no more permissive than desired.
+ #
+      # 1. Calculate the inverse of the desired mode (e.g., 0644) by XORing it with
+      # 0777 (all 1s). We are interested in the bits that are currently 0 since
+      # they indicate that the actual mode is more permissive than the desired mode.
+      # Conversely, we don't care about the bits that are currently 1 because they
+ # cannot be any more permissive and we can safely ignore them.
+ #
+      # 2. Calculate the result of ANDing the actual mode with the inverse
+      # mode. This will determine whether any of the bits that would indicate a more
+      # permissive mode are set in the actual mode.
+ #
+      # 3. If the result is 0000, the file's mode is equal
+      # to or less permissive than the desired mode (PASS). Otherwise, the file's
+      # mode is more permissive than the desired mode (FAIL).
+
+ max_mode = max_mode.to_i(8)
+ inv_mode = 0777 ^ max_mode
+
+ inv_mode & file.mode != 0
+ end
+
+ def to_s
+ "File #{source_path}"
+ end
+
+ private
+
+ def file_permission_granted?(access_type, by_usergroup, by_specific_user)
+ raise "\`file_permission_granted?\` is not supported on your OS" if @perms_provider.nil?
+
+ if by_specific_user.nil? || by_specific_user.empty?
+ @perms_provider.check_file_permission_by_mask(file, access_type, by_usergroup, by_specific_user)
+ else
+ @perms_provider.check_file_permission_by_user(access_type, by_specific_user, source_path)
+ end
+ end
+ end
+
+ class FilePermissions
+ attr_reader :inspec
+ def initialize(inspec)
+ @inspec = inspec
+ end
+ end
+
+ class UnixFilePermissions < FilePermissions
+ def permission_flag(access_type)
+ case access_type
+ when "read"
+ "r"
+ when "write"
+ "w"
+ when "execute"
+ "x"
+ else
+ raise "Invalid access_type provided"
+ end
+ end
+
+ def usergroup_for(usergroup, specific_user)
+ if usergroup == "others"
+ "other"
+ elsif (usergroup.nil? || usergroup.empty?) && specific_user.nil?
+ "all"
+ else
+ usergroup
+ end
+ end
+
+ def check_file_permission_by_mask(file, access_type, usergroup, specific_user)
+ usergroup = usergroup_for(usergroup, specific_user)
+ flag = permission_flag(access_type)
+ mask = file.unix_mode_mask(usergroup, flag)
+ raise "Invalid usergroup/owner provided" if mask.nil?
+
+ (file.mode & mask) != 0
+ end
+
+ def check_file_permission_by_user(access_type, user, path)
+ flag = permission_flag(access_type)
+ if inspec.os.linux?
+ perm_cmd = "su -s /bin/sh -c \\"test -#{flag} #{path}\\" #{user}"
+ elsif inspec.os.bsd? || inspec.os.solaris?
+ perm_cmd = "sudo -u #{user} test -#{flag} #{path}"
+ elsif inspec.os.aix?
+ perm_cmd = "su #{user} -c test -#{flag} #{path}"
+ elsif inspec.os.hpux?
+ perm_cmd = "su #{user} -c \\"test -#{flag} #{path}\\""
+ else
+ return skip_resource "The \`file\` resource does not support \`by_user\` on your OS."
+ end
+
+ cmd = inspec.command(perm_cmd)
+ cmd.exit_status == 0 ? true : false
+ end
+ end
+
+ class WindowsFilePermissions < FilePermissions
+ def check_file_permission_by_mask(_file, _access_type, _usergroup, _specific_user)
+ raise "\`check_file_permission_by_mask\` is not supported on Windows"
+ end
+
+ def more_permissive_than?(*)
+ raise Inspec::Exceptions::ResourceSkipped, "The \`more_permissive_than?\` matcher is not supported on your OS yet."
+ end
+
+ def check_file_permission_by_user(access_type, user, path)
+ access_rule = translate_perm_names(access_type)
+ access_rule = convert_to_powershell_array(access_rule)
+
+ cmd = inspec.command("@(@((Get-Acl '#{path}').access | Where-Object {$_.AccessControlType -eq 'Allow' -and $_.IdentityReference -eq '#{user}' }) | Where-Object {($_.FileSystemRights.ToString().Split(',') | % {$_.trim()} | ? {#{access_rule} -contains $_}) -ne $null}) | measure | % { $_.Count }")
+ cmd.stdout.chomp == "0" ? false : true
+ end
+
+ private
+
+ def convert_to_powershell_array(arr)
+ if arr.empty?
+ "@()"
+ else
+ %{@('#{arr.join("', '")}')}
+ end
+ end
+
+ # Translates a developer-friendly string into a list of acceptable
+    # FileSystemRights that match it, because Windows has a fun hierarchy
+    # of permissions that can be expressed in multiple ways.
+ #
+ # See also: https://www.codeproject.com/Reference/871338/AccessControl-FileSystemRights-Permissions-Table
+ def translate_perm_names(access_type)
+ names = translate_common_perms(access_type)
+ names ||= translate_granular_perms(access_type)
+ names ||= translate_uncommon_perms(access_type)
+ raise "Invalid access_type provided" unless names
+
+ names
+ end
+
+ def translate_common_perms(access_type)
+ case access_type
+ when "full-control"
+ %w{FullControl}
+ when "modify"
+ translate_perm_names("full-control") + %w{Modify}
+ when "read"
+ translate_perm_names("modify") + %w{ReadAndExecute Read}
+ when "write"
+ translate_perm_names("modify") + %w{Write}
+ when "execute"
+ translate_perm_names("modify") + %w{ReadAndExecute ExecuteFile Traverse}
+ when "delete"
+ translate_perm_names("modify") + %w{Delete}
+ end
+ end
+
+ def translate_uncommon_perms(access_type)
+ case access_type
+ when "delete-subdirectories-and-files"
+ translate_perm_names("full-control") + %w{DeleteSubdirectoriesAndFiles}
+ when "change-permissions"
+ translate_perm_names("full-control") + %w{ChangePermissions}
+ when "take-ownership"
+ translate_perm_names("full-control") + %w{TakeOwnership}
+ when "synchronize"
+ translate_perm_names("full-control") + %w{Synchronize}
+ end
+ end
+
+ def translate_granular_perms(access_type)
+ case access_type
+ when "write-data", "create-files"
+ translate_perm_names("write") + %w{WriteData CreateFiles}
+ when "append-data", "create-directories"
+ translate_perm_names("write") + %w{CreateDirectories AppendData}
+ when "write-extended-attributes"
+ translate_perm_names("write") + %w{WriteExtendedAttributes}
+ when "write-attributes"
+ translate_perm_names("write") + %w{WriteAttributes}
+ when "read-data", "list-directory"
+ translate_perm_names("read") + %w{ReadData ListDirectory}
+ when "read-attributes"
+ translate_perm_names("read") + %w{ReadAttributes}
+ when "read-extended-attributes"
+ translate_perm_names("read") + %w{ReadExtendedAttributes}
+ when "read-permissions"
+ translate_perm_names("read") + %w{ReadPermissions}
+ end
+ end
+ end
+end
+
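The three-step bit-wise check described in the comments of more_permissive_than? can be tried out in isolation. Here is a small standalone sketch of that arithmetic, using hypothetical modes rather than a real file:

```ruby
# Desired ceiling: 0644 (rw-r--r--), as in the resource's own example
max_mode = "0644".to_i(8)

# Step 1: invert against 0777 to find the bits that would make a file
# MORE permissive than the ceiling (here: 0o133)
inv_mode = 0o777 ^ max_mode

# Steps 2-3: AND the actual mode with the inverse; a nonzero result means
# the file is more permissive than allowed
puts((inv_mode & 0o644) != 0)  # => false (0644 is within the ceiling)
puts((inv_mode & 0o666) != 0)  # => true  (group/other write exceeds it)
```

Because every "forbidden" bit is zero in the ceiling mode, a single AND is enough to flag any mode that grants more access than intended.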
require "inspec/resources/file"
+
+module Inspec::Resources
+ class Directory < FileResource
+ name "directory"
+ supports platform: "unix"
+ supports platform: "windows"
+ desc "Use the directory InSpec audit resource to test if the file type is a directory. This is equivalent to using the file InSpec audit resource and the be_directory matcher, but provides a simpler and more direct way to test directories. All of the matchers available to file may be used with directory."
+ example <<~EXAMPLE
+ describe directory('path') do
+ it { should be_directory }
+ end
+ EXAMPLE
+
+ def exist?
+ file.exist? && file.directory?
+ end
+
+ def to_s
+ "Directory #{source_path}"
+ end
+ end
+end
+
require "inspec/utils/parser"
+require "inspec/utils/file_reader"
+
+class EtcHosts < Inspec.resource(1)
+ name "etc_hosts"
+ supports platform: "linux"
+ supports platform: "bsd"
+ supports platform: "windows"
+ desc 'Use the etc_hosts InSpec audit resource to find an
+ ip_address and its associated hosts'
+ example <<~EXAMPLE
+ describe etc_hosts.where { ip_address == '127.0.0.1' } do
+ its('ip_address') { should cmp '127.0.0.1' }
+ its('primary_name') { should cmp 'localhost' }
+ its('all_host_names') { should eq [['localhost', 'localhost.localdomain', 'localhost4', 'localhost4.localdomain4']] }
+ end
+ EXAMPLE
+
+ attr_reader :params
+
+ include CommentParser
+ include FileReader
+
+ DEFAULT_UNIX_PATH = "/etc/hosts".freeze
+ DEFAULT_WINDOWS_PATH = 'C:\\windows\\system32\\drivers\\etc\\hosts'.freeze
+
+ def initialize(hosts_path = nil)
+ content = read_file_content(hosts_path || default_hosts_file_path)
+
+ @params = parse_conf(content.lines)
+ end
+
+ FilterTable.create
+ .register_column(:ip_address, field: "ip_address")
+ .register_column(:primary_name, field: "primary_name")
+ .register_column(:all_host_names, field: "all_host_names")
+ .install_filter_methods_on_resource(self, :params)
+
+ private
+
+ def default_hosts_file_path
+ inspec.os.windows? ? DEFAULT_WINDOWS_PATH : DEFAULT_UNIX_PATH
+ end
+
+ def parse_conf(lines)
+ lines.reject(&:empty?).reject(&comment?).map(&parse_data).map(&format_data)
+ end
+
+ def comment?
+ parse_options = { comment_char: "#", standalone_comments: false }
+
+ ->(data) { parse_comment_line(data, parse_options).first.empty? }
+ end
+
+ def parse_data
+ ->(data) { [data.split[0], data.split[1], data.split[1..-1]] }
+ end
+
+ def format_data
+ ->(data) { %w{ip_address primary_name all_host_names}.zip(data).to_h }
+ end
+end
+
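The parse_data and format_data lambdas above turn each hosts line into the hash that the FilterTable columns read from. Their combined effect can be sketched standalone with a hypothetical input line:

```ruby
# One /etc/hosts line (hypothetical input)
line = "127.0.0.1 localhost localhost.localdomain"

# parse_data: [ip, primary name, all names from field 1 onward]
parsed = [line.split[0], line.split[1], line.split[1..-1]]

# format_data: zip field names with values into the row hash
row = %w{ip_address primary_name all_host_names}.zip(parsed).to_h

puts row["ip_address"]      # => 127.0.0.1
puts row["primary_name"]    # => localhost
p    row["all_host_names"]  # => ["localhost", "localhost.localdomain"]
```

Note that all_host_names deliberately includes the primary name, which is why the resource example matches a nested array of every name on the line.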
# RedHat, Ubuntu, and macOS
+$ curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P inspec
+
Another option is to install InSpec via a command line:
When installing from source, gem dependencies may require ruby build tools to be installed.
For CentOS/RedHat/Fedora: $ yum -y install ruby ruby-devel make gcc gcc-c++
For Debian/Ubuntu: $ apt-get -y install ruby ruby-dev gcc g++ make
Now we’re on to the good stuff. Let’s install InSpec:
To install InSpec from RubyGems: $ gem install inspec
Install the following gems:
$ gem install bundler
+$ gem install test-kitchen
+
Once InSpec is installed, run inspec version
to verify that the installation was successful.
# RedHat, Ubuntu, and macOS
+$ curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P inspec
+
Another option is to install InSpec via a command line:
Before you can install InSpec, you need the latest version of Ruby installed. And before you can install the latest version of Ruby, you need Homebrew, the macOS package manager.
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
$ brew install rbenv ruby-build
+
Add rbenv to bash so that it loads every time you open a terminal:
$ echo 'if which rbenv > /dev/null; then eval "$(rbenv init -)"; fi' >> ~/.bash_profile
+$ source ~/.bash_profile
+
+$ rbenv install 2.7.2
+$ rbenv global 2.7.2
+
Close terminal and reopen.
$ ruby -v
+
Now we’re on to the good stuff. Let’s install InSpec:
$ gem install inspec
+
Install the following gems:
$ gem install bundler
+$ gem install test-kitchen
+
Once InSpec is installed, run inspec version
to verify that the installation was successful.
During the Ruby installation, check Add Ruby executables to your PATH
and Associate .rb and .rbw files with this Ruby installation
, then press enter. When this is complete, close the command prompt.
In a new command prompt, run $ ruby -v
, then press enter. Run $ gem install json --platform=ruby
, then press enter. # Windows (PowerShell)
+. { iwr -useb https://omnitruck.chef.io/install.ps1 } | iex; install -project inspec
+
Run $ gem install inspec
, then press enter. Run $ gem install bundler
, then press enter. Run $ gem install test-kitchen
, then press enter. Once InSpec is installed, run $ inspec version
to verify that the installation was successful.
{const{slotScopeIds:D}=I;D&&(G=G?G.concat(D):D);const F=l(y),Q=v(s(y),I,F,V,A,G,B);return Q&&$r(Q)&&Q.data==="]"?s(I.anchor=Q):(kt=!0,i(I.anchor=c("]"),F,Q),Q)},w=(y,I,V,A,G,B)=>{if(kt=!0,I.el=null,B){const Q=T(y);for(;;){const H=s(y);if(H&&H!==Q)a(H);else break}}const D=s(y),F=l(y);return a(y),n(null,I,F,D,V,A,Dr(F),G),D},T=(y,I="[",V="]")=>{let A=0;for(;y;)if(y=s(y),y&&$r(y)&&(y.data===I&&A++,y.data===V)){if(A===0)return s(y);A--}return y},b=(y,I,V)=>{const A=I.parentNode;A&&A.replaceChild(y,I);let G=V;for(;G;)G.vnode.el===I&&(G.vnode.el=G.subTree.el=y),G=G.parent},P=y=>y.nodeType===1&&y.tagName.toLowerCase()==="template";return[u,f]}const Fe=ui;function R1(e){return P1(e,I1)}function P1(e,t){const n=Ma();n.__VUE__=!0;const{insert:r,remove:o,patchProp:s,createElement:l,createText:a,createComment:i,setText:c,setElementText:u,parentNode:f,nextSibling:p,setScopeId:v=tt,insertStaticContent:_}=e,w=(h,g,E,L=null,S=null,x=null,j=void 0,$=null,N=!!g.dynamicChildren)=>{if(h===g)return;h&&!tn(h,g)&&(L=R(h),Se(h,S,x,!0),h=null),g.patchFlag===-2&&(N=!1,g.dynamicChildren=null);const{type:k,ref:q,shapeFlag:Z}=g;switch(k){case Sn:T(h,g,E,L);break;case st:b(h,g,E,L);break;case Xn:h==null&&P(g,E,L,j);break;case Ke:H(h,g,E,L,S,x,j,$,N);break;default:Z&1?V(h,g,E,L,S,x,j,$,N):Z&6?te(h,g,E,L,S,x,j,$,N):(Z&64||Z&128)&&k.process(h,g,E,L,S,x,j,$,N,M)}q!=null&&S&&Zr(q,h&&h.ref,x,g||h,!g)},T=(h,g,E,L)=>{if(h==null)r(g.el=a(g.children),E,L);else{const S=g.el=h.el;g.children!==h.children&&c(S,g.children)}},b=(h,g,E,L)=>{h==null?r(g.el=i(g.children||""),E,L):g.el=h.el},P=(h,g,E,L)=>{[h.el,h.anchor]=_(h.children,g,E,L,h.el,h.anchor)},y=({el:h,anchor:g},E,L)=>{let S;for(;h&&h!==g;)S=p(h),r(h,E,L),h=S;r(g,E,L)},I=({el:h,anchor:g})=>{let E;for(;h&&h!==g;)E=p(h),o(h),h=E;o(g)},V=(h,g,E,L,S,x,j,$,N)=>{g.type==="svg"?j="svg":g.type==="math"&&(j="mathml"),h==null?A(g,E,L,S,x,j,$,N):D(h,g,S,x,j,$,N)},A=(h,g,E,L,S,x,j,$)=>{let 
N,k;const{props:q,shapeFlag:Z,transition:X,dirs:re}=h;if(N=h.el=l(h.type,x,q&&q.is,q),Z&8?u(N,h.children):Z&16&&B(h.children,N,null,L,S,Ro(h,x),j,$),re&&bt(h,null,L,"created"),G(N,h,h.scopeId,j,L),q){for(const he in q)he!=="value"&&!Yn(he)&&s(N,he,null,q[he],x,h.children,L,S,Pe);"value"in q&&s(N,"value",null,q.value,x),(k=q.onVnodeBeforeMount)&&Ze(k,L,h)}re&&bt(h,null,L,"beforeMount");const se=Oi(S,X);se&&X.beforeEnter(N),r(N,g,E),((k=q&&q.onVnodeMounted)||se||re)&&Fe(()=>{k&&Ze(k,L,h),se&&X.enter(N),re&&bt(h,null,L,"mounted")},S)},G=(h,g,E,L,S)=>{if(E&&v(h,E),L)for(let x=0;x{for(let k=N;k {const $=g.el=h.el;let{patchFlag:N,dynamicChildren:k,dirs:q}=g;N|=h.patchFlag&16;const Z=h.props||Ee,X=g.props||Ee;let re;if(E&&Xt(E,!1),(re=X.onVnodeBeforeUpdate)&&Ze(re,E,g,h),q&&bt(g,h,E,"beforeUpdate"),E&&Xt(E,!0),k?F(h.dynamicChildren,k,$,E,L,Ro(g,S),x):j||Y(h,g,$,null,E,L,Ro(g,S),x,!1),N>0){if(N&16)Q($,g,Z,X,E,L,S);else if(N&2&&Z.class!==X.class&&s($,"class",null,X.class,S),N&4&&s($,"style",Z.style,X.style,S),N&8){const se=g.dynamicProps;for(let he=0;he {re&&Ze(re,E,g,h),q&&bt(g,h,E,"updated")},L)},F=(h,g,E,L,S,x,j)=>{for(let $=0;$ {if(E!==L){if(E!==Ee)for(const $ in E)!Yn($)&&!($ in L)&&s(h,$,E[$],null,j,g.children,S,x,Pe);for(const $ in L){if(Yn($))continue;const N=L[$],k=E[$];N!==k&&$!=="value"&&s(h,$,k,N,j,g.children,S,x,Pe)}"value"in L&&s(h,"value",E.value,L.value,j)}},H=(h,g,E,L,S,x,j,$,N)=>{const k=g.el=h?h.el:a(""),q=g.anchor=h?h.anchor:a("");let{patchFlag:Z,dynamicChildren:X,slotScopeIds:re}=g;re&&($=$?$.concat(re):re),h==null?(r(k,E,L),r(q,E,L),B(g.children,E,q,S,x,j,$,N)):Z>0&&Z&64&&X&&h.dynamicChildren?(F(h.dynamicChildren,X,E,S,x,j,$),(g.key!=null||S&&g===S.subTree)&&Li(h,g,!0)):Y(h,g,E,q,S,x,j,$,N)},te=(h,g,E,L,S,x,j,$,N)=>{g.slotScopeIds=$,h==null?g.shapeFlag&512?S.ctx.activate(g,E,L,j,N):Te(g,E,L,S,x,j,N):be(h,g,N)},Te=(h,g,E,L,S,x,j)=>{const $=h.component=F1(h,L,S);if(gr(h)&&($.ctx.renderer=M),H1($),$.asyncDep){if(S&&S.registerDep($,U),!h.el){const 
N=$.subTree=Le(st);b(null,N,g,E)}}else U($,h,g,E,S,x,j)},be=(h,g,E)=>{const L=g.component=h.component;if(Gd(h,g,E))if(L.asyncDep&&!L.asyncResolved){ne(L,g,E);return}else L.next=g,Fd(L.update),L.effect.dirty=!0,L.update();else g.el=h.el,L.vnode=g},U=(h,g,E,L,S,x,j)=>{const $=()=>{if(h.isMounted){let{next:q,bu:Z,u:X,parent:re,vnode:se}=h;{const pn=Ii(h);if(pn){q&&(q.el=se.el,ne(h,q,j)),pn.asyncDep.then(()=>{h.isUnmounted||$()});return}}let he=q,ye;Xt(h,!1),q?(q.el=se.el,ne(h,q,j)):q=se,Z&&To(Z),(ye=q.props&&q.props.onVnodeBeforeUpdate)&&Ze(ye,re,q,se),Xt(h,!0);const xe=Ao(h),ct=h.subTree;h.subTree=xe,w(ct,xe,f(ct.el),R(ct),h,S,x),q.el=xe.el,he===null&&Kd(h,xe.el),X&&Fe(X,S),(ye=q.props&&q.props.onVnodeUpdated)&&Fe(()=>Ze(ye,re,q,se),S)}else{let q;const{el:Z,props:X}=g,{bm:re,m:se,parent:he}=h,ye=Jn(g);if(Xt(h,!1),re&&To(re),!ye&&(q=X&&X.onVnodeBeforeMount)&&Ze(q,he,g),Xt(h,!0),Z&&ae){const xe=()=>{h.subTree=Ao(h),ae(Z,h.subTree,h,S,null)};ye?g.type.__asyncLoader().then(()=>!h.isUnmounted&&xe()):xe()}else{const xe=h.subTree=Ao(h);w(null,xe,E,L,h,S,x),g.el=xe.el}if(se&&Fe(se,S),!ye&&(q=X&&X.onVnodeMounted)){const xe=g;Fe(()=>Ze(q,he,xe),S)}(g.shapeFlag&256||he&&Jn(he.vnode)&&he.vnode.shapeFlag&256)&&h.a&&Fe(h.a,S),h.isMounted=!0,g=E=L=null}},N=h.effect=new Os($,tt,()=>fo(k),h.scope),k=h.update=()=>{N.dirty&&N.run()};k.id=h.uid,Xt(h,!0),k()},ne=(h,g,E)=>{g.component=h;const L=h.vnode.props;h.vnode=g,h.next=null,E1(h,g.props,L,E),A1(h,g.children,E),un(),ml(h),dn()},Y=(h,g,E,L,S,x,j,$,N=!1)=>{const k=h&&h.children,q=h?h.shapeFlag:0,Z=g.children,{patchFlag:X,shapeFlag:re}=g;if(X>0){if(X&128){mt(k,Z,E,L,S,x,j,$,N);return}else if(X&256){Re(k,Z,E,L,S,x,j,$,N);return}}re&8?(q&16&&Pe(k,S,x),Z!==k&&u(E,Z)):q&16?re&16?mt(k,Z,E,L,S,x,j,$,N):Pe(k,S,x,!0):(q&8&&u(E,""),re&16&&B(Z,E,L,S,x,j,$,N))},Re=(h,g,E,L,S,x,j,$,N)=>{h=h||wn,g=g||wn;const k=h.length,q=g.length,Z=Math.min(k,q);let X;for(X=0;X q?Pe(h,S,x,!0,!1,Z):B(g,E,L,S,x,j,$,N,Z)},mt=(h,g,E,L,S,x,j,$,N)=>{let k=0;const 
q=g.length;let Z=h.length-1,X=q-1;for(;k<=Z&&k<=X;){const re=h[k],se=g[k]=N?Bt(g[k]):dt(g[k]);if(tn(re,se))w(re,se,E,null,S,x,j,$,N);else break;k++}for(;k<=Z&&k<=X;){const re=h[Z],se=g[X]=N?Bt(g[X]):dt(g[X]);if(tn(re,se))w(re,se,E,null,S,x,j,$,N);else break;Z--,X--}if(k>Z){if(k<=X){const re=X+1,se=re X)for(;k<=Z;)Se(h[k],S,x,!0),k++;else{const re=k,se=k,he=new Map;for(k=se;k<=X;k++){const Ue=g[k]=N?Bt(g[k]):dt(g[k]);Ue.key!=null&&he.set(Ue.key,k)}let ye,xe=0;const ct=X-se+1;let pn=!1,sl=0;const zn=new Array(ct);for(k=0;k=ct){Se(Ue,S,x,!0);continue}let _t;if(Ue.key!=null)_t=he.get(Ue.key);else for(ye=se;ye<=X;ye++)if(zn[ye-se]===0&&tn(Ue,g[ye])){_t=ye;break}_t===void 0?Se(Ue,S,x,!0):(zn[_t-se]=k+1,_t>=sl?sl=_t:pn=!0,w(Ue,g[_t],E,null,S,x,j,$,N),xe++)}const ll=pn?C1(zn):wn;for(ye=ll.length-1,k=ct-1;k>=0;k--){const Ue=se+k,_t=g[Ue],al=Ue+1 {const{el:x,type:j,transition:$,children:N,shapeFlag:k}=h;if(k&6){Qe(h.component.subTree,g,E,L);return}if(k&128){h.suspense.move(g,E,L);return}if(k&64){j.move(h,g,E,M);return}if(j===Ke){r(x,g,E);for(let Z=0;Z$.enter(x),S);else{const{leave:Z,delayLeave:X,afterLeave:re}=$,se=()=>r(x,g,E),he=()=>{Z(x,()=>{se(),re&&re()})};X?X(x,se,he):he()}else r(x,g,E)},Se=(h,g,E,L=!1,S=!1)=>{const{type:x,props:j,ref:$,children:N,dynamicChildren:k,shapeFlag:q,patchFlag:Z,dirs:X}=h;if($!=null&&Zr($,null,E,h,!0),q&256){g.ctx.deactivate(h);return}const re=q&1&&X,se=!Jn(h);let he;if(se&&(he=j&&j.onVnodeBeforeUnmount)&&Ze(he,g,h),q&6)gt(h.component,E,L);else{if(q&128){h.suspense.unmount(E,L);return}re&&bt(h,null,g,"beforeUnmount"),q&64?h.type.remove(h,g,E,S,M,L):k&&(x!==Ke||Z>0&&Z&64)?Pe(k,g,E,!1,!0):(x===Ke&&Z&384||!S&&q&16)&&Pe(N,g,E),L&&qe(h)}(se&&(he=j&&j.onVnodeUnmounted)||re)&&Fe(()=>{he&&Ze(he,g,h),re&&bt(h,null,g,"unmounted")},E)},qe=h=>{const{type:g,el:E,anchor:L,transition:S}=h;if(g===Ke){Et(E,L);return}if(g===Xn){I(h);return}const 
x=()=>{o(E),S&&!S.persisted&&S.afterLeave&&S.afterLeave()};if(h.shapeFlag&1&&S&&!S.persisted){const{leave:j,delayLeave:$}=S,N=()=>j(E,x);$?$(h.el,x,N):N()}else x()},Et=(h,g)=>{let E;for(;h!==g;)E=p(h),o(h),h=E;o(g)},gt=(h,g,E)=>{const{bum:L,scope:S,update:x,subTree:j,um:$}=h;L&&To(L),S.stop(),x&&(x.active=!1,Se(j,h,g,E)),$&&Fe($,g),Fe(()=>{h.isUnmounted=!0},g),g&&g.pendingBranch&&!g.isUnmounted&&h.asyncDep&&!h.asyncResolved&&h.suspenseId===g.pendingId&&(g.deps--,g.deps===0&&g.resolve())},Pe=(h,g,E,L=!1,S=!1,x=0)=>{for(let j=x;j h.shapeFlag&6?R(h.component.subTree):h.shapeFlag&128?h.suspense.next():p(h.anchor||h.el),W=(h,g,E)=>{h==null?g._vnode&&Se(g._vnode,null,null,!0):w(g._vnode||null,h,g,null,null,null,E),ml(),Yr(),g._vnode=h},M={p:w,um:Se,m:Qe,r:qe,mt:Te,mc:B,pc:Y,pbc:F,n:R,o:e};let J,ae;return t&&([J,ae]=t(M)),{render:W,hydrate:J,createApp:b1(W,J)}}function Ro({type:e,props:t},n){return n==="svg"&&e==="foreignObject"||n==="mathml"&&e==="annotation-xml"&&t&&t.encoding&&t.encoding.includes("html")?void 0:n}function Xt({effect:e,update:t},n){e.allowRecurse=t.allowRecurse=n}function Oi(e,t){return(!e||e&&!e.pendingBranch)&&t&&!t.persisted}function Li(e,t,n=!1){const r=e.children,o=t.children;if(ee(r)&&ee(o))for(let s=0;s >1,e[n[a]] 0&&(t[r]=n[s-1]),n[s]=r)}}for(s=n.length,l=n[s-1];s-- >0;)n[s]=l,l=t[l];return n}function Ii(e){const t=e.subTree.component;if(t)return t.asyncDep&&!t.asyncResolved?t:Ii(t)}const S1=e=>e.__isTeleport,Ke=Symbol.for("v-fgt"),Sn=Symbol.for("v-txt"),st=Symbol.for("v-cmt"),Xn=Symbol.for("v-stc"),Zn=[];let ft=null;function x1(e=!1){Zn.push(ft=e?null:[])}function k1(){Zn.pop(),ft=Zn[Zn.length-1]||null}let ir=1;function Rl(e){ir+=e}function Ri(e){return e.dynamicChildren=ir>0?ft||wn:null,k1(),ir>0&&ft&&ft.push(e),e}function N4(e,t,n,r,o,s){return Ri(Ci(e,t,n,r,o,s,!0))}function D1(e,t,n,r,o){return Ri(Le(e,t,n,r,o,!0))}function rs(e){return e?e.__v_isVNode===!0:!1}function tn(e,t){return e.type===t.type&&e.key===t.key}const 
ho="__vInternal",Pi=({key:e})=>e??null,Wr=({ref:e,ref_key:t,ref_for:n})=>(typeof e=="number"&&(e=""+e),e!=null?ue(e)||$e(e)||oe(e)?{i:nt,r:e,k:t,f:!!n}:e:null);function Ci(e,t=null,n=null,r=0,o=null,s=e===Ke?0:1,l=!1,a=!1){const i={__v_isVNode:!0,__v_skip:!0,type:e,props:t,key:t&&Pi(t),ref:t&&Wr(t),scopeId:ii,slotScopeIds:null,children:n,component:null,suspense:null,ssContent:null,ssFallback:null,dirs:null,transition:null,el:null,anchor:null,target:null,targetAnchor:null,staticCount:0,shapeFlag:s,patchFlag:r,dynamicProps:o,dynamicChildren:null,appContext:null,ctx:nt};return a?(Bs(i,n),s&128&&e.normalize(i)):n&&(i.shapeFlag|=ue(n)?8:16),ir>0&&!l&&ft&&(i.patchFlag>0||s&6)&&i.patchFlag!==32&&ft.push(i),i}const Le=$1;function $1(e,t=null,n=null,r=0,o=null,s=!1){if((!e||e===Yd)&&(e=st),rs(e)){const a=qt(e,t,!0);return n&&Bs(a,n),ir>0&&!s&&ft&&(a.shapeFlag&6?ft[ft.indexOf(e)]=a:ft.push(a)),a.patchFlag|=-2,a}if(U1(e)&&(e=e.__vccOpts),t){t=V1(t);let{class:a,style:i}=t;a&&!ue(a)&&(t.class=io(a)),Ae(i)&&(Xa(i)&&!ee(i)&&(i=Ce({},i)),t.style=ao(i))}const l=ue(e)?1:Qd(e)?128:S1(e)?64:Ae(e)?4:oe(e)?2:0;return Ci(e,t,n,r,o,l,s,!0)}function V1(e){return e?Xa(e)||ho in e?Ce({},e):e:null}function qt(e,t,n=!1){const{props:r,ref:o,patchFlag:s,children:l}=e,a=t?M1(r||{},t):r;return{__v_isVNode:!0,__v_skip:!0,type:e.type,props:a,key:a&&Pi(a),ref:t&&t.ref?n&&o?ee(o)?o.concat(Wr(t)):[o,Wr(t)]:Wr(t):o,scopeId:e.scopeId,slotScopeIds:e.slotScopeIds,children:l,target:e.target,targetAnchor:e.targetAnchor,staticCount:e.staticCount,shapeFlag:e.shapeFlag,patchFlag:t&&e.type!==Ke?s===-1?16:s|16:s,dynamicProps:e.dynamicProps,dynamicChildren:e.dynamicChildren,appContext:e.appContext,dirs:e.dirs,transition:e.transition,component:e.component,suspense:e.suspense,ssContent:e.ssContent&&qt(e.ssContent),ssFallback:e.ssFallback&&qt(e.ssFallback),el:e.el,anchor:e.anchor,ctx:e.ctx,ce:e.ce}}function Si(e=" ",t=0){return Le(Sn,null,e,t)}function B4(e,t){const n=Le(Xn,null,e);return n.staticCount=t,n}function 
F4(e="",t=!1){return t?(x1(),D1(st,null,e)):Le(st,null,e)}function dt(e){return e==null||typeof e=="boolean"?Le(st):ee(e)?Le(Ke,null,e.slice()):typeof e=="object"?Bt(e):Le(Sn,null,String(e))}function Bt(e){return e.el===null&&e.patchFlag!==-1||e.memo?e:qt(e)}function Bs(e,t){let n=0;const{shapeFlag:r}=e;if(t==null)t=null;else if(ee(t))n=16;else if(typeof t=="object")if(r&65){const o=t.default;o&&(o._c&&(o._d=!1),Bs(e,o()),o._c&&(o._d=!0));return}else{n=32;const o=t._;!o&&!(ho in t)?t._ctx=nt:o===3&&nt&&(nt.slots._===1?t._=1:(t._=2,e.patchFlag|=1024))}else oe(t)?(t={default:t,_ctx:nt},n=32):(t=String(t),r&64?(n=16,t=[Si(t)]):n=8);e.children=t,e.shapeFlag|=n}function M1(...e){const t={};for(let n=0;n ke||nt;let Fs,os;{const e=Ma(),t=(n,r)=>{let o;return(o=e[n])||(o=e[n]=[]),o.push(r),s=>{o.length>1?o.forEach(l=>l(s)):o[0](s)}};Fs=t("__VUE_INSTANCE_SETTERS__",n=>ke=n),os=t("__VUE_SSR_SETTERS__",n=>br=n)}const xn=e=>{Fs(e),e.scope.on()},ln=()=>{ke&&ke.scope.off(),Fs(null)};function xi(e){return e.vnode.shapeFlag&4}let br=!1;function H1(e,t=!1){t&&os(t);const{props:n,children:r}=e.vnode,o=xi(e);y1(e,n,o,t),T1(e,r);const s=o?j1(e,t):void 0;return t&&os(!1),s}function j1(e,t){const n=e.type;e.accessCache=Object.create(null),e.proxy=Za(new Proxy(e.ctx,f1));const{setup:r}=n;if(r){const o=e.setupContext=r.length>1?W1(e):null;xn(e),un();const s=zt(r,e,0,[e.props,o]);if(dn(),ln(),Va(s)){if(s.then(ln,ln),t)return s.then(l=>{Pl(e,l,t)}).catch(l=>{mr(l,e,0)});e.asyncDep=s}else Pl(e,s,t)}else ki(e,t)}function Pl(e,t,n){oe(t)?e.type.__ssrInlineRender?e.ssrRender=t:e.render=t:Ae(t)&&(e.setupState=ni(t)),ki(e,n)}let Cl;function ki(e,t,n){const r=e.type;if(!e.render){if(!t&&Cl&&!r.render){const o=r.template||Ms(e).template;if(o){const{isCustomElement:s,compilerOptions:l}=e.appContext.config,{delimiters:a,compilerOptions:i}=r,c=Ce(Ce({isCustomElement:s,delimiters:a},l),i);r.render=Cl(o,c)}}e.render=r.render||tt}{xn(e),un();try{p1(e)}finally{dn(),ln()}}}function z1(e){return 
e.attrsProxy||(e.attrsProxy=new Proxy(e.attrs,{get(t,n){return ze(e,"get","$attrs"),t[n]}}))}function W1(e){const t=n=>{e.exposed=n||{}};return{get attrs(){return z1(e)},slots:e.slots,emit:e.emit,expose:t}}function Hs(e){if(e.exposed)return e.exposeProxy||(e.exposeProxy=new Proxy(ni(Za(e.exposed)),{get(t,n){if(n in t)return t[n];if(n in Qn)return Qn[n](e)},has(t,n){return n in t||n in Qn}}))}function q1(e,t=!0){return oe(e)?e.displayName||e.name:e.name||t&&e.__name}function U1(e){return oe(e)&&"__vccOpts"in e}const O=(e,t)=>Sd(e,t,br);function d(e,t,n){const r=arguments.length;return r===2?Ae(t)&&!ee(t)?rs(t)?Le(e,null,[t]):Le(e,t):Le(e,null,t):(r>3?n=Array.prototype.slice.call(arguments,2):r===3&&rs(n)&&(n=[n]),Le(e,t,n))}const G1="3.4.5",K1="http://www.w3.org/2000/svg",Y1="http://www.w3.org/1998/Math/MathML",Ft=typeof document<"u"?document:null,Sl=Ft&&Ft.createElement("template"),J1={insert:(e,t,n)=>{t.insertBefore(e,n||null)},remove:e=>{const t=e.parentNode;t&&t.removeChild(e)},createElement:(e,t,n,r)=>{const o=t==="svg"?Ft.createElementNS(K1,e):t==="mathml"?Ft.createElementNS(Y1,e):Ft.createElement(e,n?{is:n}:void 0);return e==="select"&&r&&r.multiple!=null&&o.setAttribute("multiple",r.multiple),o},createText:e=>Ft.createTextNode(e),createComment:e=>Ft.createComment(e),setText:(e,t)=>{e.nodeValue=t},setElementText:(e,t)=>{e.textContent=t},parentNode:e=>e.parentNode,nextSibling:e=>e.nextSibling,querySelector:e=>Ft.querySelector(e),setScopeId(e,t){e.setAttribute(t,"")},insertStaticContent(e,t,n,r,o,s){const l=n?n.previousSibling:t.lastChild;if(o&&(o===s||o.nextSibling))for(;t.insertBefore(o.cloneNode(!0),n),!(o===s||!(o=o.nextSibling)););else{Sl.innerHTML=r==="svg"?``:r==="mathml"?``:e;const a=Sl.content;if(r==="svg"||r==="mathml"){const 
i=a.firstChild;for(;i.firstChild;)a.appendChild(i.firstChild);a.removeChild(i)}t.insertBefore(a,n)}return[l?l.nextSibling:t.firstChild,n?n.previousSibling:t.lastChild]}},Dt="transition",Wn="animation",kn=Symbol("_vtc"),Ut=(e,{slots:t})=>d(n1,$i(e),t);Ut.displayName="Transition";const Di={name:String,type:String,css:{type:Boolean,default:!0},duration:[String,Number,Object],enterFromClass:String,enterActiveClass:String,enterToClass:String,appearFromClass:String,appearActiveClass:String,appearToClass:String,leaveFromClass:String,leaveActiveClass:String,leaveToClass:String},Q1=Ut.props=Ce({},vi,Di),Zt=(e,t=[])=>{ee(e)?e.forEach(n=>n(...t)):e&&e(...t)},xl=e=>e?ee(e)?e.some(t=>t.length>1):e.length>1:!1;function $i(e){const t={};for(const H in e)H in Di||(t[H]=e[H]);if(e.css===!1)return t;const{name:n="v",type:r,duration:o,enterFromClass:s=`${n}-enter-from`,enterActiveClass:l=`${n}-enter-active`,enterToClass:a=`${n}-enter-to`,appearFromClass:i=s,appearActiveClass:c=l,appearToClass:u=a,leaveFromClass:f=`${n}-leave-from`,leaveActiveClass:p=`${n}-leave-active`,leaveToClass:v=`${n}-leave-to`}=e,_=X1(o),w=_&&_[0],T=_&&_[1],{onBeforeEnter:b,onEnter:P,onEnterCancelled:y,onLeave:I,onLeaveCancelled:V,onBeforeAppear:A=b,onAppear:G=P,onAppearCancelled:B=y}=t,D=(H,te,Te)=>{Mt(H,te?u:a),Mt(H,te?c:l),Te&&Te()},F=(H,te)=>{H._isLeaving=!1,Mt(H,f),Mt(H,v),Mt(H,p),te&&te()},Q=H=>(te,Te)=>{const be=H?G:P,U=()=>D(te,H,Te);Zt(be,[te,U]),kl(()=>{Mt(te,H?i:s),Tt(te,H?u:a),xl(be)||Dl(te,r,w,U)})};return Ce(t,{onBeforeEnter(H){Zt(b,[H]),Tt(H,s),Tt(H,l)},onBeforeAppear(H){Zt(A,[H]),Tt(H,i),Tt(H,c)},onEnter:Q(!1),onAppear:Q(!0),onLeave(H,te){H._isLeaving=!0;const Te=()=>F(H,te);Tt(H,f),Mi(),Tt(H,p),kl(()=>{H._isLeaving&&(Mt(H,f),Tt(H,v),xl(I)||Dl(H,r,T,Te))}),Zt(I,[H,Te])},onEnterCancelled(H){D(H,!1),Zt(y,[H])},onAppearCancelled(H){D(H,!0),Zt(B,[H])},onLeaveCancelled(H){F(H),Zt(V,[H])}})}function X1(e){if(e==null)return null;if(Ae(e))return[Po(e.enter),Po(e.leave)];{const 
t=Po(e);return[t,t]}}function Po(e){return td(e)}function Tt(e,t){t.split(/\s+/).forEach(n=>n&&e.classList.add(n)),(e[kn]||(e[kn]=new Set)).add(t)}function Mt(e,t){t.split(/\s+/).forEach(r=>r&&e.classList.remove(r));const n=e[kn];n&&(n.delete(t),n.size||(e[kn]=void 0))}function kl(e){requestAnimationFrame(()=>{requestAnimationFrame(e)})}let Z1=0;function Dl(e,t,n,r){const o=e._endId=++Z1,s=()=>{o===e._endId&&r()};if(n)return setTimeout(s,n);const{type:l,timeout:a,propCount:i}=Vi(e,t);if(!l)return r();const c=l+"end";let u=0;const f=()=>{e.removeEventListener(c,p),s()},p=v=>{v.target===e&&++u>=i&&f()};setTimeout(()=>{u(n[_]||"").split(", "),o=r(`${Dt}Delay`),s=r(`${Dt}Duration`),l=$l(o,s),a=r(`${Wn}Delay`),i=r(`${Wn}Duration`),c=$l(a,i);let u=null,f=0,p=0;t===Dt?l>0&&(u=Dt,f=l,p=s.length):t===Wn?c>0&&(u=Wn,f=c,p=i.length):(f=Math.max(l,c),u=f>0?l>c?Dt:Wn:null,p=u?u===Dt?s.length:i.length:0);const v=u===Dt&&/\b(transform|all)(,|$)/.test(r(`${Dt}Property`).toString());return{type:u,timeout:f,propCount:p,hasTransform:v}}function $l(e,t){for(;e.length Vl(n)+Vl(e[r])))}function Vl(e){return e==="auto"?0:Number(e.slice(0,-1).replace(",","."))*1e3}function Mi(){return document.body.offsetHeight}function ef(e,t,n){const r=e[kn];r&&(t=(t?[t,...r]:[...r]).join(" ")),t==null?e.removeAttribute("class"):n?e.setAttribute("class",t):e.className=t}const tf=Symbol("_vod"),nf=Symbol("");function rf(e,t,n){const r=e.style,o=ue(n);if(n&&!o){if(t&&!ue(t))for(const s in t)n[s]==null&&ss(r,s,"");for(const s in n)ss(r,s,n[s])}else{const s=r.display;if(o){if(t!==n){const l=r[nf];l&&(n+=";"+l),r.cssText=n}}else t&&e.removeAttribute("style");tf in e&&(r.display=s)}}const Ml=/\s*!important$/;function ss(e,t,n){if(ee(n))n.forEach(r=>ss(e,t,r));else if(n==null&&(n=""),t.startsWith("--"))e.setProperty(t,n);else{const r=of(e,t);Ml.test(n)?e.setProperty(Mn(r),n.replace(Ml,""),"important"):e[r]=n}}const Nl=["Webkit","Moz","ms"],Co={};function of(e,t){const n=Co[t];if(n)return n;let 
r=it(t);if(r!=="filter"&&r in e)return Co[t]=r;r=vr(r);for(let o=0;o So||(ff.then(()=>So=0),So=Date.now());function vf(e,t){const n=r=>{if(!r._vts)r._vts=Date.now();else if(r._vts<=n.attached)return;rt(hf(r,n.value),t,5,[r])};return n.value=e,n.attached=pf(),n}function hf(e,t){if(ee(t)){const n=e.stopImmediatePropagation;return e.stopImmediatePropagation=()=>{n.call(e),e._stopped=!0},t.map(r=>o=>!o._stopped&&r&&r(o))}else return t}const jl=e=>e.charCodeAt(0)===111&&e.charCodeAt(1)===110&&e.charCodeAt(2)>96&&e.charCodeAt(2)<123,mf=(e,t,n,r,o,s,l,a,i)=>{const c=o==="svg";t==="class"?ef(e,r,c):t==="style"?rf(e,n,r):pr(t)?ws(t)||uf(e,t,n,r,l):(t[0]==="."?(t=t.slice(1),!0):t[0]==="^"?(t=t.slice(1),!1):gf(e,t,r,c))?lf(e,t,r,s,l,a,i):(t==="true-value"?e._trueValue=r:t==="false-value"&&(e._falseValue=r),sf(e,t,r,c))};function gf(e,t,n,r){if(r)return!!(t==="innerHTML"||t==="textContent"||t in e&&jl(t)&&oe(n));if(t==="spellcheck"||t==="draggable"||t==="translate"||t==="form"||t==="list"&&e.tagName==="INPUT"||t==="type"&&e.tagName==="TEXTAREA")return!1;if(t==="width"||t==="height"){const o=e.tagName;if(o==="IMG"||o==="VIDEO"||o==="CANVAS"||o==="SOURCE")return!1}return jl(t)&&ue(n)?!1:t in e}const Ni=new WeakMap,Bi=new WeakMap,eo=Symbol("_moveCb"),zl=Symbol("_enterCb"),Fi={name:"TransitionGroup",props:Ce({},Q1,{tag:String,moveClass:String}),setup(e,{slots:t}){const n=Nn(),r=pi();let o,s;return gi(()=>{if(!o.length)return;const l=e.moveClass||`${e.name||"v"}-move`;if(!Tf(o[0].el,n.vnode.el,l))return;o.forEach(yf),o.forEach(Ef);const a=o.filter(wf);Mi(),a.forEach(i=>{const c=i.el,u=c.style;Tt(c,l),u.transform=u.webkitTransform=u.transitionDuration="";const f=c[eo]=p=>{p&&p.target!==c||(!p||/transform$/.test(p.propertyName))&&(c.removeEventListener("transitionend",f),c[eo]=null,Mt(c,l))};c.addEventListener("transitionend",f)})}),()=>{const l=le(e),a=$i(l);let i=l.tag||Ke;o=s,s=t.default?$s(t.default()):[];for(let c=0;c delete e.mode;Fi.props;const bf=Fi;function yf(e){const 
t=e.el;t[eo]&&t[eo](),t[zl]&&t[zl]()}function Ef(e){Bi.set(e,e.el.getBoundingClientRect())}function wf(e){const t=Ni.get(e),n=Bi.get(e),r=t.left-n.left,o=t.top-n.top;if(r||o){const s=e.el.style;return s.transform=s.webkitTransform=`translate(${r}px,${o}px)`,s.transitionDuration="0s",e}}function Tf(e,t,n){const r=e.cloneNode(),o=e[kn];o&&o.forEach(a=>{a.split(/\s+/).forEach(i=>i&&r.classList.remove(i))}),n.split(/\s+/).forEach(a=>a&&r.classList.add(a)),r.style.display="none";const s=t.nodeType===1?t:t.parentNode;s.appendChild(r);const{hasTransform:l}=Vi(r);return s.removeChild(r),l}const Af=Ce({patchProp:mf},J1);let xo,Wl=!1;function Of(){return xo=Wl?xo:R1(Af),Wl=!0,xo}const Lf=(...e)=>{const t=Of().createApp(...e),{mount:n}=t;return t.mount=r=>{const o=Rf(r);if(o)return n(o,!0,If(o))},t};function If(e){if(e instanceof SVGElement)return"svg";if(typeof MathMLElement=="function"&&e instanceof MathMLElement)return"mathml"}function Rf(e){return ue(e)?document.querySelector(e):e}const Pf="modulepreload",Cf=function(e){return"/saf-training/"+e},ql={},m=function(t,n,r){let o=Promise.resolve();if(n&&n.length>0){const s=document.getElementsByTagName("link");o=Promise.all(n.map(l=>{if(l=Cf(l),l in ql)return;ql[l]=!0;const a=l.endsWith(".css"),i=a?'[rel="stylesheet"]':"";if(!!r)for(let f=s.length-1;f>=0;f--){const p=s[f];if(p.href===l&&(!a||p.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${l}"]${i}`))return;const u=document.createElement("link");if(u.rel=a?"stylesheet":Pf,a||(u.as="script",u.crossOrigin=""),u.href=l,document.head.appendChild(u),a)return new Promise((f,p)=>{u.addEventListener("load",f),u.addEventListener("error",()=>p(new Error(`Unable to preload CSS for ${l}`)))})}))}return o.then(()=>t()).catch(s=>{const l=new Event("vite:preloadError",{cancelable:!0});if(l.payload=s,window.dispatchEvent(l),!l.defaultPrevented)throw 
s})},Sf={"v-8daa1a0e":()=>m(()=>import("./index.html-59moNvjP.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2e3eac9e":()=>m(()=>import("./slides.html-m6Ki5i2X.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-1473bf53":()=>m(()=>import("./index.html-9JfmLwJY.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-4e65ec78":()=>m(()=>import("./disable.html-P4g8GSkI.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-c151bf32":()=>m(()=>import("./encrypt.html-Ciea4rOp.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-438ffe52":()=>m(()=>import("./markdown.html-CH_gcEO-.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-6e19edb7":()=>m(()=>import("./page.html-A8BFrJpC.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-fffb8e28":()=>m(()=>import("./index.html-n95ijdY1.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2ae58894":()=>m(()=>import("./LinuxInstall.html-tKU-W61s.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-59a19ede":()=>m(()=>import("./MacInstall.html-uUATyCGo.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-08a5d2dc":()=>m(()=>import("./index.html-sqnfiXXT.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-4f4ac476":()=>m(()=>import("./WindowsInstall.html-uhkObwql.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-255cf054":()=>m(()=>import("./02.html-JlG01nLe.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2711c8f3":()=>m(()=>import("./03.html-703ul5jl.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-28c6a192":()=>m(()=>import("./04.html-RCsMUVdJ.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7fe15663":()=>m(()=>import("./index.html-sZ-w2xqT.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2ecf6c9a":()=>m(()=>import("./02.html-_OD8ieYF.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-30844539":()=>m(()=>import("./03.html-bRDdZlc4.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-32391dd8":()=>m(()=>import("./04.html-dWXbloX-.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-33edf677":()=>m(()=>import("./05.html-njU0dtLN.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3
5a2cf16":()=>m(()=>import("./06.html-PA_6PI_i.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3757a7b5":()=>m(()=>import("./07.html-HFJag681.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-390c8054":()=>m(()=>import("./08.html-liPSgNZG.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3ac158f3":()=>m(()=>import("./09.html-TYobHnIk.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-604bf69d":()=>m(()=>import("./10.html-49p5-RwS.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-6200cf3c":()=>m(()=>import("./11.html-skD8zwiy.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-63b5a7db":()=>m(()=>import("./12.html-SN14diUW.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-32f5f052":()=>m(()=>import("./Appendix A - Writing Plural Resources.html-yPzso6d-.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-937704e2":()=>m(()=>import("./Appendix B - Resource Examples.html-i6OYSmdP.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-f978fcb6":()=>m(()=>import("./Appendix C - Adding Your Resource to InSpec.html-WbTjo22h.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-4faaa59d":()=>m(()=>import("./Appendix D - Example Pipeline for Validating an InSpec Profile.html-osNKVVHI.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-b1912590":()=>m(()=>import("./Appendix E - More Resource 
Examples.html-alOD9ysh.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-0cb10646":()=>m(()=>import("./index.html-sCIaLq4e.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-23e7d8ca":()=>m(()=>import("./02.html-ZLuOUrAS.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-259cb169":()=>m(()=>import("./03.html-BTNaDcVS.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-27518a08":()=>m(()=>import("./04.html-DtdjMdcP.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-290662a7":()=>m(()=>import("./05.html-bpZhLxIw.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2abb3b46":()=>m(()=>import("./06.html-ubMevqRY.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2c7013e5":()=>m(()=>import("./07.html-ugOSkN4O.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2e24ec84":()=>m(()=>import("./08.html-c8Ip0qKh.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2fd9c523":()=>m(()=>import("./09.html-pI5nuU8N.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-556462cd":()=>m(()=>import("./10.html-UeGa7Yhg.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-57193b6c":()=>m(()=>import("./11.html-KxdFJYc_.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-58ce140b":()=>m(()=>import("./12.html-1aA7mi6b.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-5a82ecaa":()=>m(()=>import("./13.html-HAYOzXBj.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2c0932a6":()=>m(()=>import("./index.html-zhrKuXVN.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-709259d0":()=>m(()=>import("./02.html-ZGAvABD3.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7247326f":()=>m(()=>import("./03.html-DjzoyP4T.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-73fc0b0e":()=>m(()=>import("./04.html-bS-ifw0D.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-75b0e3ad":()=>m(()=>import("./05.html-MDzOhQll.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7765bc4c":()=>m(()=>import("./06.html-KKCegm7r.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-791a94eb":()=>m(()=>import("./07.html-R8R7eLbd.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7a
cf6d8a":()=>m(()=>import("./08.html-i1-P6J-B.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7c844629":()=>m(()=>import("./09.html-Ti7khI6G.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-bbe2385a":()=>m(()=>import("./10.html-plkslbHh.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-b878871c":()=>m(()=>import("./11.html-kkXZOg1S.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-b50ed5de":()=>m(()=>import("./12.html-9oj58LSE.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-b1a524a0":()=>m(()=>import("./13.html-KEZLmN6b.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-a41d3f32":()=>m(()=>import("./index.html-zwYWQQxm.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-76a3f766":()=>m(()=>import("./02.html-gNvxgFN6.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7858d005":()=>m(()=>import("./03.html-yprIC4go.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7a0da8a4":()=>m(()=>import("./04.html-xZbjtynr.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7bc28143":()=>m(()=>import("./05.html-_Spw3v3F.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7d7759e2":()=>m(()=>import("./06.html-JybVP6pe.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7f2c3281":()=>m(()=>import("./07.html-EymqpKt9.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-fe3de9c0":()=>m(()=>import("./08.html-V5v08TE2.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-fad43882":()=>m(()=>import("./09.html-qiGyptHL.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-afbefd2e":()=>m(()=>import("./10.html-itnRGyFj.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-ac554bf0":()=>m(()=>import("./11.html-uDyMJ-ud.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-a8eb9ab2":()=>m(()=>import("./12.html-l5YafvWJ.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-a581e974":()=>m(()=>import("./13.html-rrPOwW7c.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-a2183836":()=>m(()=>import("./14.html-ZXBlwruU.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-9eae86f8":()=>m(()=>import("./15.html-XcKq8quf.js"),__vite__mapDeps([])).then((
{data:e})=>e),"v-9b44d5ba":()=>m(()=>import("./16.html-PdB6Rg-R.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-97db247c":()=>m(()=>import("./17.html-q1XWgL_N.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-9471733e":()=>m(()=>import("./18.html-tlb9_esG.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-9107c200":()=>m(()=>import("./19.html-2LyglPur.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-45f286ac":()=>m(()=>import("./20.html-fvwmUOh6.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-4288d56e":()=>m(()=>import("./21.html-VDxNt4VZ.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3f1f2430":()=>m(()=>import("./22.html-qkZXzTMM.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3bb572f2":()=>m(()=>import("./23.html-_AtI14QP.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-384bc1b4":()=>m(()=>import("./24.html-kTVwnpjk.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-34e21076":()=>m(()=>import("./25.html-pWBKoq-i.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-31785f38":()=>m(()=>import("./26.html-olY5IfEU.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2e0eadfa":()=>m(()=>import("./27.html-fTpEr_ai.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-2aa4fcbc":()=>m(()=>import("./28.html-_tB2aore.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-273b4b7e":()=>m(()=>import("./29.html-SGeBfLos.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-10283f91":()=>m(()=>import("./index.html-ZOwdIAst.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-08816f43":()=>m(()=>import("./02.html-UxwkeC4t.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-0a3647e2":()=>m(()=>import("./03.html-3gTCNJob.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-0beb2081":()=>m(()=>import("./04.html-7Ib1xBlQ.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-0d9ff920":()=>m(()=>import("./05.html-1mszlEOh.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-0f54d1bf":()=>m(()=>import("./06.html-mS6M1fnL.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-1109aa5e":()=>m(()=>import("./07.html-_G_KwOB_.js"),__vite__
mapDeps([])).then(({data:e})=>e),"v-12be82fd":()=>m(()=>import("./08.html-EmHHxW2G.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-14735b9c":()=>m(()=>import("./09.html-TNAq4b_z.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-39fdf946":()=>m(()=>import("./10.html-BPnXkYrG.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3bb2d1e5":()=>m(()=>import("./11.html-QqaRwBGl.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3d67aa84":()=>m(()=>import("./12.html-_CVEeIkv.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3f1c8323":()=>m(()=>import("./13.html-Nsy5O-FB.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-40d15bc2":()=>m(()=>import("./14.html-5nzAflpE.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-42863461":()=>m(()=>import("./15.html-E0fpSswL.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-443b0d00":()=>m(()=>import("./16.html-eRejMn1V.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-6d2f3654":()=>m(()=>import("./index.html-hDDJnrx3.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-5d5c2d30":()=>m(()=>import("./index.html-ApqmxQB-.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-177e1f06":()=>m(()=>import("./baz.html-9BihpDVQ.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-5d5821d6":()=>m(()=>import("./index.html-eAiC7-an.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-0b6fc5f8":()=>m(()=>import("./ray.html-6qhKLuzq.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-3706649a":()=>m(()=>import("./404.html-luCqRhsx.js"),__vite__mapDeps([])).then(({data:e})=>e),"v-7e978520":()=>m(()=>import("./index.html-_XizYYHU.js"),__vite__mapDeps([])).then(({data:e})=>e)},xf=JSON.parse('{"base":"/saf-training/","lang":"en-US","title":"MITRE SAF Training","description":"The MITRE Security Automation Framework Training for Security Guidance, Hardening, Validation & Visualization","head":[],"locales":{}}');var kf=([e,t,n])=>e==="meta"&&t.name?`${e}.${t.name}`:["title","base"].includes(e)?e:e==="template"&&t.id?`${e}.${t.id}`:JSON.stringify([e,t,n]),Df=e=>{const t=new 
Set,n=[];return e.forEach(r=>{const o=kf(r);t.has(o)||(t.add(o),n.push(r))}),n},$f=e=>e[0]==="/"?e:`/${e}`,Hi=e=>e[e.length-1]==="/"||e.endsWith(".html")?e:`${e}/`,Bn=e=>/^(https?:)?\/\//.test(e),Vf=/.md((\?|#).*)?$/,cr=(e,t="/")=>!!(Bn(e)||e.startsWith("/")&&!e.startsWith(t)&&!Vf.test(e)),ji=e=>/^[a-z][a-z0-9+.-]*:/.test(e),yr=e=>Object.prototype.toString.call(e)==="[object Object]",js=e=>e[e.length-1]==="/"?e.slice(0,-1):e,zi=e=>e[0]==="/"?e.slice(1):e,Mf=(e,t)=>{const n=Object.keys(e).sort((r,o)=>{const s=o.split("/").length-r.split("/").length;return s!==0?s:o.length-r.length});for(const r of n)if(t.startsWith(r))return r;return"/"};const Wi={"v-8daa1a0e":C(()=>m(()=>import("./index.html-t_gVJwTz.js"),__vite__mapDeps([0,1]))),"v-2e3eac9e":C(()=>m(()=>import("./slides.html-B2B8Bdjz.js"),__vite__mapDeps([2,1]))),"v-1473bf53":C(()=>m(()=>import("./index.html-8pQwloAX.js"),__vite__mapDeps([3,1]))),"v-4e65ec78":C(()=>m(()=>import("./disable.html-5NqgCrK4.js"),__vite__mapDeps([4,1]))),"v-c151bf32":C(()=>m(()=>import("./encrypt.html-I_A4umri.js"),__vite__mapDeps([5,1]))),"v-438ffe52":C(()=>m(()=>import("./markdown.html-rSX699YS.js"),__vite__mapDeps([6,1]))),"v-6e19edb7":C(()=>m(()=>import("./page.html-or7bBPx6.js"),__vite__mapDeps([7,1]))),"v-fffb8e28":C(()=>m(()=>import("./index.html-98-4RhBA.js"),__vite__mapDeps([8,1]))),"v-2ae58894":C(()=>m(()=>import("./LinuxInstall.html-HH2hNmrG.js"),__vite__mapDeps([9,1]))),"v-59a19ede":C(()=>m(()=>import("./MacInstall.html-cKSBTWFZ.js"),__vite__mapDeps([10,1]))),"v-08a5d2dc":C(()=>m(()=>import("./index.html-tCAK-GS-.js"),__vite__mapDeps([11,1]))),"v-4f4ac476":C(()=>m(()=>import("./WindowsInstall.html-nI6hSHXE.js"),__vite__mapDeps([12,1]))),"v-255cf054":C(()=>m(()=>import("./02.html-83uoV4YN.js"),__vite__mapDeps([13,1]))),"v-2711c8f3":C(()=>m(()=>import("./03.html-SGHRv8KZ.js"),__vite__mapDeps([14,1]))),"v-28c6a192":C(()=>m(()=>import("./04.html-QTDREr1e.js"),__vite__mapDeps([15,1]))),"v-7fe15663":C(()=>m(()=>import("./index.html
-Z3mheRnb.js"),__vite__mapDeps([16,1]))),"v-2ecf6c9a":C(()=>m(()=>import("./02.html-RkT9KmKF.js"),__vite__mapDeps([17,1]))),"v-30844539":C(()=>m(()=>import("./03.html-3kNnUOXI.js"),__vite__mapDeps([18,1]))),"v-32391dd8":C(()=>m(()=>import("./04.html-kHDLQ3LJ.js"),__vite__mapDeps([19,1]))),"v-33edf677":C(()=>m(()=>import("./05.html-062HFELG.js"),__vite__mapDeps([20,1]))),"v-35a2cf16":C(()=>m(()=>import("./06.html-lc3SD2e2.js"),__vite__mapDeps([21,22,1]))),"v-3757a7b5":C(()=>m(()=>import("./07.html-qDxv00q_.js"),__vite__mapDeps([23,1]))),"v-390c8054":C(()=>m(()=>import("./08.html-sjks5uS2.js"),__vite__mapDeps([24,1]))),"v-3ac158f3":C(()=>m(()=>import("./09.html-MGReamcl.js"),__vite__mapDeps([25,1]))),"v-604bf69d":C(()=>m(()=>import("./10.html-PAAE66Lf.js"),__vite__mapDeps([26,1]))),"v-6200cf3c":C(()=>m(()=>import("./11.html-BW22Z0ue.js"),__vite__mapDeps([27,1]))),"v-63b5a7db":C(()=>m(()=>import("./12.html-1CYTXiAL.js"),__vite__mapDeps([28,1]))),"v-32f5f052":C(()=>m(()=>import("./Appendix A - Writing Plural Resources.html-4Herbn7U.js"),__vite__mapDeps([29,1]))),"v-937704e2":C(()=>m(()=>import("./Appendix B - Resource Examples.html-NvdExdec.js"),__vite__mapDeps([30,1]))),"v-f978fcb6":C(()=>m(()=>import("./Appendix C - Adding Your Resource to InSpec.html-_XcPf7tQ.js"),__vite__mapDeps([31,1]))),"v-4faaa59d":C(()=>m(()=>import("./Appendix D - Example Pipeline for Validating an InSpec Profile.html-Vev_NvRH.js"),__vite__mapDeps([32,1]))),"v-b1912590":C(()=>m(()=>import("./Appendix E - More Resource 
Examples.html-9JW9f5Xc.js"),__vite__mapDeps([33,1]))),"v-0cb10646":C(()=>m(()=>import("./index.html-G1U3UcXR.js"),__vite__mapDeps([34,35,1]))),"v-23e7d8ca":C(()=>m(()=>import("./02.html-Jq26wnCy.js"),__vite__mapDeps([36,37,1]))),"v-259cb169":C(()=>m(()=>import("./03.html-D2HIBd9A.js"),__vite__mapDeps([38,1]))),"v-27518a08":C(()=>m(()=>import("./04.html-m68ZwoFB.js"),__vite__mapDeps([39,1]))),"v-290662a7":C(()=>m(()=>import("./05.html-0lRQQ4nw.js"),__vite__mapDeps([40,1]))),"v-2abb3b46":C(()=>m(()=>import("./06.html-5FBl__Sb.js"),__vite__mapDeps([41,1]))),"v-2c7013e5":C(()=>m(()=>import("./07.html-jyFSwk5R.js"),__vite__mapDeps([42,1]))),"v-2e24ec84":C(()=>m(()=>import("./08.html-2opCG8t_.js"),__vite__mapDeps([43,1]))),"v-2fd9c523":C(()=>m(()=>import("./09.html-AlfnR0Ux.js"),__vite__mapDeps([44,45,37,1]))),"v-556462cd":C(()=>m(()=>import("./10.html-2qHElRRB.js"),__vite__mapDeps([46,1]))),"v-57193b6c":C(()=>m(()=>import("./11.html-rnOvxXZY.js"),__vite__mapDeps([47,1]))),"v-58ce140b":C(()=>m(()=>import("./12.html-rZiMrh4C.js"),__vite__mapDeps([48,1]))),"v-5a82ecaa":C(()=>m(()=>import("./13.html-tygutURX.js"),__vite__mapDeps([49,1]))),"v-2c0932a6":C(()=>m(()=>import("./index.html-PyNKfLDl.js"),__vite__mapDeps([50,35,1]))),"v-709259d0":C(()=>m(()=>import("./02.html-lzkjXVrJ.js"),__vite__mapDeps([51,1]))),"v-7247326f":C(()=>m(()=>import("./03.html-vZVl6Xr3.js"),__vite__mapDeps([52,1]))),"v-73fc0b0e":C(()=>m(()=>import("./04.html-m9u74lA0.js"),__vite__mapDeps([53,1]))),"v-75b0e3ad":C(()=>m(()=>import("./05.html-QT4wpcJj.js"),__vite__mapDeps([54,55,1]))),"v-7765bc4c":C(()=>m(()=>import("./06.html-frOaGo-L.js"),__vite__mapDeps([56,55,1]))),"v-791a94eb":C(()=>m(()=>import("./07.html-VaK3TpAa.js"),__vite__mapDeps([57,1]))),"v-7acf6d8a":C(()=>m(()=>import("./08.html-K8xznDoB.js"),__vite__mapDeps([58,1]))),"v-7c844629":C(()=>m(()=>import("./09.html-4RF22-MU.js"),__vite__mapDeps([59,1]))),"v-bbe2385a":C(()=>m(()=>import("./10.html-UQrjWLRc.js"),__vite__mapDeps([60,1]))),"v-b878871
c":C(()=>m(()=>import("./11.html-2AyGulZr.js"),__vite__mapDeps([61,1]))),"v-b50ed5de":C(()=>m(()=>import("./12.html-d7EmjCqn.js"),__vite__mapDeps([62,1]))),"v-b1a524a0":C(()=>m(()=>import("./13.html-Joxnu3ZF.js"),__vite__mapDeps([63,1]))),"v-a41d3f32":C(()=>m(()=>import("./index.html-meQfTImk.js"),__vite__mapDeps([64,35,1]))),"v-76a3f766":C(()=>m(()=>import("./02.html-H7TT4yNP.js"),__vite__mapDeps([65,1]))),"v-7858d005":C(()=>m(()=>import("./03.html-PQ7jRNRe.js"),__vite__mapDeps([66,1]))),"v-7a0da8a4":C(()=>m(()=>import("./04.html-dNsPtG5f.js"),__vite__mapDeps([67,1]))),"v-7bc28143":C(()=>m(()=>import("./05.html-4j0hrmXw.js"),__vite__mapDeps([68,1]))),"v-7d7759e2":C(()=>m(()=>import("./06.html-Z9aZdM7f.js"),__vite__mapDeps([69,1]))),"v-7f2c3281":C(()=>m(()=>import("./07.html-R_zBHwCS.js"),__vite__mapDeps([70,1]))),"v-fe3de9c0":C(()=>m(()=>import("./08.html-svY7cBD7.js"),__vite__mapDeps([71,1]))),"v-fad43882":C(()=>m(()=>import("./09.html-1G4jKolf.js"),__vite__mapDeps([72,1]))),"v-afbefd2e":C(()=>m(()=>import("./10.html-u5hZO5fC.js"),__vite__mapDeps([73,1]))),"v-ac554bf0":C(()=>m(()=>import("./11.html-quYRa6mK.js"),__vite__mapDeps([74,1]))),"v-a8eb9ab2":C(()=>m(()=>import("./12.html-UW4blz-R.js"),__vite__mapDeps([75,1]))),"v-a581e974":C(()=>m(()=>import("./13.html-CbzjgJYT.js"),__vite__mapDeps([76,1]))),"v-a2183836":C(()=>m(()=>import("./14.html-86sveI8i.js"),__vite__mapDeps([77,1]))),"v-9eae86f8":C(()=>m(()=>import("./15.html-08iC1CW-.js"),__vite__mapDeps([78,1]))),"v-9b44d5ba":C(()=>m(()=>import("./16.html-c-vzbbVY.js"),__vite__mapDeps([79,1]))),"v-97db247c":C(()=>m(()=>import("./17.html-KmjmQR7T.js"),__vite__mapDeps([80,1]))),"v-9471733e":C(()=>m(()=>import("./18.html-6inXR1OS.js"),__vite__mapDeps([81,1]))),"v-9107c200":C(()=>m(()=>import("./19.html-1B95xKZW.js"),__vite__mapDeps([82,1]))),"v-45f286ac":C(()=>m(()=>import("./20.html-RbtoI_3n.js"),__vite__mapDeps([83,1]))),"v-4288d56e":C(()=>m(()=>import("./21.html-4DDoeaAC.js"),__vite__mapDeps([84,1]))),"v-3f1f2430"
:C(()=>m(()=>import("./22.html-Ry3VKHZL.js"),__vite__mapDeps([85,1]))),"v-3bb572f2":C(()=>m(()=>import("./23.html-pORlyVF1.js"),__vite__mapDeps([86,1]))),"v-384bc1b4":C(()=>m(()=>import("./24.html-lwVkZ7GS.js"),__vite__mapDeps([87,1]))),"v-34e21076":C(()=>m(()=>import("./25.html-YC7ycXrU.js"),__vite__mapDeps([88,1]))),"v-31785f38":C(()=>m(()=>import("./26.html-cOT61I-e.js"),__vite__mapDeps([89,1]))),"v-2e0eadfa":C(()=>m(()=>import("./27.html-2xCYxsc-.js"),__vite__mapDeps([90,1]))),"v-2aa4fcbc":C(()=>m(()=>import("./28.html-BAr0L6-C.js"),__vite__mapDeps([91,1]))),"v-273b4b7e":C(()=>m(()=>import("./29.html-9ri5Q3n2.js"),__vite__mapDeps([92,1]))),"v-10283f91":C(()=>m(()=>import("./index.html-V9ksUymT.js"),__vite__mapDeps([93,1]))),"v-08816f43":C(()=>m(()=>import("./02.html-DlHv5Kqx.js"),__vite__mapDeps([94,35,1]))),"v-0a3647e2":C(()=>m(()=>import("./03.html-XmOm_OT4.js"),__vite__mapDeps([95,1]))),"v-0beb2081":C(()=>m(()=>import("./04.html-Ri1HP-KJ.js"),__vite__mapDeps([96,97,1]))),"v-0d9ff920":C(()=>m(()=>import("./05.html-X2vAQGAS.js"),__vite__mapDeps([98,22,1]))),"v-0f54d1bf":C(()=>m(()=>import("./06.html-17AIvNzS.js"),__vite__mapDeps([99,1]))),"v-1109aa5e":C(()=>m(()=>import("./07.html-AUiZ2lcd.js"),__vite__mapDeps([100,1]))),"v-12be82fd":C(()=>m(()=>import("./08.html-HVHUJuV0.js"),__vite__mapDeps([101,1]))),"v-14735b9c":C(()=>m(()=>import("./09.html-7jlh-VKk.js"),__vite__mapDeps([102,45,1]))),"v-39fdf946":C(()=>m(()=>import("./10.html-YuhR_QQh.js"),__vite__mapDeps([103,97,1]))),"v-3bb2d1e5":C(()=>m(()=>import("./11.html-nXdLMfTA.js"),__vite__mapDeps([104,1]))),"v-3d67aa84":C(()=>m(()=>import("./12.html-yx0oh7Ku.js"),__vite__mapDeps([105,1]))),"v-3f1c8323":C(()=>m(()=>import("./13.html-haLlr8ba.js"),__vite__mapDeps([106,1]))),"v-40d15bc2":C(()=>m(()=>import("./14.html-XM-PcHav.js"),__vite__mapDeps([107,1]))),"v-42863461":C(()=>m(()=>import("./15.html-jDJTYMdE.js"),__vite__mapDeps([108,1]))),"v-443b0d00":C(()=>m(()=>import("./16.html-pDGZD3f7.js"),__vite__mapDeps([10
9,1]))),"v-6d2f3654":C(()=>m(()=>import("./index.html-ZqaeHJt5.js"),__vite__mapDeps([110,35,1]))),"v-5d5c2d30":C(()=>m(()=>import("./index.html-xZM-XY16.js"),__vite__mapDeps([111,1]))),"v-177e1f06":C(()=>m(()=>import("./baz.html-oFKZe-U6.js"),__vite__mapDeps([112,1]))),"v-5d5821d6":C(()=>m(()=>import("./index.html-idwPg_0C.js"),__vite__mapDeps([113,1]))),"v-0b6fc5f8":C(()=>m(()=>import("./ray.html-FZ_hvbSE.js"),__vite__mapDeps([114,1]))),"v-3706649a":C(()=>m(()=>import("./404.html-_yth5SbW.js"),__vite__mapDeps([115,1]))),"v-7e978520":C(()=>m(()=>import("./index.html-xFacQNme.js"),__vite__mapDeps([116,1])))};var Nf=Symbol(""),qi=Symbol(""),Bf=Gt({key:"",path:"",title:"",lang:"",frontmatter:{},headers:[]}),fe=()=>{const e=ge(qi);if(!e)throw new Error("pageData() is called without provider.");return e},Ui=Symbol(""),Oe=()=>{const e=ge(Ui);if(!e)throw new Error("usePageFrontmatter() is called without provider.");return e},Gi=Symbol(""),Ff=()=>{const e=ge(Gi);if(!e)throw new Error("usePageHead() is called without provider.");return e},Hf=Symbol(""),Ki=Symbol(""),zs=()=>{const e=ge(Ki);if(!e)throw new Error("usePageLang() is called without provider.");return e},Yi=Symbol(""),jf=()=>{const e=ge(Yi);if(!e)throw new Error("usePageLayout() is called without provider.");return e},zf=K(Sf),Ws=Symbol(""),fn=()=>{const e=ge(Ws);if(!e)throw new Error("useRouteLocale() is called without provider.");return e},bn=K(xf),Ji=()=>bn,Qi=Symbol(""),qs=()=>{const e=ge(Qi);if(!e)throw new Error("useSiteLocaleData() is called without provider.");return e},Wf=Symbol(""),qf="Layout",Uf="NotFound",At=hr({resolveLayouts:e=>e.reduce((t,n)=>({...t,...n.layouts}),{}),resolvePageData:async e=>{const t=zf.value[e];return await(t==null?void 0:t())??Bf},resolvePageFrontmatter:e=>e.frontmatter,resolvePageHead:(e,t,n)=>{const r=ue(t.description)?t.description:n.description,o=[...ee(t.head)?t.head:[],...n.head,["title",{},e],["meta",{name:"description",content:r}]];return 
Df(o)},resolvePageHeadTitle:(e,t)=>[e.title,t.title].filter(n=>!!n).join(" | "),resolvePageLang:(e,t)=>e.lang||t.lang||"en-US",resolvePageLayout:(e,t)=>{let n;if(e.path){const r=e.frontmatter.layout;ue(r)?n=r:n=qf}else n=Uf;return t[n]},resolveRouteLocale:(e,t)=>Mf(e,t),resolveSiteLocaleData:(e,t)=>({...e,...e.locales[t]})}),mo=z({name:"ClientOnly",setup(e,t){const n=K(!1);return ve(()=>{n.value=!0}),()=>{var r,o;return n.value?(o=(r=t.slots).default)==null?void 0:o.call(r):null}}}),Us=z({name:"Content",props:{pageKey:{type:String,required:!1,default:""}},setup(e){const t=fe(),n=O(()=>Wi[e.pageKey||t.value.key]);return()=>n.value?d(n.value):d("div","404 Not Found")}}),ht=(e={})=>e,Me=e=>Bn(e)?e:`/saf-training/${zi(e)}`;const Gf={};/*! + * vue-router v4.2.5 + * (c) 2023 Eduardo San Martin Morote + * @license MIT + */const mn=typeof window<"u";function Kf(e){return e.__esModule||e[Symbol.toStringTag]==="Module"}const pe=Object.assign;function ko(e,t){const n={};for(const r in t){const o=t[r];n[r]=pt(o)?o.map(e):e(o)}return n}const er=()=>{},pt=Array.isArray,Yf=/\/$/,Jf=e=>e.replace(Yf,"");function Do(e,t,n="/"){let r,o={},s="",l="";const a=t.indexOf("#");let i=t.indexOf("?");return a=0&&(i=-1),i>-1&&(r=t.slice(0,i),s=t.slice(i+1,a>-1?a:t.length),o=e(s)),a>-1&&(r=r||t.slice(0,a),l=t.slice(a,t.length)),r=e0(r??t,n),{fullPath:r+(s&&"?")+s+l,path:r,query:o,hash:l}}function Qf(e,t){const n=t.query?e(t.query):"";return t.path+(n&&"?")+n+(t.hash||"")}function Ul(e,t){return!t||!e.toLowerCase().startsWith(t.toLowerCase())?e:e.slice(t.length)||"/"}function Xf(e,t,n){const r=t.matched.length-1,o=n.matched.length-1;return r>-1&&r===o&&Dn(t.matched[r],n.matched[o])&&Xi(t.params,n.params)&&e(t.query)===e(n.query)&&t.hash===n.hash}function Dn(e,t){return(e.aliasOf||e)===(t.aliasOf||t)}function Xi(e,t){if(Object.keys(e).length!==Object.keys(t).length)return!1;for(const n in e)if(!Zf(e[n],t[n]))return!1;return!0}function Zf(e,t){return pt(e)?Gl(e,t):pt(t)?Gl(t,e):e===t}function 
Gl(e,t){return pt(t)?e.length===t.length&&e.every((n,r)=>n===t[r]):e.length===1&&e[0]===t}function e0(e,t){if(e.startsWith("/"))return e;if(!e)return t;const n=t.split("/"),r=e.split("/"),o=r[r.length-1];(o===".."||o===".")&&r.push("");let s=n.length-1,l,a;for(l=0;l 1&&s--;else break;return n.slice(0,s).join("/")+"/"+r.slice(l-(l===r.length?1:0)).join("/")}var ur;(function(e){e.pop="pop",e.push="push"})(ur||(ur={}));var tr;(function(e){e.back="back",e.forward="forward",e.unknown=""})(tr||(tr={}));function t0(e){if(!e)if(mn){const t=document.querySelector("base");e=t&&t.getAttribute("href")||"/",e=e.replace(/^\w+:\/\/[^\/]+/,"")}else e="/";return e[0]!=="/"&&e[0]!=="#"&&(e="/"+e),Jf(e)}const n0=/^[^#]+#/;function r0(e,t){return e.replace(n0,"#")+t}function o0(e,t){const n=document.documentElement.getBoundingClientRect(),r=e.getBoundingClientRect();return{behavior:t.behavior,left:r.left-n.left-(t.left||0),top:r.top-n.top-(t.top||0)}}const go=()=>({left:window.pageXOffset,top:window.pageYOffset});function s0(e){let t;if("el"in e){const n=e.el,r=typeof n=="string"&&n.startsWith("#"),o=typeof n=="string"?r?document.getElementById(n.slice(1)):document.querySelector(n):n;if(!o)return;t=o0(o,e)}else t=e;"scrollBehavior"in document.documentElement.style?window.scrollTo(t):window.scrollTo(t.left!=null?t.left:window.pageXOffset,t.top!=null?t.top:window.pageYOffset)}function Kl(e,t){return(history.state?history.state.position-t:-1)+e}const ls=new Map;function l0(e,t){ls.set(e,t)}function a0(e){const t=ls.get(e);return ls.delete(e),t}let i0=()=>location.protocol+"//"+location.host;function Zi(e,t){const{pathname:n,search:r,hash:o}=t,s=e.indexOf("#");if(s>-1){let a=o.includes(e.slice(s))?e.slice(s).length:1,i=o.slice(a);return i[0]!=="/"&&(i="/"+i),Ul(i,"")}return Ul(n,e)+r+o}function c0(e,t,n,r){let o=[],s=[],l=null;const a=({state:p})=>{const v=Zi(e,location),_=n.value,w=t.value;let T=0;if(p){if(n.value=v,t.value=p,l&&l===_){l=null;return}T=w?p.position-w.position:0}else 
r(v);o.forEach(b=>{b(n.value,_,{delta:T,type:ur.pop,direction:T?T>0?tr.forward:tr.back:tr.unknown})})};function i(){l=n.value}function c(p){o.push(p);const v=()=>{const _=o.indexOf(p);_>-1&&o.splice(_,1)};return s.push(v),v}function u(){const{history:p}=window;p.state&&p.replaceState(pe({},p.state,{scroll:go()}),"")}function f(){for(const p of s)p();s=[],window.removeEventListener("popstate",a),window.removeEventListener("beforeunload",u)}return window.addEventListener("popstate",a),window.addEventListener("beforeunload",u,{passive:!0}),{pauseListeners:i,listen:c,destroy:f}}function Yl(e,t,n,r=!1,o=!1){return{back:e,current:t,forward:n,replaced:r,position:window.history.length,scroll:o?go():null}}function u0(e){const{history:t,location:n}=window,r={value:Zi(e,n)},o={value:t.state};o.value||s(r.value,{back:null,current:r.value,forward:null,position:t.length-1,replaced:!0,scroll:null},!0);function s(i,c,u){const f=e.indexOf("#"),p=f>-1?(n.host&&document.querySelector("base")?e:e.slice(f))+i:i0()+e+i;try{t[u?"replaceState":"pushState"](c,"",p),o.value=c}catch(v){console.error(v),n[u?"replace":"assign"](p)}}function l(i,c){const u=pe({},t.state,Yl(o.value.back,i,o.value.forward,!0),c,{position:o.value.position});s(i,u,!0),r.value=i}function a(i,c){const u=pe({},o.value,t.state,{forward:i,scroll:go()});s(u.current,u,!0);const f=pe({},Yl(r.value,i,null),{position:u.position+1},c);s(i,f,!1),r.value=i}return{location:r,state:o,push:a,replace:l}}function d0(e){e=t0(e);const t=u0(e),n=c0(e,t.state,t.location,t.replace);function r(s,l=!0){l||n.pauseListeners(),history.go(s)}const o=pe({location:"",base:e,go:r,createHref:r0.bind(null,e)},t,n);return Object.defineProperty(o,"location",{enumerable:!0,get:()=>t.location.value}),Object.defineProperty(o,"state",{enumerable:!0,get:()=>t.state.value}),o}function f0(e){return typeof e=="string"||e&&typeof e=="object"}function ec(e){return typeof e=="string"||typeof e=="symbol"}const Ot={path:"/",name:void 
0,params:{},query:{},hash:"",fullPath:"/",matched:[],meta:{},redirectedFrom:void 0},tc=Symbol("");var Jl;(function(e){e[e.aborted=4]="aborted",e[e.cancelled=8]="cancelled",e[e.duplicated=16]="duplicated"})(Jl||(Jl={}));function $n(e,t){return pe(new Error,{type:e,[tc]:!0},t)}function wt(e,t){return e instanceof Error&&tc in e&&(t==null||!!(e.type&t))}const Ql="[^/]+?",p0={sensitive:!1,strict:!1,start:!0,end:!0},v0=/[.+*?^${}()[\]/\\]/g;function h0(e,t){const n=pe({},p0,t),r=[];let o=n.start?"^":"";const s=[];for(const c of e){const u=c.length?[]:[90];n.strict&&!c.length&&(o+="/");for(let f=0;f t.length?t.length===1&&t[0]===80?1:-1:0}function g0(e,t){let n=0;const r=e.score,o=t.score;for(;n 0&&t[t.length-1]<0}const _0={type:0,value:""},b0=/[a-zA-Z0-9_]/;function y0(e){if(!e)return[[]];if(e==="/")return[[_0]];if(!e.startsWith("/"))throw new Error(`Invalid path "${e}"`);function t(v){throw new Error(`ERR (${n})/"${c}": ${v}`)}let n=0,r=n;const o=[];let s;function l(){s&&o.push(s),s=[]}let a=0,i,c="",u="";function f(){c&&(n===0?s.push({type:0,value:c}):n===1||n===2||n===3?(s.length>1&&(i==="*"||i==="+")&&t(`A repeatable param (${c}) must be alone in its segment. 
eg: '/:ids+.`),s.push({type:1,value:c,regexp:u,repeatable:i==="*"||i==="+",optional:i==="*"||i==="?"})):t("Invalid state to consume buffer"),c="")}function p(){c+=i}for(;a {l(P)}:er}function l(u){if(ec(u)){const f=r.get(u);f&&(r.delete(u),n.splice(n.indexOf(f),1),f.children.forEach(l),f.alias.forEach(l))}else{const f=n.indexOf(u);f>-1&&(n.splice(f,1),u.record.name&&r.delete(u.record.name),u.children.forEach(l),u.alias.forEach(l))}}function a(){return n}function i(u){let f=0;for(;f =0&&(u.record.path!==n[f].record.path||!nc(u,n[f]));)f++;n.splice(f,0,u),u.record.name&&!ea(u)&&r.set(u.record.name,u)}function c(u,f){let p,v={},_,w;if("name"in u&&u.name){if(p=r.get(u.name),!p)throw $n(1,{location:u});w=p.record.name,v=pe(Zl(f.params,p.keys.filter(P=>!P.optional).map(P=>P.name)),u.params&&Zl(u.params,p.keys.map(P=>P.name))),_=p.stringify(v)}else if("path"in u)_=u.path,p=n.find(P=>P.re.test(_)),p&&(v=p.parse(_),w=p.record.name);else{if(p=f.name?r.get(f.name):n.find(P=>P.re.test(f.path)),!p)throw $n(1,{location:u,currentLocation:f});w=p.record.name,v=pe({},f.params,u.params),_=p.stringify(v)}const T=[];let b=p;for(;b;)T.unshift(b.record),b=b.parent;return{name:w,path:_,params:v,matched:T,meta:O0(T)}}return e.forEach(u=>s(u)),{addRoute:s,resolve:c,removeRoute:l,getRoutes:a,getRecordMatcher:o}}function Zl(e,t){const n={};for(const r of t)r in e&&(n[r]=e[r]);return n}function T0(e){return{path:e.path,redirect:e.redirect,name:e.name,meta:e.meta||{},aliasOf:void 0,beforeEnter:e.beforeEnter,props:A0(e),children:e.children||[],instances:{},leaveGuards:new Set,updateGuards:new Set,enterCallbacks:{},components:"components"in e?e.components||null:e.component&&{default:e.component}}}function A0(e){const t={},n=e.props||!1;if("component"in e)t.default=n;else for(const r in e.components)t[r]=typeof n=="object"?n[r]:n;return t}function ea(e){for(;e;){if(e.record.aliasOf)return!0;e=e.parent}return!1}function O0(e){return e.reduce((t,n)=>pe(t,n.meta),{})}function ta(e,t){const 
n={};for(const r in e)n[r]=r in t?t[r]:e[r];return n}function nc(e,t){return t.children.some(n=>n===e||nc(e,n))}const rc=/#/g,L0=/&/g,I0=/\//g,R0=/=/g,P0=/\?/g,oc=/\+/g,C0=/%5B/g,S0=/%5D/g,sc=/%5E/g,x0=/%60/g,lc=/%7B/g,k0=/%7C/g,ac=/%7D/g,D0=/%20/g;function Gs(e){return encodeURI(""+e).replace(k0,"|").replace(C0,"[").replace(S0,"]")}function $0(e){return Gs(e).replace(lc,"{").replace(ac,"}").replace(sc,"^")}function as(e){return Gs(e).replace(oc,"%2B").replace(D0,"+").replace(rc,"%23").replace(L0,"%26").replace(x0,"`").replace(lc,"{").replace(ac,"}").replace(sc,"^")}function V0(e){return as(e).replace(R0,"%3D")}function M0(e){return Gs(e).replace(rc,"%23").replace(P0,"%3F")}function N0(e){return e==null?"":M0(e).replace(I0,"%2F")}function to(e){try{return decodeURIComponent(""+e)}catch{}return""+e}function B0(e){const t={};if(e===""||e==="?")return t;const r=(e[0]==="?"?e.slice(1):e).split("&");for(let o=0;o s&&as(s)):[r&&as(r)]).forEach(s=>{s!==void 0&&(t+=(t.length?"&":"")+n,s!=null&&(t+="="+s))})}return t}function F0(e){const t={};for(const n in e){const r=e[n];r!==void 0&&(t[n]=pt(r)?r.map(o=>o==null?null:""+o):r==null?r:""+r)}return t}const H0=Symbol(""),ra=Symbol(""),_o=Symbol(""),Ks=Symbol(""),is=Symbol("");function qn(){let e=[];function t(r){return e.push(r),()=>{const o=e.indexOf(r);o>-1&&e.splice(o,1)}}function n(){e=[]}return{add:t,list:()=>e.slice(),reset:n}}function Ht(e,t,n,r,o){const s=r&&(r.enterCallbacks[o]=r.enterCallbacks[o]||[]);return()=>new Promise((l,a)=>{const i=f=>{f===!1?a($n(4,{from:n,to:t})):f instanceof Error?a(f):f0(f)?a($n(2,{from:t,to:f})):(s&&r.enterCallbacks[o]===s&&typeof f=="function"&&s.push(f),l())},c=e.call(r&&r.instances[o],t,n,i);let u=Promise.resolve(c);e.length<3&&(u=u.then(i)),u.catch(f=>a(f))})}function $o(e,t,n,r){const o=[];for(const s of e)for(const l in s.components){let a=s.components[l];if(!(t!=="beforeRouteEnter"&&!s.instances[l]))if(j0(a)){const c=(a.__vccOpts||a)[t];c&&o.push(Ht(c,n,r,s,l))}else{let 
i=a();o.push(()=>i.then(c=>{if(!c)return Promise.reject(new Error(`Couldn't resolve component "${l}" at "${s.path}"`));const u=Kf(c)?c.default:c;s.components[l]=u;const p=(u.__vccOpts||u)[t];return p&&Ht(p,n,r,s,l)()}))}}return o}function j0(e){return typeof e=="object"||"displayName"in e||"props"in e||"__vccOpts"in e}function cs(e){const t=ge(_o),n=ge(Ks),r=O(()=>t.resolve(sn(e.to))),o=O(()=>{const{matched:i}=r.value,{length:c}=i,u=i[c-1],f=n.matched;if(!u||!f.length)return-1;const p=f.findIndex(Dn.bind(null,u));if(p>-1)return p;const v=oa(i[c-2]);return c>1&&oa(u)===v&&f[f.length-1].path!==v?f.findIndex(Dn.bind(null,i[c-2])):p}),s=O(()=>o.value>-1&&U0(n.params,r.value.params)),l=O(()=>o.value>-1&&o.value===n.matched.length-1&&Xi(n.params,r.value.params));function a(i={}){return q0(i)?t[sn(e.replace)?"replace":"push"](sn(e.to)).catch(er):Promise.resolve()}return{route:r,href:O(()=>r.value.href),isActive:s,isExactActive:l,navigate:a}}const z0=z({name:"RouterLink",compatConfig:{MODE:3},props:{to:{type:[String,Object],required:!0},replace:Boolean,activeClass:String,exactActiveClass:String,custom:Boolean,ariaCurrentValue:{type:String,default:"page"}},useLink:cs,setup(e,{slots:t}){const n=hr(cs(e)),{options:r}=ge(_o),o=O(()=>({[sa(e.activeClass,r.linkActiveClass,"router-link-active")]:n.isActive,[sa(e.exactActiveClass,r.linkExactActiveClass,"router-link-exact-active")]:n.isExactActive}));return()=>{const s=t.default&&t.default(n);return e.custom?s:d("a",{"aria-current":n.isExactActive?e.ariaCurrentValue:null,href:n.href,onClick:n.navigate,class:o.value},s)}}}),W0=z0;function q0(e){if(!(e.metaKey||e.altKey||e.ctrlKey||e.shiftKey)&&!e.defaultPrevented&&!(e.button!==void 0&&e.button!==0)){if(e.currentTarget&&e.currentTarget.getAttribute){const t=e.currentTarget.getAttribute("target");if(/\b_blank\b/i.test(t))return}return e.preventDefault&&e.preventDefault(),!0}}function U0(e,t){for(const n in t){const r=t[n],o=e[n];if(typeof r=="string"){if(r!==o)return!1}else 
if(!pt(o)||o.length!==r.length||r.some((s,l)=>s!==o[l]))return!1}return!0}function oa(e){return e?e.aliasOf?e.aliasOf.path:e.path:""}const sa=(e,t,n)=>e??t??n,G0=z({name:"RouterView",inheritAttrs:!1,props:{name:{type:String,default:"default"},route:Object},compatConfig:{MODE:3},setup(e,{attrs:t,slots:n}){const r=ge(is),o=O(()=>e.route||r.value),s=ge(ra,0),l=O(()=>{let c=sn(s);const{matched:u}=o.value;let f;for(;(f=u[c])&&!f.components;)c++;return c}),a=O(()=>o.value.matched[l.value]);On(ra,O(()=>l.value+1)),On(H0,a),On(is,o);const i=K();return de(()=>[i.value,a.value,e.name],([c,u,f],[p,v,_])=>{u&&(u.instances[f]=c,v&&v!==u&&c&&c===p&&(u.leaveGuards.size||(u.leaveGuards=v.leaveGuards),u.updateGuards.size||(u.updateGuards=v.updateGuards))),c&&u&&(!v||!Dn(u,v)||!p)&&(u.enterCallbacks[f]||[]).forEach(w=>w(c))},{flush:"post"}),()=>{const c=o.value,u=e.name,f=a.value,p=f&&f.components[u];if(!p)return la(n.default,{Component:p,route:c});const v=f.props[u],_=v?v===!0?c.params:typeof v=="function"?v(c):v:null,T=d(p,pe({},_,t,{onVnodeUnmounted:b=>{b.component.isUnmounted&&(f.instances[u]=null)},ref:i}));return la(n.default,{Component:T,route:c})||T}}});function la(e,t){if(!e)return null;const n=e(t);return n.length===1?n[0]:n}const ic=G0;function K0(e){const t=w0(e.routes,e),n=e.parseQuery||B0,r=e.stringifyQuery||na,o=e.history,s=qn(),l=qn(),a=qn(),i=De(Ot);let c=Ot;mn&&e.scrollBehavior&&"scrollRestoration"in history&&(history.scrollRestoration="manual");const u=ko.bind(null,R=>""+R),f=ko.bind(null,N0),p=ko.bind(null,to);function v(R,W){let M,J;return ec(R)?(M=t.getRecordMatcher(R),J=W):J=R,t.addRoute(J,M)}function _(R){const W=t.getRecordMatcher(R);W&&t.removeRoute(W)}function w(){return t.getRoutes().map(R=>R.record)}function T(R){return!!t.getRecordMatcher(R)}function b(R,W){if(W=pe({},W||i.value),typeof R=="string"){const E=Do(n,R,W.path),L=t.resolve({path:E.path},W),S=o.createHref(E.fullPath);return pe(E,L,{params:p(L.params),hash:to(E.hash),redirectedFrom:void 
0,href:S})}let M;if("path"in R)M=pe({},R,{path:Do(n,R.path,W.path).path});else{const E=pe({},R.params);for(const L in E)E[L]==null&&delete E[L];M=pe({},R,{params:f(E)}),W.params=f(W.params)}const J=t.resolve(M,W),ae=R.hash||"";J.params=u(p(J.params));const h=Qf(r,pe({},R,{hash:$0(ae),path:J.path})),g=o.createHref(h);return pe({fullPath:h,hash:ae,query:r===na?F0(R.query):R.query||{}},J,{redirectedFrom:void 0,href:g})}function P(R){return typeof R=="string"?Do(n,R,i.value.path):pe({},R)}function y(R,W){if(c!==R)return $n(8,{from:W,to:R})}function I(R){return G(R)}function V(R){return I(pe(P(R),{replace:!0}))}function A(R){const W=R.matched[R.matched.length-1];if(W&&W.redirect){const{redirect:M}=W;let J=typeof M=="function"?M(R):M;return typeof J=="string"&&(J=J.includes("?")||J.includes("#")?J=P(J):{path:J},J.params={}),pe({query:R.query,hash:R.hash,params:"path"in J?{}:R.params},J)}}function G(R,W){const M=c=b(R),J=i.value,ae=R.state,h=R.force,g=R.replace===!0,E=A(M);if(E)return G(pe(P(E),{state:typeof E=="object"?pe({},ae,E.state):ae,force:h,replace:g}),W||M);const L=M;L.redirectedFrom=W;let S;return!h&&Xf(r,J,M)&&(S=$n(16,{to:L,from:J}),Qe(J,J,!0,!1)),(S?Promise.resolve(S):F(L,J)).catch(x=>wt(x)?wt(x,2)?x:mt(x):Y(x,L,J)).then(x=>{if(x){if(wt(x,2))return G(pe({replace:g},P(x.to),{state:typeof x.to=="object"?pe({},ae,x.to.state):ae,force:h}),W||L)}else x=H(L,J,!0,g,ae);return Q(L,J,x),x})}function B(R,W){const M=y(R,W);return M?Promise.reject(M):Promise.resolve()}function D(R){const W=Et.values().next().value;return W&&typeof W.runWithContext=="function"?W.runWithContext(R):R()}function F(R,W){let M;const[J,ae,h]=Y0(R,W);M=$o(J.reverse(),"beforeRouteLeave",R,W);for(const E of J)E.leaveGuards.forEach(L=>{M.push(Ht(L,R,W))});const g=B.bind(null,R,W);return M.push(g),Pe(M).then(()=>{M=[];for(const E of s.list())M.push(Ht(E,R,W));return M.push(g),Pe(M)}).then(()=>{M=$o(ae,"beforeRouteUpdate",R,W);for(const E of ae)E.updateGuards.forEach(L=>{M.push(Ht(L,R,W))});return 
M.push(g),Pe(M)}).then(()=>{M=[];for(const E of h)if(E.beforeEnter)if(pt(E.beforeEnter))for(const L of E.beforeEnter)M.push(Ht(L,R,W));else M.push(Ht(E.beforeEnter,R,W));return M.push(g),Pe(M)}).then(()=>(R.matched.forEach(E=>E.enterCallbacks={}),M=$o(h,"beforeRouteEnter",R,W),M.push(g),Pe(M))).then(()=>{M=[];for(const E of l.list())M.push(Ht(E,R,W));return M.push(g),Pe(M)}).catch(E=>wt(E,8)?E:Promise.reject(E))}function Q(R,W,M){a.list().forEach(J=>D(()=>J(R,W,M)))}function H(R,W,M,J,ae){const h=y(R,W);if(h)return h;const g=W===Ot,E=mn?history.state:{};M&&(J||g?o.replace(R.fullPath,pe({scroll:g&&E&&E.scroll},ae)):o.push(R.fullPath,ae)),i.value=R,Qe(R,W,M,g),mt()}let te;function Te(){te||(te=o.listen((R,W,M)=>{if(!gt.listening)return;const J=b(R),ae=A(J);if(ae){G(pe(ae,{replace:!0}),J).catch(er);return}c=J;const h=i.value;mn&&l0(Kl(h.fullPath,M.delta),go()),F(J,h).catch(g=>wt(g,12)?g:wt(g,2)?(G(g.to,J).then(E=>{wt(E,20)&&!M.delta&&M.type===ur.pop&&o.go(-1,!1)}).catch(er),Promise.reject()):(M.delta&&o.go(-M.delta,!1),Y(g,J,h))).then(g=>{g=g||H(J,h,!1),g&&(M.delta&&!wt(g,8)?o.go(-M.delta,!1):M.type===ur.pop&&wt(g,20)&&o.go(-1,!1)),Q(J,h,g)}).catch(er)}))}let be=qn(),U=qn(),ne;function Y(R,W,M){mt(R);const J=U.list();return J.length?J.forEach(ae=>ae(R,W,M)):console.error(R),Promise.reject(R)}function Re(){return ne&&i.value!==Ot?Promise.resolve():new Promise((R,W)=>{be.add([R,W])})}function mt(R){return ne||(ne=!R,Te(),be.list().forEach(([W,M])=>R?M(R):W()),be.reset()),R}function Qe(R,W,M,J){const{scrollBehavior:ae}=e;if(!mn||!ae)return Promise.resolve();const h=!M&&a0(Kl(R.fullPath,0))||(J||!M)&&history.state&&history.state.scroll||null;return Kt().then(()=>ae(R,W,h)).then(g=>g&&s0(g)).catch(g=>Y(g,R,W))}const Se=R=>o.go(R);let qe;const Et=new 
Set,gt={currentRoute:i,listening:!0,addRoute:v,removeRoute:_,hasRoute:T,getRoutes:w,resolve:b,options:e,push:I,replace:V,go:Se,back:()=>Se(-1),forward:()=>Se(1),beforeEach:s.add,beforeResolve:l.add,afterEach:a.add,onError:U.add,isReady:Re,install(R){const W=this;R.component("RouterLink",W0),R.component("RouterView",ic),R.config.globalProperties.$router=W,Object.defineProperty(R.config.globalProperties,"$route",{enumerable:!0,get:()=>sn(i)}),mn&&!qe&&i.value===Ot&&(qe=!0,I(o.location).catch(ae=>{}));const M={};for(const ae in Ot)Object.defineProperty(M,ae,{get:()=>i.value[ae],enumerable:!0});R.provide(_o,W),R.provide(Ks,Qa(M)),R.provide(is,i);const J=R.unmount;Et.add(R),R.unmount=function(){Et.delete(R),Et.size<1&&(c=Ot,te&&te(),te=null,i.value=Ot,qe=!1,ne=!1),J()}}};function Pe(R){return R.reduce((W,M)=>W.then(()=>D(M)),Promise.resolve())}return gt}function Y0(e,t){const n=[],r=[],o=[],s=Math.max(t.matched.length,e.matched.length);for(let l=0;l Dn(c,a))?r.push(a):n.push(a));const i=e.matched[l];i&&(t.matched.find(c=>Dn(c,i))||o.push(i))}return[n,r,o]}function Je(){return ge(_o)}function St(){return ge(Ks)}var Be=Uint8Array,yn=Uint16Array,J0=Int32Array,cc=new Be([0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,0,0,0]),uc=new Be([0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13,0,0]),Q0=new Be([16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15]),dc=function(e,t){for(var n=new yn(31),r=0;r<31;++r)n[r]=t+=1<>1|(_e&21845)<<1;$t=($t&52428)>>2|($t&13107)<<2,$t=($t&61680)>>4|($t&3855)<<4,us[_e]=(($t&65280)>>8|($t&255)<<8)>>1}var nr=function(e,t,n){for(var r=e.length,o=0,s=new yn(t);o >i]=c}else for(a=new yn(r),o=0;o >15-e[o]);return a},Er=new Be(288);for(var _e=0;_e<144;++_e)Er[_e]=8;for(var _e=144;_e<256;++_e)Er[_e]=9;for(var _e=256;_e<280;++_e)Er[_e]=7;for(var _e=280;_e<288;++_e)Er[_e]=8;var vc=new Be(32);for(var _e=0;_e<32;++_e)vc[_e]=5;var t2=nr(Er,9,1),n2=nr(vc,5,1),Vo=function(e){for(var t=e[0],n=1;n t&&(t=e[n]);return 
t},ut=function(e,t,n){var r=t/8|0;return(e[r]|e[r+1]<<8)>>(t&7)&n},Mo=function(e,t){var n=t/8|0;return(e[n]|e[n+1]<<8|e[n+2]<<16)>>(t&7)},r2=function(e){return(e+7)/8|0},Ys=function(e,t,n){return(t==null||t<0)&&(t=0),(n==null||n>e.length)&&(n=e.length),new Be(e.subarray(t,n))},o2=["unexpected EOF","invalid block type","invalid length/literal","invalid distance","stream finished","no stream handler",,"no callback","invalid UTF-8 data","extra field too long","date not in range 1980-2099","filename too long","stream finishing","invalid zip data"],et=function(e,t,n){var r=new Error(t||o2[e]);if(r.code=e,Error.captureStackTrace&&Error.captureStackTrace(r,et),!n)throw r;return r},s2=function(e,t,n,r){var o=e.length,s=r?r.length:0;if(!o||t.f&&!t.l)return n||new Be(0);var l=!n,a=l||t.i!=2,i=t.i;l&&(n=new Be(o*3));var c=function(ae){var h=n.length;if(ae>h){var g=new Be(Math.max(h*2,ae));g.set(n),n=g}},u=t.f||0,f=t.p||0,p=t.b||0,v=t.l,_=t.d,w=t.m,T=t.n,b=o*8;do{if(!v){u=ut(e,f,1);var P=ut(e,f+1,3);if(f+=3,P)if(P==1)v=t2,_=n2,w=9,T=5;else if(P==2){var A=ut(e,f,31)+257,G=ut(e,f+10,15)+4,B=A+ut(e,f+5,31)+1;f+=14;for(var D=new Be(B),F=new Be(19),Q=0;Q >4;if(y<16)D[Q++]=y;else{var U=0,ne=0;for(y==16?(ne=3+ut(e,f,3),f+=2,U=D[Q-1]):y==17?(ne=3+ut(e,f,7),f+=3):y==18&&(ne=11+ut(e,f,127),f+=7);ne--;)D[Q++]=U}}var Y=D.subarray(0,A),Re=D.subarray(A);w=Vo(Y),T=Vo(Re),v=nr(Y,w,1),_=nr(Re,T,1)}else et(1);else{var y=r2(f)+4,I=e[y-4]|e[y-3]<<8,V=y+I;if(V>o){i&&et(0);break}a&&c(p+I),n.set(e.subarray(y,V),p),t.b=p+=I,t.p=f=V*8,t.f=u;continue}if(f>b){i&&et(0);break}}a&&c(p+131072);for(var mt=(1< >4;if(f+=U&15,f>b){i&&et(0);break}if(U||et(2),qe<256)n[p++]=qe;else if(qe==256){Se=f,v=null;break}else{var Et=qe-254;if(qe>264){var Q=qe-257,gt=cc[Q];Et=ut(e,f,(1< >4;Pe||et(3),f+=Pe&15;var Re=e2[R];if(R>3){var gt=uc[R];Re+=Mo(e,f)&(1< b){i&&et(0);break}a&&c(p+131072);var W=p+Et;if(p >4>7||(e[0]<<8|e[1])%31)&&et(6,"invalid zlib data"),(e[1]>>5&1)==+!t&&et(6,"invalid zlib data: 
"+(e[1]&32?"need":"unexpected")+" dictionary"),(e[1]>>3&4)+2};function i2(e,t){return s2(e.subarray(a2(e,t&&t.dictionary),-4),{i:2},t&&t.out,t&&t.dictionary)}var aa=typeof TextEncoder<"u"&&new TextEncoder,ds=typeof TextDecoder<"u"&&new TextDecoder,c2=0;try{ds.decode(l2,{stream:!0}),c2=1}catch{}var u2=function(e){for(var t="",n=0;;){var r=e[n++],o=(r>127)+(r>223)+(r>239);if(n+o>e.length)return{s:t,r:Ys(e,n-1)};o?o==3?(r=((r&15)<<18|(e[n++]&63)<<12|(e[n++]&63)<<6|e[n++]&63)-65536,t+=String.fromCharCode(55296|r>>10,56320|r&1023)):o&1?t+=String.fromCharCode((r&31)<<6|e[n++]&63):t+=String.fromCharCode((r&15)<<12|(e[n++]&63)<<6|e[n++]&63):t+=String.fromCharCode(r)}};function d2(e,t){if(t){for(var n=new Be(e.length),r=0;r >1)),l=0,a=function(u){s[l++]=u},r=0;r s.length){var i=new Be(l+8+(o-r<<1));i.set(s),s=i}var c=e.charCodeAt(r);c<128||t?a(c):c<2048?(a(192|c>>6),a(128|c&63)):c>55295&&c<57344?(c=65536+(c&1047552)|e.charCodeAt(++r)&1023,a(240|c>>18),a(128|c>>12&63),a(128|c>>6&63),a(128|c&63)):(a(224|c>>12),a(128|c>>6&63),a(128|c&63))}return Ys(s,0,l)}function f2(e,t){if(t){for(var n="",r=0;r {var r;return d("svg",{xmlns:"http://www.w3.org/2000/svg",class:["icon",`${e}-icon`],viewBox:"0 0 1024 1024",fill:t,"aria-label":`${e} icon`},(r=n.default)==null?void 0:r.call(n))};we.displayName="IconBase";const wr=({size:e=48,stroke:t=4,wrapper:n=!0,height:r=2*e})=>{const o=d("svg",{xmlns:"http://www.w3.org/2000/svg",width:e,height:e,preserveAspectRatio:"xMidYMid",viewBox:"25 25 50 
50"},[d("animateTransform",{attributeName:"transform",type:"rotate",dur:"2s",keyTimes:"0;1",repeatCount:"indefinite",values:"0;360"}),d("circle",{cx:"50",cy:"50",r:"20",fill:"none",stroke:"currentColor","stroke-width":t,"stroke-linecap":"round"},[d("animate",{attributeName:"stroke-dasharray",dur:"1.5s",keyTimes:"0;0.5;1",repeatCount:"indefinite",values:"1,200;90,200;1,200"}),d("animate",{attributeName:"stroke-dashoffset",dur:"1.5s",keyTimes:"0;0.5;1",repeatCount:"indefinite",values:"0;-35px;-125px"})])]);return n?d("div",{class:"loading-icon-wrapper",style:`display:flex;align-items:center;justify-content:center;height:${r}px`},o):o};wr.displayName="LoadingIcon";const hc=(e,{slots:t})=>{var n;return(n=t.default)==null?void 0:n.call(t)},p2=(e="")=>{if(e){if(typeof e=="number")return new Date(e);const t=Date.parse(e.toString());if(!Number.isNaN(t))return new Date(t)}return null},mc=(e,t)=>{let n=1;for(let r=0;r >6;return n+=n<<3,n^=n>>11,n%t},gc=Array.isArray,v2=e=>typeof e=="function",h2=e=>typeof e=="string";var Js=e=>/^(https?:)?\/\//.test(e),m2=/.md((\?|#).*)?$/,g2=(e,t="/")=>!!(Js(e)||e.startsWith("/")&&!e.startsWith(t)&&!m2.test(e)),_c=e=>Object.prototype.toString.call(e)==="[object Object]";function _2(){const e=K(!1);return Nn()&&ve(()=>{e.value=!0}),e}function b2(e){return _2(),O(()=>!!e())}const y2=e=>typeof e=="function",No=e=>typeof e=="number",Pt=e=>typeof e=="string",cn=(e,t)=>Pt(e)&&e.startsWith(t),Vr=(e,t)=>Pt(e)&&e.endsWith(t),bc=Object.entries,E2=Object.fromEntries,Yt=Object.keys,w2=e=>(e.endsWith(".md")&&(e=`${e.slice(0,-3)}.html`),!e.endsWith("/")&&!e.endsWith(".html")&&(e=`${e}.html`),e=e.replace(/(^|\/)(?:README|index).html$/i,"$1"),e),yc=e=>{const[t,n=""]=e.split("#");return t?`${w2(t)}${n?`#${n}`:""}`:e},ia=e=>_c(e)&&Pt(e.name),ca=(e,t=!1)=>e?gc(e)?e.map(n=>Pt(n)?{name:n}:ia(n)?n:null).filter(n=>n!==null):Pt(e)?[{name:e}]:ia(e)?[e]:(console.error(`Expect "author" to be \`AuthorInfo[] | AuthorInfo | string[] | string ${t?"":"| false"} | 
undefined\`, but got`,e),[]):[],Ec=(e,t)=>{if(e){if(gc(e)&&e.every(Pt))return e;if(Pt(e))return[e];console.error(`Expect ${t||"value"} to be \`string[] | string | undefined\`, but got`,e)}return[]},T2=e=>Ec(e,"category"),A2=e=>Ec(e,"tag"),bo=e=>cn(e,"/"),wc=/#.*$/u,O2=e=>{const t=wc.exec(e);return t?t[0]:""},ua=e=>decodeURI(e).replace(wc,"").replace(/(index)?\.html$/i,"").replace(/(README|index)?\.md$/i,""),Tc=(e,t)=>{if(t===void 0)return!1;const n=ua(e.path),r=ua(t),o=O2(t);return o?o===e.hash&&(!r||n===r):n===r},dr=e=>{const t=atob(e);return f2(i2(d2(t,!0)))},L2=e=>Js(e)?e:`https://github.com/${e}`,Ac=e=>!Js(e)||/github\.com/.test(e)?"GitHub":/bitbucket\.org/.test(e)?"Bitbucket":/gitlab\.com/.test(e)?"GitLab":/gitee\.com/.test(e)?"Gitee":null,no=(e,...t)=>{const n=e.resolve(...t),r=n.matched[n.matched.length-1];if(!(r!=null&&r.redirect))return n;const{redirect:o}=r,s=v2(o)?o(n):o,l=h2(s)?{path:s}:s;return no(e,{hash:n.hash,query:n.query,params:n.params,...l})},I2=e=>{var t;if(!(e.metaKey||e.altKey||e.ctrlKey||e.shiftKey)&&!e.defaultPrevented&&!(e.button!==void 0&&e.button!==0)&&!(e.currentTarget&&((t=e.currentTarget.getAttribute("target"))!=null&&t.match(/\b_blank\b/i))))return e.preventDefault(),!0},Ye=({to:e="",class:t="",...n},{slots:r})=>{var a;const o=Je(),s=yc(e),l=(i={})=>I2(i)?o.push(e).catch():Promise.resolve();return d("a",{...n,class:["vp-link",t],href:cn(s,"/")?Me(s):s,onClick:l},(a=r.default)==null?void 0:a.call(r))};Ye.displayName="VPLink";const Oc=()=>d(we,{name:"github"},()=>d("path",{d:"M511.957 21.333C241.024 21.333 21.333 240.981 21.333 512c0 216.832 140.544 400.725 335.574 465.664 24.49 4.395 32.256-10.07 32.256-23.083 0-11.69.256-44.245 0-85.205-136.448 29.61-164.736-64.64-164.736-64.64-22.315-56.704-54.4-71.765-54.4-71.765-44.587-30.464 3.285-29.824 3.285-29.824 49.195 3.413 75.179 50.517 75.179 50.517 43.776 75.008 114.816 53.333 142.762 40.79 4.523-31.66 17.152-53.377 31.19-65.537-108.971-12.458-223.488-54.485-223.488-242.602 0-53.547 
19.114-97.323 50.517-131.67-5.035-12.33-21.93-62.293 4.779-129.834 0 0 41.258-13.184 134.912 50.346a469.803 469.803 0 0 1 122.88-16.554c41.642.213 83.626 5.632 122.88 16.554 93.653-63.488 134.784-50.346 134.784-50.346 26.752 67.541 9.898 117.504 4.864 129.834 31.402 34.347 50.474 78.123 50.474 131.67 0 188.586-114.73 230.016-224.042 242.09 17.578 15.232 33.578 44.672 33.578 90.454v135.85c0 13.142 7.936 27.606 32.854 22.87C862.25 912.597 1002.667 728.747 1002.667 512c0-271.019-219.648-490.667-490.71-490.667z"}));Oc.displayName="GitHubIcon";const Lc=()=>d(we,{name:"gitlab"},()=>d("path",{d:"M229.333 78.688C223.52 62 199.895 62 193.895 78.688L87.958 406.438h247.5c-.188 0-106.125-327.75-106.125-327.75zM33.77 571.438c-4.875 15 .563 31.687 13.313 41.25l464.812 345L87.77 406.438zm301.5-165 176.813 551.25 176.812-551.25zm655.125 165-54-165-424.312 551.25 464.812-345c12.938-9.563 18.188-26.25 13.5-41.25zM830.27 78.688c-5.812-16.688-29.437-16.688-35.437 0l-106.125 327.75h247.5z"}));Lc.displayName="GitLabIcon";const Ic=()=>d(we,{name:"gitee"},()=>d("path",{d:"M512 992C246.92 992 32 777.08 32 512S246.92 32 512 32s480 214.92 480 480-214.92 480-480 480zm242.97-533.34H482.39a23.7 23.7 0 0 0-23.7 23.7l-.03 59.28c0 13.08 10.59 23.7 23.7 23.7h165.96a23.7 23.7 0 0 1 23.7 23.7v11.85a71.1 71.1 0 0 1-71.1 71.1H375.71a23.7 23.7 0 0 1-23.7-23.7V423.11a71.1 71.1 0 0 1 71.1-71.1h331.8a23.7 23.7 0 0 0 23.7-23.7l.06-59.25a23.73 23.73 0 0 0-23.7-23.73H423.11a177.78 177.78 0 0 0-177.78 177.75v331.83c0 13.08 10.62 23.7 23.7 23.7h349.62a159.99 159.99 0 0 0 159.99-159.99V482.33a23.7 23.7 0 0 0-23.7-23.7z"}));Ic.displayName="GiteeIcon";const Rc=()=>d(we,{name:"bitbucket"},()=>d("path",{d:"M575.256 490.862c6.29 47.981-52.005 85.723-92.563 61.147-45.714-20.004-45.714-92.562-1.133-113.152 38.29-23.442 93.696 7.424 93.696 52.005zm63.451-11.996c-10.276-81.152-102.29-134.839-177.152-101.156-47.433 21.138-79.433 71.424-77.129 124.562 2.853 69.705 69.157 126.866 138.862 120.576S647.3 548.571 638.708 
478.83zm136.558-309.723c-25.161-33.134-67.986-38.839-105.728-45.13-106.862-17.151-216.576-17.7-323.438 1.134-35.438 5.706-75.447 11.996-97.719 43.996 36.572 34.304 88.576 39.424 135.424 45.129 84.553 10.862 171.447 11.447 256 .585 47.433-5.705 99.987-10.276 135.424-45.714zm32.585 591.433c-16.018 55.99-6.839 131.438-66.304 163.986-102.29 56.576-226.304 62.867-338.87 42.862-59.43-10.862-129.135-29.696-161.72-85.723-14.3-54.858-23.442-110.848-32.585-166.84l3.438-9.142 10.276-5.157c170.277 112.567 408.576 112.567 579.438 0 26.844 8.01 6.84 40.558 6.29 60.014zm103.424-549.157c-19.42 125.148-41.728 249.71-63.415 374.272-6.29 36.572-41.728 57.162-71.424 72.558-106.862 53.724-231.424 62.866-348.562 50.286-79.433-8.558-160.585-29.696-225.134-79.433-30.28-23.443-30.28-63.415-35.986-97.134-20.005-117.138-42.862-234.277-57.161-352.585 6.839-51.42 64.585-73.728 107.447-89.71 57.16-21.138 118.272-30.866 178.87-36.571 129.134-12.58 261.157-8.01 386.304 28.562 44.581 13.13 92.563 31.415 122.844 69.705 13.714 17.7 9.143 40.01 6.29 60.014z"}));Rc.displayName="BitbucketIcon";const Pc=()=>d(we,{name:"source"},()=>d("path",{d:"M601.92 475.2c0 76.428-8.91 83.754-28.512 99.594-14.652 11.88-43.956 14.058-78.012 16.434-18.81 1.386-40.392 2.97-62.172 6.534-18.612 2.97-36.432 9.306-53.064 17.424V299.772c37.818-21.978 63.36-62.766 63.36-109.692 0-69.894-56.826-126.72-126.72-126.72S190.08 120.186 190.08 190.08c0 46.926 25.542 87.714 63.36 109.692v414.216c-37.818 21.978-63.36 62.766-63.36 109.692 0 69.894 56.826 126.72 126.72 126.72s126.72-56.826 126.72-126.72c0-31.086-11.286-59.598-29.7-81.576 13.266-9.504 27.522-17.226 39.996-19.206 16.038-2.574 32.868-3.762 50.688-5.148 48.312-3.366 103.158-7.326 148.896-44.55 61.182-49.698 74.25-103.158 75.24-187.902V475.2h-126.72zM316.8 126.72c34.848 0 63.36 28.512 63.36 63.36s-28.512 63.36-63.36 63.36-63.36-28.512-63.36-63.36 28.512-63.36 63.36-63.36zm0 760.32c-34.848 0-63.36-28.512-63.36-63.36s28.512-63.36 63.36-63.36 63.36 28.512 63.36 63.36-28.512 
63.36-63.36 63.36zM823.68 158.4h-95.04V63.36h-126.72v95.04h-95.04v126.72h95.04v95.04h126.72v-95.04h95.04z"}));Pc.displayName="SourceIcon";const lt=(e,t)=>{var r;const n=(r=(t==null?void 0:t._instance)||Nn())==null?void 0:r.appContext.components;return n?e in n||it(e)in n||vr(it(e))in n:!1},R2=()=>b2(()=>typeof window<"u"&&window.navigator&&"userAgent"in window.navigator),P2=()=>{const e=R2();return O(()=>e.value&&/\b(?:Android|iPhone)/i.test(navigator.userAgent))},Tr=e=>{const t=fn();return O(()=>e[t.value])};function da(e,t){var n;const r=De();return di(()=>{r.value=e()},{...t,flush:(n=t==null?void 0:t.flush)!=null?n:"sync"}),Gt(r)}function Qs(e,t){let n,r,o;const s=K(!0),l=()=>{s.value=!0,o()};de(e,l,{flush:"sync"});const a=typeof t=="function"?t:t.get,i=typeof t=="function"?void 0:t.set,c=ri((u,f)=>(r=u,o=f,{get(){return s.value&&(n=a(),s.value=!1),r(),n},set(p){i==null||i(p)}}));return Object.isExtensible(c)&&(c.trigger=l),c}function Jt(e){return Ba()?(ud(e),!0):!1}function at(e){return typeof e=="function"?e():sn(e)}const Ar=typeof window<"u"&&typeof document<"u";typeof WorkerGlobalScope<"u"&&globalThis instanceof WorkerGlobalScope;const C2=Object.prototype.toString,S2=e=>C2.call(e)==="[object Object]",an=()=>{},fs=x2();function x2(){var e,t;return Ar&&((e=window==null?void 0:window.navigator)==null?void 0:e.userAgent)&&(/iP(ad|hone|od)/.test(window.navigator.userAgent)||((t=window==null?void 0:window.navigator)==null?void 0:t.maxTouchPoints)>2&&/iPad|Macintosh/.test(window==null?void 0:window.navigator.userAgent))}function Cc(e,t){function n(...r){return new Promise((o,s)=>{Promise.resolve(e(()=>t.apply(this,r),{fn:t,thisArg:this,args:r})).then(o).catch(s)})}return n}const Sc=e=>e();function k2(e,t=!0,n=!0,r=!1){let o=0,s,l=!0,a=an,i;const c=()=>{s&&(clearTimeout(s),s=void 0,a(),a=an)};return f=>{const p=at(e),v=Date.now()-o,_=()=>i=f();return c(),p<=0?(o=Date.now(),_()):(v>p&&(n||!l)?(o=Date.now(),_()):t&&(i=new 
Promise((w,T)=>{a=r?T:w,s=setTimeout(()=>{o=Date.now(),l=!0,w(_()),c()},Math.max(0,p-v))})),!n&&!s&&(s=setTimeout(()=>l=!0,p)),l=!1,i)}}function D2(e=Sc){const t=K(!0);function n(){t.value=!1}function r(){t.value=!0}const o=(...s)=>{t.value&&e(...s)};return{isActive:Gt(t),pause:n,resume:r,eventFilter:o}}function $2(e){let t;function n(){return t||(t=e()),t}return n.reset=async()=>{const r=t;t=void 0,r&&await r},n}function V2(e){return e||Nn()}function M2(...e){if(e.length!==1)return uo(...e);const t=e[0];return typeof t=="function"?Gt(ri(()=>({get:t,set:an}))):K(t)}function N2(e,t=200,n=!1,r=!0,o=!1){return Cc(k2(t,n,r,o),e)}function B2(e,t,n={}){const{eventFilter:r=Sc,...o}=n;return de(e,Cc(r,t),o)}function F2(e,t,n={}){const{eventFilter:r,...o}=n,{eventFilter:s,pause:l,resume:a,isActive:i}=D2(r);return{stop:B2(e,t,{...o,eventFilter:s}),pause:l,resume:a,isActive:i}}function yo(e,t=!0,n){V2()?ve(e,n):t?e():Kt(e)}function H2(e,t,n={}){const{immediate:r=!0}=n,o=K(!1);let s=null;function l(){s&&(clearTimeout(s),s=null)}function a(){o.value=!1,l()}function i(...c){l(),o.value=!0,s=setTimeout(()=>{o.value=!1,s=null,e(...c)},at(t))}return r&&(o.value=!0,Ar&&i()),Jt(a),{isPending:Gt(o),start:i,stop:a}}function ro(e=!1,t={}){const{truthyValue:n=!0,falsyValue:r=!1}=t,o=$e(e),s=K(e);function l(a){if(arguments.length)return s.value=a,s.value;{const i=at(n);return s.value=s.value===i?at(r):i,s.value}}return o?l:[s,l]}function He(e){var t;const n=at(e);return(t=n==null?void 0:n.$el)!=null?t:n}const vt=Ar?window:void 0,xc=Ar?window.document:void 0,kc=Ar?window.navigator:void 0;function Ie(...e){let t,n,r,o;if(typeof e[0]=="string"||Array.isArray(e[0])?([n,r,o]=e,t=vt):[t,n,r,o]=e,!t)return an;Array.isArray(n)||(n=[n]),Array.isArray(r)||(r=[r]);const s=[],l=()=>{s.forEach(u=>u()),s.length=0},a=(u,f,p,v)=>(u.addEventListener(f,p,v),()=>u.removeEventListener(f,p,v)),i=de(()=>[He(t),at(o)],([u,f])=>{if(l(),!u)return;const 
p=S2(f)?{...f}:f;s.push(...n.flatMap(v=>r.map(_=>a(u,v,_,p))))},{immediate:!0,flush:"post"}),c=()=>{i(),l()};return Jt(c),c}let fa=!1;function j2(e,t,n={}){const{window:r=vt,ignore:o=[],capture:s=!0,detectIframe:l=!1}=n;if(!r)return an;fs&&!fa&&(fa=!0,Array.from(r.document.body.children).forEach(p=>p.addEventListener("click",an)),r.document.documentElement.addEventListener("click",an));let a=!0;const i=p=>o.some(v=>{if(typeof v=="string")return Array.from(r.document.querySelectorAll(v)).some(_=>_===p.target||p.composedPath().includes(_));{const _=He(v);return _&&(p.target===_||p.composedPath().includes(_))}}),u=[Ie(r,"click",p=>{const v=He(e);if(!(!v||v===p.target||p.composedPath().includes(v))){if(p.detail===0&&(a=!i(p)),!a){a=!0;return}t(p)}},{passive:!0,capture:s}),Ie(r,"pointerdown",p=>{const v=He(e);a=!i(p)&&!!(v&&!p.composedPath().includes(v))},{passive:!0}),l&&Ie(r,"blur",p=>{setTimeout(()=>{var v;const _=He(e);((v=r.document.activeElement)==null?void 0:v.tagName)==="IFRAME"&&!(_!=null&&_.contains(r.document.activeElement))&&t(p)},0)})].filter(Boolean);return()=>u.forEach(p=>p())}function z2(){const e=K(!1);return Nn()&&ve(()=>{e.value=!0}),e}function Fn(e){const t=z2();return O(()=>(t.value,!!e()))}function Dc(e,t={}){const{window:n=vt}=t,r=Fn(()=>n&&"matchMedia"in n&&typeof n.matchMedia=="function");let o;const s=K(!1),l=c=>{s.value=c.matches},a=()=>{o&&("removeEventListener"in o?o.removeEventListener("change",l):o.removeListener(l))},i=di(()=>{r.value&&(a(),o=n.matchMedia(at(e)),"addEventListener"in o?o.addEventListener("change",l):o.addListener(l),s.value=o.matches)});return Jt(()=>{i(),a(),o=void 0}),s}function pa(e,t={}){const{controls:n=!1,navigator:r=kc}=t,o=Fn(()=>r&&"permissions"in r);let s;const l=typeof e=="string"?{name:e}:e,a=K(),i=()=>{s&&(a.value=s.state)},c=$2(async()=>{if(o.value){if(!s)try{s=await r.permissions.query(l),Ie(s,"change",i),i()}catch{a.value="prompt"}return s}});return c(),n?{state:a,isSupported:o,query:c}:a}function 
W2(e={}){const{navigator:t=kc,read:n=!1,source:r,copiedDuring:o=1500,legacy:s=!1}=e,l=Fn(()=>t&&"clipboard"in t),a=pa("clipboard-read"),i=pa("clipboard-write"),c=O(()=>l.value||s),u=K(""),f=K(!1),p=H2(()=>f.value=!1,o);function v(){l.value&&a.value!=="denied"?t.clipboard.readText().then(b=>{u.value=b}):u.value=T()}c.value&&n&&Ie(["copy","cut"],v);async function _(b=at(r)){c.value&&b!=null&&(l.value&&i.value!=="denied"?await t.clipboard.writeText(b):w(b),u.value=b,f.value=!0,p.start())}function w(b){const P=document.createElement("textarea");P.value=b??"",P.style.position="absolute",P.style.opacity="0",document.body.appendChild(P),P.select(),document.execCommand("copy"),P.remove()}function T(){var b,P,y;return(y=(P=(b=document==null?void 0:document.getSelection)==null?void 0:b.call(document))==null?void 0:P.toString())!=null?y:""}return{isSupported:c,text:u,copied:f,copy:_}}const Mr=typeof globalThis<"u"?globalThis:typeof window<"u"?window:typeof global<"u"?global:typeof self<"u"?self:{},Nr="__vueuse_ssr_handlers__",q2=U2();function U2(){return Nr in Mr||(Mr[Nr]=Mr[Nr]||{}),Mr[Nr]}function G2(e,t){return q2[e]||t}function K2(e){return e==null?"any":e instanceof Set?"set":e instanceof Map?"map":e instanceof Date?"date":typeof e=="boolean"?"boolean":typeof e=="string"?"string":typeof e=="object"?"object":Number.isNaN(e)?"any":"number"}const Y2={boolean:{read:e=>e==="true",write:e=>String(e)},object:{read:e=>JSON.parse(e),write:e=>JSON.stringify(e)},number:{read:e=>Number.parseFloat(e),write:e=>String(e)},any:{read:e=>e,write:e=>String(e)},string:{read:e=>e,write:e=>String(e)},map:{read:e=>new Map(JSON.parse(e)),write:e=>JSON.stringify(Array.from(e.entries()))},set:{read:e=>new Set(JSON.parse(e)),write:e=>JSON.stringify(Array.from(e))},date:{read:e=>new Date(e),write:e=>e.toISOString()}},va="vueuse-storage";function Hn(e,t,n,r={}){var 
o;const{flush:s="pre",deep:l=!0,listenToStorageChanges:a=!0,writeDefaults:i=!0,mergeDefaults:c=!1,shallow:u,window:f=vt,eventFilter:p,onError:v=D=>{console.error(D)},initOnMounted:_}=r,w=(u?De:K)(typeof t=="function"?t():t);if(!n)try{n=G2("getDefaultStorage",()=>{var D;return(D=vt)==null?void 0:D.localStorage})()}catch(D){v(D)}if(!n)return w;const T=at(t),b=K2(T),P=(o=r.serializer)!=null?o:Y2[b],{pause:y,resume:I}=F2(w,()=>V(w.value),{flush:s,deep:l,eventFilter:p});return f&&a&&yo(()=>{Ie(f,"storage",B),Ie(f,va,G),_&&B()}),_||B(),w;function V(D){try{if(D==null)n.removeItem(e);else{const F=P.write(D),Q=n.getItem(e);Q!==F&&(n.setItem(e,F),f&&f.dispatchEvent(new CustomEvent(va,{detail:{key:e,oldValue:Q,newValue:F,storageArea:n}})))}}catch(F){v(F)}}function A(D){const F=D?D.newValue:n.getItem(e);if(F==null)return i&&T!=null&&n.setItem(e,P.write(T)),T;if(!D&&c){const Q=P.read(F);return typeof c=="function"?c(Q,T):b==="object"&&!Array.isArray(Q)?{...T,...Q}:Q}else return typeof F!="string"?F:P.read(F)}function G(D){B(D.detail)}function B(D){if(!(D&&D.storageArea!==n)){if(D&&D.key==null){w.value=T;return}if(!(D&&D.key!==e)){y();try{(D==null?void 0:D.newValue)!==P.write(w.value)&&(w.value=A(D))}catch(F){v(F)}finally{D?Kt(I):I()}}}}}function J2(e){return Dc("(prefers-color-scheme: dark)",e)}function Q2(e,t,n={}){const{window:r=vt,...o}=n;let s;const l=Fn(()=>r&&"MutationObserver"in r),a=()=>{s&&(s.disconnect(),s=void 0)},i=de(()=>He(e),f=>{a(),l.value&&r&&f&&(s=new MutationObserver(t),s.observe(f,o))},{immediate:!0}),c=()=>s==null?void 0:s.takeRecords(),u=()=>{a(),i()};return Jt(u),{isSupported:l,stop:u,takeRecords:c}}function X2(e,t,n={}){const{window:r=vt,...o}=n;let s;const l=Fn(()=>r&&"ResizeObserver"in r),a=()=>{s&&(s.disconnect(),s=void 0)},i=O(()=>Array.isArray(e)?e.map(f=>He(f)):[He(e)]),c=de(i,f=>{if(a(),l.value&&r){s=new ResizeObserver(t);for(const p of f)p&&s.observe(p,o)}},{immediate:!0,flush:"post",deep:!0}),u=()=>{a(),c()};return 
Jt(u),{isSupported:l,stop:u}}function Z2(e,t={width:0,height:0},n={}){const{window:r=vt,box:o="content-box"}=n,s=O(()=>{var f,p;return(p=(f=He(e))==null?void 0:f.namespaceURI)==null?void 0:p.includes("svg")}),l=K(t.width),a=K(t.height),{stop:i}=X2(e,([f])=>{const p=o==="border-box"?f.borderBoxSize:o==="content-box"?f.contentBoxSize:f.devicePixelContentBoxSize;if(r&&s.value){const v=He(e);if(v){const _=r.getComputedStyle(v);l.value=Number.parseFloat(_.width),a.value=Number.parseFloat(_.height)}}else if(p){const v=Array.isArray(p)?p:[p];l.value=v.reduce((_,{inlineSize:w})=>_+w,0),a.value=v.reduce((_,{blockSize:w})=>_+w,0)}else l.value=f.contentRect.width,a.value=f.contentRect.height},n);yo(()=>{const f=He(e);f&&(l.value="offsetWidth"in f?f.offsetWidth:t.width,a.value="offsetHeight"in f?f.offsetHeight:t.height)});const c=de(()=>He(e),f=>{l.value=f?t.width:0,a.value=f?t.height:0});function u(){i(),c()}return{width:l,height:a,stop:u}}const ha=["fullscreenchange","webkitfullscreenchange","webkitendfullscreen","mozfullscreenchange","MSFullscreenChange"];function Xs(e,t={}){const{document:n=xc,autoExit:r=!1}=t,o=O(()=>{var b;return(b=He(e))!=null?b:n==null?void 0:n.querySelector("html")}),s=K(!1),l=O(()=>["requestFullscreen","webkitRequestFullscreen","webkitEnterFullscreen","webkitEnterFullScreen","webkitRequestFullScreen","mozRequestFullScreen","msRequestFullscreen"].find(b=>n&&b in n||o.value&&b in o.value)),a=O(()=>["exitFullscreen","webkitExitFullscreen","webkitExitFullScreen","webkitCancelFullScreen","mozCancelFullScreen","msExitFullscreen"].find(b=>n&&b in n||o.value&&b in o.value)),i=O(()=>["fullScreen","webkitIsFullScreen","webkitDisplayingFullscreen","mozFullScreen","msFullscreenElement"].find(b=>n&&b in n||o.value&&b in o.value)),c=["fullscreenElement","webkitFullscreenElement","mozFullScreenElement","msFullscreenElement"].find(b=>n&&b in n),u=Fn(()=>o.value&&n&&l.value!==void 0&&a.value!==void 0&&i.value!==void 0),f=()=>c?(n==null?void 
0:n[c])===o.value:!1,p=()=>{if(i.value){if(n&&n[i.value]!=null)return n[i.value];{const b=o.value;if((b==null?void 0:b[i.value])!=null)return!!b[i.value]}}return!1};async function v(){if(!(!u.value||!s.value)){if(a.value)if((n==null?void 0:n[a.value])!=null)await n[a.value]();else{const b=o.value;(b==null?void 0:b[a.value])!=null&&await b[a.value]()}s.value=!1}}async function _(){if(!u.value||s.value)return;p()&&await v();const b=o.value;l.value&&(b==null?void 0:b[l.value])!=null&&(await b[l.value](),s.value=!0)}async function w(){await(s.value?v():_())}const T=()=>{const b=p();(!b||b&&f())&&(s.value=b)};return Ie(n,ha,T,!1),Ie(()=>He(o),ha,T,!1),r&&Jt(v),{isSupported:u,isFullscreen:s,enter:_,exit:v,toggle:w}}function Bo(e){return typeof Window<"u"&&e instanceof Window?e.document.documentElement:typeof Document<"u"&&e instanceof Document?e.documentElement:e}function $c(e){const t=window.getComputedStyle(e);if(t.overflowX==="scroll"||t.overflowY==="scroll"||t.overflowX==="auto"&&e.clientWidth 1?!0:(t.preventDefault&&t.preventDefault(),!1)}const Br=new WeakMap;function Vc(e,t=!1){const n=K(t);let r=null,o;de(M2(e),a=>{const i=Bo(at(a));if(i){const c=i;Br.get(c)||Br.set(c,o),n.value&&(c.style.overflow="hidden")}},{immediate:!0});const s=()=>{const a=Bo(at(e));!a||n.value||(fs&&(r=Ie(a,"touchmove",i=>{ep(i)},{passive:!1})),a.style.overflow="hidden",n.value=!0)},l=()=>{var a;const i=Bo(at(e));!i||!n.value||(fs&&(r==null||r()),i.style.overflow=(a=Br.get(i))!=null?a:"",Br.delete(i),n.value=!1)};return Jt(l),O({get(){return n.value},set(a){a?s():l()}})}function Mc(e,t,n={}){const{window:r=vt}=n;return Hn(e,t,r==null?void 0:r.sessionStorage,n)}let tp=0;function np(e,t={}){const n=K(!1),{document:r=xc,immediate:o=!0,manual:s=!1,id:l=`vueuse_styletag_${++tp}`}=t,a=K(e);let i=()=>{};const c=()=>{if(!r)return;const 
f=r.getElementById(l)||r.createElement("style");f.isConnected||(f.id=l,t.media&&(f.media=t.media),r.head.appendChild(f)),!n.value&&(i=de(a,p=>{f.textContent=p},{immediate:!0}),n.value=!0)},u=()=>{!r||!n.value||(i(),r.head.removeChild(r.getElementById(l)),n.value=!1)};return o&&!s&&yo(c),s||Jt(u),{id:l,css:a,unload:u,load:c,isLoaded:Gt(n)}}function rp(e={}){const{window:t=vt,behavior:n="auto"}=e;if(!t)return{x:K(0),y:K(0)};const r=K(t.scrollX),o=K(t.scrollY),s=O({get(){return r.value},set(a){scrollTo({left:a,behavior:n})}}),l=O({get(){return o.value},set(a){scrollTo({top:a,behavior:n})}});return Ie(t,"scroll",()=>{r.value=t.scrollX,o.value=t.scrollY},{capture:!1,passive:!0}),{x:s,y:l}}function op(e={}){const{window:t=vt,initialWidth:n=Number.POSITIVE_INFINITY,initialHeight:r=Number.POSITIVE_INFINITY,listenOrientation:o=!0,includeScrollbar:s=!0}=e,l=K(n),a=K(r),i=()=>{t&&(s?(l.value=t.innerWidth,a.value=t.innerHeight):(l.value=t.document.documentElement.clientWidth,a.value=t.document.documentElement.clientHeight))};if(i(),yo(i),Ie("resize",i,{passive:!0}),o){const c=Dc("(orientation: portrait)");de(c,()=>i())}return{width:l,height:a}}const Nc=({type:e="info",text:t="",vertical:n,color:r},{slots:o})=>{var s;return d("span",{class:["vp-badge",e,{diy:r}],style:{verticalAlign:n??!1,backgroundColor:r??!1}},((s=o.default)==null?void 0:s.call(o))||t)};Nc.displayName="Badge";var sp=z({name:"FontIcon",props:{icon:{type:String,default:""},color:{type:String,default:""},size:{type:[String,Number],default:""}},setup(e){const t=O(()=>{const r=["font-icon icon"],o=`iconfont icon-${e.icon}`;return r.push(o),r}),n=O(()=>{const r={};return e.color&&(r.color=e.color),e.size&&(r["font-size"]=Number.isNaN(Number(e.size))?e.size:`${e.size}px`),Yt(r).length?r:null});return()=>e.icon?d("span",{key:e.icon,class:t.value,style:n.value}):null}});const Bc=()=>d(we,{name:"back-to-top"},()=>[d("path",{d:"M512 843.2c-36.2 0-66.4-13.6-85.8-21.8-10.8-4.6-22.6 3.6-21.8 15.2l7 102c.4 6.2 7.6 9.4 12.6 
5.6l29-22c3.6-2.8 9-1.8 11.4 2l41 64.2c3 4.8 10.2 4.8 13.2 0l41-64.2c2.4-3.8 7.8-4.8 11.4-2l29 22c5 3.8 12.2.6 12.6-5.6l7-102c.8-11.6-11-20-21.8-15.2-19.6 8.2-49.6 21.8-85.8 21.8z"}),d("path",{d:"m795.4 586.2-96-98.2C699.4 172 513 32 513 32S324.8 172 324.8 488l-96 98.2c-3.6 3.6-5.2 9-4.4 14.2L261.2 824c1.8 11.4 14.2 17 23.6 10.8L419 744s41.4 40 94.2 40c52.8 0 92.2-40 92.2-40l134.2 90.8c9.2 6.2 21.6.6 23.6-10.8l37-223.8c.4-5.2-1.2-10.4-4.8-14zM513 384c-34 0-61.4-28.6-61.4-64s27.6-64 61.4-64c34 0 61.4 28.6 61.4 64S547 384 513 384z"})]);Bc.displayName="BackToTopIcon";var lp={"/":{backToTop:"Back to top"}},ap=z({name:"BackToTop",props:{threshold:{type:Number,default:100},noProgress:Boolean},setup(e){const t=Oe(),n=Tr(lp),r=De(),{height:o}=Z2(r),{height:s}=op(),{y:l}=rp(),a=O(()=>t.value.backToTop!==!1&&l.value>e.threshold),i=O(()=>l.value/(o.value-s.value)*100);return ve(()=>{r.value=document.body}),()=>d(Ut,{name:"fade"},()=>a.value?d("button",{type:"button",class:"vp-back-to-top-button","aria-label":n.value.backToTop,"data-balloon-pos":"left",onClick:()=>{window.scrollTo({top:0,behavior:"smooth"})}},[e.noProgress?null:d("span",{class:"vp-scroll-progress",role:"progressbar","aria-labelledby":"loadinglabel","aria-valuenow":i.value},d("svg",d("circle",{cx:"50%",cy:"50%",style:{"stroke-dasharray":`calc(${Math.PI*i.value}% - ${4*Math.PI}px) calc(${Math.PI*100}% - ${4*Math.PI}px)`}}))),d(Bc)]):null)}});const ip=ht({enhance:({app:e})=>{lt("Badge")||e.component("Badge",Nc),lt("FontIcon")||e.component("FontIcon",sp)},setup:()=>{np(`@import url("//at.alicdn.com/t/font_2410206_h4r1xw8ppng.css"); +`)},rootComponents:[()=>d(ap,{})]});function cp(e,t,n){var r,o,s;t===void 0&&(t=50),n===void 0&&(n={});var l=(r=n.isImmediate)!=null&&r,a=(o=n.callback)!=null&&o,i=n.maxWait,c=Date.now(),u=[];function f(){if(i!==void 0){var v=Date.now()-c;if(v+t>=i)return i-v}return t}var p=function(){var v=[].slice.call(arguments),_=this;return new Promise(function(w,T){var b=l&&s===void 0;if(s!==void 
0&&clearTimeout(s),s=setTimeout(function(){if(s=void 0,c=Date.now(),!l){var y=e.apply(_,v);a&&a(y),u.forEach(function(I){return(0,I.resolve)(y)}),u=[]}},f()),b){var P=e.apply(_,v);return a&&a(P),w(P)}u.push({resolve:w,reject:T})})};return p.cancel=function(v){s!==void 0&&clearTimeout(s),u.forEach(function(_){return(0,_.reject)(v)}),u=[]},p}const up=({headerLinkSelector:e,headerAnchorSelector:t,delay:n,offset:r=5})=>{const o=Je(),l=cp(()=>{var w,T;const a=Math.max(window.scrollY,document.documentElement.scrollTop,document.body.scrollTop);if(Math.abs(a-0) p.some(P=>P.hash===b.hash));for(let b=0;b<_.length;b++){const P=_[b],y=_[b+1],I=a>=(((w=P.parentElement)==null?void 0:w.offsetTop)??0)-r,V=!y||a<(((T=y.parentElement)==null?void 0:T.offsetTop)??0)-r;if(!(I&&V))continue;const G=decodeURIComponent(o.currentRoute.value.hash),B=decodeURIComponent(P.hash);if(G===B)return;if(f){for(let D=b+1;D<_.length;D++)if(G===decodeURIComponent(_[D].hash))return}ma(o,B);return}},n);ve(()=>{window.addEventListener("scroll",l)}),Vs(()=>{window.removeEventListener("scroll",l)})},ma=async(e,t)=>{const{scrollBehavior:n}=e.options;e.options.scrollBehavior=void 0,await e.replace({query:e.currentRoute.value.query,hash:t}).finally(()=>e.options.scrollBehavior=n)},dp=".vp-sidebar-link, .toc-link",fp=".header-anchor",pp=200,vp=5,hp=ht({setup(){up({headerLinkSelector:dp,headerAnchorSelector:fp,delay:pp,offset:vp})}});let Fc=e=>ue(e.title)?{title:e.title}:null;const Hc=Symbol(""),mp=e=>{Fc=e},gp=()=>ge(Hc),_p=e=>{e.provide(Hc,Fc)};var bp={"/":{title:"Catalog",empty:"No catalog"}},yp=z({name:"AutoCatalog",props:{base:{type:String,default:""},level:{type:Number,default:3},index:Boolean,hideHeading:Boolean},setup(e){const t=gp(),n=Tr(bp),r=fe(),o=Je(),s=Ji(),l=K(o.getRoutes().map(({meta:c,path:u})=>{const f=t(c);if(!f)return null;const p=u.split("/").length;return{level:Vr(u,"/")?p-2:p-1,base:u.replace(/\/[^/]+\/?$/,"/"),path:u,...f}}).filter(c=>_c(c)&&ue(c.title))),a=()=>{const 
c=e.base?$f(Hi(e.base)):r.value.path.replace(/\/[^/]+$/,"/"),u=c.split("/").length-2,f=[];return l.value.filter(({level:p,path:v})=>{if(!cn(v,c)||v===c)return!1;if(c==="/"){const _=Yt(s.value.locales).filter(w=>w!=="/");if(v==="/404.html"||_.some(w=>cn(v,w)))return!1}return p-u<=e.level&&(Vr(v,".html")&&!Vr(v,"/index.html")||Vr(v,"/"))}).sort(({title:p,level:v,order:_},{title:w,level:T,order:b})=>v-T||(No(_)?No(b)?_>0?b>0?_-b:-1:b<0?_-b:1:_:No(b)?b:p.localeCompare(w))).forEach(p=>{var w;const{base:v,level:_}=p;switch(_-u){case 1:f.push(p);break;case 2:{const T=f.find(b=>b.path===v);T&&(T.children??(T.children=[])).push(p);break}default:{const T=f.find(b=>b.path===v.replace(/\/[^/]+\/$/,"/"));if(T){const b=(w=T.children)==null?void 0:w.find(P=>P.path===v);b&&(b.children??(b.children=[])).push(p)}}}}),f},i=O(()=>a());return()=>{const c=i.value.some(u=>u.children);return d("div",{class:["vp-catalog-wrapper",{index:e.index}]},[e.hideHeading?null:d("h2",{class:"vp-catalog-main-title"},n.value.title),i.value.length?d(e.index?"ol":"ul",{class:["vp-catalogs",{deep:c}]},i.value.map(({children:u=[],title:f,path:p,content:v})=>{const _=d(Ye,{class:"vp-catalog-title",to:p},()=>v?d(v):f);return 
d("li",{class:"vp-catalog"},c?[d("h3",{id:f,class:["vp-catalog-child-title",{"has-children":u.length}]},[d("a",{href:`#${f}`,class:"header-anchor","aria-hidden":!0},"#"),_]),u.length?d(e.index?"ol":"ul",{class:"vp-child-catalogs"},u.map(({children:w=[],content:T,path:b,title:P})=>d("li",{class:"vp-child-catalog"},[d("div",{class:["vp-catalog-sub-title",{"has-children":w.length}]},[d("a",{href:`#${P}`,class:"header-anchor"},"#"),d(Ye,{class:"vp-catalog-title",to:b},()=>T?d(T):P)]),w.length?d(e.index?"ol":"div",{class:e.index?"vp-sub-catalogs":"vp-sub-catalogs-wrapper"},w.map(({content:y,path:I,title:V})=>e.index?d("li",{class:"vp-sub-catalog"},d(Ye,{to:I},()=>y?d(y):V)):d(Ye,{class:"vp-sub-catalog-link",to:I},()=>y?d(y):V))):null]))):null]:d("div",{class:"vp-catalog-child-title"},_))})):d("p",{class:"vp-empty-catalog"},n.value.empty)])}}}),Ep=ht({enhance:({app:e})=>{_p(e),lt("AutoCatalog",e)||e.component("AutoCatalog",yp)}});const wp=d("svg",{class:"external-link-icon",xmlns:"http://www.w3.org/2000/svg","aria-hidden":"true",focusable:"false",x:"0px",y:"0px",viewBox:"0 0 100 100",width:"15",height:"15"},[d("path",{fill:"currentColor",d:"M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"}),d("polygon",{fill:"currentColor",points:"45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"})]),jc=z({name:"ExternalLinkIcon",props:{locales:{type:Object,required:!1,default:()=>({})}},setup(e){const t=fn(),n=O(()=>e.locales[t.value]??{openInNewWindow:"open in new window"});return()=>d("span",[wp,d("span",{class:"external-link-icon-sr-only"},n.value.openInNewWindow)])}});var Tp={};const Ap=Tp,Op=ht({enhance({app:e}){e.component("ExternalLinkIcon",d(jc,{locales:Ap}))}});/** + * NProgress, (c) 2013, 2014 Rico Sta. 
Cruz - http://ricostacruz.com/nprogress + * @license MIT + */const ie={settings:{minimum:.08,easing:"ease",speed:200,trickle:!0,trickleRate:.02,trickleSpeed:800,barSelector:'[role="bar"]',parent:"body",template:''},status:null,set:e=>{const t=ie.isStarted();e=Fo(e,ie.settings.minimum,1),ie.status=e===1?null:e;const n=ie.render(!t),r=n.querySelector(ie.settings.barSelector),o=ie.settings.speed,s=ie.settings.easing;return n.offsetWidth,Lp(l=>{Fr(r,{transform:"translate3d("+ga(e)+"%,0,0)",transition:"all "+o+"ms "+s}),e===1?(Fr(n,{transition:"none",opacity:"1"}),n.offsetWidth,setTimeout(function(){Fr(n,{transition:"all "+o+"ms linear",opacity:"0"}),setTimeout(function(){ie.remove(),l()},o)},o)):setTimeout(()=>l(),o)}),ie},isStarted:()=>typeof ie.status=="number",start:()=>{ie.status||ie.set(0);const e=()=>{setTimeout(()=>{ie.status&&(ie.trickle(),e())},ie.settings.trickleSpeed)};return ie.settings.trickle&&e(),ie},done:e=>!e&&!ie.status?ie:ie.inc(.3+.5*Math.random()).set(1),inc:e=>{let t=ie.status;return t?(typeof e!="number"&&(e=(1-t)*Fo(Math.random()*t,.1,.95)),t=Fo(t+e,0,.994),ie.set(t)):ie.start()},trickle:()=>ie.inc(Math.random()*ie.settings.trickleRate),render:e=>{if(ie.isRendered())return document.getElementById("nprogress");_a(document.documentElement,"nprogress-busy");const t=document.createElement("div");t.id="nprogress",t.innerHTML=ie.settings.template;const n=t.querySelector(ie.settings.barSelector),r=e?"-100":ga(ie.status||0),o=document.querySelector(ie.settings.parent);return Fr(n,{transition:"all 0 linear",transform:"translate3d("+r+"%,0,0)"}),o!==document.body&&_a(o,"nprogress-custom-parent"),o==null||o.appendChild(t),t},remove:()=>{ba(document.documentElement,"nprogress-busy"),ba(document.querySelector(ie.settings.parent),"nprogress-custom-parent");const e=document.getElementById("nprogress");e&&Ip(e)},isRendered:()=>!!document.getElementById("nprogress")},Fo=(e,t,n)=>e n?n:e,ga=e=>(-1+e)*100,Lp=function(){const e=[];function t(){const 
# 2. Review the Fundamentals
## InSpec Content Review
In the beginner class, we explained the structure and output of InSpec Profiles. Let's review some content, then practice by revisiting, running, and viewing results of an InSpec profile.
### InSpec Profile Structure

Remember that a `profile` is a set of automated tests that usually relates directly back to a Security Requirements Benchmark.

Profiles have two (2) required elements:

- An `inspec.yml` file
- A `controls` directory

and optional elements such as:

- A `libraries` directory
- A `files` directory
- An `inputs.yml` file
- A `README.md` file

InSpec can create the profile structure for you using the following command:
```sh
$ inspec init profile my_inspec_profile
```
This will give you the required files along with some optional files.
```sh
$ tree my_inspec_profile

my_inspec_profile
├── README.md
├── controls
│   └── example.rb
└── inspec.yml
```
### Default File Structures

#### Control File Structure

Let's take a look at the default Ruby file in the `controls` directory.

`controls/example.rb`:

```ruby
title 'sample section'

# you can also use plain tests
describe file('/tmp') do
  it { should be_directory }
end

# you add controls here
control 'tmp-1.0' do                  # A unique ID for this control
  impact 0.7                          # The criticality, if this control fails.
  title 'Create /tmp directory'       # A human-readable title
  desc 'An optional description...'
  describe file('/tmp') do            # The actual test
    it { should be_directory }
  end
end
```
This example shows two tests. Both tests check for the existence of the `/tmp` directory. The second test provides additional information about the test. Let's break down each component.

- `control` (line 9) is followed by the control's name. Each control in a profile has a unique name.
- `impact` (line 10) measures the relative importance of the test and must be a value between 0.0 and 1.0.
- `title` (line 11) defines the control's purpose.
- `desc` (line 12) provides a more complete description of what the control checks for.
- `describe` (lines 13–15) defines the test. Here, the test checks for the existence of the `/tmp` directory.

#### Describe Block Structure
As with many test frameworks, InSpec code resembles natural language. Here's the format of a describe block.
```ruby
describe < entity > do
  it { < expectation > }
end
```
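Here, the `< entity >` is an InSpec resource and the `< expectation >` is a matcher applied to it. Underneath, the `/tmp` check above reduces to an ordinary filesystem predicate; the following plain-Ruby sketch is illustrative only and is not how InSpec is actually implemented:

```ruby
# Plain-Ruby analogue of:
#   describe file('/tmp') do
#     it { should be_directory }
#   end
# Illustrative sketch only; not InSpec's real implementation.
require 'tmpdir'

entity = Dir.tmpdir                       # the system temp directory, e.g. "/tmp"
expectation_met = File.directory?(entity) # the predicate behind be_directory

puts "#{entity} is a directory: #{expectation_met}"
```

Running the real check through InSpec additionally records pass/fail status in its standardized output formats.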
#### Resources And Matchers

**Resources**: InSpec uses resources like the `file` resource to aid in control development. These resources can often be used as the `< entity >` in the describe block. Find a list of resources in the [InSpec documentation](https://docs.chef.io/inspec/resources/).

**Matchers**: InSpec uses matchers like `cmp` or `eq` to aid in control development. These matchers can often be used as the `< expectation >` in the describe block, where the expectation checks a requirement of that entity. Find a list of matchers in the [InSpec documentation](https://docs.chef.io/inspec/matchers/).

#### inspec.yml File Structure
Let's take a look at the default `inspec.yml`.

`inspec.yml`:

```yaml
name: my_inspec_profile
title: InSpec Profile
maintainer: The Authors
copyright: The Authors
copyright_email: you@example.com
license: Apache-2.0
summary: An InSpec Compliance Profile
version: 0.1.0
supports:
  platform: os

# Optional sections

depends:
  - name: < name of the profile from which you can include controls >
    path: < relative path to that profile >

gem_dependencies:
  - name: < name of the gem >
    version: < version of the gem >

inputs:
  - name: < name of the input >
    desc: < description of the input >
    type: < data type of the input (String, Array, Numeric, Hash) >
    value: < default value for the input >
```
This example shows default metadata of the InSpec profile along with the optional sections. Find more information about inputs and overlays in the beginner class.
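Because `inspec.yml` is plain YAML, its metadata can be read with any YAML parser. A minimal Ruby sketch using only the standard library (the fields mirror the example above):

```ruby
require 'yaml'

# Parse a few of the metadata fields from an inspec.yml document.
metadata = YAML.safe_load(<<~YML)
  name: my_inspec_profile
  title: InSpec Profile
  license: Apache-2.0
  version: 0.1.0
  supports:
    platform: os
YML

puts metadata['name']                  # => my_inspec_profile
puts metadata['supports']['platform']  # => os
```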
#### Tips

`version` (line 8) is the version of the InSpec profile. Best practice is to bump the version after making changes to the profile so that users know to pull the latest updates.

#### Difference between `inspec.yml` and `inputs.yml`
| `inspec.yml` | `inputs.yml` |
| --- | --- |
| Required | Optional |
| Should not be renamed | Can be renamed |
| Needs to be at the root of the profile | Can be anywhere |
| Automatically used during execution: `inspec exec profile1` | Needs to be passed in during execution: `inspec exec profile1 --input-file <path>` |
| Purpose is to define default input values and profile metadata | Purpose is to override default input values with parameters for the local environment |
| Defined by the author of the profile | Defined by the user of the profile |

`inspec.yml`:

```yaml
- name: superusers
  desc: 'List of users with admin privileges'
  type: Array
  value:
    - 'admin'
    - 'root'
```
`inputs.yml`:

```yaml
superusers:
  - 'codespaces'
  - 'kali'
```
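The override behavior described above can be sketched in plain Ruby (illustrative only; InSpec's actual input resolution involves more precedence levels): values from an `inputs.yml` passed with `--input-file` replace the defaults declared in `inspec.yml`.

```ruby
require 'yaml'

# Default value for the 'superusers' input, as declared in inspec.yml
defaults = { 'superusers' => %w[admin root] }

# User-supplied values, as parsed from an inputs.yml given via --input-file
overrides = YAML.safe_load(<<~YML)
  superusers:
    - 'codespaces'
    - 'kali'
YML

# The user's inputs.yml wins over the profile's defaults
resolved = defaults.merge(overrides)
puts resolved['superusers'].inspect  # => ["codespaces", "kali"]
```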