diff --git a/src/assets/img/saf-lifecycle.jpg b/src/assets/img/saf-lifecycle.jpg new file mode 100644 index 000000000..43c5c21f0 Binary files /dev/null and b/src/assets/img/saf-lifecycle.jpg differ diff --git a/src/courses/advanced/02.md b/src/courses/advanced/02.md index bbd3facad..09c05d74c 100644 --- a/src/courses/advanced/02.md +++ b/src/courses/advanced/02.md @@ -4,27 +4,30 @@ next: 03.md title: 2. Review the Fundamentals author: Aaron Lippold --- + ## InSpec Content Review -In the [beginner class](../beginner/README.md), we explained the structure and output of InSpec Profiles. Let's review some content, then practice by revisiting, running, and viewing results of an InSpec profile. +In the [beginner class](../beginner/README.md), we explained the structure of InSpec profiles and controls, how to run them, and how to understand their results. Let's do a brief review of that fundamental content and then practice those basic skills again. ### InSpec Profile Structure -Remember that a `profile` is a set of automated tests that usually relates directly back to a Security Requirements Benchmark. +Remember that a `profile` is a set of automated tests that usually relates directly back to some upstream security guidance document. Profiles have two (2) **required** elements: -- An `inspec.yml` file + +- An `inspec.yml` file - A `controls` directory -and **optional** elements such as: -- A `libraries` directory +and **optional** elements such as: + +- A `libraries` directory - A `files` directory -- An `inputs.yml` file +- An `inputs.yml` file - A `README.md` file InSpec can create the profile structure for you using the following command: ```sh -$ inspec init profile my_inspec_profile +inspec init profile my_inspec_profile ``` This will give you the required files along with some optional files. @@ -43,7 +46,7 @@ $ tree my_inspec_profile #### Control File Structure -Let's take a look at the default ruby file in the `controls` directory. 
+Let's take a look at the default Ruby file in the `controls` directory. ::: code-tabs @tab controls/example.rb @@ -66,6 +69,7 @@ control 'tmp-1.0' do # A unique ID for this control end end ``` + ::: This example shows two tests. Both tests check for the existence of the `/tmp` directory. The second test provides additional information about the test. Let's break down each component. @@ -90,10 +94,10 @@ end ::: tabs @tab Resources -InSpec uses resources like the `file` resource to aid in control development. These resources can often be used as the `< entity >` in the describe block. Find a list of resources in the [InSpec documentation ](https://docs.chef.io/inspec/resources/) +InSpec uses resources like the `file` resource to aid in control development. These resources can often be used as the `< entity >` in the describe block. Find a list of resources in the [InSpec documentation](https://docs.chef.io/inspec/resources/) @tab Matchers -InSpec uses matchers like the `cmp` or `eq` to aid in control development. These matchers can often be used as the `< expectation >` in the describe block where the expectation is checking a requirement of that entity. Find a list of matchers in the [InSpec documentation ](https://docs.chef.io/inspec/matchers/) +InSpec uses matchers like the `cmp` or `eq` to aid in control development. These matchers can often be used as the `< expectation >` in the describe block where the expectation is checking a requirement of that entity. Find a list of matchers in the [InSpec documentation](https://docs.chef.io/inspec/matchers/) ::: @@ -132,6 +136,7 @@ inputs: type: < data type of the input (String, Array, Numeric, Hash) > value: < default value for the input > ``` + ::: This example shows default metadata of the InSpec profile along with the optional sections. Find more information about [inputs](../beginner/06.md) and [overlays](../beginner/10.md) in the beginner class. 
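As a quick illustration of the template above, a filled-in input definition might look like the following (the input name and value here are hypothetical, purely for illustration):

```yaml
inputs:
  - name: max_login_attempts   # hypothetical input name
    type: Numeric
    value: 3                   # default value; can be overridden at runtime
```

A control could then read this value with the `input('max_login_attempts')` helper.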
@@ -172,4 +177,4 @@ superusers: - 'kali' ``` -::: \ No newline at end of file +::: diff --git a/src/courses/advanced/03.md b/src/courses/advanced/03.md index ca34cfa12..5c1e21e69 100644 --- a/src/courses/advanced/03.md +++ b/src/courses/advanced/03.md @@ -6,13 +6,14 @@ author: Aaron Lippold headerDepth: 3 --- ## Revisiting the NGINX Web Server InSpec Profile + In the [beginner class](../beginner/05.md), we wrote and ran an InSpec profile against a test container. We then generated a report on our results and loaded them into Heimdall for analysis. Let's recap this process with some practice. ### The Target -InSpec is a framework which is used to validate the security configuration of a certain target. In this case, we are interested in validating that an NGINX server complies with our requirements. +InSpec is a framework used to validate the security configuration of a target. In this case, we are interested in validating that an NGINX server complies with our requirements. -First let's find our nginx container id using the `docker ps` command: +First, let's find our NGINX container ID using the `docker ps` command: ```shell docker ps @@ -26,13 +27,17 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 8ba6b8av5n7s nginx:latest "/docker.…" 2 weeks ago Up 1 hour 80/tcp nginx ``` -We can then use the container name of our nginx container `nginx` to target the inspec validation scans at that container. +We can then use the name of our NGINX container, `nginx`, to target the InSpec validation scans at that container. + ### The Requirements -InSpec profiles are a set of automated tests that relate back to a security requirements benchmark, so the controls are always motivated by the requirements. +InSpec profiles are a set of automated tests that relate back to a security guidance document, so the controls are always motivated by the requirements. ::: details Review +In the beginner class, we worked with a simple requirements set to implement in InSpec. + +```sh 1. 
NGINX should be installed as version 1.27.0 or later. 2. The following NGINX modules should be installed: * `http_ssl` @@ -43,16 +48,18 @@ InSpec profiles are a set of automated tests that relate back to a security requ * be owned by the `root` user and group. * not be readable, writeable, or executable by others. 5. The NGINX shell access should be restricted to admin users. +``` ::: ### The Controls -InSpec profiles consist of automated tests, that align to security requirements, written in ruby files inside the controls directory. +InSpec profiles consist of automated tests, that align to security requirements, written in Ruby files inside the controls directory. ::: details Review -If you don't have `my_nginx` profile, run the following command to initialize your InSpec profile. +If you don't have the `my_nginx` profile, run the following command to initialize your InSpec profile. + ``` inspec init profile my_nginx ``` @@ -158,6 +165,7 @@ end ``` ::: + ### Running the Controls To run `inspec exec` on the target, ensure that you are in the directory that has `my_nginx` profile. @@ -169,8 +177,9 @@ To run `inspec exec` on the target, ensure that you are in the directory that ha ```sh inspec exec my_nginx -t docker://nginx --input-file inputs-linux.yml ``` - + @tab output + ```sh Profile: InSpec Profile (my_nginx) Version: 0.1.0 @@ -199,14 +208,19 @@ inspec exec my_nginx -t docker://nginx --input-file inputs-linux.yml Profile Summary: 4 successful controls, 1 control failure, 0 controls skipped Test Summary: 10 successful, 1 failure, 0 skipped ``` + ::: + ### Reporting Results + In the [beginner class](../beginner/08.md), we mentioned that you can specify an InSpec reporter to indicate the format in which you desire the results. If you want to read the results on the command line as well as save them in a JSON file, you can run this command. 
+ ```sh -inspec exec my_nginx -t docker://nginx --input-file inputs-linux.yml --reporter cli json:my_nginx_results.json +inspec exec my_nginx -t docker://nginx --input-file inputs-linux.yml --reporter cli json:my_nginx_results.json --enhanced-outcomes ``` ### Visualizing Results -You can use this output file to upload and visualize your results in [Heimdall ](https://heimdall-lite.mitre.org/). + +You can use this output file to upload and visualize your results in [Heimdall](https://heimdall-lite.mitre.org/). ![NGINX Heimdall Report View](../../assets/img/NGINX_Heimdall_Report_View.png) diff --git a/src/courses/advanced/04.md b/src/courses/advanced/04.md index c38bb23ff..f7bf199a5 100644 --- a/src/courses/advanced/04.md +++ b/src/courses/advanced/04.md @@ -12,9 +12,9 @@ Now that you have learned about making and running InSpec profiles, let's dig de ### Core Resources -As you saw in the [Beginner class](../beginner/README.md), when writing InSpec code, many core resources are available because they are included in the main InSpec code base. +As you saw in the [Beginner class](../beginner/README.md), when writing InSpec code, many resources are automatically available because they come "batteries included" with InSpec. -* You can [explore the core InSpec resources](https://www.inspec.io/docs/reference/resources/) to existing resources. +* You can [explore the core InSpec resources](https://www.inspec.io/docs/reference/resources/) on Chef's documentation website. * You can also [examine the source code](https://github.com/inspec/inspec/tree/master/lib/inspec/resources) to see what's available. For example, you can see how `file` and other InSpec resources are implemented. ### Local Resources @@ -30,30 +30,9 @@ Note that the `libraries` directory is not created by default within a profile w Once you create and populate a custom resource Ruby file inside the `libraries` directory, it can be utilized inside your local profile just like the core resources. -### 6.1. 
Resource Overview -Resources may be added to profiles in the libraries folder: -```bash -$ tree examples/profile -examples/profile -... -├── libraries -│ └── gordon_config.rb -``` - -### 6.2. Resource Structure - -The smallest possible InSpec resource takes this form: - -```ruby -class Tiny < Inspec.resource(1) - name 'tiny' -end -``` - -This is easy to write, but not particularly useful for testing. - -Resources are written as a regular Ruby class, which inherits from the base `inspec.resource` class. The number (1) specifies the version this resource plugin targets. As Chef InSpec evolves, this interface may change and may require a higher version. +Like InSpec controls, InSpec resources are written as regular Ruby classes, which means you have the full power of Ruby at your fingertips as you craft this resource. In addition to the resource name, the following attributes can be configured: @@ -67,67 +46,98 @@ The following methods are available to the resource: - `inspec` - Contains a registry of all other resources to interact with the operating system or target in general. - `skip_resource` - A resource may call this method to indicate that requirements aren’t met. All tests that use this resource will be marked as skipped. -The following example shows a full resource using attributes and methods to provide simple access to a configuration file: +### The `etc_hosts` example + +Let's look at a simple default resource to get an idea of how these resources are used. We'll take a look at the [source code](https://github.com/inspec/inspec/blob/526b52657be571ba1573c12d666dc1f6330f2307/lib/inspec/resources/etc_hosts.rb) for the InSpec resource that models an operating system's hosts file, which is a simple file where we can map IP addresses (e.g. 192.168.8.1) to domain names (e.g. my-heimdall-deployment.my-domain.dev) without having to add a record to a DNS server somewhere. 
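For reference, each line of a hosts file is simply an IP address followed by one or more names for it. A couple of illustrative entries (these values are examples, not required content):

```
127.0.0.1    localhost localhost.localdomain
192.168.8.1  my-heimdall-deployment.my-domain.dev
```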
+ + ```ruby -class GordonConfig < Inspec.resource(1) - name 'gordon_config' - - # Restrict to only run on the below platforms (if none were given, all OS's supported) - supports platform_family: 'fedora' - supports platform: 'centos', release: '6.9' - # Supports `*` for wildcard matcher in the release - supports platform: 'centos', release: '7.*' - - desc ' - Resource description ... - ' - - example ' - describe gordon_config do - its("signal") { should eq "on" } +require "inspec/utils/parser" +require "inspec/utils/file_reader" + +module Inspec::Resources + class EtcHosts < Inspec.resource(1) + name "etc_hosts" + supports platform: "unix" + supports platform: "windows" + desc 'Use the etc_hosts InSpec audit resource to find an + ip_address and its associated hosts' + example <<~EXAMPLE + describe etc_hosts.where { ip_address == '127.0.0.1' } do + its('ip_address') { should cmp '127.0.0.1' } + its('primary_name') { should cmp 'localhost' } + its('all_host_names') { should eq [['localhost', 'localhost.localdomain', 'localhost4', 'localhost4.localdomain4']] } + end + EXAMPLE + + attr_reader :params + + include Inspec::Utils::CommentParser + include FileReader + + DEFAULT_UNIX_PATH = "/etc/hosts".freeze + DEFAULT_WINDOWS_PATH = 'C:\windows\system32\drivers\etc\hosts'.freeze + + def initialize(hosts_path = nil) + content = read_file_content(hosts_path || default_hosts_file_path) + + @params = parse_conf(content.lines) end - ' - # Load the configuration file on initialization - def initialize(path = nil) - @path = path || '/etc/gordon.conf' - @params = SimpleConfig.new( read_content ) - end + FilterTable.create + .register_column(:ip_address, field: "ip_address") + .register_column(:primary_name, field: "primary_name") + .register_column(:all_host_names, field: "all_host_names") + .install_filter_methods_on_resource(self, :params) - # Expose all parameters of the configuration file. 
- def method_missing(name) - @params[name] - end + def to_s + "Hosts File" + end + + private + + def default_hosts_file_path + inspec.os.windows? ? DEFAULT_WINDOWS_PATH : DEFAULT_UNIX_PATH + end + + def parse_conf(lines) + lines.reject(&:empty?).reject(&comment?).map(&parse_data).map(&format_data) + end + + def comment? + parse_options = { comment_char: "#", standalone_comments: false } + + ->(data) { parse_comment_line(data, parse_options).first.empty? } + end + + def parse_data + ->(data) { [data.split[0], data.split[1], data.split[1..-1]] } + end - private - - def read_content - f = inspec.file(@path) - # Test if the path exist and that it's a file - if f.file? - # Retrieve the file's contents - f.content - else - # If the file doesn't exist, skip all tests that use gordon_config - raise Inspec::Exceptions::ResourceSkipped, "Can't read config at #{@path}" + def format_data + ->(data) { %w{ip_address primary_name all_host_names}.zip(data).to_h } end end end ``` -Let's break down each component of the resource. +If you've ever done object-oriented programming, you might be seeing some familiar concepts in this resource file. Let's break down some of the subcomponents of the resource. +#### require (lines 1 and 2) +The `require` statement pulls in code written in other files or Ruby modules that we will use in this resource. In this case we are importing some simple utility functions defined elsewhere in the InSpec codebase. #### class -The class is where the Ruby file is defined. +The `class` is where a Ruby class definition is given. Classes define the structure and function of an object that we can instantiate to model something. #### name -The name is how we will call upon this resource within our controls, in the example above that would be `gordon_config`. +The `name` defines what token we can use to invoke this resource within our controls. Remember all those `describe` blocks we wrote that invoked the `nginx` resource? 
We used the term `nginx` to invoke the resource because that token is the defined `name` of the resource in its class definition. #### supports -Supports are used to define or restrict the Ruby resource to work in specific ways, as shown in the example above that is used to restrict our class to specific platforms. +The `supports` keyword is used to define what types of platforms should be able to use this resource. The example above only supports the Windows and Unix-based operating systems, but other resources could state that they only support specific cloud providers or specific Linux distro releases. #### desc -A simple description of the purpose of this resource. +A simple, human-friendly description of the purpose of this resource. This is also what gets printed when you run `help <resource>` in the InSpec shell. #### examples -A simple use case example. The example is usually a `describe` block using the resource, given as a multi-line comment. +One or more simple sample usages of the resource. It typically consists of a `describe` block using the resource, given as a multi-line string via the squiggly `heredoc` syntax. +#### private +This is a keyword that asserts that every function definition that shows up below this line in the class file should be considered _private_, or not accessible to users who instantiate an object out of this class. For example, when using this resource in a control file, you cannot invoke the `parse_data` function, because it is a private function that should really only be invoked by the resource class itself when the object is first created. #### initialize method -An initilize method is required if your resource needs to be able to accept a parameter when called in a test (e.g. 
the `file` resource takes in a string parameter that specifies the location of the file being examined: `file('this/path/is/a/parameter')`). #### functionality methods -These methods return data from the resource so that you can use it in tests. \ No newline at end of file +These methods return data from the resource so that you can use it in tests. There can be just a few of them, or there can be a whole bunch. These methods are how we define the custom matchers that can be invoked in an InSpec control file. We'll build some simple examples in the next section. \ No newline at end of file diff --git a/src/courses/advanced/05.md b/src/courses/advanced/05.md index eb85b667e..800946c5f 100644 --- a/src/courses/advanced/05.md +++ b/src/courses/advanced/05.md @@ -6,38 +6,22 @@ author: Aaron Lippold headerDepth: 3 --- -Let's practice creating our own custom resource. Let's say we want to write tests that examine the current state of a local Git repository. We want to create a `git` resource to handle all of InSpec's interactions with the Git repo under the hood, so that we can focus on writing clean and easy-to-read profile code. - -Let's take a look at this InSpec video that walks through this example and then try it out ourselves. - - -
- -
- -### Create new InSpec profile +Let's practice creating our own custom resource. Suppose we want to write tests that examine the current state of a local Git repository. We will create a `git` resource that can handle all of InSpec's interactions with a Git repo under the hood, allowing us to focus on writing clean and easy-to-read code within a control. + +### Create a New InSpec Profile + Let's start by creating a new profile: ::: code-tabs @tab Command + ```bash inspec init profile git ``` + @tab Output + ```bash ─────────────────────────── InSpec Code Generator ─────────────────────────── @@ -47,11 +31,12 @@ Creating new profile at /workspaces/saf-training-lab-environment/git • Creating file controls/example.rb • Creating file README.md ``` + ::: -### Develop controls to test / run profile +### Develop Example Use-case Tests -To write tests, we first need to know and have what we are testing! In your Codespaces environment, there is a git repository that we will test under the `resources` folder. The git repository will be the test target, similarly to how the docker containers acted as test targets in previous sections. Unzip the target git repository using the following command: +To write resources, we first need to know what we are testing! In your Codespaces environment, there is a git repository that we will test under the `resources` folder. The git repository will be the test target, similarly to how the docker containers acted as test targets in previous sections. Unzip the target git repository using the following command: ```sh unzip ./resources/git_test.zip @@ -59,7 +44,8 @@ unzip ./resources/git_test.zip This will generate a `git_test` repository which we will use for these examples. -Now let's write some controls and test that they run. You can put these controls in the `example.rb` file generated in the `controls` folder of your `git` InSpec profile. These controls are written using the `command` resource which is provided by InSpec. 
We will write a `git` resource in this section to improve this test. **Note that you will need to put the full directory path of the `.git` file from your `git_test` repository as the `git_dir` value on line 4 of `example.rb`. To get the full path of your current location in the terminal, use `pwd`.** +Now let's write some tests and confirm that they run. You can put these tests in the `example.rb` file generated in the `controls` folder of your `git` InSpec profile. These tests are written using the `command` resource which is provided by InSpec. We will write a `git` resource in this section to improve this test. **Note that you will need to put the full directory path of the `.git` file from your `git_test` repository as the `git_dir` value on line 4 of `example.rb`. To get the full path of your current location in the terminal, use `pwd`.** + ```ruby # encoding: utf-8 # copyright: 2018, The Authors @@ -68,7 +54,7 @@ git_dir = "/workspaces/saf-training-lab-environment/git_test/.git" # The following branches should exist describe command("git --git-dir #{git_dir} branch") do - its('stdout') { should match /master/ } + its('stdout') { should match /main/ } end describe command("git --git-dir #{git_dir} branch") do @@ -77,7 +63,7 @@ end # What is the current branch describe command("git --git-dir #{git_dir} branch") do - its('stdout') { should match /^\* master/ } + its('stdout') { should match /^\* main/ } end # What is the latest commit @@ -95,11 +81,13 @@ end Run the profile. 
@tab Command + ```bash inspec exec git ``` @tab Output + ```bash Profile: InSpec Profile (git) Version: 0.1.0 @@ -107,11 +95,11 @@ Target: local:// Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` - ✔ stdout is expected to match /master/ + ✔ stdout is expected to match /main/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` ✔ stdout is expected to match /testBranch/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` - ✔ stdout is expected to match /^\* master/ + ✔ stdout is expected to match /^\* main/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log -1 --pretty=format:'%h'` ✔ stdout is expected to match /edc207f/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log --skip=1 -1 --pretty=format:'%h'` @@ -119,36 +107,43 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 5 successful, 0 failures, 0 skipped ``` + ::: Our tests pass, but they all use the `command` resource. It's not best practice to do this -- for one thing, it makes our tests more complicated, and the output too long. ::: warning But What If I Don't Care About The Tests Being Complicated And The Output Being Too Long? Some test writers like to wrap their favorite bash commands in a `command` block and call it a day. - However, best practice is to write clean and maintainable InSpec code even if you yourself have no trouble using the `command` resource to do everything. + However, best practice is to write clean and maintainable InSpec code even if you yourself have no trouble using the `command` resource to do everything. Recall that other developers and assessors need to be able to understand how your tests function. 
Nobody likes trying to debug someone else's profile that assumes that the operator knows exactly how the profile writer's favorite terminal commands work. ::: Let's rewrite these tests in a way that abstracts away the complexity of working with the `git` command into a resource. -### Rewrite test -Let's rewrite the first test in our example file to make it more readable with a `git` resource as follows: +### Rewrite a Test + +Let's rewrite the first test in our example file to make it more readable by inventing a `git` resource that can simplify our test as follows: + ```ruby # The following branches should exist describe git(git_dir) do - its('branches') { should include 'master' } + its('branches') { should include 'main' } end ``` + Now let's run the profile. ::: code-tabs @tab Command + ```bash inspec exec git ``` + @tab Output + ```bash [2023-02-22T03:21:41+00:00] ERROR: Failed to load profile git: Failed to load source for controls/example.rb: undefined method `git' for # Profile: InSpec Profile (git) Version: 0.1.0 Target: local:// Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 0 successful, 0 failures, 0 skipped ``` + ::: We should get an error because the git method and resource are not defined yet. We should fix that. @@ -169,28 +165,33 @@ We should get an error because the git method and resource are not defined yet. ### Develop the git resource Let's start by creating a new file called `git.rb` in the `libraries` directory. If you do not already have a `libraries` directory, you can make one in the `git` InSpec profile directory. The content of the file should look like this: + ```ruby # encoding: utf-8 # copyright: 2019, The Authors class Git < Inspec.resource(1) name 'git' - end ``` +This is, technically, a valid resource! It was very easy to write, but it is not particularly useful for testing. Let's run our tests again to see why not. 
+ :::tip Setting Up a Resource Using InSpec Init Instead of just creating the `git.rb` file in the `libraries` directory, you can use InSpec to assist you in creating a resource. Run `inspec init resource ` and follow the prompts to create the foundation and see examples for a resource. ::: -Now run the profile again. +Run the profile again. ::: code-tabs @tab Command + ```bash inspec exec git ``` + @tab Output + ```bash [2023-02-22T03:25:57+00:00] ERROR: Failed to load profile git: Failed to load source for controls/example.rb: wrong number of arguments (given 1, expected 0) @@ -204,6 +205,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 0 successful, 0 failures, 0 skipped ``` + ::: This time we get another error letting us know that we have a resource that has been given the incorrect number of arguments. This means we have given an additional parameter to this resource that we have not yet accepted. @@ -211,6 +213,7 @@ This time we get another error letting us know that we have a resource that has Each resource will require an initialization method. For our git.rb file let's add that initialization method: + ```ruby # encoding: utf-8 # copyright: 2019, The Authors @@ -224,17 +227,21 @@ class Git < Inspec.resource(1) end ``` + This is saving the path we are passing in from the control into an instance method called `path`. -Now when we run the profile. 
+Now when we run the profile: ::: code-tabs @tab Command + ```bash inspec exec git ``` + @tab Output + ```bash Profile: InSpec Profile (git) Version: 0.1.0 @@ -247,7 +254,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` ✔ stdout is expected to match /testBranch/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` - ✔ stdout is expected to match /^\* master/ + ✔ stdout is expected to match /^\* main/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log -1 --pretty=format:'%h'` ✔ stdout is expected to match /edc207f/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log --skip=1 -1 --pretty=format:'%h'` @@ -255,11 +262,13 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 4 successful, 1 failure, 0 skipped ``` + ::: The test will run but we will get an error saying we do not have a `branches` method. Remember that the other 4 tests are still passing because they are not yet using the `git` resource, but are still relying on InSpec's `command` resource. Let's go back to our git.rb file to fix that by adding a `branches` method: + ```ruby # encoding: utf-8 # copyright: 2019, The Authors @@ -283,10 +292,13 @@ We have now defined the branches method. 
Let's see what the test output shows us ::: code-tabs @tab Command + ```bash inspec exec git ``` + @tab Output + ```bash Profile: InSpec Profile (git) Version: 0.1.0 @@ -294,12 +306,12 @@ Target: local:// Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce git - × branches is expected to include "master" - expected nil to include "master", but it does not respond to `include?` + × branches is expected to include "main" + expected nil to include "main", but it does not respond to `include?` Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` ✔ stdout is expected to match /testBranch/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git branch` - ✔ stdout is expected to match /^\* master/ + ✔ stdout is expected to match /^\* main/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log -1 --pretty=format:'%h'` ✔ stdout is expected to match /edc207f/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log --skip=1 -1 --pretty=format:'%h'` @@ -307,11 +319,13 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 4 successful, 1 failure, 0 skipped ``` + ::: -Now the error message says that the `branches` method is returning a null value when it's expecting an array or something that is able to accept the include method invoked on it. +Now the error message says that the `branches` method is returning a null value which is a problem because it actually needs to return something, like an array, that implements the predicate method `include?`. A predicate method is one that evaluates a condition, in this case whether a given branch is "included" in the set of branches, and returns true or false accordingly. 
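To make "predicate method" concrete, here is a tiny plain-Ruby sketch (standalone, not InSpec-specific) showing `Array#include?` answering a yes/no question, and why the same call fails on `nil`:

```ruby
branches = ['main', 'testBranch']

# include? is a predicate method: it returns true or false.
puts branches.include?('main')    # => true
puts branches.include?('develop') # => false

# nil does not define include?, which is exactly the failure InSpec reported
# when our resource's branches method returned nil instead of an array.
puts nil.respond_to?(:include?)   # => false
```

This is why the fix below is to make `branches` return an array rather than `nil`.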
+ +To resolve this problem, we can use the `inspec` helper method to invoke the built-in `command` resource to extract this data as shown below: -We can use the InSpec helper method which enables you to invoke any other inspec resource as seen below: ```ruby # encoding: utf-8 # copyright: 2019, The Authors @@ -329,27 +343,32 @@ class Git < Inspec.resource(1) end ``` -We have borrowed the built-in `command` resource to handle running Git's CLI commands. + +You might notice some similarities between this code and what we originally started with in our `example.rb` file; this is intentional! Resources are used to encapsulate complicated behaviors, such as the mechanics of dealing with niche `git` subcommands, and expose clean interfaces for use by control authors. Now we see that we get a passing test! Now let's adjust our test to also check for our second branch that we created earlier as well as check our current branch: + ```ruby # The following branches should exist describe git(git_dir) do - its('branches') { should include 'master' } + its('branches') { should include 'main' } its('branches') { should include 'testBranch' } - its('current_branch') { should cmp 'master' } + its('current_branch') { should cmp 'main' } end ``` ::: code-tabs @tab Command + ```bash inspec exec git ``` + @tab Output + ```bash Profile: InSpec Profile (git) Version: 0.1.0 Target: local:// Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce git - ✔ branches is expected to include "master" + ✔ branches is expected to include "main" ✔ branches is expected to include "testBranch" × current_branch undefined method `current_branch' for #<#:0x00000000053fd0b8> @@ -368,9 +387,11 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 6 successful, 1 failure, 0 skipped ``` + ::: Let's head over to the git.rb file to create the `current_branch` method we are invoking in the above test: + ```ruby # encoding: utf-8 # copyright: 2019, The Authors @@ -401,11 +422,13 @@ Now we 
can run the profile again. ::: code-tabs @tab Command + ```bash inspec exec git ``` @tab Output + ```bash Profile: InSpec Profile (git) Version: 0.1.0 @@ -413,9 +436,9 @@ Target: local:// Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce git - ✔ branches is expected to include "master" + ✔ branches is expected to include "main" ✔ branches is expected to include "testBranch" - ✔ current_branch is expected to cmp == "master" + ✔ current_branch is expected to cmp == "main" Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log -1 --pretty=format:'%h'` ✔ stdout is expected to match /edc207f/ Command: `git --git-dir /workspaces/saf-training-lab-environment/git_test/.git log --skip=1 -1 --pretty=format:'%h'` @@ -423,10 +446,60 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 7 successful, 0 failures, 0 skipped ``` + ::: All the tests should pass! ::: tip Exercise! As a solo exercise, try to create a method in the git.rb file to check what the last commit is. -::: \ No newline at end of file +::: + +::: important This is Test-Driven Development! +Did you notice the overall arc of how we wrote this resource? We started with a set of tests before we even wrote any resource code, so we _knew_ we would start out with a failing profile. + +However, that failing profile helped us define how we should build our resource. Since we knew what sort of tests we wanted to be able to run, we knew what functions we needed to write to support them in the `git` resource. Test-driven development is an excellent method of defining requirements for your code before you even start writing it! +::: + +## Run the InSpec shell with a custom resource + +Invoking the InSpec shell with `inspec shell` will give you access to all the core InSpec resources by default, but InSpec does not automatically know about your locally defined resources unless you point them out. 
If you're testing a local resource, use the `--depends` flag and pass in the profile directory that your resource lives in. + +::: code-tabs + +@tab Command +```sh +inspec shell --depends git +``` +@tab Output +```sh +Welcome to the interactive InSpec Shell +To find out how to use it, type: help + +You are currently running on: + + Name: ubuntu + Families: debian, linux, unix, os + Release: 20.04 + Arch: x86_64 + +inspec> git('/workspaces/saf-training-lab-environment/git_test/.git').current_branch +=> "main" +``` + +::: + +::: warning +Note that we are passing in the _profile_ directory to the `--depends` flag, and not the profile's `libraries` directory. In our example, it's +``` sh +inspec shell --depends git +``` +and not +``` sh +inspec shell --depends git/libraries +``` +::: + +If you edit the resource class file, you'll need to exit the shell and re-launch it for the updates to be available. + +From here, we can examine our custom resource in a sandbox in the same way that we do with core resources. \ No newline at end of file diff --git a/src/courses/advanced/06.md b/src/courses/advanced/06.md index 301f50093..34325b38a 100644 --- a/src/courses/advanced/06.md +++ b/src/courses/advanced/06.md @@ -8,10 +8,13 @@ headerDepth: 3 ## Custom Resource - Docker -Now let's try a more complicated example. Let's say we want to create a resource that can parse a `docker-compose` file. +Let's try a more complicated example by creating a resource that can parse a Docker Compose file. -### Create new profile and setup Docker files -First, we need a test target! Check out the `resources/docker-compose.yml` file in Codespaces for what we can test. It looks like this: +If you've ever deployed containerized applications before, you might be familiar with [Docker Compose](https://docs.docker.com/compose/), which is a container orchestration feature of the Docker container runtime. 
Docker Compose works by reading a YAML spec file called the Compose file that defines attributes about a set of containers we want to deploy and how they connect together. We don't need to know too much about how to run Docker Compose for this class, but let's say that we want to write an InSpec resource for testing that our Compose files match the configuration we expect. + +### Create a new profile and set up Docker files + +First, we need a test target. Check out the `resources/docker-compose.yml` file in Codespaces for what we can test. It looks like this: ```yaml version: '3' @@ -31,15 +34,18 @@ services: tty: true ``` -We will continue writing our controls to check against this docker file: +We will continue writing our controls to check against this Compose file. ::: code-tabs @tab Command + ```bash inspec init profile docker-workstations ``` + @tab Output + ```bash ─────────────────────────── InSpec Code Generator ─────────────────────────── Creating new profile at /workspaces/saf-training-lab-environment/docker-workstat • Creating file controls/example.rb • Creating file README.md ``` + ::: ### Develop controls to test/run profile @@ -63,16 +70,18 @@ describe yaml('file_name') do end ``` -We test early and often. We know that the test we wrote is not complete, but we can see if we are on the right track. Remember that the command line output can help guide your development! +We test early and often, in accordance with the test-driven development paradigm. We know that the test we wrote is not complete, but we can see if we are on the right track. Remember that the command line output can help guide your development! 
::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -84,9 +93,10 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 0 successful, 0 failures, 1 skipped ``` + ::: -We need to replace the `file_name` above with the location of the `docker-compose.yml` file. We also need to change the `setting` to grab the tag we want to retrieve. Finally we need to change `value` with the actual value as shown in the docker compose file. You can write multiple expectation statements in the describe block. +We need to replace the `file_name` above with the location of the `docker-compose.yml` file. We also need to change the `setting` to grab the tag we want to retrieve. Finally, we need to replace `value` with the actual value as shown in the Docker Compose file. You can write multiple expectation statements in the describe block. ```ruby describe yaml('/path/to/docker-compose.yml') do @@ -100,11 +110,13 @@ Now if we test this control using the following command we should see all the te ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -119,9 +131,12 @@ Test Summary: 2 successful, 0 failures, 0 skipped ``` ::: +Much like our `git` example, this series of tests works, but it could be made better. We essentially parsed the Compose file with a simple YAML file parser, which is fine for a one-off. However, if anybody else reads this code, it might not be clear what specific system component we are testing. Recall that we want InSpec tests to be extremely intuitive to read, even by people who did not write the code (and even by people who are not InSpec developers!). Furthermore, Compose files are very common! There's a high likelihood that you'd need to assess the contents of one again. 
Instead of writing a lot of repetitive code, we could create a resource specific to Compose files that exposes relevant attributes in an easy-to-access manner for reuse by our other controls - or even the broader security community if we choose to publish it publicly and/or get it merged into InSpec proper. + :::danger If you received an error above! - Concept Check If you saw this as your output: + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -142,10 +157,12 @@ describe yaml('/workspaces/saf-training-lab-environment/resources/docker-compose its(['services', 'workstation', 'volumes']) { should cmp '.:/root' } end ``` + ::: ### Rewrite test to utilize a new resource + +Going back to the control, we will write it using a resource that doesn't exist called `docker_compose_config` that is going to take a path as a parameter. :::details Test Driven Development Remember the idea of Test Driven Development (TDD), the red-green-clean cycle. This way of development is driven by the tests. In this way, you know when you have succeeded while developing something new! In other words, before writing a solution, first write the test (which will fail - red), so that you know exactly what the expectation should be and when you have achieved it. Then you can write the solution to make the test pass (green). Finally, clean up the solution to make it easy to read and efficient! @@ -155,6 +172,7 @@ Remember the idea of Test Driven Development (TDD), the red-green-clean cycle. 
T ::: code-tabs @tab Tests + ```ruby describe yaml('/workspaces/saf-training-lab-environment/resources/docker-compose.yml') do its(['services', 'workstation', 'image']) { should eq 'learnchef/inspec_workstation' } @@ -168,6 +186,7 @@ end ``` @tab Generic Tests + ```ruby describe yaml('/path/to/docker-compose.yml') do its(['services', 'workstation', 'image']) { should eq 'learnchef/inspec_workstation' } @@ -179,6 +198,7 @@ describe docker_compose_config('/path/to/docker-compose.yml') do its('services.workstation.volumes') { should cmp '.:/root' } end ``` + ::: Now we should see an error if we go back to the terminal and run the same command to execute a scan. We should get an error saying the `docker_compose_config` method does not yet exist. That's because we have not yet defined this resource. @@ -186,10 +206,13 @@ Now we should see an error if we go back to terminal and run the same command to ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` + @tab Output + ```bash [2023-02-22T18:37:03+00:00] ERROR: Failed to load profile docker-workstations: Failed to load source for controls/example.rb: undefined method `docker_compose_config' for # @@ -203,9 +226,11 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 0 successful, 0 failures, 0 skipped ``` + ::: ### Develop the Docker resource + In the `libraries` directory of the profile we will make a `docker_compose_config.rb` file; the content of the file should look like this: ```ruby @@ -225,11 +250,13 @@ Alternatively, you can use `inspec init resource ` to create ::: code-tabs @tab Command + ```bash inspec init resource docker_compose_config --overwrite ``` @tab Options + ```bash Enter Subdirectory under which to create files: ./docker-workstations Choose File layout, either 'resource-pack' or 'core': Resource Pack @@ -245,6 +272,7 @@ Creating new resource at /workspaces/saf-training-lab-environment/docker-worksta • Creating directory 
/workspaces/saf-training-lab-environment/docker-workstations/libraries • Creating file libraries/docker_compose_config.rb ``` + ::: Now when we save and run the profile again using: @@ -252,11 +280,13 @@ Now when we save and run the profile again using: ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash [2023-02-22T18:38:40+00:00] ERROR: Failed to load profile docker-workstations: Failed to load source for controls/example.rb: wrong number of arguments (given 1, expected 0) @@ -270,6 +300,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 0 successful, 0 failures, 0 skipped ``` + ::: We will get an error saying we gave it the wrong number of arguments: `given 1, expected 0`. This is because every Ruby class whose constructor takes a parameter must define an `initialize` method to accept that parameter. @@ -294,11 +325,13 @@ Now let's run the profile once more. ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -316,6 +349,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 2 successful, 2 failures, 0 skipped ``` + ::: This time the profile runs, but we get a message that the `docker_compose_config` resource does not have the `services` method. So let's define that method now: @@ -343,11 +377,13 @@ Start by just defining the `services` method. Then, let's run the profile once m ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -365,6 +401,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 2 successful, 2 failures, 0 skipped ``` + ::: Now we got a different failure that tells us a `nil` value was returned. So now we will go ahead and define the services method. We will use an already existing InSpec resource to parse the file at the given path. 
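The core of that `services` method is ordinary YAML parsing. As a rough stand-alone sketch in plain Ruby (this is not the actual resource code; the Compose file contents are inlined here purely for illustration):

```ruby
require 'yaml'

# Inlined stand-in for resources/docker-compose.yml, for illustration only.
compose_yaml = <<~COMPOSE
  version: '3'
  services:
    workstation:
      image: learnchef/inspec_workstation
      stdin_open: true
      tty: true
COMPOSE

# What a `services` method boils down to: parse the YAML document
# and return its top-level `services` key as a hash.
services = YAML.safe_load(compose_yaml)['services']
puts services['workstation']['image'] # learnchef/inspec_workstation
```

Inside an InSpec resource, the same parsing is delegated to the core `yaml` resource (through the `inspec` helper) rather than calling the YAML library directly.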
@@ -393,11 +430,13 @@ Now let's run the profile once more. ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -415,6 +454,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 2 successful, 2 failures, 0 skipped ``` + ::: You will notice that it parses correctly, but instead of our result we end up getting a hash. We need to convert the hash into an object that appears like other objects so that we may use our dot notation. So we will wrap our hash in a Ruby class called a `Hashie::Mash`. This gives us a quick way to convert a hash into a Ruby object with a number of methods attached to it. You will have to install the Hashie library by running `gem install hashie` and require it in the resource file to be used. It is written as follows: @@ -446,11 +486,13 @@ Let's run the profile again. ::: code-tabs @tab Command + ```bash inspec exec docker-workstations ``` @tab Output + ```bash Profile: InSpec Profile (docker-workstations) Version: 0.1.0 @@ -466,6 +508,7 @@ Target ID: 6dcb9e6f-5ede-5474-9521-595fadf5c7ce Test Summary: 4 successful, 0 failures, 0 skipped ``` + ::: Everything passed! diff --git a/src/courses/advanced/07.md b/src/courses/advanced/07.md index 4926cc37a..ce2e1a07f 100644 --- a/src/courses/advanced/07.md +++ b/src/courses/advanced/07.md @@ -1,75 +1,84 @@ --- order: 7 next: 08.md -title: 7. Exercise - Develop your own resources +title: 7. Exercise - Develop Your Own Resources author: Aaron Lippold headerDepth: 3 --- -Try writing your own resources and think about how you could implement them in a profile! +In this exercise, you will practice writing your own resources and consider how you could implement them in a profile. 
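Whichever suggestion you pick below, custom resources all share the same basic shape. As a starting-point sketch only (the `my_thing` name, the path parameter, and the `owner` property are placeholders, not a real resource):

```ruby
# libraries/my_thing.rb -- illustrative skeleton for a custom resource
class MyThing < Inspec.resource(1)
  name 'my_thing'
  desc 'One-line description of what this resource inspects'

  def initialize(path)
    @path = path
  end

  # Each property you want to test becomes a small method, usually
  # delegating to a core resource (file, command, yaml, etc.)
  # through the `inspec` helper.
  def owner
    inspec.file(@path).owner
  end
end
```

A control could then use it in a describe block just like a core resource, e.g. `describe my_thing('/some/path') do ... end`.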
-**Suggested Resources to start on (Simple):** - - Docker - - `id` retrieves container id - - `image` retrieves image name - - `repo` retrieves the repo - - `tag` retrieves the tag - - `ports` retrieves the ports - - `command` retrieves command - - Git - - `branches` checks if branch exists - - `current_branch` retrieves current branch - - `last_commit` retrieves last commit from log - - `git_log` retrieve log of all commits - - `tag` retrieve tag for repo +**Suggested Simple Resources to Start With:** -**Suggested Resources to start on (Medium):** - - File resource - - `owner` tests if the owner of the file matches the specified value. - - `group` tests if the group to which a file belongs matches the specified value. - - `size` tests if a file’s size matches, is greater than, or is less than the specified value. - - `contents` tests if contents in the file match the value specified in a regular expression. - - `path` retrieves path to file - - Directory resource - - `owner` tests if the owner of the file matches the specified value. - - `group` tests if the group to which a file belongs matches the specified value. - - `size` tests if a file’s size matches, is greater than, or is less than the specified value. - - `contents` tests if contents in the file match the value specified in a regular expression. 
- - `path` retrieves path to directory - - Users - - `exist` tests if the named user exists - - `gid` tests the group identifier - - `group` tests the group to which the user belongs - - `groups` tests two (or more) groups to which the user belongs - - `home` tests the home directory path for the user - - `maxdays` tests the maximum number of days between password changes - - `mindays` tests the minimum number of days between password changes - - `shell` tests the path to the default shell for the user - - `uid` tests the user identifier - - `warndays` tests the number of days a user is warned before a password must be changed - - etc host allow/deny - - `daemon` daemon returns a string containing the daemon that is allowed in the rule. - - `client_list` client_list returns a 2d string array where each entry contains the clients specified for the rule. - - `options` options returns a 2d string array where each entry contains any options specified for the rule. +- **Docker** + - `id`: Retrieves the container ID. + - `image`: Retrieves the image name. + - `repo`: Retrieves the repository name. + - `tag`: Retrieves the tag. + - `ports`: Retrieves the ports. + - `command`: Retrieves the command. -**Suggested Resources to start on (Hard):** - - etc shadow - - `users` A list of strings, representing the usernames matched by the filter - - `passwords` A list of strings, representing the encrypted password strings for entries matched by the where filter. Each string may not be an encrypted password, but rather a * or similar which indicates that direct logins are not allowed. - - `last_changes` A list of integers, indicating the number of days since Jan 1 1970 since the password for each matching entry was changed. - - `min_days` A list of integers reflecting the minimum number of days a password must exist, before it may be changed, for the users that matched the filter. 
- - `max_days` A list of integers reflecting the maximum number of days after which the password must be changed for each user matching the filter. - - `warn_days` A list of integers reflecting the number of days a user is warned about an expiring password for each user matching the filter. - - `inactive_days` A list of integers reflecting the number of days a user must be inactive before the user account is disabled for each user matching the filter. - - `expiry_dates` A list of integers reflecting the number of days since Jan 1 1970 that a user account has been disabled, for each user matching the filter. Value is nil if the account has not expired. - - `count` The count property tests the number of records that the filter matched. - - etc fstab - - `device_name` is the name associated with the device. - - `mount_point` is the directory at which the file system is configured to be mounted. - - `file_system_type` is the type of file system of the device or partition. - - `mount_options` is the options for the device or partition. - - `dump_options` is a number used by dump to decide if a file system should be backed up. - - `file_system_options` is a number that specifies the order the file system should be checked. - - Tomcat server conf reader - - `parse_conf` parse the conf file - - `fetch_connectors` retrieves keys `port`, `protocol`, `timeout`, `redirect`, `sslprotocol`, `scheme`, `sslenable`, `clientauth`, `secure` +- **Git** + - `branches`: Checks if a branch exists. + - `current_branch`: Retrieves the current branch. + - `last_commit`: Retrieves the last commit from the log. + - `git_log`: Retrieves the log of all commits. + - `tag`: Retrieves the tag for the repository. + +**Suggested Medium Complexity Resources to Start With:** + +- **File Resource** + - `owner`: Tests if the owner of the file matches the specified value. + - `group`: Tests if the group to which a file belongs matches the specified value. 
+ - `size`: Tests if a file’s size matches, is greater than, or is less than the specified value. + - `contents`: Tests if contents in the file match the value specified in a regular expression. + - `path`: Retrieves the path to the file. + +- **Directory Resource** + - `owner`: Tests if the owner of the directory matches the specified value. + - `group`: Tests if the group to which a directory belongs matches the specified value. + - `size`: Tests if a directory’s size matches, is greater than, or is less than the specified value. + - `contents`: Tests if contents in the directory match the value specified in a regular expression. + - `path`: Retrieves the path to the directory. + +- **Users** + - `exist`: Tests if the named user exists. + - `gid`: Tests the group identifier. + - `group`: Tests the group to which the user belongs. + - `groups`: Tests if the user belongs to two (or more) groups. + - `home`: Tests the home directory path for the user. + - `maxdays`: Tests the maximum number of days between password changes. + - `mindays`: Tests the minimum number of days between password changes. + - `shell`: Tests the path to the default shell for the user. + - `uid`: Tests the user identifier. + - `warndays`: Tests the number of days a user is warned before a password must be changed. + +- **etc/hosts.allow and etc/hosts.deny** + - `daemon`: Returns a string containing the daemon that is allowed in the rule. + - `client_list`: Returns a 2D string array where each entry contains the clients specified for the rule. + - `options`: Returns a 2D string array where each entry contains any options specified for the rule. + +**Suggested Hard Resources to Start With:** + +- **etc/shadow** + - `users`: A list of strings representing the usernames matched by the filter. + - `passwords`: A list of strings representing the encrypted password strings for entries matched by the filter. 
Each string may not be an encrypted password but rather a `*` or similar, indicating that direct logins are not allowed. + - `last_changes`: A list of integers indicating the number of days since Jan 1, 1970, since the password for each matching entry was changed. + - `min_days`: A list of integers reflecting the minimum number of days a password must exist before it may be changed for the users that matched the filter. + - `max_days`: A list of integers reflecting the maximum number of days after which the password must be changed for each user matching the filter. + - `warn_days`: A list of integers reflecting the number of days a user is warned about an expiring password for each user matching the filter. + - `inactive_days`: A list of integers reflecting the number of days a user must be inactive before the user account is disabled for each user matching the filter. + - `expiry_dates`: A list of integers reflecting the number of days since Jan 1, 1970, that a user account has been disabled for each user matching the filter. The value is `nil` if the account has not expired. + - `count`: Tests the number of records that the filter matched. + +- **etc/fstab** + - `device_name`: The name associated with the device. + - `mount_point`: The directory at which the file system is configured to be mounted. + - `file_system_type`: The type of file system of the device or partition. + - `mount_options`: The options for the device or partition. + - `dump_options`: A number used by dump to decide if a file system should be backed up. + - `file_system_options`: A number that specifies the order the file system should be checked. + +- **Tomcat Server Configuration Reader** + - `parse_conf`: Parses the configuration file. + - `fetch_connectors`: Retrieves keys such as `port`, `protocol`, `timeout`, `redirect`, `sslprotocol`, `scheme`, `sslenable`, `clientauth`, and `secure`. 
diff --git a/src/courses/advanced/08.md b/src/courses/advanced/08.md index d31631f23..2ada44084 100644 --- a/src/courses/advanced/08.md +++ b/src/courses/advanced/08.md @@ -12,7 +12,7 @@ headerDepth: 3 Now that we have a solid grasp on InSpec, let's discuss the bigger picture -- how we can use the validation content we wrote inside a real test case pipeline. -If you have taken the [SAF User](../user/README.md) class, you will be familiar with many of the activities we will be doing as part of the sample pipeline in the next few sections, including using Ansible to harden a test image, validating it with InSpec, and using the SAF CLI to assess our results. We will be bundling all those activities together into a pipeline workflow file so that we can automate them. +If you have taken the [SAF User](../user/README.md) class, you will be familiar with many of the activities we will be doing as part of the sample pipeline in the next few sections. These activities include using Ansible to harden a test image, validating it with InSpec, and using the SAF CLI to assess our results. We will bundle all those activities together into a pipeline workflow file so that we can automate them. ### Background @@ -20,19 +20,18 @@ Software developers create pipelines for the same reason that factory designers Pipelines also enable several paradigms in modern DevSecOps development, including continuous integration and continuous delivery (CD). -**Continuous Integration (CI)** is the practice of requiring all changes to a codebase to pass a test suite before they are committed. CI is implemented on a codebase to make sure that any time a bug is introduced to a codebase, it is caught and corrected as soon as someone tries to commit it, instead of months or years later in operations when it is much more difficult to fix. +- **Continuous Integration (CI)** is the practice of requiring all changes to a codebase to pass a test suite before they are committed. 
CI ensures that any time a bug is introduced to a codebase, it is caught and corrected as soon as someone tries to commit it, instead of months or years later in operations when it is much more difficult to fix. +- **Continuous Delivery (CD)** is the practice of automatically delivering software (such as by pushing code to live deployment) once it passes a test suite. This is a core practice of DevSecOps -- code should be developed incrementally, and small units of functionality should be delivered as soon as they are complete and pass all tests. -**Continuous Delivery** is the practice of automatically delivering software (such as, for example, by pushing code to live deployment) once it passes a test suite. This is a core practice of DevSecOps -- code should be developed incrementally and small units of functionality should be delivered as soon as they are complete and pass all tests. - -A fully mature DevSecOps pipeline will implement both strategies. Note that *both CI and CD both presuppose that you have a high-quality, easy to use test suite available*. We will create our demo pipeline using an InSpec profile as our test suite. +A fully mature DevSecOps pipeline will implement both strategies. Note that *both CI and CD presuppose that you have a high-quality, easy-to-use test suite available*. We will create our demo pipeline using an InSpec profile as our test suite. ## Pipeline Orchestrators -We will be building our sample pipeline using [GitHub Actions](https://docs.github.com/en/actions), the pipeline orchestration tool that is built into GitHub. We are using this feature because it is free to use unless we exceed usage limits, and because we can write up a pipeline workflow file right from our GitHub Codespaces lab environment. +We will build our sample pipeline using [GitHub Actions](https://docs.github.com/en/actions), the pipeline orchestration tool built into GitHub. 
We are using this feature because it is free to use unless we exceed usage limits, and because we can write up a pipeline workflow file right from our GitHub Codespaces lab environment. While we are using GitHub Actions as the simplest option for this class, there are many other pipeline orchestration tools. Common tools include: -- GitLab's GitLab [CI/CD](https://docs.gitlab.com/ee/ci/) +- GitLab's [CI/CD](https://docs.gitlab.com/ee/ci/) - [DroneCI](https://www.drone.io/) - Atlassian's [BitBucket Pipelines](https://bitbucket.org/product/features/pipelines) - [Jenkins](https://www.jenkins.io/) @@ -40,37 +39,50 @@ While we are using GitHub Actions as the simplest option for this class, there a ::: note Have you used pipeline orchestration software other than that mentioned here? ::: -Most of the general concepts discusses in this portion of the class will be covered by any pipeline orchestrator tool, though they wil likely have different terminology for individual features. +Most of the general concepts discussed in this portion of the class will be covered by any pipeline orchestrator tool, though they will likely have different terminology for individual features. ## Our Use Case Let's learn how to build pipelines by taking on the role of a developer who needs to create a pipeline for a hardened NGINX container image. We can borrow the InSpec profile we've already written for our container to make sure that any time we update the container image, we do not accidentally break any security controls. 
We need to: + - Deploy a test NGINX container image - Harden the container image - Run a validation scan (our InSpec profile) against the test system - Verify that the hardened image is, in fact, hardened to our satisfaction using the validation results from the test system -Real-world pipelines are often used this way in a *gold image pipeline,* which run on a defined frequency to continuously deliver a secure, updated machine image that can be used as a base for further applications. In this use case, we would take the hardened and validated image we produced as part of the pipeline and save it as our new gold image. This way, developers can grab a "known-good" image to host their applications without having to configure or keep it up to date themselves. +Real-world pipelines are often used this way in a *gold image pipeline,* which runs on a defined frequency to continuously deliver a secure, updated machine image that can be used as a base for further applications. In this use case, we would take the hardened and validated image we produced as part of the pipeline and save it as our new gold image. This way, developers can grab a "known-good" image to host their applications without having to configure or keep it up to date themselves. ## Pipeline Tasks Pipelines are conceptually broken down into a series of individual tasks. 
The tasks we need to complete in our pipeline include: -- Prep - configure our runner for later work (in our case, make sure InSpec is installed and ready to go) -- Lint - make sure code passes style requirements (in our case, `inspec check .`) -- Deploy the test suite (in our case, an NGINX container we want to use as a test system) -- Harden the test suite (we will use Ansible like we do in the [SAF User class](../user/10.md)) -- Validate - check configuration (in our case, run InSpec against our test system and generate a report) -- Verify - confirm if the validation run passed our expectations (in our case, use the SAF CLI to check that the Validation report met our threshold) -- Do something with results - e.g. publish our image if it met our expectations and passed the tests -GitHub Actions organizes the tasks inside a pipeline into **jobs**. A given pipeline can trigger multiple jobs, but for our sample gold image pipeline we really only need one for storing all of our tasks. +- **Prep** - configure our runner for later work (in our case, make sure InSpec is installed and ready to go) +- **Lint** - make sure code passes style requirements (in our case, `inspec check .`) +- **Deploy** the test suite (in our case, an NGINX container we want to use as a test system) +- **Harden** the test suite (we will use Ansible like we do in the [SAF User class](../user/10.md)) +- **Validate** - check configuration (in our case, run InSpec against our test system and generate a report) +- **Verify** - confirm if the validation run passed our expectations (in our case, use the SAF CLI to check that the validation report met our threshold) +- **Do something with results** - e.g., publish our image if it met our expectations and passed the tests + +GitHub Actions organizes the tasks inside a pipeline into **jobs**. A given pipeline can trigger multiple jobs, but for our sample gold image pipeline, we really only need one for storing all of our tasks. 
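In GitHub Actions syntax, that single job with its ordered tasks ends up looking roughly like the sketch below. The step names and commands here are illustrative placeholders that mirror the task list above, not the final workflow file (which we build up in the next lesson):

```yaml
# Sketch only -- step names and commands are placeholders
jobs:
  gold-image:
    name: Gold Image NGINX
    runs-on: ubuntu-latest
    steps:
      # LINT - make sure the profile is well-formed
      - name: Check the InSpec profile
        run: inspec check my_nginx
      # DEPLOY - stand up the test target
      - name: Start the test NGINX container
        run: docker run -d --name nginx nginx:latest
      # VALIDATE - scan the target and save a report
      - name: Run InSpec against the container
        run: inspec exec my_nginx -t docker://nginx --reporter cli json:results.json
      # VERIFY - check the report against our threshold
      - name: Assess results with the SAF CLI
        run: saf validate threshold -i results.json -F threshold.yml
```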
### Runners Pipeline orchestrators all have some system for selecting a **runner** node that will be assigned to handle the tasks we define for the pipeline. Runners are any system -- containers or full virtual machines in a cloud environment -- that handle the actual task execution for a pipeline. -In the case of GitHub actions, when we trigger a pipeline, GitHub by default sends the jobs to its cloud environment to hosted runner nodes. The operating system of the runner for a particular job can be specified in the workflow file. See the [docs](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners) for details. +In the case of GitHub Actions, when we trigger a pipeline, GitHub by default sends the jobs to runner nodes hosted in its cloud environment. The operating system of the runner for a particular job can be specified in the workflow file. See the [docs](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners) for details. + +In the next sections, we will create a GitHub Action workflow to handle these jobs for us. We will commit the workflow file to our repository and watch it work! + +## Summary + +In this lesson, we covered: + +- The importance of CI/CD pipelines in DevSecOps. +- The role of pipeline orchestrators like GitHub Actions. +- The steps involved in creating a pipeline for a hardened NGINX container image. +- The tasks involved in a pipeline and how they are organized into jobs and runners. -In the next sections we will create a GitHub Action workflow to handle these jobs for us. We will commit the workflow file to our repository and watch it work! +By the end of this lesson, you should have a good understanding of how to create and manage CI/CD pipelines using GitHub Actions. 
diff --git a/src/courses/advanced/09.md b/src/courses/advanced/09.md index 1e4fe0e2e..da536c332 100644 --- a/src/courses/advanced/09.md +++ b/src/courses/advanced/09.md @@ -10,7 +10,7 @@ headerDepth: 3 Let's create a GitHub Action workflow to define our pipeline. -### The Workflow file +### The Workflow File Pipeline orchestration tools are usually configured in a predefined workflow file, which defines a set of tasks and the order they should run in. Workflow files live in the `.github` folder for GitHub Actions (the equivalent is the `gitlab-ci` file for GitLab CI, for example). @@ -31,17 +31,19 @@ Neither command has output, but you should see a new file if you examine your `. ```sh tree .github ``` + @tab Expected Output - .github folder structure + ``` .github └── workflows └── pipeline.yml ``` + ::: Open that file up for editing. - ### Workflow File - Complete Example For reference, this is the complete workflow file we will end up with at the end of the class: @@ -187,6 +189,7 @@ First, we'll define a `job`, the logical group for our tasks. In our `pipeline.y ::: code-tabs#shell @tab Adding a Job + ``` yaml jobs: gold-image: @@ -198,7 +201,9 @@ jobs: # path to our profile PROFILE: my_nginx ``` + @tab `pipeline.yml` after adding a job + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -218,14 +223,15 @@ jobs: # path to our profile PROFILE: my_nginx ``` + ::: - `gold-image` is an arbitrary name we gave this job. It would be more useful if we were running more than one. - `name` is a simple title for this job. - `runs-on` declares what operating system we want our runner node to be. We picked Ubuntu (and we suggest you do too, to make sure the rest of the workflow commands work correctly). - `env` declares environment variables for use by any step of this job.
We will go ahead and set a few variables for running InSpec later on: - - `CHEF_LICENSE` will automatically accept the license prompt when you run InSpec the first time so that we don' hang waiting for input! - - `PROFILE` is set to the path of the InSpec profile we will use to test. This will make it easier to refer to the profile multiple times and still make it easy to swap out. + - `CHEF_LICENSE` will automatically accept the license prompt when you run InSpec the first time so that we don't hang waiting for input! + - `PROFILE` is set to the path of the InSpec profile we will use to test. This will make it easier to refer to the profile multiple times and still make it easy to swap out. ### The Next Step @@ -233,13 +239,16 @@ Now that we have our job metadata in place, let's add an actual task for the run ::: code-tabs#shell @tab Adding a Step + ``` yaml steps: # updating all dependencies is always a good start - name: PREP - Update runner run: sudo apt-get update ``` + @tab `pipeline.yml` after adding a step + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -263,6 +272,7 @@ jobs: - name: PREP - Update runner run: sudo apt-get update ``` + ::: ::: warning Again, be very careful about your whitespacing when filling out this structure! @@ -271,12 +281,15 @@ jobs: We now have a valid workflow file that we can run. We can trigger this pipeline to run by simply committing what we have written so far to our repository -- because of the event trigger we set, GitHub will catch the commit event and trigger our pipeline for us. Let's do this now. At your terminal: ::: code-tabs#shell @tab Committing And Pushing Code + ``` sh git add .github git commit -s -m "adding the github workflow file" git push origin main ``` + @tab Output of Pushing Code + ``` sh $> git add . $> git commit -s -m "adding the github workflow file" @@ -318,9 +331,9 @@ Congratulations, you've run a pipeline! 
Now we just need to make it do something ::: details How Often Should I Push Code? Won't Each Push Trigger a Pipeline Run? It's up to you. -Some orchestration tools let you run pipelines locally, and in a real repo, you'd probably want to do this on a branch other than the `main` one to keep it clean. But in practice it has been the authors' experience that everyone winds up simply creating dozens of commits to the repo to trigger the pipeline and watch for the next spot where it breaks. There's nothing wrong with doing this. +Some orchestration tools let you run pipelines locally, and in a real repo, you'd probably want to do this on a branch other than the `main` one to keep it clean. But in practice it has been the authors' experience that everyone winds up simply creating dozens of commits to the repo to trigger the pipeline and watch for the next spot where it breaks. There's nothing wrong with doing this. For example, consider how many failed pipelines the author had while designing the test pipeline for this class, and how many of them involve fixing simple typos. . . 
![No Big Deal!](../../assets/img/many_commits_are_ok.png) -::: \ No newline at end of file +::: diff --git a/src/courses/advanced/10.md b/src/courses/advanced/10.md index feb4baf25..15f258db5 100644 --- a/src/courses/advanced/10.md +++ b/src/courses/advanced/10.md @@ -16,6 +16,7 @@ First, we need to make sure that the node that runs our pipeline will have acces ::: code-tabs#shell @tab Adding More Steps + ``` yaml - name: PREP - Install InSpec executable run: curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P inspec -v 5 @@ -27,7 +28,9 @@ First, we need to make sure that the node that runs our pipeline will have acces - name: PREP - Check out this repository uses: actions/checkout@v3 ``` + @tab `pipeline.yml` after adding more steps + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -61,6 +64,7 @@ jobs: - name: PREP - Check out this repository uses: actions/checkout@v3 ``` + ::: The first new step installs the InSpec executable using the install instructions for Ubuntu as given [here](https://docs.chef.io/inspec/install/#cli-1). Remember that GitHub gives us a brand-new runner node every time we execute the pipeline; if we don't install it and it isn't on the pre-installed software list, it won't be available! @@ -77,19 +81,22 @@ This Action in particular is one of the most common -- [`checkout`](https://gith Most CI pipelines will also include a lint step, where the code is statically tested to make sure that it does not contain errors that we can spot before we even execute it, and to make sure it is conforming to a project style guide. For our purposes, it's a good idea to run the `inspec check` command to ensure that InSpec can recognize our tests as a real profile. -::: Note We can run InSpec inside this runner now because we installed it in a prior step! +::: note We can run InSpec inside this runner now because we installed it in a prior step! 
::: Let's add the lint step: ::: code-tabs#shell @tab Adding Lint Step + ``` yaml # double-check that we don't have any serious issues in our profile code - name: LINT - Run InSpec Check run: inspec check $PROFILE ``` + @tab `pipeline.yml` after adding lint step + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -127,6 +134,7 @@ jobs: - name: LINT - Run InSpec Check run: inspec check $PROFILE ``` + ::: ### Deploy Test Container @@ -141,6 +149,7 @@ We'll also need to make sure that our test target has Python installed, since th ::: code-tabs#shell @tab Adding Deploy Steps + ``` yaml # launch a container as the test target - name: DEPLOY - Run a Docker container from nginx @@ -152,7 +161,9 @@ We'll also need to make sure that our test target has Python installed, since th docker exec nginx apt-get update -y docker exec nginx apt-get install -y python3 ``` + @tab `pipeline.yml` after adding deploy steps + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -200,6 +211,7 @@ jobs: docker exec nginx apt-get update -y docker exec nginx apt-get install -y python3 ``` + ::: ::: tip Multiline `run` commands @@ -216,6 +228,7 @@ Let's add the Hardening steps now. ::: code-tabs#shell @tab Adding Harden Steps + ``` yaml # fetch the hardening role and requirements - name: HARDEN - Fetch Ansible role @@ -230,7 +243,9 @@ Let's add the Hardening steps now. 
- name: HARDEN - Run Ansible hardening run: ansible-playbook --inventory=nginx, --connection=docker ansible-nginx-stigready-hardening/hardening-playbook.yml ``` + @tab `pipeline.yml` after adding hardening steps + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -291,6 +306,7 @@ jobs: - name: HARDEN - Run Ansible hardening run: ansible-playbook --inventory=nginx, --connection=docker ansible-nginx-stigready-hardening/hardening-playbook.yml ``` + ::: ### Validation @@ -301,6 +317,7 @@ Let's run InSpec: ::: code-tabs#shell @tab Adding Validate Steps + ``` yaml - name: VALIDATE - Run InSpec # we dont want to stop if our InSpec run finds failures, we want to continue and record the result @@ -322,7 +339,9 @@ Let's run InSpec: with: path: results/pipeline_run_attested.json ``` + @tab `pipeline.yml` after adding validate steps + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -403,9 +422,10 @@ jobs: with: path: results/pipeline_run_attested.json ``` + ::: -You may notice that the step that runs InSpec sets an attribute called `continue-on-error` to `true`. We'll discuss why we do that in the next section. +You may notice that the step that runs InSpec sets an attribute called `continue-on-error` to `true`. We'll discuss why we do that in the next section. You may also notice that we are adding in an attestation straight to our validation code right after we generate it! If you are unfamiliar with the attestation process, take a look at the [SAF User Class's section on Attestations](../user/12.md). To make a long story short, we are adding in a piece of data that confirms that a control that cannot be checked automatically by InSpec has in fact been validated. @@ -417,7 +437,7 @@ Remember that we used the `checkout` action earlier, so the pipeline is currentl In our `VALIDATE - Apply An Attestation` step, we invoke the SAF CLI. -The [SAF CLI](https://saf-cli.mitre.org/) is one the tool that the SAF supports to help automate security validation. 
It is our "kitchen-sink" utility for pipelines. If you took the [SAF User Class](../user/README.md), you are already familiar with the SAF CLI's [attestation](../user/12.md) function. +The [SAF CLI](https://saf-cli.mitre.org/) is a key tool amongst the many that the SAF utilizes to help automate security processes. It is our custom-made, "kitchen-sink" utility - and it sees a lot of use in CI/CD pipelines. If you took the [SAF User Class](../user/README.md), you are already familiar with the SAF CLI's [attestation](../user/12.md) function. This tool was installed alongside InSpec when you ran the `./build-lab.sh` script in your codespace. Note that we also installed it as a step in the pipeline. For general installation instructions, see the first link in the above paragraph. @@ -432,11 +452,13 @@ In addition to the documentation site, you can view the SAF CLI's capabilities b ::: code-tabs @tab Command + ```sh saf help ``` @tab Output + ```sh The MITRE Security Automation Framework (SAF) Command Line Interface (CLI) brings together applications, techniques, libraries, and tools developed by MITRE and the security community to streamline security automation for systems and DevOps pipelines @@ -470,9 +492,11 @@ COMMANDS summary Get a quick compliance overview of an HDF file version ``` + ::: You can get more information on a specific topic by running: + ```sh saf [TOPIC] -h ``` @@ -502,7 +526,7 @@ Heimdall Server includes some built-in [API endpoints](https://saf.mitre.org/doc We're going to leverage Heimdall Demo and its API to build our pipeline. ::: tip Heimdall Demo -The MITRE SAF team hosts the Heimdall Demo server at https://heimdall-demo.mitre.org. This is a fully-featured deployment of Heimdall that we use as the data aggregator and dashboard for any pipelines that we build as part of our open-source work. +The MITRE SAF team hosts the Heimdall Demo server at <https://heimdall-demo.mitre.org>.
This is a fully-featured deployment of Heimdall that we use as the data aggregator and dashboard for any pipelines that we build as part of our open-source work. We deploy Heimdall Demo so that the community can play around with a full deployment. ::: @@ -550,12 +574,13 @@ Scroll down until you see a button for adding a new repository secret. Click it, ![Adding Secrets](../../assets/img/adding_secrets.png) -Note that once you add this secret, you can _overwrite_ the value, or delete it entirely, but you can't _read_ it. +Note that once you add this secret, you can *overwrite* the value, or delete it entirely, but you can't *read* it. However, we can now reference the secret name -- HEIMDALL_API_KEY -- inside our pipeline code. Let's add a step for this now. ::: code-tabs#shell @tab Adding Validate Steps + ``` yaml # drop off the data with our dashboard - name: VALIDATE - Upload to Heimdall @@ -564,7 +589,9 @@ However, we can now reference the secret name -- HEIMDALL_API_KEY -- inside our curl -# -s -F data=@results/pipeline_run_attested.json -F "filename=${{ github.actor }}-pipeline-demo-${{ github.sha }}.json" -F "public=true" -F "evaluationTags=${{ github.repository }},${{ github.workflow }}" -H "Authorization: Api-Key ${{ secrets.HEIMDALL_API_KEY }}" "https://heimdall-demo.mitre.org/evaluations" ``` + @tab `pipeline.yml` after adding Heimdall push + ``` yaml name: Demo Security Validation Gold Image Pipeline @@ -651,4 +678,5 @@ jobs: run: | curl -# -s -F data=@results/pipeline_run_attested.json -F "filename=${{ github.actor }}-pipeline-demo-${{ github.sha }}.json" -F "public=true" -F "evaluationTags=${{ github.repository }},${{ github.workflow }}" -H "Authorization: Api-Key ${{ secrets.HEIMDALL_API_KEY }}" "https://heimdall-demo.mitre.org/evaluations" ``` -::: \ No newline at end of file + +::: diff --git a/src/courses/advanced/11.md b/src/courses/advanced/11.md index fae50573c..ed1204393 100644 --- a/src/courses/advanced/11.md +++ b/src/courses/advanced/11.md 
@@ -8,39 +8,42 @@ headerDepth: 3 ## Verification -At this point we have a much more mature workflow file. We have one more activity we need to do -- verification, or checking that the output of our validation run met our expectations. +At this point, we have a much more mature workflow file, but we still have one more activity left to do -- verification, or checking that the output of our validation run met our expectations. Note that "meeting our expectations" does *not* automatically mean that there are no failing tests. In many real-world use cases, security tests fail, but the software is still considered worth the risk to deploy because of mitigations for that risk, or perhaps the requirement is inapplicable due to the details of the deployment. With that said, we still want to run our tests to make sure we are continually collecting data; we just don't want our pipeline to halt if it finds a test that we were always expecting to fail. -By default, the InSpec executable returns a code 100 if *any* tests in a profile run fail. Pipeline orchestrators, like most software, interpret any non-zero return code as a serious failure, and will halt the pipeline run accordingly unless we explicitly tell it to ignore errors. This is why the "VALIDATE - Run InSpec" step has the `continue-on-error: true ` attribute specified. +By default, the InSpec executable returns a code 100 if *any* tests in a profile run fail. Pipeline orchestrators, like most software, interpret any non-zero return code as a serious failure and will halt the pipeline run accordingly unless we explicitly tell it to ignore errors. This is why the "VALIDATE - Run InSpec" step has the `continue-on-error: true` attribute specified. Our goal is to complete our InSpec scan, collect the result as a report file, and then parse that file to determine if we met our own *threshold* of security. We can do this with the SAF CLI. 
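The exit-code behavior described above can be sketched in plain shell; `run_scan` below is a made-up stand-in for the real InSpec invocation:

```sh
# `run_scan` is a hypothetical stand-in for `inspec exec`, which exits
# with code 100 when any control in the profile fails
run_scan() { return 100; }

if run_scan; then
  echo "scan clean"
else
  code=$?
  # mirror what `continue-on-error: true` does for the pipeline step:
  # record the non-zero code and keep going instead of halting
  echo "scan exited with $code; deferring pass/fail to the threshold check"
fi
```

The script records the code 100 and exits normally, which is exactly the behavior we want from the validate step: capture the result, and leave the pass/fail decision to the threshold check.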
### Updating the Workflow File -Let's add two steps to our pipeline to use the SAF CLI to understand our InSpec scan results before we verify them against a threshold. +Let's add two steps to our pipeline to use the SAF CLI to understand our InSpec scan results before we verify them against a threshold. ::: code-tabs#shell @tab Adding Verify Steps + ``` yaml - name: VERIFY - Display our results summary run: | saf view summary -i results/pipeline_run_attested.json # check if the pipeline passes our defined threshold -- name: VERIFY - Ensure the scan meets our results threshold +- name: VERIFY - Ensure the scan meets our results threshold run: | saf validate threshold -i results/pipeline_run_attested.json -F threshold.yml ``` + @tab `pipeline.yml` after adding verify steps + ``` yaml name: Demo Security Validation Gold Image Pipeline # define the triggers for this action -on: +on: push: # trigger this action on any push to main branch - branches: [ main, pipeline ] + branches: [ main, pipeline ] jobs: gold-image: @@ -48,26 +51,26 @@ jobs: runs-on: ubuntu-20.04 env: # so that we can use InSpec without manually accepting the license - CHEF_LICENSE: accept - # path to our profile - PROFILE: my_nginx + CHEF_LICENSE: accept + # path to our profile + PROFILE: my_nginx steps: # updating all dependencies is always a good start - - name: PREP - Update runner + - name: PREP - Update runner run: sudo apt-get update - - name: PREP - Install InSpec executable + - name: PREP - Install InSpec executable run: curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P inspec -v 5 - name: PREP - Install SAF CLI run: npm install -g @mitre/saf # checkout the profile, because that's where our profile is! 
- - name: PREP - Check out this repository + - name: PREP - Check out this repository uses: actions/checkout@v3 # double-check that we don't have any serious issues in our profile code - - name: LINT - Run InSpec Check + - name: LINT - Run InSpec Check run: inspec check $PROFILE # launch a container as the test target @@ -94,8 +97,8 @@ jobs: run: ansible-playbook --inventory=nginx, --connection=docker ansible-nginx-stigready-hardening/hardening-playbook.yml - name: VALIDATE - Run InSpec - # we dont want to stop if our InSpec run finds failures, we want to continue and record the result - continue-on-error: true + # we don't want to stop if our InSpec run finds failures, we want to continue and record the result + continue-on-error: true run: | inspec exec $PROFILE \ --input-file=$PROFILE/inputs-linux.yml \ @@ -108,7 +111,7 @@ jobs: saf attest apply -i results/pipeline_run.json attestation.json -o results/pipeline_run_attested.json # save our results to the pipeline artifacts, even if the InSpec run found failing tests - - name: VALIDATE - Save Test Result JSON + - name: VALIDATE - Save Test Result JSON uses: actions/upload-artifact@v3 with: path: results/pipeline_run_attested.json @@ -119,20 +122,22 @@ jobs: run: | curl -# -s -F data=@results/pipeline_run_attested.json -F "filename=${{ github.actor }}-pipeline-demo-${{ github.sha }}.json" -F "public=true" -F "evaluationTags=${{ github.repository }},${{ github.workflow }}" -H "Authorization: Api-Key ${{ secrets.HEIMDALL_API_KEY }}" "https://heimdall-demo.mitre.org/evaluations" - - name: VERIFY - Display our results summary + - name: VERIFY - Display our results summary run: | saf view summary -i results/pipeline_run_attested.json - + # check if the pipeline passes our defined threshold - - name: VERIFY - Ensure the scan meets our results threshold + - name: VERIFY - Ensure the scan meets our results threshold run: | saf validate threshold -i results/pipeline_run_attested.json -F threshold.yml ``` + ::: A few things 
to note here: -- We added the `summary` step because it will print us a concise summary inside the pipeline job view itself. That command takes one file argument; the results file we want to summarize. -- The `validate threshold` command, however, needs two files -- one is our report file as usual, and the other is a **threshold file**. + +- We added the `summary` step because it will print us a concise summary inside the pipeline job view itself. That command takes one file argument: the results file we want to summarize. +- The `validate threshold` command needs *two files*: one is our report file as usual, and the other is a **threshold file**. #### Threshold Files @@ -155,9 +160,9 @@ failed: This file specifies that we require a *minimum of 80% of the tests to pass.* We also specify that *at least one of them should pass, and that at maximum two of them can fail.* ::: info Threshold Files Options -To make more specific or detailed thresholds, check out [this documentation on generating theshold files](https://github.com/mitre/saf/wiki/Validation-with-Thresholds). +To make more specific or detailed thresholds, check out [this documentation on generating threshold files](https://github.com/mitre/saf/wiki/Validation-with-Thresholds). -*NOTE: You can name the threshold file something else or put it in a different location. We specify the name and location only for convenience.* +*NOTE: You can name the threshold file something else or put it in a different location. We specify the name and location only for convenience.* ::: This is a sample pipeline, so we are not too worried about being very stringent. For now, let's settle for running the pipeline with no *errors* (that is, as long as each test runs, we do not care if it passed or failed, but a source code error should still fail the pipeline). @@ -177,12 +182,15 @@ And with that, we have a complete pipeline file. 
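Putting the limits described above into one place, the threshold file has this general shape (key names follow the SAF CLI threshold format; treat the exact values as illustrative and see the linked documentation for the full set of options):

``` yaml
# illustrative threshold.yml: at least 80% overall compliance,
# at least one passing test, and at most two failing tests
compliance:
  min: 80
passed:
  total:
    min: 1
failed:
  total:
    max: 2
```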
Let's commit our changes and se ::: code-tabs#shell @tab Committing And Pushing Code + ``` sh git add .github git commit -s -m "finishing the pipeline" git push origin main ``` + @tab Output of Pushing Code + ``` sh $> git add . $> git commit -s -m "finishing the pipeline" @@ -216,6 +224,7 @@ Note in the SAF CLI Summary step, we get a simple YAML output summary of the InS ![The Summary](../../assets/img/summary_data.png) We see six critical-severity tests (remember how we set them all to `impact 1.0`?) passing, and no failures: + ``` yaml - profileName: my_nginx resultSets: @@ -265,11 +274,11 @@ From here, we can download that file and manually drop it off in something like And voilà! We have a completed pipeline. We are launching our application, hardening it, testing it, processing the resulting data, and aggregating it into a dashboard. ::: tip Viewing the file in Heimdall -Heimdall is a very powerfull tool for examining security data. See more details in the [SAF User Class](../user/09.md) +Heimdall is a very powerful tool for examining security data. See more details in the [SAF User Class](../user/09.md) ::: ### What Else Can We Do With A Pipeline? -In a real use case, if our pipeline passed, we would next save our bonafide hardened image to a secure registry where it could be distributed to users. If the pipeline did not pass, we would have already collected data describing why, in the form of InSpec scan reports that we save as artifacts. +In a real use case, if our pipeline passed, we would next save our hardened image to a secure registry where it could be distributed to users. If the pipeline did not pass, we would have already collected data describing why, in the form of InSpec scan reports that we save as artifacts. Other pipelines might flat out refuse to permit merging code into a repository branch, or alert the developer team if issues occur. The exact implementation is up to you and your needs for your work. 
diff --git a/src/courses/advanced/12.md b/src/courses/advanced/12.md index acd6acad7..13f1ceb8d 100644 --- a/src/courses/advanced/12.md +++ b/src/courses/advanced/12.md @@ -8,16 +8,21 @@ headerDepth: 3 ## Next Steps ### Take the Class Survey -Take our brief [survey](https://forms.office.com/g/W2xtcV2frW) to give feedback to fuel class improvement. -### Save your work on GitHub -If you want to save your work in your remote repository in GitHub, you need to use Git commands. You can reference a [Git cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf) or checkout [this Git tutorial](https://learngitbranching.js.org/). +Please take our brief [survey](https://forms.office.com/g/W2xtcV2frW) to provide feedback and help us improve the class. -### Reference other class content -This class is one of a set of security automation content offered by the MITRE SAF(c) team. If you found this content interesting and you want to learn more, we encourage you to go back to the User Class or Beginner Security Automation Developer Class (shown in the table of contents on the left). +### Save Your Work on GitHub + +To save your work in your remote repository on GitHub, use Git commands. You can reference a [Git cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf) or check out [this Git tutorial](https://learngitbranching.js.org/). + +### Reference Other Class Content + +This class is part of a set of security automation content offered by the MITRE SAF(c) team. If you found this content interesting and want to learn more, we encourage you to explore the User Class or Beginner Security Automation Developer Class, as shown in the table of contents on the left. ### Check Out the Rest of MITRE SAF(c)'s Content -MITRE SAF(c) is a large collection of tools and techniques for security automation in addition to those discussed in this class. 
You can find utilities and libraries to support any step of the software development lifecycle by browsing our offerings at [saf.mitre.org](https://saf.mitre.org). Note that everything offered by MITRE SAF(c) is open-source and available to use free of charge. You can also reference all of the resources listed from the class on the [Resources Page](../../resources/README.md) + +MITRE SAF(c) offers a large collection of tools and techniques for security automation beyond those discussed in this class. You can find utilities and libraries to support any step of the software development lifecycle by browsing our offerings at [saf.mitre.org](https://saf.mitre.org). Note that everything offered by MITRE SAF(c) is open-source and available free of charge. You can also reference all the resources listed from the class on the [Resources Page](../../resources/README.md). ### Contact Us -The MITRE SAF(c) team can be contacted at [saf@groups.mitre.org](mailto:saf@groups.mitre.org). We support U.S. government sponsors in developing new tools for the Framework and in implementing the existing ones in DevSecOps pipelines. If you have a question about how you can use any of the content you saw in this class in your own environment, we'd be happy to help. + +The MITRE SAF(c) team can be contacted at [saf@groups.mitre.org](mailto:saf@groups.mitre.org). We support U.S. government sponsors in developing new tools for the Framework and implementing existing ones in DevSecOps pipelines. If you have any questions about how you can use the content from this class in your own environment, we'd be happy to help. 
diff --git a/src/courses/advanced/Appendix A - Writing Plural Resources.md b/src/courses/advanced/Appendix A - Writing Plural Resources.md index a2f6639d0..dabe89ae9 100644 --- a/src/courses/advanced/Appendix A - Writing Plural Resources.md +++ b/src/courses/advanced/Appendix A - Writing Plural Resources.md @@ -9,11 +9,11 @@ headerDepth: 3 You might have noticed that many InSpec resources have a "plural" version. For example, `user` has a `users` counterpart, and `package` has `packages`. -Plural resources examine platform objects in bulk. -For example, +Plural resources examine platform objects in bulk. +For example, -- sorting through which packages are installed on a system, or -- which virtual machines are on a cloud provider. +- sorting through which packages are installed on a system, or +- which virtual machines are on a cloud provider. - all processes running more than an hour, or all VMs on a particular subnet. Plural resources usually include functions to query the set of objects it represents by an attribute, like so: @@ -37,7 +37,7 @@ FilterTable is intended to help you author plural resources with **structured data** ```ruby inspec> etc_hosts.entries -=> +=> [#, #, #, @@ -55,11 +55,14 @@ In theory, yes - that would be used to implement different data fetching / cachi Let's take a look at the structure of a resource that leverages FilterTable. We will write a dummy resource that models a small group of students. Our resource will describe each student's name, grade, and the toys they have. Usually, a resource will include some methods that reach out to the system under test to populate the FilterTable with real system data, but for now we're just going to hard-code in some dummy data. -* Create new profile +- Create new profile + ``` inspec init profile filtertable-test ``` -* Place following file as custom resource in `libraries` directory as `filter.rb`. +- Place the following file as custom resource in `libraries` directory as `filter.rb`.
+ :::tip You can also use `inspec init resource ` to create the template for your resource. When following the prompts, you can choose "plural" to create the template for a plural resource. ::: @@ -97,11 +100,8 @@ class Filtertable < Inspec.resource(1) end end ``` -Now we've got a nice blob of code in a resource file. Let's load this resource in the InSpec shell and see what we can do with it. -#### Run the InSpec shell with a custom resource - -Invoking the InSpec shell with `inspec shell` will give you access to all the core InSpec resources by default, but InSpec does not automatically know about your locally defined resources unless you point them out. If you're testing a local resource, use the `--depends` flag and pass in the profile directory that your resource lives in. +Now we've got a nice blob of code in a resource file. Let's load this resource in the InSpec shell and see what we can do with it. ``` inspec shell --depends /path/to/profile/root/ @@ -120,6 +120,7 @@ As we mentioned earlier, a real InSpec resource will include methods that will p After we define our FilterTable's columns, we can also define custom matchers just like we do in singular resources using `register_custom_matcher`. That function takes a block as an argument that defines a boolean expression that tells InSpec when that matcher should return `true`. Note that the matcher's logic can get pretty complicated -- that's why we're shoving all of it into a resource so we can avoid having to write complicated tests.
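The `where`-plus-matcher pattern shown in the examples below boils down to filtering rows and then applying a predicate over the filtered set. That core idea can be sketched in plain Ruby (this is illustrative only, not InSpec's real FilterTable code, and the sample data is invented to mirror the students described above):

```ruby
# Plain-Ruby sketch of the FilterTable idea -- illustrative only,
# not InSpec's real implementation. The sample data is invented.
Student = Struct.new(:name, :grade, :toys)

STUDENTS = [
  Student.new('Sarah', 7, %w{bike kite}),
  Student.new('John',  7, %w{truck}),
  Student.new('Donny', 5, %w{ball}),
].freeze

# `where`-style filtering: keep only the rows matching the given attributes
def where(name: nil, grade: nil)
  STUDENTS.select do |s|
    (name.nil? || s.name == name) && (grade.nil? || s.grade == grade)
  end
end

# matcher-style predicates evaluated over the filtered set
def has_bike?(rows)
  rows.all? { |s| s.toys.include?('bike') }
end

def has_middle_schooler?(rows)
  rows.any? { |s| s.grade >= 7 }
end

puts has_bike?(where(name: 'Sarah'))       # true
puts has_bike?(where(name: 'Donny'))       # false
puts has_middle_schooler?(where(grade: 7)) # true
```

A real FilterTable does the same filter-then-evaluate dance, but generates the `where` method and the matchers for you from the `filter_table_config` declarations.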
- `has_bike?` + ```ruby describe filtertable.where( name: "Donny" ) do it { should have_bike } end ``` @@ -134,6 +135,7 @@ Version: (not specified) Test Summary: 0 successful, 1 failure, 0 skipped ``` + ```ruby describe filtertable.where( name: "Sarah" ) do it { should have_bike } end ``` @@ -148,9 +150,11 @@ Version: (not specified) Test Summary: 1 successful, 0 failures, 0 skipped ``` + In the simplest examples, we filter the table down to a single student using `where` (more on `where` in a minute) and invoke a matcher that checks if that student has a `bike` in their list of toys. We can write matchers to have whatever logic we like. For example, `has_bike` checks if _all_ of the students in the table under test have a bike, while `has_middle_schooler` checks if _any_ student in the table under test is in the 7th grade or higher. - `has_middle_schooler?` + ```ruby describe filtertable.where { name =~ /Sarah|John/ } do it { should have_middle_schooler } end ``` @@ -169,9 +173,10 @@ Test Summary: 1 successful, 0 failures, 0 skipped #### Custom Property -We can also declare custom properties for our resource, using whatever logic we like, just like we did for our custom matchers. Properties can be referred to with `its` syntax in an InSpec test. +We can also declare custom properties for our resource, using whatever logic we like, just like we did for our custom matchers. Properties can be referred to with `its` syntax in an InSpec test. - `bike_count` + ```ruby describe filtertable do its('bike_count') { should eq 3 } end ``` @@ -186,7 +191,9 @@ Target ID: Test Summary: 1 successful, 0 failures, 0 skipped ``` + - `middle_schooler_count` + ```ruby describe filtertable do its('middle_schooler_count') { should eq 4 } end ``` @@ -322,6 +329,7 @@ If you call `entries` without chaining it after `where`, calling entries will tr #### The `exist?` matcher This `register_custom_matcher` call: + ```ruby filter_table_config.register_custom_matcher(:exist?) { |filter_table| !filter_table.entries.empty?
} ``` @@ -345,6 +353,7 @@ As when you are implementing matchers on a singular resource, the only thing tha #### The `count` property This `register_custom_property` call: + ```ruby filter_table_config.register_custom_property(:count) { |filter_table| filter_table.entries.count } ``` @@ -380,6 +389,7 @@ Unlike `entries`, which wraps each row in a Struct and omits undeclared fields, ### FilterTable Examples FilterTable is a very flexible and powerful class that works well when designing plural resources. As always, if you need to write a plural resource, we encourage you to examine existing resources in the InSpec source code to see how other developers have implemented it. Some good examples include: - - [FirewallD](https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/firewalld.rb) - - [Users](https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/users.rb) - - [Shadow](https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/shadow.rb) + +- [FirewallD](https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/firewalld.rb) +- [Users](https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/users.rb) +- [Shadow](https://github.com/inspec/inspec/blob/63a5fd26a6925b1570ee80e2953d259b58c3012e/lib/inspec/resources/shadow.rb) diff --git a/src/courses/advanced/Appendix B - Resource Examples.md b/src/courses/advanced/Appendix B - Resource Examples.md index fc975296e..820ef57da 100644 --- a/src/courses/advanced/Appendix B - Resource Examples.md +++ b/src/courses/advanced/Appendix B - Resource Examples.md @@ -10,6 +10,7 @@ As an example we will go through a few custom resources that were built and appr ### The IPv6 resource #### docs/resources/ip6tables.md.erb + ```ruby --- title: About the ip6tables Resource @@ -88,6 +89,7 @@ The `have_rule` matcher tests 
the named rule against the information in the `ip6 ``` #### lib/inspec/resources.rb + ```ruby require "inspec/resources/iis_site" require "inspec/resources/inetd_conf" @@ -99,6 +101,7 @@ require "inspec/resources/kernel_parameter" ``` #### lib/inspec/resources/ip6tables.rb + ```ruby require "inspec/resources/command" @@ -186,6 +189,7 @@ While submitting PR it may be possible to extend existing test elements from cur ::: #### test/integration/default/controls/ip6tables_spec.rb + ```ruby case os[:family] when 'ubuntu', 'fedora', 'debian', 'suse' @@ -213,6 +217,7 @@ end ``` #### test/unit/resources/ip6tables_test.rb + ```ruby require "helper" require "inspec/resource" @@ -249,7 +254,9 @@ end ``` ### The NGINX resource + #### docs/resources/nginx.md.erb + ```ruby --- title: The Nginx Resource @@ -326,6 +333,7 @@ where ``` #### lib/inspec/resource.rb + ```ruby require 'resources/mysql' require 'resources/mysql_conf' @@ -337,6 +345,7 @@ require 'resources/ntp_conf' ``` #### lib/resources/nginx.rb + ```ruby # encoding: utf-8 # author: Aaron Lippold, lippold@gmail.com @@ -438,6 +447,7 @@ end ``` #### test/unit/resources/nginx_test.rb + ```ruby # encoding: utf-8 # author: Aaron Lippold, lippold@gmail.com @@ -538,8 +548,10 @@ end ### Additional examples #### PAM resource currently open PR + - [PAM Resource](https://github.com/simp/inspec-profile-disa_stig-el7/blob/master/libraries/pam.rb) - [PAM PR](https://github.com/inspec/inspec/pull/3993) #### Customizing an already existing resource (windows registry) + - [https://github.com/mitre/microsoft-windows-2012r2-memberserver-stig-baseline/blob/master/libraries/windows_registry.rb](https://github.com/mitre/microsoft-windows-2012r2-memberserver-stig-baseline/blob/master/libraries/windows_registry.rb) diff --git a/src/courses/advanced/Appendix C - Adding Your Resource to InSpec.md b/src/courses/advanced/Appendix C - Adding Your Resource to InSpec.md index 3efd3bafa..077977abf 100644 --- a/src/courses/advanced/Appendix C - Adding 
Your Resource to InSpec.md +++ b/src/courses/advanced/Appendix C - Adding Your Resource to InSpec.md @@ -7,9 +7,10 @@ headerDepth: 3 Many of the official InSpec resources were written by community members. If you have created a resource for your project and would like to make it part of the official library, you can open a pull request against the InSpec codebase. -To get started, go to the main [InSpec Github Repo](https://github.com/inspec/inspec) and fork the repository. On your forked repository, make a new branch, and call it something unique pertaining to what resource you are making. For example, if you use the `file` resource, then a useful branch name could be `file_resource`. +To get started, go to the main [InSpec GitHub Repo](https://github.com/inspec/inspec) and fork the repository. On your forked repository, create a new branch with a unique name related to the resource you are making. For example, if you are creating a `file` resource, a useful branch name could be `file_resource`. + +InSpec's source code top-level directory looks like this: -InSpec's source code's top level directory looks like: ```bash $ tree inspec -L 1 -d inspec @@ -31,6 +32,7 @@ inspec ``` The 3 key directories we need to focus on here are the `docs/` directory, the `lib/` directory and finally the `test/` directory. When developing a resource for upstream InSpec, you must: + 1) Create the resource itself 2) Create the documentation for the resource 3) Create the unit and integration tests for the resource @@ -38,6 +40,7 @@ The 3 key directories we need to focus on here are the `docs/` directory, the `l ::: tip The resource contents When creating this resource.rb file or in this scenario the `file.rb`, it would be developed and written the same exact way if you had put it in the libraries directory for a local resource. 
If you already developed the resource for local use, but want to push it to upstream, you can copy and paste the file directly to the following location: ::: + ```bash $ tree -L 1 lib/inspec/resources/ lib/inspec/resources/ @@ -49,6 +52,7 @@ lib/inspec/resources/ ``` This is the helper file you need to adjust for the file resource: + ```bash $ tree -L 1 lib/inspec/ lib/inspec/ @@ -64,6 +68,7 @@ When adding this line of code, be sure to place the resources in alphabetical or ::: In the `resources.rb` file you would add the following line: + ```ruby require "inspec/resources/etc_hosts" require "inspec/resources/file" @@ -71,6 +76,7 @@ require "inspec/resources/filesystem" ``` Next you would need to write out your unit and integration tests: + ```bash $ tree test/integration/default/controls/ test/integration/default/controls/ @@ -92,6 +98,7 @@ test/unit/resources/ ``` Finally, you would write up documentation on how to use the resource. This file will be published to the [InSpec docs](https://docs.chef.io/inspec/resources/). Take a look at the other docs pages for an idea of what needs to be documented -- each matcher and function on the resource should be listed, and examples of how to use the resource given. 
+ ```bash $ tree docs/resources/ docs/resources/ diff --git a/src/courses/advanced/Appendix D - Example Pipeline for Validating an InSpec Profile.md b/src/courses/advanced/Appendix D - Example Pipeline for Validating an InSpec Profile.md index 43f10bf46..29fb6f780 100644 --- a/src/courses/advanced/Appendix D - Example Pipeline for Validating an InSpec Profile.md +++ b/src/courses/advanced/Appendix D - Example Pipeline for Validating an InSpec Profile.md @@ -5,11 +5,11 @@ author: Aaron Lippold headerDepth: 3 --- -### RHEL7 Pipeline example +### RHEL7 Pipeline Example -Below is a [RedHat 7 example](https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/blob/master/.github/workflows/verify-ec2.yml) of an automated pipeline that creates and configures two machines with the RedHat 7 operating system - one of which is set up as a vanilla configuration, and one of which is hardened using hardening scripts run by the Chef configuration management tool called kitchen. +Below is a [RedHat 7 example](https://github.com/mitre/redhat-enterprise-linux-7-stig-baseline/blob/master/.github/workflows/verify-ec2.yml) of an automated pipeline that creates and configures two machines with the RedHat 7 operating system. One machine is set up with a vanilla configuration, and the other is hardened using hardening scripts run by the Chef configuration management tool called Kitchen. -This pipeline is intended to validate that the RHEL7 InSpec profile itself functions correctly. We're not too concerned with whether out "hardened" box is actually hardened; we just want to know if InSpec is assessing it correctly. +This pipeline is intended to validate that the RHEL7 InSpec profile functions correctly. We're not too concerned with whether our "hardened" box is actually hardened; we just want to know if InSpec is assessing it correctly. ::: note Why Vanilla and Hardened? 
Having two test suites, where one is hardened and one is not, can be useful for seeing the differences between how a profile behaves on different types of systems. @@ -17,8 +17,8 @@ Having two test suites, where one is hardened and one is not, can be useful for It also has the added bonus of simultaneously validating that whatever tool we use for hardening is working correctly. ::: -:::info Modularity in Automation -We will demonstrate the automation process through this example, but note that the different orchestration tools, configuration mangement tools, and targets can be traded out for different uses while following the same automation flow and security automation framework. +::: info Modularity in Automation +We will demonstrate the automation process through this example, but note that the different orchestration tools, configuration management tools, and targets can be swapped out for different uses while following the same automation flow and security automation framework. ::: ![The CI Pipeline](../../assets/img/CI_Pipeline_Flow_EC2_Example.png) @@ -103,4 +103,3 @@ In a general sense we can use the SAF CLI to manage security data in the pipelin To practice doing manual attestations, take a look at the [User Class](../user/12.md). ![The CI Pipeline - Attestation](../../assets/img/CI_Pipeline_Flow_EC2_Example_With_Attestation.png) - diff --git a/src/courses/advanced/Appendix E - More Resource Examples.md b/src/courses/advanced/Appendix E - More Resource Examples.md index f60c761ed..de2f43ee3 100644 --- a/src/courses/advanced/Appendix E - More Resource Examples.md +++ b/src/courses/advanced/Appendix E - More Resource Examples.md @@ -1,11 +1,12 @@ --- order: 18 -title: Appendix B - More Resource Examples +title: Appendix E - More Resource Examples author: Aaron Lippold headerDepth: 3 --- ### The File Resource + ```ruby # copyright: 2015, Vulcano Security GmbH @@ -345,6 +346,7 @@ end ``` ### 11.2. 
Directory + ```ruby require "inspec/resources/file" @@ -372,6 +374,7 @@ end ``` ### The etc_hosts Resource + ```ruby require "inspec/utils/parser" require "inspec/utils/file_reader" @@ -435,4 +438,4 @@ class EtcHosts < Inspec.resource(1) ->(data) { %w{ip_address primary_name all_host_names}.zip(data).to_h } end end -``` \ No newline at end of file +``` diff --git a/src/courses/advanced/README.md b/src/courses/advanced/README.md index a69fa60a1..4c99f5fa4 100644 --- a/src/courses/advanced/README.md +++ b/src/courses/advanced/README.md @@ -1,60 +1,53 @@ --- order: 1 next: 02.md -title: InSpec Advanced Profile Development -shortTitle: Advanced Profile Development +title: 1. InSpec Advanced Profile Development +shortTitle: 1. Advanced Profile Development author: Aaron Lippold headerDepth: 3 --- ## 1.1 Class Objectives -The purpose of this class is to take you beyond profile development and give you the tools to actively participate in the open source security automation community. The advanced class builds off of the beginner class fundamentals, and by the end, you should be able to achieve the following objectives. +The purpose of this class is to advance your skills in profile development and equip you with the tools to actively participate in the open-source security automation community. This advanced class builds on the fundamentals of the beginner class. By the end, you should be able to achieve the following objectives: -### 1.1.2 Advanced Class Objectives: +### 1.1.2 Advanced Class Objectives -- Apply SAF to your organization’s mission and understand the overall mission of SAF and the tools/techniques. -- Automate security testing and go/no-go decisions by integrating InSpec scans and the SAF CLI into a workflow, such as CI/CD pipelines. -- Understand how an existing InSpec profile works under-the-hood. -- Improve existing InSpec resources to better query its intended target/component. 
+- Apply the Security Automation Framework (SAF) to your organization’s mission and understand the overall mission of SAF and its tools/techniques. +- Automate security testing and make go/no-go decisions by integrating InSpec scans and the SAF CLI into your workflows, CI/CD pipelines, and other security processes. +- Understand the inner workings of an existing InSpec profile. +- Improve existing InSpec resources to better query their intended targets/components. - Develop new InSpec resources to query new types of targets or components. -- Know how to propose a pull request to Chef InSpec to contribute your improved/developed InSpec - resources back to the community. +- Propose a pull request to Chef InSpec to contribute your improved/developed InSpec resources back to the community. ## 1.2 About InSpec -- InSpec is an open-source, community-developed compliance validation framework -- Provides a mechanism for defining machine-readable compliance and security requirements -- Easy to create, validate, and read content -- Cross-platform (Windows, Linux, Mac) -- Agnostic to other DevOps tools and techniques -- Integrates into multiple configuration managament tools +- InSpec is an open-source, community-developed compliance validation framework. +- It provides a mechanism for defining machine-readable compliance and security requirements. +- It is easy to create, validate, and read content. +- It is cross-platform (Windows, Linux, Mac). +- It is agnostic to other DevOps tools and techniques. +- It integrates with multiple configuration management tools. ### 1.2.1 The Lab Environment -This class will use GitHub Codespaces for a consistent environment for all students. See instructions for setting up your own lab environment [here](../../resources/05.md). +This class will use GitHub Codespaces to provide a consistent environment for all students. See instructions for setting up your own lab environment [here](../../resources/02.md). 
## 1.3 The Road to Security Automation -InSpec is one of the primary tools in the Security Automation workflow. It integrates easily with orchestration and configuration management tools found in the DevOps world. +InSpec is one of the primary tools in the Security Automation workflow. It integrates easily with orchestration and configuration management tools commonly used in the DevOps world. -As you can see from the picture below, the process for developing automated security tests starts with a human-language requirements documents like SRGs, STIGs or CIS Benchmark and then implements them as code. We need that code to record test results in a standardized format so that we can easily export our security data somewhere people can use it to make decisions (like the Heimdall visualization app). +As shown in the picture below, the process for developing automated security tests starts with human-language requirements documents like SRGs, STIGs, or CIS Benchmarks and then implements them as code. We need that code to record test results in a standardized format so that we can easily export our security data to a place where people can use it to make decisions (like the Heimdall visualization app). This challenge is what the [MITRE Security Automation Framework](https://saf.mitre.org) or MITRE SAF was developed to simplify -- to make the journey from a Requirement Document to an automated test profile and back again a little easier to navigate. ![The SAF Lifecycle](../../assets/img/saf-lifecycle.png) - - ## 1.4 Where can I start on my own? 
-You can contribute to existing profiles that can be found here: -[https://github.com/mitre](https://github.com/mitre) +You can contribute to existing profiles that can be found here: +[https://github.com/mitre](https://github.com/mitre) -Otherwise you can create your own profiles if they don't exist using the following security guidelines: -[https://public.cyber.mil/stigs/downloads/](https://public.cyber.mil/stigs/downloads/) -[https://www.cisecurity.org/cis-benchmarks/](https://www.cisecurity.org/cis-benchmarks/) +Alternatively, you can create your own profiles if they don't exist using the following security guidelines: +[https://public.cyber.mil/stigs/downloads/](https://public.cyber.mil/stigs/downloads/) +[https://www.cisecurity.org/cis-benchmarks/](https://www.cisecurity.org/cis-benchmarks/) diff --git a/src/courses/beginner/README.md b/src/courses/beginner/README.md index 785c4dc46..0eeab455d 100644 --- a/src/courses/beginner/README.md +++ b/src/courses/beginner/README.md @@ -49,7 +49,7 @@ As you can see from the picture below, the process of developing automated secur This challenge is what the [MITRE SAF (Security Automation Framework)](https://saf.mitre.org) was developed to simplify -- to make the journey from a security guidance document to an automated test profile to a report on security posture easier to navigate. -![The SAF Lifecycle](../../assets/img/saf-lifecycle.png) +![The SAF Lifecycle](../../assets/img/saf-lifecycle.jpg) We hope that during this class you will become comfortable with the tools, parts, and processes involved in the end-to-end process, and gain the confidence to start automating your compliance journey with the information presented here.