diff --git a/src/courses/profile-dev-test/10.md b/src/courses/profile-dev-test/10.md
index 5fea43405..c18683b1f 100644
--- a/src/courses/profile-dev-test/10.md
+++ b/src/courses/profile-dev-test/10.md
@@ -1,18 +1,90 @@
---
order: 10
next: 11.md
-title: Rules of the Road
+title: What Is Done for a Control?
author: Aaron Lippold
---
-When updating Benchmark Profiles, adhere to these key principles to maintain alignment with the original Guidance Documents:
+# When Is a Control Considered 'Done'?
-1. **Maintain Version Integrity:** **Never Merge** new requirements into older benchmark branches. This will create a 'mixed baseline' that doesn't align with any specific guidance document. Benchmarks, STIGs, and Guidance Documents form a 'proper subset' - they should be treated as 'all or nothing'. Mixing requirements from different versions can invalidate the concept of 'testing to a known benchmark'.
+You and your team might be wondering what 'done' means for a security control in your profile. Here are a few things to consider:
-2. **Benchmarks are a Complete Set of Requirements:** A Security Benchmark is 'complete and valid' only when all requirements for a specific Release or Major Version are met. Unlike traditional software projects, features and capabilities cannot be incrementally added. A Security Benchmark and its corresponding InSpec Profile are valid only within the scope of a specific 'Release' of that Benchmark.
+- The security automation content and its tests are essentially a refactoring of the 'validation' and 'remediation' guidance already established by the benchmark.
+- The security automation content tests should fully capture the spirit - or intention - of the guidance, including its caveats, notes, discussion, and 'validation' and 'remediation' content.
+- The tests can - and usually do - capture known 'corner cases and security best practices' that are sometimes only indirectly addressed - or not addressed at all - by the benchmark but are implied by the spirit of the security requirement being addressed.
+- These tests, like all human-written code, may not be perfect. They will need updates and will evolve as our knowledge of the system and benchmark grows. We use the profile in production and real-world environments. In other words, don't let the pursuit of perfection hinder progress.
-3. **Release Readiness Is Predefined:** A Benchmark is considered 'ready for release' when it meets the expected thresholds, hardening, and validation results. Don't be overwhelmed by the multitude of changes across the files. Instead, focus on the specific requirement you are working on. Understand its expected failure and success states on each of the target testing platforms. This approach prevents you from being overwhelmed and provides solid pivot points as you work through the implementation of the automated tests for each requirement and its 'contexts'.
+The 'is it done' litmus test is not solely determined by a perfect InSpec control or describe and expect blocks. It also heavily relies on you, the security automation engineer. Your experience, understanding of the platform you're working on, and the processes that you and your team have collectively agreed upon are all vital components.
-4. **Use Vendor-Managed Standard Releases:** When setting up a test suite, prioritize using vendor-managed standard releases for software installations and baseline configurations. This should be the starting point for both 'vanilla' and 'hardening' workflows. 
This approach ensures that your initial and ongoing testing, hardening, and validation closely mirror the real-world usage scenarios of your end-users.
+Trust your established expected test outcomes, the guidance document, and the CI/CD testing framework. They will help you know that, to the best of your ability, you have captured the spirit of the testing required by the Benchmark.
-By adhering to these principles, you ensure that your updates to Benchmark Profiles are consistent, accurate, and aligned with the original guidance documents.
\ No newline at end of file
+## The MITRE Security Automation Framework 'Yardstick'
+
+We consider a control effectively tested when:
+
+1. All aspects of the 'validation' - also known as 'check text' - have been addressed.
+2. Any aspects of the 'remediation' - also known as 'fix text' - that are part of the 'validation' process have been captured.
+3. Any documented conditions that are Not Applicable, as outlined in the 'discussion', 'check', or 'fix' text, have been addressed.
+4. Any documented conditions that have Not Been Reviewed, as outlined in the 'discussion', 'check', or 'fix' text, have been addressed.
+5. The conditions for Not Applicable and Not Reviewed are placed early in the control to ensure the control is as efficient as possible.
+6. The control uses the `only_if` block rather than 'if/else' logic when possible to ensure that the control is as clear, direct, and maintainable as possible from a coding perspective.
+7. The control has been tested on both 'vanilla' and 'hardened' instances, ensuring that:
+   1. The test communicates effectively and fails as expected on the 'vanilla' testing target.
+   2. The test communicates effectively and passes on the 'hardened' testing target.
+   3. The test communicates effectively and fails on a misconfigured 'vanilla' testing target.
+   4. The test communicates effectively and fails on a misconfigured 'hardened' testing target.
+   5. The test communicates effectively and clearly articulates the Not Applicable condition for both 'vanilla' and 'hardened' testing targets.
+   6. The test communicates effectively and clearly articulates the Not Reviewed condition for both the 'vanilla' and 'hardened' testing targets.
+   7. The tests are constructed so that they do not produce Profile Errors when looping, using conditional logic, or when system conditions - such as missing files, directories, or services - are not in the expected locations.
+
+## Defining 'Passes as Expected'
+
+'Passing as expected' is the most straightforward concept as it directly corresponds to the test conditions. When the test asserts a condition, it validates that assertion and reports it to the end user in a clear and concise manner.
+
+We strive to ensure that when we report a 'pass', we do so in language that is direct, simple, and easy to understand.
+
+For example:
+
+```shell
+✔ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
+  ✔ All system security patches and updates are up to date and have been applied
+```
+
+'Passes as Expected' also encompasses:
+
+- The conditions for the Not Reviewed and Not Applicable states for the control, if any.
+
+## Defining 'Fails as Expected'
+
+'Failing as expected' is a less straightforward concept as it doesn't always directly correspond to the test conditions. When the test asserts a condition and it fails, the reason for that failure should be communicated to the end user in a clear and concise manner.
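+
+As a rough sketch of the idea (the `failed_packages` list here is a hypothetical helper, assumed to be collected earlier in the control, for example by querying the package manager), a custom failure message can carry the 'reason' directly to the end user:
+
+```ruby
+# Illustrative only: `failed_packages` is assumed to be gathered
+# earlier in the control (e.g., from the package manager).
+describe 'System security patches and updates' do
+  it 'should all be installed and up to date' do
+    expect(failed_packages).to be_empty,
+      "The following packages have security patches and need to be updated:\n#{failed_packages.map { |pkg| "  - #{pkg}" }.join("\n")}"
+  end
+end
+```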
+
+However, as we all know, a test may fail for more than one reason. Sometimes the failure is connected to human error, or to conditions on the system such as extra lines, leftover files, or housekeeping that was never done. All these factors may need to be accounted for in your tests and perhaps captured in your output and 'reasons' for failure.
+
+This is where the above 'best practices' come into play. You don't just test the optimal 'pass' and 'fail' conditions; you also 'dirty things up' a bit and make sure that your 'failure' cases are robust enough to handle real-world, less-than-perfect conditions.
+
+For example:
+
+```shell
+✗ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
+  x The following packages have security patches and need to be updated:
+    - package 1
+    - package 2
+    - package 3
+    - package 4
+```
+
+'Fails as Expected' also encompasses:
+
+- Misconfigurations, extra lines in files, extra settings, missing files, etc.
+
+## Defining 'Communicates Effectively'
+
+Clear communication from your testing suite may require you to use a combination of approaches, but the extra time and effort is well worth it.
+
+Here are some methods you can employ and things to consider:
+
+- Use `expect` rather than `describe` statements in cases where you have multi-part or multi-phase test cases.
+- Break up your `describe` statements into multiple layers so that the final output to the end user is clear and concise.
+- Post-process and format both 'passing' and 'failing' results so that they are useful to the end user later and clear for communication to other team members. This could be in the form of lists or bulleted lists.
+- Collect 'failing results' as simple, clear lists or bullets that are easy to 'copy and paste'. This makes it easier for teams to know 'what they have to fix and where'.
+- Consider assisting 'Manual Review'/'Not Reviewed' tests by collecting needed information, such as users, groups, or other elements that you are asking the user or another person to review. While we may not be able to fully automate the test, if the 'automation can help collect' then it still adds value.
\ No newline at end of file
diff --git a/src/courses/profile-dev-test/11.md b/src/courses/profile-dev-test/11.md
index e786cd1dd..a2371aaf7 100644
--- a/src/courses/profile-dev-test/11.md
+++ b/src/courses/profile-dev-test/11.md
@@ -1,18 +1,18 @@
---
order: 11
next: 12.md
-title: Creating a `Patch Update`
+title: Rules of the Road
author: Aaron Lippold
---
-A patch update involves making minor changes to a profile to fix issues or improve functionality. Here's a step-by-step guide:
+When updating Benchmark Profiles, adhere to these key principles to maintain alignment with the original Guidance Documents:
-1. **Report the Issue:** Open an issue on our project, detailing the problem and providing examples. Do this on [our issues page](https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/issues).
-2. **Fork and Branch:** Fork the repository on GitHub, then create a branch off the `tagged` patch release you're targeting for the update.
-3. **Set Up Testing Suites:** In your forked branch, set up the AWS and Docker testing suites.
-4. **Make Updates:** Update the control, `inspec.yml` inputs, thresholds, etc. Don't worry about the InSpec version in the `inspec.yml` - the release process handles that.
-5. 
**Test Your Updates Locally:** Test your updates on all `vanilla` and `hardened` variants of the `known bad` and `known good` states of the `AWS EC2` and `Docker` test targets. Also, test your controls outside perfect conditions to ensure they handle non-optimal target environments. Verify that your update considers the `container`, `virtual machine`, and `1U machine` testing context of applicability. -6. **Lint Your Updates:** Use the `bundle exec rake lint` and `bundle exec rake lint:autocorrect` commands from the test suite to lint your updates. -7. **Commit Your Updates:** After testing and linting, commit your updates to your branch. Include `Fixes #ISSUE` in your commit messages to automatically close the issue when your PR is merged. -8. **Open a PR:** Open a PR on the project repository from your fork. -9. **Check Test Suite:** Ensure the GitHub Action test suite on the project side passes. You can check this at [our actions page](https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/actions). +1. **Maintain Version Integrity:** **Never Merge** new requirements into older benchmark branches. This will create a 'mixed baseline' that doesn't align with any specific guidance document. Benchmarks, STIGs, and Guidance Documents form a 'proper subset' - they should be treated as 'all or nothing'. Mixing requirements from different versions can invalidate the concept of 'testing to a known benchmark'. + +2. **Benchmarks are a Complete Set of Requirements:** A Security Benchmark is 'complete and valid' only when all requirements for a specific Release or Major Version are met. Unlike traditional software projects, features and capabilities cannot be incrementally added. A Security Benchmark and its corresponding InSpec Profile are valid only within the scope of a specific 'Release' of that Benchmark. + +3. **Release Readiness Is Predefined:** A Benchmark is considered 'ready for release' when it meets the expected thresholds, hardening, and validation results. Don't be overwhelmed by the multitude of changes across the files. Instead, focus on the specific requirement you are working on. Understand its expected failure and success states on each of the target testing platforms. This approach prevents you from being overwhelmed and provides solid pivot points as you work through the implementation of the automated tests for each requirement and its 'contexts'. + +4. **Use Vendor-Managed Standard Releases:** When setting up a test suite, prioritize using vendor-managed standard releases for software installations and baseline configurations. This should be the starting point for both 'vanilla' and 'hardening' workflows. This approach ensures that your initial and ongoing testing, hardening, and validation closely mirror the real-world usage scenarios of your end-users. + +By adhering to these principles, you ensure that your updates to Benchmark Profiles are consistent, accurate, and aligned with the original guidance documents. \ No newline at end of file diff --git a/src/courses/profile-dev-test/12.md b/src/courses/profile-dev-test/12.md index 2238b71c1..5b25cff5b 100644 --- a/src/courses/profile-dev-test/12.md +++ b/src/courses/profile-dev-test/12.md @@ -1,16 +1,18 @@ --- order: 12 next: 13.md -title: Creating a `Release Update` +title: Creating a `Patch Update` author: Aaron Lippold --- -A `Release Update` involves creating a new branch, `v#{x}R#{x+1}`, from the current main or latest patch release branch. 
The `saf generate delta` workflow is then run, which updates the metadata of the `controls`, `inspec.yml`, `README.md`, and other profile elements, while preserving the `describe` and `ruby code logic`. This workflow is detailed in the [Inspec Delta](#2-inspec-delta) section. After the initial commit of the new release branch, follow these steps to keep your work organized:
+A patch update involves making minor changes to a profile to fix issues or improve functionality. Here's a step-by-step guide:
-1. **Track Control IDs:** Create a table of all new `control ids` in the updated benchmark. This can be in CSV, Markdown Table, or in the PR overview information section. This helps track completed and pending work. PRs off the `v#{x}r#{x+1}` can also be linked in the table, especially if using a `micro` vs `massive` PR approach.
-2. **Ensure Consistency:** Add 'check box columns' to your tracking table to ensure each requirement of the updated Benchmark receives the same level of scrutiny.
-3. **Update CI/CD Process:** Update elements such as the `hardening` content (ansible, puppet, chef, hardened docker images, hardened vagrant boxes) to meet new requirements. Ensure the CI/CD process still functions with the updated elements, preferably on the PR as well.
-4. **Update Labels:** Update `titles` and other labels to reflect the updated release number of the Benchmark.
-5. **Commit Changes:** Commit these changes to your release branch, ensuring your CI/CD process exits cleanly.
-6. **Follow Patch Update Workflow:** With the above in place, follow the 'Patch Update' process, but expect a larger number of requirements to revalidate or update.
-7. **Identify Potential Code Changes:** Controls with changes to the `check text` or `fix text` are likely to require `inspec code changes`. If the `check text` and `fix text` of a control remain unchanged, it's likely only a cosmetic update, with no change in the security requirement or validation code.
\ No newline at end of file
+1. **Report the Issue:** Open an issue on our project, detailing the problem and providing examples. Do this on [our issues page](https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/issues).
+2. **Fork and Branch:** Fork the repository on GitHub, then create a branch off the `tagged` patch release you're targeting for the update.
+3. **Set Up Testing Suites:** In your forked branch, set up the AWS and Docker testing suites.
+4. **Make Updates:** Update the control, `inspec.yml` inputs, thresholds, etc. Don't worry about the InSpec version in the `inspec.yml` - the release process handles that.
+5. **Test Your Updates Locally:** Test your updates on all `vanilla` and `hardened` variants of the `known bad` and `known good` states of the `AWS EC2` and `Docker` test targets. Also, test your controls outside perfect conditions to ensure they handle non-optimal target environments. Verify that your update considers the `container`, `virtual machine`, and `1U machine` testing contexts of applicability.
+6. **Lint Your Updates:** Use the `bundle exec rake lint` and `bundle exec rake lint:autocorrect` commands from the test suite to lint your updates.
+7. **Commit Your Updates:** After testing and linting, commit your updates to your branch. Include `Fixes #ISSUE` in your commit messages to automatically close the issue when your PR is merged.
+8. **Open a PR:** Open a PR on the project repository from your fork.
+9. **Check Test Suite:** Ensure the GitHub Action test suite on the project side passes. 
You can check this at [our actions page](https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/actions).
diff --git a/src/courses/profile-dev-test/13.md b/src/courses/profile-dev-test/13.md
index dd85cf33f..bf7b90404 100644
--- a/src/courses/profile-dev-test/13.md
+++ b/src/courses/profile-dev-test/13.md
@@ -1,20 +1,16 @@
---
order: 13
next: 14.md
-title: Creating a `Major Version Update`
+title: Creating a `Release Update`
author: Aaron Lippold
---
-A `Major Version Update` involves transitioning to a new STIG Benchmark, which introduces a new Rule ID index. This process is more complex than a `Release Update` due to the need for aligning old requirements (Rule IDs) with the new ones.
+A `Release Update` involves creating a new branch, `v#{x}R#{x+1}`, from the current main or latest patch release branch. The `saf generate delta` workflow is then run, which updates the metadata of the `controls`, `inspec.yml`, `README.md`, and other profile elements, while preserving the `describe` blocks and Ruby code logic. This workflow is detailed in the [Inspec Delta](#2-inspec-delta) section. After the initial commit of the new release branch, follow these steps to keep your work organized:
-For example, when transitioning from RedHat Enterprise Linux 8 v1R12 to Red Hat Enterprise Linux 9 V1R1, the alignment of InSpec tests to the new requirements must be `fuzzy matched`. This involves using common identifiers such as `SRG ID`, `CCIs`, and, if necessary, the `title` and `descriptions`.
-
-This is crucial when a single requirement from the old benchmark is split into multiple requirements in the new benchmark, although this is usually a rare occurrence.
-
-We use a similar process in our [MITRE Vulcan](https://vulcan.mitre.org) to align 'Related Controls' in your Vulcan project to existing published STIG documents. However, the `Delta` tool currently requires manual intervention, and improvements are needed to automate this process.
-
-The good news is that **these improvements are within reach**. We can leverage the existing work from `Vulcan` and hopefully soon incorporate these improvements into the SAF `Delta` tool as a direct function.
-
-Once the 'old controls' and 'new controls' are aligned across 'Rule IDs', you can migrate the InSpec / Ruby code into their respective places.
-
-Then, you follow the same setup, CI/CD organization, and control update process as in the `Release Update` process and hopfully finding that the actual InSpec code from the previous benchmark is very close to the needed InSpec code for the same 'requirement' in the new Benchmark.
+1. **Track Control IDs:** Create a table of all new `control ids` in the updated benchmark. This can be a CSV, a Markdown table, or the PR overview information section. This helps track completed and pending work. PRs off the `v#{x}R#{x+1}` branch can also be linked in the table, especially if using a `micro` vs `massive` PR approach.
+2. **Ensure Consistency:** Add 'check box columns' to your tracking table to ensure each requirement of the updated Benchmark receives the same level of scrutiny.
+3. **Update CI/CD Process:** Update elements such as the `hardening` content (ansible, puppet, chef, hardened docker images, hardened vagrant boxes) to meet new requirements. Ensure the CI/CD process still functions with the updated elements, preferably on the PR as well.
+4. **Update Labels:** Update `titles` and other labels to reflect the updated release number of the Benchmark.
+5. 
**Commit Changes:** Commit these changes to your release branch, ensuring your CI/CD process exits cleanly.
+6. **Follow Patch Update Workflow:** With the above in place, follow the 'Patch Update' process, but expect a larger number of requirements to revalidate or update.
+7. **Identify Potential Code Changes:** Controls with changes to the `check text` or `fix text` are likely to require `inspec code changes`. If the `check text` and `fix text` of a control remain unchanged, it's likely only a cosmetic update, with no change in the security requirement or validation code.
\ No newline at end of file
diff --git a/src/courses/profile-dev-test/14.md b/src/courses/profile-dev-test/14.md
index dfd0d7a5d..8b8f508d9 100644
--- a/src/courses/profile-dev-test/14.md
+++ b/src/courses/profile-dev-test/14.md
@@ -1,52 +1,20 @@
---
order: 14
next: 15.md
-title: Test Kitchen
+title: Creating a `Major Version Update`
author: Aaron Lippold
---
-[Test Kitchen](http://kitchen.ci) is a robust tool for testing infrastructure code and software on isolated platforms. It provides a consistent, reliable environment for developing and testing infrastructure code.
+A `Major Version Update` involves transitioning to a new STIG Benchmark, which introduces a new Rule ID index. This process is more complex than a `Release Update` due to the need for aligning old requirements (Rule IDs) with the new ones.
-## Workflow Defined by our Test Kitchen Files
+For example, when transitioning from Red Hat Enterprise Linux 8 V1R12 to Red Hat Enterprise Linux 9 V1R1, the alignment of InSpec tests to the new requirements must be `fuzzy matched`. This involves using common identifiers such as `SRG ID`, `CCIs`, and, if necessary, the `title` and `descriptions`.
-Test Kitchen's workflow involves building out suites and platforms using its drivers and provisioners. It follows a create, converge, verify, and destroy cycle:
+This is crucial when a single requirement from the old benchmark is split into multiple requirements in the new benchmark, although this is usually a rare occurrence.
-1. **Create:** Test Kitchen creates an instance of the platform.
-2. **Converge:** It applies the infrastructure code to the instance.
-3. **Verify:** It checks if the instance is in the desired state.
-4. **Destroy:** It destroys the instance after testing.
+We use a similar process in our [MITRE Vulcan](https://vulcan.mitre.org) to align 'Related Controls' in your Vulcan project to existing published STIG documents. However, the `Delta` tool currently requires manual intervention, and improvements are needed to automate this process.
-In our testing workflow, we have defined four test suites to test different deployment patterns in two configurations - `vanilla` and `hardened`.
+The good news is that **these improvements are within reach**. We can leverage the existing work from `Vulcan` and hopefully soon incorporate these improvements into the SAF `Delta` tool as a direct function.
-- `vanilla`: This represents a completely stock installation of the testing target, as provided by the product vendor, with no configuration updates beyond what is 'shipped' by the vendor. Apart from the standard Test Kitchen initialization, the system is considered 'stock'.
-- `hardened`: This configuration is set up using the `driver` section of the Test Kitchen suite and is executed during the `converge` phase. 
The `hardened` configuration represents the final `target configuration state` of our test instance, adhering to the recommended configuration of the Benchmark we are working on. For example, it aligns as closely as possible with the Red Hat Enterprise Linux V1R12 recommendations.
+Once the 'old controls' and 'new controls' are aligned across 'Rule IDs', you can migrate the InSpec / Ruby code into their respective places.
-For more details on Test Kitchen's workflow, refer to the [official documentation](http://kitchen.ci/docs/getting-started/).
-
-```journey Test Kitchen Workflow
-  section Setup
-    Checkout Repo: 3:
-    Install Tools: 3:
-    Setup Runner: 3:
-  section Configure
-    Setup Vanilla Instance: 3:
-    Setup Hardened Instance: 3:
-  section Run Test Suite
-    Run Tests on Vanilla: 3:
-    Run Tests on Hardened: 3:
-  section Record Results
-    Save Tests in Pipeline: 3:
-    Upload Tests to Heimdall Server: 3:
-  section Validate Aginst Threshold
-    Validate the 'vanilla' threshold: 4:
-    Validate the 'hardened' threshold: 4:
-  section Pass/Fail the Run
-    Failed: 1:
-    Passed: 5:
-```
-
-
-## Test Kitchen's Modifications to Targets
-
-Test Kitchen makes minor modifications to the system to facilitate initialization and access. It adds a 'private ssh key' for the default user and sets up primary access to the system for this user using the generated key. Test Kitchen uses the 'platform standard' for access - SSH for Unix/Linux systems and WinRM for Windows systems.
\ No newline at end of file
+Then, follow the same setup, CI/CD organization, and control update process as in the `Release Update` process; hopefully, you will find that the actual InSpec code from the previous benchmark is very close to the InSpec code needed for the same 'requirement' in the new Benchmark.
diff --git a/src/courses/profile-dev-test/15.md b/src/courses/profile-dev-test/15.md
index ea018295d..9030bd20c 100644
--- a/src/courses/profile-dev-test/15.md
+++ b/src/courses/profile-dev-test/15.md
@@ -1,15 +1,52 @@
---
order: 15
next: 16.md
-title: Test Kitchen - Create
+title: Test Kitchen
author: Aaron Lippold
-index: true
---
-# Test Kitchen Create Stage
+[Test Kitchen](http://kitchen.ci) is a robust tool for testing infrastructure code and software on isolated platforms. It provides a consistent, reliable environment for developing and testing infrastructure code.
-The `create` stage in Test Kitchen sets up testing environments. It uses standard and patched images from AWS and Red Hat, including AMI EC2 images, Docker containers, and Vagrant boxes.
+## Workflow Defined by our Test Kitchen Files
-Test Kitchen automatically fetches the latest images from sources like Amazon Marketplace, DockerHub, Vagrant Marketplace, and Bento Hub. You can customize this to use different images, private repositories (like Platform One's Iron Bank), or local images.
+Test Kitchen's workflow involves building out suites and platforms using its drivers and provisioners. It follows a create, converge, verify, and destroy cycle:
-For more details on how Test Kitchen manages images, visit the [Test Kitchen website](https://kitchen.ci). You can also refer to the GitHub documentation for the `kitchen-ec2`, `kitchen-vagrant`, `kitchen-sync`, and [`kitchen-inspec`](https://github.com/inspec/kitchen-inspec) project on GitHub.
\ No newline at end of file
+1. **Create:** Test Kitchen creates an instance of the platform.
+2. **Converge:** It applies the infrastructure code to the instance.
+3. **Verify:** It checks if the instance is in the desired state.
+4. 
**Destroy:** It destroys the instance after testing.
+
+In our testing workflow, we have defined four test suites to test different deployment patterns in two configurations - `vanilla` and `hardened`.
+
+- `vanilla`: This represents a completely stock installation of the testing target, as provided by the product vendor, with no configuration updates beyond what is 'shipped' by the vendor. Apart from the standard Test Kitchen initialization, the system is considered 'stock'.
+- `hardened`: This configuration is set up using the `driver` section of the Test Kitchen suite and is executed during the `converge` phase. The `hardened` configuration represents the final `target configuration state` of our test instance, adhering to the recommended configuration of the Benchmark we are working on. For example, it aligns as closely as possible with the Red Hat Enterprise Linux V1R12 recommendations.
+
+For more details on Test Kitchen's workflow, refer to the [official documentation](http://kitchen.ci/docs/getting-started/).
+
+```journey Test Kitchen Workflow
+  section Setup
+    Checkout Repo: 3:
+    Install Tools: 3:
+    Setup Runner: 3:
+  section Configure
+    Setup Vanilla Instance: 3:
+    Setup Hardened Instance: 3:
+  section Run Test Suite
+    Run Tests on Vanilla: 3:
+    Run Tests on Hardened: 3:
+  section Record Results
+    Save Tests in Pipeline: 3:
+    Upload Tests to Heimdall Server: 3:
+  section Validate Against Threshold
+    Validate the 'vanilla' threshold: 4:
+    Validate the 'hardened' threshold: 4:
+  section Pass/Fail the Run
+    Failed: 1:
+    Passed: 5:
+```
+
+
+## Test Kitchen's Modifications to Targets
+
+Test Kitchen makes minor modifications to the system to facilitate initialization and access. It adds a 'private ssh key' for the default user and sets up primary access to the system for this user using the generated key. Test Kitchen uses the 'platform standard' for access - SSH for Unix/Linux systems and WinRM for Windows systems.
\ No newline at end of file
diff --git a/src/courses/profile-dev-test/16.md b/src/courses/profile-dev-test/16.md
index ec2a17105..b0e3f4492 100644
--- a/src/courses/profile-dev-test/16.md
+++ b/src/courses/profile-dev-test/16.md
@@ -1,33 +1,15 @@
---
order: 16
next: 17.md
-title: Test Kitchen - Converge
+title: Test Kitchen - Create
author: Aaron Lippold
index: true
---
-# Test Kitchen Converge Stage
+# Test Kitchen Create Stage
-The `converge` stage uses Ansible Playbooks from the Ansible Lockdown project to apply hardening configurations, specifically the RHEL8-STIG playbook, and RedHat managed containers.
+The `create` stage in Test Kitchen sets up testing environments. It uses standard and patched images from AWS and Red Hat, including AMI EC2 images, Docker containers, and Vagrant boxes.
-## EC2 and Vagrant Converge
+Test Kitchen automatically fetches the latest images from sources like Amazon Marketplace, DockerHub, Vagrant Marketplace, and Bento Hub. You can customize this to use different images, private repositories (like Platform One's Iron Bank), or local images.
-For EC2 and Vagrant, we use 'wrapper playbooks' for the 'vanilla' and 'hardened' suites.
-
-- The 'vanilla' playbook establishes a basic test environment.
-- The 'hardened' playbook applies the 'vanilla role' and the Ansible Lockdown RHEL8-STIG role to the 'hardened' target, using Ansible Galaxy, a `requirements.txt`, and Ansible Roles.
-
-Some tasks in the hardening role were disabled for automated testing, but this doesn't significantly impact our security posture. 
We can still meet our validation and thresholds.
-
-For more on using these playbooks, running Ansible, or modifying the playbooks, roles, and tasks, see the Ansible Project Website.
-
-Find these roles and 'wrapper playbooks' in the [spec/](./spec/) directory.
-
-## Container Converge
-
-We use RedHat vendor images for both the `vanilla` and `hardened` containers.
-
-- **`vanilla`:** This container uses the `registry.access.redhat.com/ubi8/ubi:8.9-1028` image from RedHat's community repositories.
-- **`hardened`:** This container uses the `registry1.dso.mil/ironbank/redhat/ubi/ubi8` image from Red Hat's Platform One Iron Bank project.
-
-The Iron Bank UBI8 image is regularly patched, updated, and hardened according to STIG requirements.
+For more details on how Test Kitchen manages images, visit the [Test Kitchen website](https://kitchen.ci). You can also refer to the GitHub documentation for the `kitchen-ec2`, `kitchen-vagrant`, `kitchen-sync`, and [`kitchen-inspec`](https://github.com/inspec/kitchen-inspec) projects.
\ No newline at end of file
diff --git a/src/courses/profile-dev-test/17.md b/src/courses/profile-dev-test/17.md
index c597d72a5..3f90b54c7 100644
--- a/src/courses/profile-dev-test/17.md
+++ b/src/courses/profile-dev-test/17.md
@@ -1,17 +1,33 @@
---
order: 17
next: 18.md
-title: Test Kitchen - Validate
+title: Test Kitchen - Converge
author: Aaron Lippold
index: true
---
-# Test Kitchen Validate Stage
+# Test Kitchen Converge Stage
-The `verify` stage uses the `kitchen-inspec` verifier from Test Kitchen to run the profile against the test targets.
+The `converge` stage applies hardening configurations using Ansible Playbooks from the Ansible Lockdown project - specifically the RHEL8-STIG playbook - and Red Hat managed containers.
-For this stage, the profile receives a set of tailored `input` YAML files. These files adjust the testing for each target, ensuring accurate validation against the expected state and minimizing false results.
+## EC2 and Vagrant Converge
-There are also specific `threshold` files for each target environment platform (EC2, container, and Vagrant) in both the `vanilla` and `hardened` suites.
+For EC2 and Vagrant, we use 'wrapper playbooks' for the 'vanilla' and 'hardened' suites.
-The following sections provide a detailed breakdown of these files, their structure, and the workflow organization.
\ No newline at end of file
+- The 'vanilla' playbook establishes a basic test environment.
+- The 'hardened' playbook applies the 'vanilla role' and the Ansible Lockdown RHEL8-STIG role to the 'hardened' target, using Ansible Galaxy, a `requirements.yml`, and Ansible Roles.
+
+Some tasks in the hardening role were disabled for automated testing, but this doesn't significantly impact our security posture. We can still meet our validation thresholds.
+
+For more on using these playbooks, running Ansible, or modifying the playbooks, roles, and tasks, see the [Ansible project website](https://docs.ansible.com).
+
+Find these roles and 'wrapper playbooks' in the [spec/](./spec/) directory.
+
+## Container Converge
+
+We use Red Hat vendor images for both the `vanilla` and `hardened` containers.
+
+- **`vanilla`:** This container uses the `registry.access.redhat.com/ubi8/ubi:8.9-1028` image from Red Hat's community repositories.
+- **`hardened`:** This container uses the `registry1.dso.mil/ironbank/redhat/ubi/ubi8` image from Red Hat's Platform One Iron Bank project.
+
+The Iron Bank UBI8 image is regularly patched, updated, and hardened according to STIG requirements. 
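+
+For reference, here is a minimal sketch of what one of the 'wrapper playbooks' described above might look like. The role names are illustrative assumptions, not the project's actual files:
+
+```yaml
+# Illustrative 'hardened' wrapper playbook - role names are examples only
+- hosts: all
+  become: true
+  roles:
+    - ansible-role-rhel-vanilla   # establish the basic test environment
+    - rhel8-stig                  # Ansible Lockdown hardening role from Galaxy
+```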
diff --git a/src/courses/profile-dev-test/18.md b/src/courses/profile-dev-test/18.md index 42b169106..7772ce08f 100644 --- a/src/courses/profile-dev-test/18.md +++ b/src/courses/profile-dev-test/18.md @@ -1,17 +1,17 @@ --- order: 18 next: 19.md -title: Test Kitchen - Destroy +title: Test Kitchen - Validate author: Aaron Lippold +index: true --- -# Test Kitchen Destroy Stage +# Test Kitchen Validate Stage -The `destroy` stage terminates the EC2 instances, Vagrant boxes, or containers that Test Kitchen created for testing. +The `verify` stage uses the `kitchen-inspec` verifier from Test Kitchen to run the profile against the test targets. -Occasionally, the `destroy` stage may encounter issues if the hosting platforms have altered the state of the provisioned instance during your writing, testing, or debugging sessions. If you face any problems with the `destroy` stage or any other Test Kitchen commands, verify the following: +For this stage, the profile receives a set of tailored `input` YAML files. These files adjust the testing for each target, ensuring accurate validation against the expected state and minimizing false results. -- The test target's login, hostname, and IP address are still accurate. -- The test instance is still running on the hosting platforms. +There are also specific `threshold` files for each target environment platform (EC2, container, and Vagrant) in both the `vanilla` and `hardened` suites. -Sometimes, the solution can be as simple as checking if the instance is still active. \ No newline at end of file +The following sections provide a detailed breakdown of these files, their structure, and the workflow organization. \ No newline at end of file diff --git a/src/courses/profile-dev-test/19.md b/src/courses/profile-dev-test/19.md index 50fc78428..88a0d02ca 100644 --- a/src/courses/profile-dev-test/19.md +++ b/src/courses/profile-dev-test/19.md @@ -1,10 +1,17 @@ --- order: 19 next: 20.md -title: Test Kitchen - .kitchen/ directory +title: Test Kitchen - Destroy author: Aaron Lippold --- -# The `.kitchen/` Directory +# Test Kitchen Destroy Stage -The [`.kitchen/`](/.kitchen/) directory contains the state file for Test Kitchen, which is automatically generated when you first run Test Kitchen. Refer to the [Finding Your Test Target Login Details](#311-locating-test-target-login-details) section to see how you can use the `.kitchen/` directory. +The `destroy` stage terminates the EC2 instances, Vagrant boxes, or containers that Test Kitchen created for testing. + +Occasionally, the `destroy` stage may encounter issues if the hosting platforms have altered the state of the provisioned instance during your writing, testing, or debugging sessions. If you face any problems with the `destroy` stage or any other Test Kitchen commands, verify the following: + +- The test target's login, hostname, and IP address are still accurate. +- The test instance is still running on the hosting platforms. + +Sometimes, the solution can be as simple as checking if the instance is still active. 
\ No newline at end of file diff --git a/src/courses/profile-dev-test/20.md b/src/courses/profile-dev-test/20.md index ea663b481..dd5468c71 100644 --- a/src/courses/profile-dev-test/20.md +++ b/src/courses/profile-dev-test/20.md @@ -1,100 +1,10 @@ --- order: 20 next: 21.md -title: Test Kitchen - `kitchen.yml` File +title: Test Kitchen - .kitchen/ directory author: Aaron Lippold --- -# Understanding the `kitchen.yml` File +# The `.kitchen/` Directory -The [`kitchen.yml`](./kitchen.yml) file is the primary configuration file for Test Kitchen. It outlines the shared configuration for all your testing environments, platforms, and the testing framework to be used. - -Each of the subsequent kitchen files will inherit the shared settings from this file automatlly and merge them with the setting in the child kitchen file. - -## Example `kitchen.yml` file - -```yaml ---- -verifier: - name: inspec - sudo: true - reporter: - - cli - - json:spec/results/%{platform}_%{suite}.json - inspec_tests: - - name: RedHat 8 STIG v1r12 - path: . - input_files: - - kitchen.inputs.yml - <% if ENV['INSPEC_CONTROL'] %> - controls: - - "<%= ENV['INSPEC_CONTROL'] %>" - <% end %> - load_plugins: true - -suites: - - name: vanilla - provisioner: - playbook: spec/ansible/roles/ansible-role-rhel-vanilla.yml - - name: hardened - provisioner: - playbook: spec/ansible/roles/ansible-role-rhel-hardened.yml -``` - -# Breakdown of the `kitchen.yml` file: - -```yaml -verifier: - name: inspec - sudo: true - reporter: - - cli - - json:spec/results/%{platform}_%{suite}.json - inspec_tests: - - name: RedHat 8 STIG v1r12 - path: . - input_files: - - kitchen.inputs.yml - <% if ENV['INSPEC_CONTROL'] %> - controls: - - "<%= ENV['INSPEC_CONTROL'] %>" - <% end %> - load_plugins: true -``` - -This first section configures the verifier, which is the tool that checks if your system is in the desired state. Here, it's using InSpec. - -- `sudo: true` means that InSpec will run with sudo privileges. -- `reporter` specifies the formats in which the test results will be reported. Here, it's set to report in the command-line interface (`cli`) and in a JSON file (`json:spec/results/%{platform}_%{suite}.json`). -- `inspec_tests` specifies the InSpec profiles to run. Here, it's running the "RedHat 8 STIG v1r12" profile located in the current directory (`path: .`). -- `input_files` specifies files that contain input variables for the InSpec profile. Here, it's using the `kitchen.inputs.yml` file. -- The `controls` section is dynamically set based on the `INSPEC_CONTROL` environment variable. If the variable is set, only the specified control will be run. -- `load_plugins: true` means that InSpec will load any available plugins. - -```yaml -suites: - - name: vanilla - provisioner: - playbook: spec/ansible/roles/ansible-role-rhel-vanilla.yml - - name: hardened - provisioner: - playbook: spec/ansible/roles/ansible-role-rhel-hardened.yml -``` - -This section defines the test suites. Each suite represents a different configuration to test. - -- Each suite has a `name` and a `provisioner`. -- The `provisioner` section specifies the Ansible playbook to use for the suite. Here, it's using the `ansible-role-rhel-vanilla.yml` playbook for the "vanilla" suite and the `ansible-role-rhel-hardened.yml` playbook for the "hardened" suite. - -## Environment Variables in `kitchen.yml` - -- `INSPEC_CONTROL`: This variable allows you to specify a single control to run during the `bundle exec kitchen verify` phase. 
This is particularly useful for testing or debugging a specific requirement.
-
-# Recap on Kitchen Stages
-
-The workflow of Test Kitchen involves the following steps:
-
-1. **Create:** Test Kitchen uses the driver to create an instance of the platform.
-2. **Converge:** Test Kitchen uses the provisioner to apply the infrastructure code to the instance. In this case, it's using Ansible playbooks.
-3. **Verify:** Test Kitchen uses the verifier to check if the instance is in the desired state.
-4. **Destroy:** Test Kitchen uses the driver to destroy the instance after testing. This is not shown in your file.
\ No newline at end of file
+The [`.kitchen/`](/.kitchen/) directory contains the state file for Test Kitchen, which is automatically generated when you first run Test Kitchen. Refer to the [Finding Your Test Target Login Details](#311-locating-test-target-login-details) section to see how you can use the `.kitchen/` directory.
diff --git a/src/courses/profile-dev-test/21.md b/src/courses/profile-dev-test/21.md
index 7fe18bd24..85be11441 100644
--- a/src/courses/profile-dev-test/21.md
+++ b/src/courses/profile-dev-test/21.md
@@ -1,118 +1,100 @@
---
order: 21
next: 22.md
-title: Test Kitchen - `kitchen.ec2.yml` File
+title: Test Kitchen - `kitchen.yml` File
author: Aaron Lippold
---
-# Understanding the `kitchen.ec2.yml` File
+# Understanding the `kitchen.yml` File
-The `kitchen.ec2.yml` file is instrumental in setting up our testing targets within the AWS environment. It outlines the configuration details for these targets, including their VPC assignments and the specific settings for each VPC.
+The [`kitchen.yml`](./kitchen.yml) file is the primary configuration file for Test Kitchen. It outlines the shared configuration for all your testing environments, platforms, and the testing framework to be used.
-This file leverages the ` AWS CLI and AWS Credentials` configured as described in the previous [Required Software](#13-required-software) section.
+Each of the subsequent kitchen files automatically inherits the shared settings from this file and merges them with the settings in the child kitchen file.
-Alternatively, if you've set up AWS Environment Variables, the file will use those for AWS interactions. 
- -## Example `kitchen.ec2.yml` file +## Example `kitchen.yml` file ```yaml --- -platforms: - - name: rhel-8 - -driver: - name: ec2 - metadata_options: - http_tokens: required - http_put_response_hop_limit: 1 - instance_metadata_tags: enabled - instance_type: m5.large - associate_public_ip: true - interface: public - skip_cost_warning: true - privileged: true - tags: - CreatedBy: test-kitchen - -provisioner: - name: ansible_playbook - hosts: all - require_chef_for_busser: false - require_ruby_for_busser: false - ansible_binary_path: /usr/local/bin - require_pip3: true - ansible_verbose: true - roles_path: spec/ansible/roles - galaxy_ignore_certs: true - requirements_path: spec/ansible/roles/requirements.yml - ansible_extra_flags: <%= ENV['ANSIBLE_EXTRA_FLAGS'] %> - -lifecycle: - pre_converge: - - remote: | - echo "NOTICE - Installing needed packages" - sudo dnf -y clean all - sudo dnf -y install --nogpgcheck bc bind-utils redhat-lsb-core vim - echo "updating system packages" - sudo dnf -y update --nogpgcheck --nobest - sudo dnf -y distro-sync - echo "NOTICE - Updating the ec2-user to keep sudo working" - sudo chage -d $(( $( date +%s ) / 86400 )) ec2-user - echo "NOTICE - updating ec2-user sudo config" - sudo chmod 600 /etc/sudoers && sudo sed -i'' "/ec2-user/d" /etc/sudoers && sudo chmod 400 /etc/sudoers - -transport: - name: ssh - max_ssh_sessions: 2 +verifier: + name: inspec + sudo: true + reporter: + - cli + - json:spec/results/%{platform}_%{suite}.json + inspec_tests: + - name: RedHat 8 STIG v1r12 + path: . + input_files: + - kitchen.inputs.yml + <% if ENV['INSPEC_CONTROL'] %> + controls: + - "<%= ENV['INSPEC_CONTROL'] %>" + <% end %> + load_plugins: true + +suites: + - name: vanilla + provisioner: + playbook: spec/ansible/roles/ansible-role-rhel-vanilla.yml + - name: hardened + provisioner: + playbook: spec/ansible/roles/ansible-role-rhel-hardened.yml ``` -# Breakdown of the `kitchen.ec2.yml` file +# Breakdown of the `kitchen.yml` file: ```yaml -platforms: - - name: rhel-8 +verifier: + name: inspec + sudo: true + reporter: + - cli + - json:spec/results/%{platform}_%{suite}.json + inspec_tests: + - name: RedHat 8 STIG v1r12 + path: . + input_files: + - kitchen.inputs.yml + <% if ENV['INSPEC_CONTROL'] %> + controls: + - "<%= ENV['INSPEC_CONTROL'] %>" + <% end %> + load_plugins: true ``` -This section defines the platforms on which your tests will run. In this case, it's Red Hat Enterprise Linux 8. +This first section configures the verifier, which is the tool that checks if your system is in the desired state. Here, it's using InSpec. -```yaml -driver: - name: ec2 - ... -``` - -This section configures the driver, which is responsible for creating and managing the instances. Here, it's set to use Amazon EC2 instances. The various options configure the EC2 instances, such as instance type (`m5.large`), whether to associate a public IP address (`associate_public_ip: true`), and various metadata options. +- `sudo: true` means that InSpec will run with sudo privileges. +- `reporter` specifies the formats in which the test results will be reported. Here, it's set to report in the command-line interface (`cli`) and in a JSON file (`json:spec/results/%{platform}_%{suite}.json`). +- `inspec_tests` specifies the InSpec profiles to run. Here, it's running the "RedHat 8 STIG v1r12" profile located in the current directory (`path: .`). +- `input_files` specifies files that contain input variables for the InSpec profile. Here, it's using the `kitchen.inputs.yml` file. 
+- The `controls` section is dynamically set based on the `INSPEC_CONTROL` environment variable. If the variable is set, only the specified control will be run. +- `load_plugins: true` means that InSpec will load any available plugins. ```yaml -provisioner: - name: ansible_playbook - ... +suites: + - name: vanilla + provisioner: + playbook: spec/ansible/roles/ansible-role-rhel-vanilla.yml + - name: hardened + provisioner: + playbook: spec/ansible/roles/ansible-role-rhel-hardened.yml ``` -This section configures the provisioner, which is the tool that brings your system to the desired state. Here, it's using Ansible playbooks. The various options configure how Ansible is run, such as the path to the Ansible binary (`ansible_binary_path: /usr/local/bin`), whether to require pip3 (`require_pip3: true`), and the path to the roles and requirements files. +This section defines the test suites. Each suite represents a different configuration to test. -```yaml -lifecycle: - pre_converge: - - remote: | - ... -``` +- Each suite has a `name` and a `provisioner`. +- The `provisioner` section specifies the Ansible playbook to use for the suite. Here, it's using the `ansible-role-rhel-vanilla.yml` playbook for the "vanilla" suite and the `ansible-role-rhel-hardened.yml` playbook for the "hardened" suite. -This section defines lifecycle hooks, which are commands that run at certain points in the Test Kitchen run. Here, it's running a series of commands before the converge phase (i.e., before applying the infrastructure code). These commands install necessary packages, update system packages, and update the `ec2-user` configuration. +## Environment Variables in `kitchen.yml` -```yaml -transport: - name: ssh - max_ssh_sessions: 2 -``` +- `INSPEC_CONTROL`: This variable allows you to specify a single control to run during the `bundle exec kitchen verify` phase. This is particularly useful for testing or debugging a specific requirement. -This section configures the transport, which is the method Test Kitchen uses to communicate with the instance. Here, it's using SSH and allowing a maximum of 2 SSH sessions. +# Recap on Kitchen Stages The workflow of Test Kitchen involves the following steps: 1. **Create:** Test Kitchen uses the driver to create an instance of the platform. -2. **Converge:** Test Kitchen uses the provisioner to apply the infrastructure code to the instance. Before this phase, it runs the commands defined in the `pre_converge` lifecycle hook. -3. **Verify:** Test Kitchen checks if the instance is in the desired state. This is not shown in your file, but it would be configured in the `verifier` section. -4. **Destroy:** Test Kitchen uses the driver to destroy the instance after testing. This is not shown in your file, but it would be configured in the `driver` section. - -The `transport` is used in all these steps to communicate with the instance. \ No newline at end of file +2. **Converge:** Test Kitchen uses the provisioner to apply the infrastructure code to the instance. In this case, it's using Ansible playbooks. +3. **Verify:** Test Kitchen uses the verifier to check if the instance is in the desired state. +4. **Destroy:** Test Kitchen uses the driver to destroy the instance after testing. This is not shown in your file. 
\ No newline at end of file
diff --git a/src/courses/profile-dev-test/22.md b/src/courses/profile-dev-test/22.md
index aa0ada474..406ba0715 100644
--- a/src/courses/profile-dev-test/22.md
+++ b/src/courses/profile-dev-test/22.md
@@ -1,120 +1,118 @@
---
order: 22
next: 23.md
-title: Test Kitchen - `kitchen.container.yml`
+title: Test Kitchen - `kitchen.ec2.yml` File
author: Aaron Lippold
---
-# Understanding the [`kitchen.container.yml`](./kitchen.container.yml)
+# Understanding the `kitchen.ec2.yml` File
-The `kitchen.container.yml` file orchestrates our container-based test suite. It defines two types of containers, hardened and vanilla, and specifies the inspec_tests to run against them. It also configures the generation and storage of test reports.
+The `kitchen.ec2.yml` file is instrumental in setting up our testing targets within the AWS environment. It outlines the configuration details for these targets, including their VPC assignments and the specific settings for each VPC.
-Unlike other test suites, the container suite skips the 'provisioner' stage for the vanilla and hardened targets. Instead, during the create stage, it simply downloads and starts the specified images. This is due to the use of the [dummy Test Kitchen driver](https://github.com/test-kitchen/test-kitchen/blob/main/lib/kitchen/driver/dummy.rb), which is ideal for interacting with pre-configured or immutable targets like containers.
+This file leverages the `AWS CLI` and `AWS Credentials` configured as described in the previous [Required Software](#13-required-software) section.
-This approach allows for the evaluation of existing containers, even those created by other workflows. It can be leveraged to build a generalized workflow for validating any container against our Benchmark requirements, providing a comprehensive assessment of its security posture.
+Alternatively, if you've set up AWS Environment Variables, the file will use those for AWS interactions. 
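+
+For example, a typical set of AWS environment variables might look like the following (the values are placeholders, not real credentials):
+
+```shell
+# Placeholder values - substitute your own credentials and region
+export AWS_ACCESS_KEY_ID="AKIA..."
+export AWS_SECRET_ACCESS_KEY="..."
+export AWS_REGION="us-east-1"
+```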
-## Example `kitchen.container.yml` file +## Example `kitchen.ec2.yml` file ```yaml --- -# see: https://kitchen.ci/docs/drivers/dokken/ - -provisioner: - name: dummy +platforms: + - name: rhel-8 driver: - name: dokken - pull_platform_image: false + name: ec2 + metadata_options: + http_tokens: required + http_put_response_hop_limit: 1 + instance_metadata_tags: enabled + instance_type: m5.large + associate_public_ip: true + interface: public + skip_cost_warning: true + privileged: true + tags: + CreatedBy: test-kitchen -transport: - name: dokken +provisioner: + name: ansible_playbook + hosts: all + require_chef_for_busser: false + require_ruby_for_busser: false + ansible_binary_path: /usr/local/bin + require_pip3: true + ansible_verbose: true + roles_path: spec/ansible/roles + galaxy_ignore_certs: true + requirements_path: spec/ansible/roles/requirements.yml + ansible_extra_flags: <%= ENV['ANSIBLE_EXTRA_FLAGS'] %> + +lifecycle: + pre_converge: + - remote: | + echo "NOTICE - Installing needed packages" + sudo dnf -y clean all + sudo dnf -y install --nogpgcheck bc bind-utils redhat-lsb-core vim + echo "updating system packages" + sudo dnf -y update --nogpgcheck --nobest + sudo dnf -y distro-sync + echo "NOTICE - Updating the ec2-user to keep sudo working" + sudo chage -d $(( $( date +%s ) / 86400 )) ec2-user + echo "NOTICE - updating ec2-user sudo config" + sudo chmod 600 /etc/sudoers && sudo sed -i'' "/ec2-user/d" /etc/sudoers && sudo chmod 400 /etc/sudoers -platforms: - - name: ubi8 - -suites: - - name: vanilla - driver: - image: <%= ENV['VANILLA_CONTAINER_IMAGE'] || "registry.access.redhat.com/ubi8/ubi:8.9-1028" %> - verifier: - input_files: - - container.vanilla.inputs.yml - - name: hardened - driver: - image: <%= ENV['HARDENED_CONTAINER_IMAGE'] || "registry1.dso.mil/ironbank/redhat/ubi/ubi8" %> - verifier: - input_files: - - container.hardened.inputs.yml - # creds_file: './creds.json' +transport: + name: ssh + max_ssh_sessions: 2 ``` -# Breakdown of the `kitchen.container.yml` file: +# Breakdown of the `kitchen.ec2.yml` file ```yaml -provisioner: - name: dummy +platforms: + - name: rhel-8 ``` -This section configures the provisioner, which is the tool that brings your system to the desired state. Here, it's using a dummy provisioner, which means no provisioning will be done. +This section defines the platforms on which your tests will run. In this case, it's Red Hat Enterprise Linux 8. ```yaml driver: - name: dokken - pull_platform_image: false + name: ec2 + ... ``` -This section configures the driver, which is responsible for creating and managing the instances. Here, it's set to use the Dokken driver, which is designed for running tests in Docker containers. The `pull_platform_image: false` option means that it won't automatically pull the Docker image for the platform; it will use the image specified in the suite. +This section configures the driver, which is responsible for creating and managing the instances. Here, it's set to use Amazon EC2 instances. The various options configure the EC2 instances, such as instance type (`m5.large`), whether to associate a public IP address (`associate_public_ip: true`), and various metadata options. ```yaml -transport: - name: dokken +provisioner: + name: ansible_playbook + ... ``` -This section configures the transport, which is the method Test Kitchen uses to communicate with the instance. Here, it's using the Dokken transport, which communicates with the Docker container. 
+This section configures the provisioner, which is the tool that brings your system to the desired state. Here, it's using Ansible playbooks. The various options configure how Ansible is run, such as the path to the Ansible binary (`ansible_binary_path: /usr/local/bin`), whether to require pip3 (`require_pip3: true`), and the path to the roles and requirements files. ```yaml -platforms: - - name: ubi8 +lifecycle: + pre_converge: + - remote: | + ... ``` -This section defines the platforms on which your tests will run. In this case, it's UBI 8 (Red Hat's Universal Base Image 8). +This section defines lifecycle hooks, which are commands that run at certain points in the Test Kitchen run. Here, it's running a series of commands before the converge phase (i.e., before applying the infrastructure code). These commands install necessary packages, update system packages, and update the `ec2-user` configuration. ```yaml -suites: - - name: vanilla - driver: - image: <%= ENV['VANILLA_CONTAINER_IMAGE'] || "registry.access.redhat.com/ubi8/ubi:8.9-1028" %> - verifier: - input_files: - - container.vanilla.inputs.yml - - name: hardened - driver: - image: <%= ENV['HARDENED_CONTAINER_IMAGE'] || "registry1.dso.mil/ironbank/redhat/ubi/ubi8" %> - verifier: - input_files: - - container.hardened.inputs.yml +transport: + name: ssh + max_ssh_sessions: 2 ``` -This section defines the test suites. Each suite represents a different configuration to test. - -- Each suite has a `name`, a `driver`, and a `verifier`. -- The `driver` section specifies the Docker image to use for the suite. It's dynamically set based on the `VANILLA_CONTAINER_IMAGE` or `HARDENED_CONTAINER_IMAGE` environment variable, with a default value if the variable is not set. -- The `verifier` section specifies files that contain input variables for the InSpec profile. +This section configures the transport, which is the method Test Kitchen uses to communicate with the instance. Here, it's using SSH and allowing a maximum of 2 SSH sessions. The workflow of Test Kitchen involves the following steps: -1. **Create:** Test Kitchen uses the driver to create a Docker container of the platform. -2. **Converge:** Test Kitchen uses the provisioner to apply the infrastructure code to the container. In this case, no provisioning is done. -3. **Verify:** Test Kitchen checks if the container is in the desired state. This is not shown in your file, but it would be configured in the `verifier` section. -4. **Destroy:** Test Kitchen uses the driver to destroy the container after testing. This is not shown in your file, but it would be configured in the `driver` section. - -The `transport` is used in all these steps to communicate with the container. - -## Environment Variables in `kitchen.container.yml` - -The `kitchen.container.yml` file uses the following environment variables to select the containers used during its `hardened` and `vanilla` testing runs. You can test any container using these environment variables, even though standard defaults are set. +1. **Create:** Test Kitchen uses the driver to create an instance of the platform. +2. **Converge:** Test Kitchen uses the provisioner to apply the infrastructure code to the instance. Before this phase, it runs the commands defined in the `pre_converge` lifecycle hook. +3. **Verify:** Test Kitchen checks if the instance is in the desired state. This is not shown in your file, but it would be configured in the `verifier` section. +4. **Destroy:** Test Kitchen uses the driver to destroy the instance after testing. 
This is not shown in your file, but it would be configured in the `driver` section.
-- `VANILLA_CONTAINER_IMAGE`: This variable specifies the Docker container image considered 'not hardened'.
-  - default: `registry.access.redhat.com/ubi8/ubi:8.9-1028`
-- `HARDENED_CONTAINER_IMAGE`: This variable specifies the Docker container image considered 'hardened'.
-  - default: `registry1.dso.mil/ironbank/redhat/ubi/ubi8` \ No newline at end of file
+The `transport` is used in all these steps to communicate with the instance. \ No newline at end of file
diff --git a/src/courses/profile-dev-test/23.md b/src/courses/profile-dev-test/23.md
index e707ec18a..441ffd968 100644
--- a/src/courses/profile-dev-test/23.md
+++ b/src/courses/profile-dev-test/23.md
@@ -1,36 +1,120 @@
---
order: 23
next: 24.md
-title: GitHub Actions
+title: Test Kitchen - `kitchen.container.yml`
author: Aaron Lippold
---
-# GitHub Actions
+# Understanding the [`kitchen.container.yml`](./kitchen.container.yml)
-## [`lint-profile.yml`](.github/workflows/lint-profile.yml)
+The `kitchen.container.yml` file orchestrates our container-based test suite. It defines two types of containers, hardened and vanilla, and specifies the `inspec_tests` to run against them. It also configures the generation and storage of test reports.
-This action checks out the repository, installs Ruby and InSpec, then runs `bundle exec inspec check .` to validate the structure and syntax of the InSpec profile and its Ruby code.
+Unlike other test suites, the container suite skips the 'provisioner' stage for the vanilla and hardened targets. Instead, during the create stage, it simply downloads and starts the specified images. This is because the suite pairs the Dokken driver with the [dummy Test Kitchen provisioner](https://github.com/test-kitchen/test-kitchen/blob/main/lib/kitchen/provisioner/dummy.rb), a combination that is ideal for interacting with pre-configured or immutable targets like containers.
-## [`verify-ec2.yml`](.github/workflows/verify-ec2.yml)
+This approach allows for the evaluation of existing containers, even those created by other workflows. It can be leveraged to build a generalized workflow for validating any container against our Benchmark requirements, providing a comprehensive assessment of its security posture.
-This action performs the following steps:
+## Example `kitchen.container.yml` file
-1. Checks out the repository.
-2. Installs Ruby, InSpec, AWS CLI, and Test Kitchen along with its drivers.
-3. Sets up the 'runner'.
-4. Configures access to the AWS VPC environment.
-5. Runs the `vanilla` and `hardened` test suites.
-6. Displays a summary of the test suite results.
-7. Saves the test suite results.
-8. Uploads the results to our Heimdall Demo server.
-9. Determines the success or failure of the test run based on the validation of the test suite results against the `threshold.yml` files for each test suite (`hardened` and `vanilla`).
+```yaml +--- +# see: https://kitchen.ci/docs/drivers/dokken/ + +provisioner: + name: dummy + +driver: + name: dokken + pull_platform_image: false + +transport: + name: dokken + +platforms: + - name: ubi8 + +suites: + - name: vanilla + driver: + image: <%= ENV['VANILLA_CONTAINER_IMAGE'] || "registry.access.redhat.com/ubi8/ubi:8.9-1028" %> + verifier: + input_files: + - container.vanilla.inputs.yml + - name: hardened + driver: + image: <%= ENV['HARDENED_CONTAINER_IMAGE'] || "registry1.dso.mil/ironbank/redhat/ubi/ubi8" %> + verifier: + input_files: + - container.hardened.inputs.yml + # creds_file: './creds.json' +``` + +# Breakdown of the `kitchen.container.yml` file: + +```yaml +provisioner: + name: dummy +``` + +This section configures the provisioner, which is the tool that brings your system to the desired state. Here, it's using a dummy provisioner, which means no provisioning will be done. + +```yaml +driver: + name: dokken + pull_platform_image: false +``` + +This section configures the driver, which is responsible for creating and managing the instances. Here, it's set to use the Dokken driver, which is designed for running tests in Docker containers. The `pull_platform_image: false` option means that it won't automatically pull the Docker image for the platform; it will use the image specified in the suite. + +```yaml +transport: + name: dokken +``` + +This section configures the transport, which is the method Test Kitchen uses to communicate with the instance. Here, it's using the Dokken transport, which communicates with the Docker container. + +```yaml +platforms: + - name: ubi8 +``` + +This section defines the platforms on which your tests will run. In this case, it's UBI 8 (Red Hat's Universal Base Image 8). + +```yaml +suites: + - name: vanilla + driver: + image: <%= ENV['VANILLA_CONTAINER_IMAGE'] || "registry.access.redhat.com/ubi8/ubi:8.9-1028" %> + verifier: + input_files: + - container.vanilla.inputs.yml + - name: hardened + driver: + image: <%= ENV['HARDENED_CONTAINER_IMAGE'] || "registry1.dso.mil/ironbank/redhat/ubi/ubi8" %> + verifier: + input_files: + - container.hardened.inputs.yml +``` + +This section defines the test suites. Each suite represents a different configuration to test. + +- Each suite has a `name`, a `driver`, and a `verifier`. +- The `driver` section specifies the Docker image to use for the suite. It's dynamically set based on the `VANILLA_CONTAINER_IMAGE` or `HARDENED_CONTAINER_IMAGE` environment variable, with a default value if the variable is not set. +- The `verifier` section specifies files that contain input variables for the InSpec profile. + +The workflow of Test Kitchen involves the following steps: -## [`verify-container.yml`](.github/workflows/verify-container.yml) +1. **Create:** Test Kitchen uses the driver to create a Docker container of the platform. +2. **Converge:** Test Kitchen uses the provisioner to apply the infrastructure code to the container. In this case, no provisioning is done. +3. **Verify:** Test Kitchen checks if the container is in the desired state. This is not shown in your file, but it would be configured in the `verifier` section. +4. **Destroy:** Test Kitchen uses the driver to destroy the container after testing. This is not shown in your file, but it would be configured in the `driver` section. -This action performs similar steps to `verify-ec2.yml`, but with some differences: +The `transport` is used in all these steps to communicate with the container. -1. 
It configures access to the required container registries - Platform One and Red Hat. +## Environment Variables in `kitchen.container.yml` -## [`verify-vagrant.yml.example`](.github/workflows/verify-vagrant.yml.example) +The `kitchen.container.yml` file uses the following environment variables to select the containers used during its `hardened` and `vanilla` testing runs. You can test any container using these environment variables, even though standard defaults are set. -This action is similar to the `verify-ec2` workflow, but instead of using a remote AWS EC2 instance in a VPC, it uses a local Vagrant virtual machine as the test target. The user can configure whether to upload the results to our Heimdall Demo server or not by modifing the Github Action. \ No newline at end of file +- `VANILLA_CONTAINER_IMAGE`: This variable specifies the Docker container image considered 'not hardened'. + - default: `registry.access.redhat.com/ubi8/ubi:8.9-1028` +- `HARDENED_CONTAINER_IMAGE`: This variable specifies the Docker container image considered 'hardened'. + - default: `registry1.dso.mil/ironbank/redhat/ubi/ubi8` \ No newline at end of file diff --git a/src/courses/profile-dev-test/24.md b/src/courses/profile-dev-test/24.md index bb02214ed..409481bb1 100644 --- a/src/courses/profile-dev-test/24.md +++ b/src/courses/profile-dev-test/24.md @@ -1,66 +1,36 @@ --- order: 24 next: 25.md -title: InSpec Delta - Laying the Ground for a Clean Release Branch -shortTitle: Delta - Prep & Setup +title: GitHub Actions author: Aaron Lippold --- -# InSpec Delta +# GitHub Actions -## Preparing the Profile Before Running Delta +## [`lint-profile.yml`](.github/workflows/lint-profile.yml) -Before running Delta, it's beneficial to format the profile to match the format Delta will use. This minimizes changes to only those necessary based on the guidance update. Follow these steps: +This action checks out the repository, installs Ruby and InSpec, then runs `bundle exec inspec check .` to validate the structure and syntax of the InSpec profile and its Ruby code. -1. **Run Cookstyle:** Install the Cookstyle gem and use it to lint the controls into Cookstyle format. Verify the gem installation with `gem list cookstyle`. Create a `.rubocop.yml` file with the provided example settings or modify these settings via the command line. Run `cookstyle -a ./controls` and any tests you have for your profile. +## [`verify-ec2.yml`](.github/workflows/verify-ec2.yml) -```shell -AllCops: - Exclude: - - "libraries/**/*" +This action performs the following steps: -Layout/LineLength: - Max: 1000 - AllowURI: true - IgnoreCopDirectives: true +1. Checks out the repository. +2. Installs Ruby, InSpec, AWS CLI, and Test Kitchen along with its drivers. +3. Sets up the 'runner'. +4. Configures access to the AWS VPC environment. +5. Runs the `vanilla` and `hardened` test suites. +6. Displays a summary of the test suite results. +7. Saves the test suite results. +8. Uploads the results to our Heimdall Demo server. +9. Determines the success or failure of the test run based on the validation of the test suite results against the `threshold.yml` files for each test suite (`hardened` and `vanilla`). -Naming/FileName: - Enabled: false +## [`verify-container.yml`](.github/workflows/verify-container.yml) -Metrics/BlockLength: - Max: 400 +This action performs similar steps to `verify-ec2.yml`, but with some differences: -Lint/ConstantDefinitionInBlock: - Enabled: false +1. 
It configures access to the required container registries - Platform One and Red Hat.
-# Required for Profiles as it can introduce profile errors
-Style/NumericPredicate:
-  Enabled: false
+## [`verify-vagrant.yml.example`](.github/workflows/verify-vagrant.yml.example)
-Style/WordArray:
-  Description: "Use %w or %W for an array of words. (https://rubystyle.guide#percent-w)"
-  Enabled: false
-
-Style/RedundantPercentQ:
-  Enabled: true
-
-Style/NestedParenthesizedCalls:
-  Enabled: false
-
-Style/TrailingCommaInHashLiteral:
-  Description: "https://docs.rubocop.org/rubocop/cops_style.html#styletrailingcommainhashliteral"
-  Enabled: true
-  EnforcedStyleForMultiline: no_comma
-
-Style/TrailingCommaInArrayLiteral:
-  Enabled: true
-  EnforcedStyleForMultiline: no_comma
-
-Style/BlockDelimiters:
-  Enabled: false
-
-Lint/AmbiguousBlockAssociation:
-  Enabled: false
-```
-
-2. **Run the SAF CLI Command:** Use `saf generate update_controls4delta` to check and update the control IDs with the provided XCCDF guidance. This process checks if the new guidance changes the control numbers and updates them if necessary. This minimizes the Delta output content and improves the visualization of the modifications provided by the Delta process.
+This action is similar to the `verify-ec2` workflow, but instead of using a remote AWS EC2 instance in a VPC, it uses a local Vagrant virtual machine as the test target. The user can configure whether to upload the results to our Heimdall Demo server or not by modifying the GitHub Action. \ No newline at end of file
diff --git a/src/courses/profile-dev-test/25.md b/src/courses/profile-dev-test/25.md
index 723f5d5ee..99e5f9ac7 100644
--- a/src/courses/profile-dev-test/25.md
+++ b/src/courses/profile-dev-test/25.md
@@ -1,47 +1,66 @@
---
order: 25
next: 26.md
-title: InSpec Delta - Making the Delta Release Branch
-shortTitle: Delta - Making your Branch
+title: InSpec Delta - Laying the Ground for a Clean Release Branch
+shortTitle: Delta - Prep & Setup
author: Aaron Lippold
---
-# Prepair Your Environment
+# InSpec Delta
-- **Download New Guidance:** Download the appropriate profile from the [DISA Document Library](https://public.cyber.mil/stigs/downloads/). Unzip the downloaded folder and identify the `xccdf.xml` file.
-- **Create the InSpec Profile JSON File:** Clone or download the InSpec profile locally. Run the `inspec json` command to create the InSpec Profile JSON file to be used in the `saf generate delta` command.
+## Preparing the Profile Before Running Delta
-## Delta Workflow Process
+Before running Delta, it's beneficial to format the profile to match the format Delta will use. This minimizes changes to only those necessary based on the guidance update. Follow these steps:
-![Delta Workflow Process](https://user-images.githubusercontent.com/13986875/228628448-ad6b9fd9-d165-4e65-95e2-a951031d19e2.png "Delta Workflow Process Image")
+1. **Run Cookstyle:** Install the Cookstyle gem and use it to lint the controls into Cookstyle format. Verify the gem installation with `gem list cookstyle`. Create a `.rubocop.yml` file with the provided example settings or modify these settings via the command line. Run `cookstyle -a ./controls` and any tests you have for your profile.
-## Using Delta
+```yaml
+AllCops:
+  Exclude:
+    - "libraries/**/*"
-The SAF InSpec Delta workflow typically involves two phases, `preformatting` and `delta`.
+Layout/LineLength: + Max: 1000 + AllowURI: true + IgnoreCopDirectives: true -Before starting, ensure you have the latest SAF-CLI, the InSpec Profile JSON file, and the updated guidance file. +Naming/FileName: + Enabled: false -1. **Preformat the Source Profile:** Before running the Delta command, preformat your source profile (usually the Patch Release profile) using the `saf generate update_controls4delta` command. This prepares the profile for the Delta process. -2. **Run the Delta Command:** Execute `saf generate delta [arguments]` to start the Delta process. +Metrics/BlockLength: + Max: 400 -For more information on these commands, refer to the following documentation: +Lint/ConstantDefinitionInBlock: + Enabled: false -- [update_controls4delta](https://saf-cli.mitre.org/#delta-supporting-options) -- [saf generate delta](https://saf-cli.mitre.org/#delta) +# Required for Profiles as it can introduce profile errors +Style/NumericPredicate: + Enabled: false -## Scope of Changes by Delta +Style/WordArray: + Description: "Use %w or %W for an array of words. (https://rubystyle.guide#percent-w)" + Enabled: false -Delta focuses on specific modifications migrating the changes from the XCCDF Benchmark Rules to the Profiles controls, and updating the 'metadata' of each of thosin the `control ID`, `title`, `default desc`, `check text`, and `fix text`, between the XCCDF Benchmark Rules and the Profile Controls. +Style/RedundantPercentQ: + Enabled: true -If the XCCDF Guidance Document introduces a new 'Rule' or `inspec control` that is not in the current profile's `controls` directory, Delta will add it to the controls directory, populating the metadata from the XCCDF Benchmark data, similar to the [xccdf-benchmark-to-inspec-stubs](https://saf-cli.mitre.org/#xccdf-benchmark-to-inspec-stub) tool. +Style/NestedParenthesizedCalls: + Enabled: false -It also adjusts the `tags` and introduces a `ref` between the `impact` and `tags`. +Style/TrailingCommaInHashLiteral: + Description: "https://docs.rubocop.org/rubocop/cops_style.html#styletrailingcommainhashliteral" + Enabled: true + EnforcedStyleForMultiline: no_comma -Delta does not modify the Ruby/InSpec code within the control, leaving it intact. Instead, it updates the 'control metadata' using the information from the supplied XCCDF guidance document. This applies to 'matched controls' between the XCCDF Guidance Document and the InSpec profile. +Style/TrailingCommaInArrayLiteral: + Enabled: true + EnforcedStyleForMultiline: no_comma -### Further InSpec Delta Information and Background +Style/BlockDelimiters: + Enabled: false -- The original Delta branch can be found [here](https://github.com/mitre/saf/pull/485). -- Delta moves lines not labeled with 'desc' to the bottom, between tags and InSpec code. -- Whether the controls are formatted to be 80 lines or not, Delta exhibits the same behavior with the extra text. -- Parameterizing should be considered. \ No newline at end of file +Lint/AmbiguousBlockAssociation: + Enabled: false +``` + +2. **Run the SAF CLI Command:** Use `saf generate update_controls4delta` to check and update the control IDs with the provided XCCDF guidance. This process checks if the new guidance changes the control numbers and updates them if necessary. This minimizes the Delta output content and improves the visualization of the modifications provided by the Delta process. 
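+
+As a minimal sketch of this preformatting step - the XCCDF file name below is a placeholder, and the exact flag spellings are assumptions to confirm with `saf generate update_controls4delta -h`:
+
+```shell
+# Lint the controls in place (uses the .rubocop.yml settings above)
+cookstyle -a ./controls
+
+# Align the control IDs in ./controls with the new XCCDF guidance
+saf generate update_controls4delta -X ./U_RHEL_8_STIG_V1R13_xccdf.xml -c ./controls
+```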
diff --git a/src/courses/profile-dev-test/26.md b/src/courses/profile-dev-test/26.md
index 55e114c9b..807ec94e0 100644
--- a/src/courses/profile-dev-test/26.md
+++ b/src/courses/profile-dev-test/26.md
@@ -1,64 +1,47 @@
---
order: 26
next: 27.md
-title: Tips, Tricks & Troubleshooting
-shortTitle: Tips & Troubleshooting
+title: InSpec Delta - Making the Delta Release Branch
+shortTitle: Delta - Making your Branch
author: Aaron Lippold
---
-# Tips, Tricks and Troubleshooting
+# Prepare Your Environment
-## Test Kitchen
+- **Download New Guidance:** Download the appropriate profile from the [DISA Document Library](https://public.cyber.mil/stigs/downloads/). Unzip the downloaded folder and identify the `xccdf.xml` file.
+- **Create the InSpec Profile JSON File:** Clone or download the InSpec profile locally. Run the `inspec json` command to create the InSpec Profile JSON file to be used in the `saf generate delta` command.
-### Locating Test Target Login Details
+## Delta Workflow Process
-Test Kitchen stores the current host details of your provisioned test targets in the `.kitchen/` directory. Here, you'll find a `yml` file containing your target's `hostname`, `ip address`, `host details`, and login credentials, which could be an `ssh pem key` or another type of credential.
+![Delta Workflow Process](https://user-images.githubusercontent.com/13986875/228628448-ad6b9fd9-d165-4e65-95e2-a951031d19e2.png "Delta Workflow Process Image")
-```shell
-.kitchen
-├── .kitchen/hardened-container.yml
-├── .kitchen/hardened-rhel-8.pem
-├── .kitchen/hardened-rhel-8.yml
-├── .kitchen/logs
-├── .kitchen/vanilla-container.yml
-├── .kitchen/vanilla-rhel-8.pem
-├── .kitchen/vanilla-rhel-8.yml
-└── .kitchen/vanilla-ubi8.yml
-```
+## Using Delta
-### Restoring Access to a Halted or Restarted Test Target
+The SAF InSpec Delta workflow typically involves two phases, `preformatting` and `delta`.
-If your test target reboots or updates its network information, you don't need to execute bundle exec kitchen destroy. Instead, update the corresponding .kitchen/#{suite}-#{target}.yml file with the updated information. This will ensure that your kitchen login, kitchen validate, and other kitchen commands function correctly, as they'll be connecting to the correct location instead of using outdated data.
+Before starting, ensure you have the latest SAF-CLI, the InSpec Profile JSON file, and the updated guidance file.
-### AWS Console and EC2 Oddities
+1. **Preformat the Source Profile:** Before running the Delta command, preformat your source profile (usually the Patch Release profile) using the `saf generate update_controls4delta` command. This prepares the profile for the Delta process.
+2. **Run the Delta Command:** Execute `saf generate delta [arguments]` to start the Delta process. A sketch of this step appears below.
-Since we're using the free-tier for our AWS testing resources instead of a dedicated host, your test targets might shut down or 'reboot in the background' if you stop interacting with them, halt them, put them in a stop state, or leave them overnight. To regain access, edit the .kitchen/#{suite}-#{target}.yml file. As mentioned above, there's no need to recreate your testing targets if you can simply point Test Kitchen to the correct IP address.
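+
+A rough sketch of this step - the XCCDF file name is a placeholder, and the flags are assumptions to verify against the SAF CLI documentation linked below:
+
+```shell
+# Create the InSpec Profile JSON file from the local profile checkout
+inspec json . > profile.json
+
+# Run the Delta process, writing the updated controls into the current profile
+saf generate delta -J profile.json -X ./U_RHEL_8_STIG_V1R13_xccdf.xml -o .
+```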
+For more information on these commands, refer to the following documentation:
-## InSpec / Ruby
+- [update_controls4delta](https://saf-cli.mitre.org/#delta-supporting-options)
+- [saf generate delta](https://saf-cli.mitre.org/#delta)
-### Using `pry` and `pry-byebug` for Debugging Controls
+## Scope of Changes by Delta
-When developing InSpec controls, it's beneficial to use the `kitchen-test` suite, the `INSPEC_CONTROL` environment variable, and `pry` or `pry-byebug`. This combination allows you to quickly debug, update, and experiment with your fixes in the context of the InSpec code, without having to run the full test suite.
+Delta focuses on a specific set of modifications: it migrates the changes from the XCCDF Benchmark Rules to the profile's controls, updating the 'metadata' of each of those controls - the `control ID`, `title`, `default desc`, `check text`, and `fix text` - so that it matches the XCCDF Benchmark Rules.
-`pry` and `pry-byebug` are powerful tools for debugging Ruby code, including InSpec controls. Here's how you can use them:
+If the XCCDF Guidance Document introduces a new 'Rule' or `inspec control` that is not in the current profile's `controls` directory, Delta will add it to the controls directory, populating the metadata from the XCCDF Benchmark data, similar to the [xccdf-benchmark-to-inspec-stubs](https://saf-cli.mitre.org/#xccdf-benchmark-to-inspec-stub) tool.
-1. First, add `require 'pry'` or `require 'pry-byebug'` at the top of your control file.
-2. Then, insert `binding.pry` at the point in your code where you want to start debugging.
-3. When you run your tests, execution will stop at the `binding.pry` line, and you can inspect variables, step through the code, and more.
+It also adjusts the `tags` and introduces a `ref` field between the `impact` and the `tags`.
-***!Pro Tip!***
+Delta does not modify the Ruby/InSpec code within the control, leaving it intact. Instead, it updates the 'control metadata' using the information from the supplied XCCDF guidance document. This applies to 'matched controls' between the XCCDF Guidance Document and the InSpec profile.
-- Remember to remove or comment out the `binding.pry` lines when you're done debugging or you won't have a good 'linting' down the road.
+### Further InSpec Delta Information and Background
-### Streamlining Your Testing with `inspec shell`
-
-The `inspec shell` command allows you to test your full control update on your test target directly. To do this, you'll need to retrieve the IP address and SSH PEM key for your target instance from the Test Kitchen `.kitchen` directory. For more details on this, refer to the [Finding Your Test Target Login Details](#311-locating-test-target-login-details) section.
-
-Once you have your IP address and SSH PEM key (for AWS target instances), or the container ID (for Docker test instances), you can use the following commands:
-
-- For AWS test targets: `bundle exec inspec shell -i #{pem-key} -t ssh://ec2-user@#{ipaddress} --sudo`
-- For Docker test instances: `bundle exec inspec shell -t docker://#{container-id}`
-
-### Using `kitchen login` for Easy Test Review and Modification
-
-The `kitchen login` command provides an easy way to review and modify your test target. This tool is particularly useful for introducing test cases, exploring corner cases, and validating both positive and negative test scenarios.
+- The original Delta branch can be found [here](https://github.com/mitre/saf/pull/485).
+- Delta moves lines not labeled with 'desc' to the bottom, between the tags and the InSpec code.
+- Whether the controls are formatted to 80 lines or not, Delta exhibits the same behavior with the extra text.
+- Parameterizing should be considered. \ No newline at end of file
diff --git a/src/courses/profile-dev-test/27.md b/src/courses/profile-dev-test/27.md
index 8090af40d..698826aff 100644
--- a/src/courses/profile-dev-test/27.md
+++ b/src/courses/profile-dev-test/27.md
@@ -1,30 +1,64 @@
---
order: 27
next: 28.md
-title: Background & Definitions
+title: Tips, Tricks & Troubleshooting
+shortTitle: Tips & Troubleshooting
author: Aaron Lippold
---
-# Background and Definitions
+# Tips, Tricks and Troubleshooting
-## Background
+## Test Kitchen
-### Evolution of STIGs and Security Benchmarks
+### Locating Test Target Login Details
-The Department of Defense (DOD) has continually updated its databases that track rules and Security Technical Implementation Guides (STIGs) that house those rules.
+Test Kitchen stores the current host details of your provisioned test targets in the `.kitchen/` directory. Here, you'll find a `yml` file containing your target's `hostname`, `ip address`, `host details`, and login credentials, which could be an `ssh pem key` or another type of credential.
-Initially, the system was known as the Vulnerability Management System (VMS).
+```shell
+.kitchen
+├── .kitchen/hardened-container.yml
+├── .kitchen/hardened-rhel-8.pem
+├── .kitchen/hardened-rhel-8.yml
+├── .kitchen/logs
+├── .kitchen/vanilla-container.yml
+├── .kitchen/vanilla-rhel-8.pem
+├── .kitchen/vanilla-rhel-8.yml
+└── .kitchen/vanilla-ubi8.yml
+```
-In the STIGs, you might come across data elements that are remnants from these iterations. These include `Group Title` (gid or gtitle), `Vulnerability ID` (VulnID), `Rule ID` (rule_id), `STIG ID` (stig_id), and others.
+### Restoring Access to a Halted or Restarted Test Target
-A significant change was the shift from using `STIG ID` to `Rule ID` in many security scanning tools. This change occurred because the Vulnerability Management System used the STIG_ID as the primary index for the requirements in each Benchmark in VMS.
+If your test target reboots or updates its network information, you don't need to execute `bundle exec kitchen destroy`. Instead, update the corresponding `.kitchen/#{suite}-#{target}.yml` file with the updated information. This will ensure that your `kitchen login`, `kitchen validate`, and other `kitchen` commands function correctly, as they'll be connecting to the correct location instead of using outdated data.
-However, when DISA updated the Vendor STIG Processes and replaced the VMS, they decided to migrate the primary ID from the STIG ID to the Rule ID, tracking changes in the Rules as described above.
+### AWS Console and EC2 Oddities
-Examples of tools that still use either fully or in part the 'STIG ID' vs the 'Rule ID' as a primary index are: the DISA STIG Viewer, Nessus Audit Scans, and Open SCAP client.
+Since we're using the free-tier for our AWS testing resources instead of a dedicated host, your test targets might shut down or 'reboot in the background' if you stop interacting with them, halt them, put them in a stop state, or leave them overnight. To regain access, edit the `.kitchen/#{suite}-#{target}.yml` file. As mentioned above, there's no need to recreate your testing targets if you can simply point Test Kitchen to the correct IP address.
-While these elements might seem confusing, understanding their historical context is essential.
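+
+For reference, a sketch of what one of these `.kitchen/#{suite}-#{target}.yml` state files might contain - every value below is illustrative, and the exact keys vary by driver and transport:
+
+```yaml
+# .kitchen/vanilla-rhel-8.yml (illustrative values only)
+---
+hostname: 34.201.10.15        # update this when the instance's IP changes
+username: ec2-user
+ssh_key: .kitchen/vanilla-rhel-8.pem
+port: '22'
+last_action: create
+```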
+## InSpec / Ruby
-In our modern profiles, some data from the XCCDF Benchmarks still exist in the document but are not used or rendered in the modern InSpec Profiles. However, in some of the older profiles, you may see many of these data elements as `tags` in the profile. The intention was to ensure easy and lossless conversion between XCCDF Benchmark and HDF Profile.
+### Using `pry` and `pry-byebug` for Debugging Controls
-It was later realized that since the structure of these data elements was 'static', they could be easily reintroduced when converting back to an XCCDF Benchmark. Therefore, rendering them in the profile was deemed unnecessary.
+When developing InSpec controls, it's beneficial to use the `kitchen-test` suite, the `INSPEC_CONTROL` environment variable, and `pry` or `pry-byebug`. This combination allows you to quickly debug, update, and experiment with your fixes in the context of the InSpec code, without having to run the full test suite.
+
+`pry` and `pry-byebug` are powerful tools for debugging Ruby code, including InSpec controls. Here's how you can use them:
+
+1. First, add `require 'pry'` or `require 'pry-byebug'` at the top of your control file.
+2. Then, insert `binding.pry` at the point in your code where you want to start debugging.
+3. When you run your tests, execution will stop at the `binding.pry` line, and you can inspect variables, step through the code, and more.
+
+***!Pro Tip!***
+
+- Remember to remove or comment out the `binding.pry` lines when you're done debugging, or your profile won't lint cleanly down the road.
+
+### Streamlining Your Testing with `inspec shell`
+
+The `inspec shell` command allows you to test your full control update on your test target directly. To do this, you'll need to retrieve the IP address and SSH PEM key for your target instance from the Test Kitchen `.kitchen` directory. For more details on this, refer to the [Locating Test Target Login Details](#311-locating-test-target-login-details) section.
+
+Once you have your IP address and SSH PEM key (for AWS target instances), or the container ID (for Docker test instances), you can use the following commands:
+
+- For AWS test targets: `bundle exec inspec shell -i #{pem-key} -t ssh://ec2-user@#{ipaddress} --sudo`
+- For Docker test instances: `bundle exec inspec shell -t docker://#{container-id}`
+
+### Using `kitchen login` for Easy Test Review and Modification
+
+The `kitchen login` command provides an easy way to review and modify your test target. This tool is particularly useful for introducing test cases, exploring corner cases, and validating both positive and negative test scenarios.
diff --git a/src/courses/profile-dev-test/28.md b/src/courses/profile-dev-test/28.md
index eb66f7dfe..34ecb0f31 100644
--- a/src/courses/profile-dev-test/28.md
+++ b/src/courses/profile-dev-test/28.md
@@ -1,23 +1,30 @@
---
order: 28
next: 29.md
-title: Terms & Definitions
+title: Background & Definitions
author: Aaron Lippold
---
-# Terms & Definitions
-
-- **Baseline**: This refers to a set of relevant security controls, such as NIST 800-53 controls or Center for Internet Security Controls. These controls offer high-level security best practices, grouped into common areas of concern.
-- **Benchmark**: This is a set of security controls tailored to a specific type of application or product. These controls are typically categorized into 'high', 'medium', and 'low' levels based on Confidentiality, Integrity, and Availability (C.I.A).
-- **[Common Correlation Identifier](https://public.cyber.mil/stigs/cci/) (CCI)**: The Control Correlation Identifier (CCI) provides a standard identifier and description for each of the singular, actionable statements that comprise an IA control or IA best practice. For example: 'CCI-000366'. -- **Group Title (gtitle)**: This is essentially the SRG ID but is a holdover data value from the old Vulnerability Management System. For example: 'SRG-OS-000480-GPOS-00227'. -- **Major Version Update**: These are updates that occur when a software vendor releases a new major version of their product's STIG, e.g., RedHat releasing version 9 of Red Hat Enterprise Linux or Microsoft releasing a new major version of Windows. -- **Patch Update**: These are regular updates that address missing corner cases of testing for one or more benchmark requirements, or improvements to the InSpec code for a requirement. These updates result in a new patch release of the benchmark, e.g., `v1.12.4` to `v1.12.5`. -- **Profile**: This is a set of tests representing a STIG or a CIS Benchmark. These tests automate the validation of a system against that STIG or CIS Benchmark. -- **Release Update**: These are updates that occur when the STIG Benchmark owner releases an updated version of the STIG, e.g., Red Hat Enterprise Linux V1R12 to V1R13. -- **Rule ID (rid)**: The Rule ID has two parts separated by the `r` in the string - ('SV-230221) and (r858734_rule)'. The first part remains unique within the major version of a Benchmark document, while the latter part of the string is updated each time the 'Rule' is updated 'release to release' of the Benchmark. For example: 'SV-230221r858734_rule'. -- **Security Requirements Guide (SRG)**: SRG documents provide generalized security guidance in XCCDF format that applies to a 'class' of software products such as 'web server', 'operating systems', 'application servers' or 'databases'. You can find an archive of these at the DISA STIG [Document Library](https://public.cyber.mil/stigs/downloads/). -- **Security Technical Implementation Guide (STIG)**: This is a set of specific technical actions required to establish a certain security posture for a software product. It is based on a desired Security Requirements Guide that applies to the product's software class and function, such as operating system, web server, database, etc. You can find an archive of these at the DISA STIG [Document Library](https://public.cyber.mil/stigs/downloads/). -- **SRG_ID**: This is the unique identifier of the SRG requirement. These indexes, like the STIG Rule IDs, also show their parent-child relationship. For example: 'SRG-OS-000480-GPOS-00227'. -- **STIG ID (stig_id)**: Many testing tools and testing results tools use this ID - vs the Rule ID - to display each of the individual results of a Benchmark validation run. For example: 'RHEL-08-010000'. Examples include: DISA STIG Viewer, Nessus Audit Scans and the Open SCAP client. -- **XCCDF Benchmark (XCCDF or XCCDF Benchmark)**: XCCDF (Extensible Configuration Checklist Description Format) is a standard developed by NIST and DOD to provide a machine-readable XML format for creating security guidance documents and security technical implementation guides. You can find an archive of these at the DISA STIG [Document Library](https://public.cyber.mil/stigs/downloads/). 
+# Background and Definitions
+
+## Background
+
+### Evolution of STIGs and Security Benchmarks
+
+The Department of Defense (DOD) has continually updated its databases that track rules and Security Technical Implementation Guides (STIGs) that house those rules.
+
+Initially, the system was known as the Vulnerability Management System (VMS).
+
+In the STIGs, you might come across data elements that are remnants from these iterations. These include `Group Title` (gid or gtitle), `Vulnerability ID` (VulnID), `Rule ID` (rule_id), `STIG ID` (stig_id), and others.
+
+A significant change was the shift from using `STIG ID` to `Rule ID` in many security scanning tools. This change occurred because the Vulnerability Management System used the `STIG ID` as the primary index for the requirements in each Benchmark in VMS.
+
+However, when DISA updated the Vendor STIG Processes and replaced the VMS, they decided to migrate the primary ID from the STIG ID to the Rule ID, tracking changes in the Rules as described above.
+
+Examples of tools that still use either fully or in part the 'STIG ID' vs the 'Rule ID' as a primary index are: the DISA STIG Viewer, Nessus Audit Scans, and Open SCAP client.
+
+While these elements might seem confusing, understanding their historical context is essential.
+
+In our modern profiles, some data from the XCCDF Benchmarks still exist in the document but are not used or rendered in the modern InSpec Profiles. However, in some of the older profiles, you may see many of these data elements as `tags` in the profile. The intention was to ensure easy and lossless conversion between XCCDF Benchmark and HDF Profile.
+
+It was later realized that since the structure of these data elements was 'static', they could be easily reintroduced when converting back to an XCCDF Benchmark. Therefore, rendering them in the profile was deemed unnecessary.
diff --git a/src/courses/profile-dev-test/29.md b/src/courses/profile-dev-test/29.md
new file mode 100644
index 000000000..7c02baba8
--- /dev/null
+++ b/src/courses/profile-dev-test/29.md
@@ -0,0 +1,22 @@
+---
+order: 29
+title: Terms & Definitions
+author: Aaron Lippold
+---
+
+# Terms & Definitions
+
+- **Baseline**: This refers to a set of relevant security controls, such as NIST 800-53 controls or Center for Internet Security Controls. These controls offer high-level security best practices, grouped into common areas of concern.
+- **Benchmark**: This is a set of security controls tailored to a specific type of application or product. These controls are typically categorized into 'high', 'medium', and 'low' levels based on Confidentiality, Integrity, and Availability (C.I.A).
+- **[Common Correlation Identifier](https://public.cyber.mil/stigs/cci/) (CCI)**: The Control Correlation Identifier (CCI) provides a standard identifier and description for each of the singular, actionable statements that comprise an IA control or IA best practice. For example: 'CCI-000366'.
+- **Group Title (gtitle)**: This is essentially the SRG ID but is a holdover data value from the old Vulnerability Management System. For example: 'SRG-OS-000480-GPOS-00227'.
+- **Major Version Update**: These are updates that occur when a software vendor releases a new major version of their product's STIG, e.g., Red Hat releasing version 9 of Red Hat Enterprise Linux or Microsoft releasing a new major version of Windows.
+- **Patch Update**: These are regular updates that address missing corner cases of testing for one or more benchmark requirements, or improvements to the InSpec code for a requirement. These updates result in a new patch release of the benchmark, e.g., `v1.12.4` to `v1.12.5`.
+- **Profile**: This is a set of tests representing a STIG or a CIS Benchmark. These tests automate the validation of a system against that STIG or CIS Benchmark.
+- **Release Update**: These are updates that occur when the STIG Benchmark owner releases an updated version of the STIG, e.g., Red Hat Enterprise Linux V1R12 to V1R13.
+- **Rule ID (rid)**: The Rule ID has two parts, split at the `r` in the string: 'SV-230221' and 'r858734_rule'. The first part remains unique within the major version of a Benchmark document, while the latter part of the string is updated each time the 'Rule' is updated 'release to release' of the Benchmark. For example: 'SV-230221r858734_rule'. (A small sketch of this split appears after this list.)
+- **Security Requirements Guide (SRG)**: SRG documents provide generalized security guidance in XCCDF format that applies to a 'class' of software products such as 'web server', 'operating systems', 'application servers' or 'databases'. You can find an archive of these at the DISA STIG [Document Library](https://public.cyber.mil/stigs/downloads/).
+- **Security Technical Implementation Guide (STIG)**: This is a set of specific technical actions required to establish a certain security posture for a software product. It is based on a desired Security Requirements Guide that applies to the product's software class and function, such as operating system, web server, database, etc. You can find an archive of these at the DISA STIG [Document Library](https://public.cyber.mil/stigs/downloads/).
+- **SRG_ID**: This is the unique identifier of the SRG requirement. These indexes, like the STIG Rule IDs, also show their parent-child relationship. For example: 'SRG-OS-000480-GPOS-00227'.
+- **STIG ID (stig_id)**: Many testing tools and testing results tools use this ID - vs the Rule ID - to display each of the individual results of a Benchmark validation run. For example: 'RHEL-08-010000'. Examples include: DISA STIG Viewer, Nessus Audit Scans and the Open SCAP client.
+- **XCCDF Benchmark (XCCDF or XCCDF Benchmark)**: XCCDF (Extensible Configuration Checklist Description Format) is a standard developed by NIST and DOD to provide a machine-readable XML format for creating security guidance documents and security technical implementation guides. You can find an archive of these at the DISA STIG [Document Library](https://public.cyber.mil/stigs/downloads/).
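+
+As an illustration of the two-part Rule ID structure described above, a small, hypothetical Ruby sketch that splits a Rule ID at the `r`:
+
+```ruby
+# Split 'SV-230221r858734_rule' into its stable part (unique within a major
+# version of the Benchmark) and the part that changes release to release.
+rule_id = 'SV-230221r858734_rule'
+stable, revision = rule_id.match(/\A(SV-\d+)(r\d+_rule)\z/).captures
+puts stable   # => SV-230221
+puts revision # => r858734_rule
+```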