added the 'what is done' section for a control
Signed-off-by: Aaron Lippold <[email protected]>
commit 95719d4 (1 parent: 638fffb)
Showing 20 changed files with 638 additions and 549 deletions.
@@ -1,18 +1,90 @@

---
order: 10
next: 11.md
-title: Rules of the Road
+title: What Is Done for a Control?
author: Aaron Lippold
---

# When Is a Control Considered 'Done'?

You and your team might be wondering what 'done' means for a security control in your profile. Here are a few things to consider:

- The security automation content and its tests are essentially a refactoring of the 'validation' and 'remediation' guidance already established by the benchmark.
- The security automation tests should fully capture the spirit - or intention - of the guidance, including its caveats, notes, discussion, and 'validation' and 'remediation' content.
- The tests can - and usually do - capture known 'corner cases and security best practices' that are sometimes only indirectly addressed by the benchmark, or not addressed directly at all, but are implied by the spirit of the security requirement.
- These tests, like all human-written code, may not be perfect. They will need updates and will evolve as our knowledge of the system and benchmark grows. We use the profile in production and real-world environments. In other words, don't let the pursuit of perfection hinder progress.

The 'is it done' litmus test is not solely determined by a perfect InSpec control or its describe and expect blocks. It also heavily relies on you, the security automation engineer. Your experience, your understanding of the platform you're working on, and the processes that you and your team have collectively agreed upon are all vital components.

Trust your established expected test outcomes, the guidance document, and the CI/CD testing framework. They will help you know that, to the best of your ability, you have captured the spirit of the testing required by the Benchmark.

## The MITRE Security Automation Framework 'Yardstick'

We consider a control effectively tested when:

1. All aspects of the 'validation' - also known as the 'check text' - have been addressed.
2. Any aspects of the 'remediation' - also known as the 'fix text' - that are part of the 'validation' process have been captured.
3. Any documented conditions that are Not Applicable, as outlined in the 'discussion', 'check', or 'fix' text, have been addressed.
4. Any documented conditions that are Not Reviewed, as outlined in the 'discussion', 'check', or 'fix' text, have been addressed.
5. The Not Applicable and Not Reviewed conditions appear early in the control, keeping the control as efficient as possible.
6. The control uses an `only_if` block rather than 'if/else' logic where possible, keeping the control as clear, direct, and maintainable as possible from a coding perspective (see the sketch after this list).
7. The control has been tested on both 'vanilla' and 'hardened' instances, ensuring that:
    1. The test communicates effectively and fails as expected on the 'vanilla' testing target.
    2. The test communicates effectively and passes on the 'hardened' testing target.
    3. The test communicates effectively and fails on a misconfigured 'vanilla' testing target.
    4. The test communicates effectively and fails on a misconfigured 'hardened' testing target.
    5. The test communicates effectively and clearly articulates the Not Applicable condition on both the 'vanilla' and 'hardened' testing targets.
    6. The test communicates effectively and clearly articulates the Not Reviewed condition on both the 'vanilla' and 'hardened' testing targets.
    7. The tests are constructed so that they do not produce Profile Errors when looping, when using conditional logic, or when system conditions - such as missing files, directories, or services - are not in the expected locations.
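
As a minimal sketch of items 5 and 6 - assuming InSpec 5's `only_if` with an `impact:` argument, and using a hypothetical check command - the Not Applicable condition is declared once at the top of the control, before any describe blocks:

```ruby
control 'SV-230222' do
  title 'RHEL 8 vendor packaged system security patches and updates must be installed and up to date.'
  impact 0.5

  # Declare the Not Applicable condition early so the control exits
  # immediately with a clear NA message instead of burying the decision
  # in nested if/else branches. (The container rationale is an assumption.)
  only_if('Patch management is handled by the container host, so this control is Not Applicable.', impact: 0.0) do
    !virtualization.system.eql?('docker')
  end

  # Hypothetical validation: no outstanding security updates are reported.
  describe command('dnf updateinfo list security --available -q') do
    its('stdout.strip') { should be_empty }
  end
end
```

On a container target the control reports Not Applicable with the stated reason; everywhere else, the describe block runs as usual.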

## Defining `Passes as Expected`

'Passing as expected' is the most straightforward concept, as it directly corresponds to the test conditions. When the test asserts a condition, it validates that assertion and reports it to the end user in a clear and concise manner.

We strive to ensure that when we report a 'pass', we do so in language that is direct, simple, and easy to understand.

For example:

```shell
✔ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
✔ All system security patches and updates are up to date and have been applied
```

`Passes as Expected` also encompasses:

- The conditions for the Not Reviewed and Not Applicable states of the control, if any.

## Defining `Fails as Expected`

'Failing as expected' is a less straightforward concept, as it doesn't always directly correspond to the test conditions. When the test asserts a condition and that condition fails, the reason for the failure should be communicated to the end user in a clear and concise manner.

However, as we all know, a test may fail for more than one reason. Sometimes the failure stems from human error; sometimes from conditions on the system such as extra lines, stray files, or housekeeping that was never done. All these factors may need to be accounted for in your tests and, where possible, captured in your output and 'reasons' for failure.

This is where the best practices above come into play. You don't just test your 'pass' and 'fail' conditions under optimal circumstances - you also 'dirty things up' a bit and make sure that your 'failure' cases are robust enough to handle the real world and its semi-perfect conditions.

For example:

```shell
✗ SV-230222: RHEL 8 vendor packaged system security patches and updates must be installed and up to date.
x The following packages have security patches and need to be updated:
- package 1
- package 2
- package 3
- package 4
```

`Fails as Expected` also encompasses:

- Misconfigurations, extra lines in files, extra settings, missing files, and the like.
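
One way to achieve that robustness is to guard against missing artifacts before parsing them, so that a messy target produces a clean failure rather than a Profile Error. A minimal sketch, with a hypothetical file path and expected value - plain if/else is appropriate here because a missing file should fail, not skip:

```ruby
# Hypothetical check of an audit setting. Guarding on the file's existence
# keeps a missing file from raising a Profile Error during parsing and
# instead reports a clear, actionable failure.
audit_conf = '/etc/audit/auditd.conf'

if file(audit_conf).exist?
  describe parse_config_file(audit_conf) do
    its('log_format') { should cmp 'ENRICHED' }
  end
else
  describe "The audit configuration file #{audit_conf}" do
    subject { file(audit_conf) }
    it { should exist }
  end
end
```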

## Defining `Communicates Effectively`

Clear communication from your testing suite may require you to use a combination of approaches, but the extra time and effort is well worth it.

Here are some methods you can employ and things to consider:

- Use `expect` rather than bare `describe` statements in cases where you have multi-part or multi-phase test cases (see the sketch after this list).
- Break up your `describe` statements into multiple layers so that the final output to the end user is clear and concise.
- Post-process and format both 'passing' and 'failing' results so that they are useful to the end user later and clear when communicated to other team members. This could be in the form of lists or bulleted lists.
- Collect 'failing results' as simple, clear lists or bullets that are easy to 'copy and paste'. This makes it easier for teams to know 'what they have to fix and where'.
- Consider assisting 'Manual Review'/'Not Reviewed' tests by collecting needed information, such as users, groups, or other elements that you are asking the user or another person to review. While we may not be able to fully automate the test, if the automation can help collect the data, it still adds value.
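
For instance, here is a minimal sketch combining the first and fourth points: an `expect` assertion whose custom failure message renders the failing items as a copy-and-paste-friendly list (the package names are hypothetical placeholders):

```ruby
# In a real control this list would be computed from the target system;
# here it is a hypothetical stand-in.
failed_updates = ['openssl', 'kernel', 'sudo']

describe 'All system security patches and updates' do
  it 'should be installed and up to date' do
    expect(failed_updates).to be_empty,
      "The following packages have security patches and need to be updated:\n" +
        failed_updates.map { |pkg| "  - #{pkg}" }.join("\n")
  end
end
```

When the list is empty, the test passes with the plain-language description; when it is not, the failure message is ready to paste into an issue or a team chat.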

@@ -1,18 +1,18 @@

---
order: 11
next: 12.md
-title: Creating a `Patch Update`
+title: Rules of the Road
author: Aaron Lippold
---

When updating Benchmark Profiles, adhere to these key principles to maintain alignment with the original Guidance Documents:

1. **Maintain Version Integrity:** **Never merge** new requirements into older benchmark branches. Doing so creates a 'mixed baseline' that doesn't align with any specific guidance document. Benchmarks, STIGs, and Guidance Documents form a 'proper subset' - they should be treated as 'all or nothing'. Mixing requirements from different versions can invalidate the concept of 'testing to a known benchmark'.

2. **Benchmarks Are a Complete Set of Requirements:** A Security Benchmark is 'complete and valid' only when all requirements for a specific Release or Major Version are met. Unlike traditional software projects, features and capabilities cannot be incrementally added. A Security Benchmark and its corresponding InSpec Profile are valid only within the scope of a specific 'Release' of that Benchmark.

3. **Release Readiness Is Predefined:** A Benchmark is considered 'ready for release' when it meets the expected thresholds, hardening, and validation results. Don't be overwhelmed by the multitude of changes across the files. Instead, focus on the specific requirement you are working on, and understand its expected failure and success states on each of the target testing platforms. This approach keeps the work manageable and provides solid pivot points as you implement the automated tests for each requirement and its 'contexts'.

4. **Use Vendor-Managed Standard Releases:** When setting up a test suite, prioritize vendor-managed standard releases for software installations and baseline configurations. These should be the starting point for both 'vanilla' and 'hardening' workflows. This approach ensures that your initial and ongoing testing, hardening, and validation closely mirror the real-world usage scenarios of your end users.

By adhering to these principles, you ensure that your updates to Benchmark Profiles are consistent, accurate, and aligned with the original guidance documents.

@@ -1,16 +1,18 @@

---
order: 12
next: 13.md
-title: Creating a `Release Update`
+title: Creating a `Patch Update`
author: Aaron Lippold
---

-A `Release Update` involves creating a new branch, `v#{x}R#{x+1}`, from the current main or latest patch release branch. The `saf generate delta` workflow is then run, which updates the metadata of the `controls`, `inspec.yml`, `README.md`, and other profile elements while preserving the `describe` blocks and Ruby code logic. This workflow is detailed in the [InSpec Delta](#2-inspec-delta) section. After the initial commit of the new release branch, follow these steps to keep your work organized:

A patch update involves making minor changes to a profile to fix issues or improve functionality. Here's a step-by-step guide:

-1. **Track Control IDs:** Create a table of all new `control ids` in the updated benchmark. This can live in CSV, a Markdown table, or the PR overview section, and it helps track completed and pending work. PRs off the `v#{x}r#{x+1}` branch can also be linked in the table, especially if using a `micro` vs `massive` PR approach.
-2. **Ensure Consistency:** Add 'check box columns' to your tracking table to ensure each requirement of the updated Benchmark receives the same level of scrutiny.
-3. **Update CI/CD Process:** Update elements such as the `hardening` content (ansible, puppet, chef, hardened docker images, hardened vagrant boxes) to meet new requirements. Ensure the CI/CD process still functions with the updated elements, preferably on the PR as well.
-4. **Update Labels:** Update `titles` and other labels to reflect the updated release number of the Benchmark.
-5. **Commit Changes:** Commit these changes to your release branch, ensuring your CI/CD process exits cleanly.
-6. **Follow the Patch Update Workflow:** With the above in place, follow the 'Patch Update' process, but expect a larger number of requirements to revalidate or update.
-7. **Identify Potential Code Changes:** Controls with changes to the `check text` or `fix text` are likely to require `inspec code changes`. If the `check text` and `fix text` of a control remain unchanged, it's likely only a cosmetic update, with no change in the security requirement or validation code.

1. **Report the Issue:** Open an issue on our project, detailing the problem and providing examples. Do this on [our issues page](https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/issues).
2. **Fork and Branch:** Fork the repository on GitHub, then create a branch off the `tagged` patch release you're targeting for the update.
3. **Set Up Testing Suites:** In your forked branch, set up the AWS and Docker testing suites.
4. **Make Updates:** Update the control, `inspec.yml` inputs, thresholds, etc. Don't worry about the InSpec version in the `inspec.yml` - the release process handles that.
5. **Test Your Updates Locally:** Test your updates on all `vanilla` and `hardened` variants of the `known bad` and `known good` states of the `AWS EC2` and `Docker` test targets. Also test your controls outside perfect conditions to ensure they handle non-optimal target environments, and verify that your update considers the `container`, `virtual machine`, and `1U machine` testing contexts of applicability.
6. **Lint Your Updates:** Use the `bundle exec rake lint` and `bundle exec rake lint:autocorrect` commands from the test suite to lint your updates.
7. **Commit Your Updates:** After testing and linting, commit your updates to your branch. Include `Fixes #ISSUE` in your commit messages to automatically close the issue when your PR is merged.
8. **Open a PR:** Open a PR on the project repository from your fork.
9. **Check the Test Suite:** Ensure the GitHub Action test suite on the project side passes. You can check this at [our actions page](https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline/actions).