Merge branch 'master' into add_cis_slem
teacup-on-rockingchair authored Dec 3, 2024
2 parents f2ec0ed + 4c8b22c commit f2181f0
Showing 59 changed files with 10,371 additions and 113 deletions.
14 changes: 8 additions & 6 deletions .github/workflows/automatus-ubuntu2404.yml
@@ -5,6 +5,8 @@ on:
concurrency:
group: ${{ github.workflow }}-${{ github.event.number || github.run_id }}
cancel-in-progress: true
env:
DATASTREAM: ssg-ubuntu2404-ds.xml
jobs:
build-content:
name: Build Content
@@ -55,12 +57,12 @@ jobs:
prop_path: 'product'
- name: Build product
if: ${{ steps.ctf.outputs.CTF_OUTPUT_SIZE != '0' }}
run: ./build_product ${{steps.product.outputs.prop}} --datastream-only
run: ./build_product ubuntu2404 --datastream-only
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4
if: ${{ steps.ctf.outputs.CTF_OUTPUT_SIZE != '0' }}
with:
name: ssg-${{steps.product.outputs.prop}}-ds.xml
path: build/ssg-${{steps.product.outputs.prop}}-ds.xml
name: ${{ env.DATASTREAM }}
path: build/${{ env.DATASTREAM }}
validate-ubuntu:
name: Run Tests
needs: build-content
@@ -123,10 +125,10 @@ jobs:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4
if: ${{ steps.ctf.outputs.CTF_OUTPUT_SIZE != '0' }}
with:
name: ssg-${{steps.product.outputs.prop}}-ds.xml
name: ${{ env.DATASTREAM }}
- name: Run tests in a container - Bash
if: ${{steps.bash.outputs.prop == 'True' && steps.ctf.outputs.CTF_OUTPUT_SIZE != '0' }}
run: tests/test_rule_in_container.sh --no-make-applicable-in-containers --dontclean --logdir logs_bash --remediate-using bash --name ssg_test_suite --datastream ssg-${{steps.product.outputs.prop}}-ds.xml ${{join(fromJSON(steps.rules.outputs.prop))}}
run: tests/test_rule_in_container.sh --no-make-applicable-in-containers --dontclean --logdir logs_bash --remediate-using bash --name ssg_test_suite --datastream ${{ env.DATASTREAM }} ${{join(fromJSON(steps.rules.outputs.prop))}}
env:
ADDITIONAL_TEST_OPTIONS: "--duplicate-templates --remove-fips-certified"
- name: Check for ERROR in logs
@@ -147,7 +149,7 @@ jobs:
path: logs_bash/
- name: Run tests in a container - Ansible
if: ${{ steps.ansible.outputs.prop == 'True' && steps.ctf.outputs.CTF_OUTPUT_SIZE != '0' }}
run: tests/test_rule_in_container.sh --no-make-applicable-in-containers --dontclean --logdir logs_ansible --remediate-using ansible --name ssg_test_suite --datastream ssg-${{steps.product.outputs.prop}}-ds.xml ${{join(fromJSON(steps.rules.outputs.prop))}}
run: tests/test_rule_in_container.sh --no-make-applicable-in-containers --dontclean --logdir logs_ansible --remediate-using ansible --name ssg_test_suite --datastream ${{ env.DATASTREAM }} ${{join(fromJSON(steps.rules.outputs.prop))}}
env:
ADDITIONAL_TEST_OPTIONS: "--duplicate-templates --remove-fips-certified"
- name: Check for ERROR in logs
2 changes: 1 addition & 1 deletion .github/workflows/gh-pages.yaml
@@ -50,7 +50,7 @@ jobs:
git config --global --add safe.directory "$GITHUB_WORKSPACE"
- name: Deploy
if: ${{ github.event_name == 'push' && github.repository == 'ComplianceAsCode/content' && github.ref == 'refs/heads/master' }}
uses: JamesIves/github-pages-deploy-action@62fec3add6773ec5dbbf18d2ee4260911aa35cf4 # v4.6.9
uses: JamesIves/github-pages-deploy-action@dc18a3c6b46d56484cb63f291becd7ed4f0269b9 # v4.7.1
with:
branch: main # The branch the action should deploy to.
folder: ${{ env.PAGES_DIR }} # The folder the action should deploy.
2 changes: 1 addition & 1 deletion .github/workflows/k8s-content-pr.yaml
@@ -84,7 +84,7 @@ jobs:
org.opencontainers.image.vendor='Compliance Operator Authors'
- name: Build container images and push
id: docker_build
uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6
uses: docker/build-push-action@48aba3b46d1b1fec4febb7c5d0c644b249a11355 # v6
with:
context: .
file: ./Dockerfiles/ocp4_content
2 changes: 1 addition & 1 deletion .github/workflows/srg-mapping-table.yaml
@@ -99,7 +99,7 @@ jobs:
git config --global --add safe.directory "$GITHUB_WORKSPACE"
- name: Deploy
if: ${{ github.event_name == 'push' && github.repository == 'ComplianceAsCode/content' }}
uses: JamesIves/github-pages-deploy-action@62fec3add6773ec5dbbf18d2ee4260911aa35cf4 # v4.6.9
uses: JamesIves/github-pages-deploy-action@dc18a3c6b46d56484cb63f291becd7ed4f0269b9 # v4.7.1
with:
branch: main # The branch the action should deploy to.
folder: ${{ env.PAGES_DIR }} # The folder the action should deploy.
@@ -18,3 +18,5 @@ ocil_clause: 'Network separation needs review'

ocil: |-
Create separate Ingress Controllers for the API and your Applications. Also set up your environment in a way that Control Plane Nodes are in a different network than your worker nodes. If you implement multiple Nodes for different purposes, evaluate if these should be in different network segments (i.e. Infra-Nodes, Storage-Nodes, ...).
Also evaluate how you handle outgoing connections and if they have to be pinned to
specific nodes or IPs.
18 changes: 13 additions & 5 deletions applications/openshift/general/general_node_separation/rule.yml
@@ -3,17 +3,25 @@ documentation_complete: true
title: 'Create Boundaries between Resources using Nodes or Clusters'

description: |-
Use Nodes or Clusters to isolate Workloads with high protection requirements.
Use Nodes or Clusters to isolate Workloads with high protection requirements.
Run the following command and review the pods and how they are deployed on Nodes. <pre>$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" </pre>
You can use labels or other data as custom field which helps you to identify parts of an application.
Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters with workloads of lower protection requirements.
Run the following command and review the pods and how they are deployed on Nodes.
<pre>$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" </pre>
You can use labels or other data as custom fields which help you to identify parts of an application.
Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters
with workloads of lower protection requirements.
rationale: |-
Assigning workloads with high protection requirements to specific nodes creates and additional boundary (the node) between workloads of high protection requirements and workloads which might follow less strict requirements. An adversary which attacked a lighter protected workload now has additional obstacles for their movement towards the higher protected workloads.
Assigning workloads with high protection requirements to specific nodes creates an additional
boundary (the node) between workloads of high protection requirements and workloads which might
follow less strict requirements. An adversary who has attacked a less protected workload now faces
additional obstacles on the way towards the higher protected workloads.
severity: medium

identifiers:
cce@ocp4: CCE-88903-0

ocil_clause: 'Application placement on Nodes and Clusters needs review'

ocil: |-
61 changes: 61 additions & 0 deletions applications/openshift/master/master_taint_noschedule/rule.yml
@@ -0,0 +1,61 @@
documentation_complete: true

title: Verify that Control Plane Nodes are not schedulable for workloads

description: |-
<p>
User workloads should not be colocated with control plane workloads. To ensure that the scheduler won't
schedule workloads on the master nodes, the taint "node-role.kubernetes.io/master" with the "NoSchedule"
effect is set by default in most cluster configurations (excluding SNO and Compact Clusters).
</p>
<p>
The scheduling of the master nodes is centrally configurable without reboot via
<pre>oc edit schedulers.config.openshift.io cluster</pre>. For details, see the Red Hat Solution
{{{ weblink(link="https://access.redhat.com/solutions/4564851") }}}
</p>
<p>
If you run a setup which requires colocating control plane and user workloads, you need to
exclude this rule.
</p>

rationale: |-
By separating user workloads from the control plane workloads we can better ensure that
workload bursts have no ill effects on each other. Furthermore we ensure that an adversary who gets
control over a badly secured workload container is not colocated with critical components of the control plane.
In some setups it might be necessary to make the control plane schedulable for workloads, e.g. in
Single Node OpenShift (SNO) or Compact Cluster (Three Node Cluster) setups.

{{% set jqfilter = '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule")' %}}

identifiers:
cce@ocp4: CCE-88731-5

severity: medium

ocil_clause: 'Control Plane is schedulable'

ocil: |-
Run the following command to see if control plane nodes are schedulable
<pre>$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'</pre>
For each master node, there should be an output of a key with the NoSchedule effect.
By editing the cluster scheduler you can centrally configure whether the masters are schedulable
by setting .spec.mastersSchedulable accordingly.
Use <pre>$ oc edit schedulers.config.openshift.io cluster</pre> to configure the scheduling.
warnings:
- general: |-
{{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}
template:
name: yamlfile_value
vars:
ocp_data: "true"
filepath: |-
{{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
yamlpath: ".effect"
check_existence: "at_least_one_exists"
entity_check: "at least one"
values:
- value: "NoSchedule"
operation: "pattern match"
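The rule above checks for the NoSchedule taint through the node API; the scheduler setting it refers to lives in the cluster-wide Scheduler resource. The manifest below is an illustrative sketch of that resource, assuming only what the rule's description states (it is not part of this commit):

```yaml
# Sketch of the cluster-wide Scheduler resource, edited via
# `oc edit schedulers.config.openshift.io cluster` (see the rule description).
# With mastersSchedulable set to false, the control plane nodes keep their
# node-role.kubernetes.io/master:NoSchedule taint and reject user workloads.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false
```

Setting `mastersSchedulable: true` instead removes the taint and is only appropriate for SNO or Compact Cluster setups, as the rationale notes.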
@@ -0,0 +1,2 @@
---
default_result: PASS
@@ -0,0 +1,2 @@
---
default_result: MANUAL
@@ -0,0 +1,43 @@
documentation_complete: true

title: 'Ensure Appropriate Network Policies are Configured'

description: |-
Configure Network Policies in any application namespace in an appropriate way, so that
only the required communications are allowed. The Network Policies should precisely define
source and target using label selectors and ports.
rationale: |-
By default, all pod to pod traffic within a cluster is allowed. Network
Policy creates a pod-level firewall that can be used to restrict traffic
between sources. Pod traffic is restricted by having a Network Policy that
selects it (through the use of labels). Once there is any Network Policy in a
namespace selecting a particular pod, that pod will reject any connections
that are not allowed by any Network Policy. Other pods in the namespace that
are not selected by any Network Policy will continue to accept all traffic.
Implementing Kubernetes Network Policies with minimal allowed communication enhances security
by reducing entry points and limiting attacker movement within the cluster. It ensures pods and
services communicate only with necessary entities, reducing unauthorized access risks. In case
of a breach, these policies contain compromised pods, preventing widespread malicious activity.
Additionally, they enhance monitoring and detection of anomalous network activities.
severity: medium

identifiers:
cce@ocp4: CCE-89537-5

ocil_clause: 'Network Policies need to be evaluated if they are appropriate'

ocil: |-
For each non-default namespace in the cluster, review the configured Network Policies
and ensure that they only allow the necessary network connections. They should
precisely define source and target using label selectors and ports.
1. Get a list of existing projects (namespaces), excluding default, kube-*, openshift-*
<pre>$ oc get namespaces -ojson | jq -r '[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.name | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.name]'</pre>
Namespaces matching the variable <tt>ocp4-var-network-policies-namespaces-exempt-regex</tt> regex are excluded from this check.
2. For each of these namespaces, review the network policies:
<pre>$ oc get networkpolicies -n $namespace -o yaml</pre>
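A policy that satisfies the review criteria above pins both ends of the connection with label selectors and ports. The manifest below is a hedged sketch; the namespace, labels, and port are hypothetical placeholders, not taken from this commit:

```yaml
# Hypothetical example: only pods labeled app=frontend may reach
# pods labeled app=backend on TCP/8080. Once this policy selects the
# backend pods, all other ingress to them is rejected.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
  namespace: my-app                 # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```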
@@ -0,0 +1,57 @@
documentation_complete: true

title: Check Egress IPs Assignable to Nodes

description: |-
<p>
The OpenShift Container Platform egress IP address functionality allows you to ensure that the
traffic from one or more pods in one or more namespaces has a consistent source IP address for
services outside the cluster network.
</p>
<p>
The necessary labeling on the designated nodes is configurable without reboot via
<pre>$ oc label nodes $NODENAME k8s.ovn.org/egress-assignable="" </pre> for details see the
Red Hat Documentation
{{{ weblink(link="https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/ovn-kubernetes-network-plugin#nw-egress-ips-about_configuring-egress-ips-ovn") }}}
</p>

rationale: |-
By using egress IPs you can provide a consistent IP to external services and configure dedicated
firewall rules which precisely match this IP. This allows for more control on the side of external systems.
Furthermore you can bind the IPs to specific nodes, which then handle all these network connections,
achieving a better separation of duties between the different nodes.

identifiers:
cce@ocp4: CCE-86787-9

severity: medium

ocil_clause: 'Check Egress IPs Assignable to Nodes'

ocil: |-
Run the following command to see if nodes are assignable for egress IPs
<pre>$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."k8s.ovn.org/egress-assignable" != null) | .metadata.name'</pre>
This command prints the name of each node which is configured to get egress IPs assigned. If
the output is empty, no nodes are available for egress IP assignment.
{{% set old_jqfilter = 'if any(.items[]?; .metadata.labels."k8s.ovn.org/egress-assignable" != null) then true else false end' %}}
{{% set jqfilter = '[ .items[] | .metadata.labels["k8s.ovn.org/egress-assignable"] != null ]' %}}


warnings:
- general: |-
{{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}
template:
name: yamlfile_value
vars:
ocp_data: "true"
filepath: |-
{{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
yamlpath: '[:]'
check_existence: at_least_one_exists
entity_check: "at least one"
values:
- value: 'true'
type: "string"
entity_check: "at least one"
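Once nodes carry the egress-assignable label checked above, the egress IPs themselves are requested through an EgressIP resource. The manifest below is an illustrative sketch assuming the OVN-Kubernetes `k8s.ovn.org/v1` API; the name, address, and selector are hypothetical:

```yaml
# Hypothetical EgressIP: pods in namespaces labeled env=production
# egress with source IP 192.0.2.10 (a documentation address, RFC 5737),
# which OVN-Kubernetes assigns to one of the egress-assignable nodes.
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-prod   # hypothetical name
spec:
  egressIPs:
    - 192.0.2.10
  namespaceSelector:
    matchLabels:
      env: production   # hypothetical label
```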
@@ -0,0 +1,9 @@
#!/bin/bash
set -xe

echo "Labeling Node for egress IP"

NODENAME=$(oc get node | tail -1 | cut -d" " -f1)
oc label node "$NODENAME" k8s.ovn.org/egress-assignable=""

sleep 5
@@ -0,0 +1,3 @@
---
default_result: FAIL
result_after_remediation: PASS
@@ -0,0 +1,39 @@
documentation_complete: true

title: 'Limiting Network Bandwidth in Pods'

description: |-
Network bandwidth SHOULD be appropriately reserved and limited.
ocil: |-
Network bandwidth is limited at the pod level and can be configured separately for
incoming and outgoing network bandwidth.
For more information about limiting network bandwidth at the pod level, please refer to the Red Hat documentation:
{{{ weblink(link="https://docs.openshift.com/container-platform/4.17/nodes/pods/nodes-pods-configuring.html#nodes-pods-configuring-bandwidth_nodes-pods-configuring") }}}
From the documentation, use the example for the network bandwidth configuration of a pod:
<pre>
kind: Pod
apiVersion: v1
metadata:
name: hello-openshift
annotations:
kubernetes.io/ingress-bandwidth: 2M
kubernetes.io/egress-bandwidth: 1M
spec:
containers:
- image: openshift/hello-openshift
name: hello-openshift
</pre>
severity: unknown

identifiers:
cce@ocp4: CCE-87610-2

ocil_clause: 'Limiting Pod network bandwidth on OCP 4'

rationale: |-
Extend the pod configuration with network bandwidth annotations to prevent
a bad actor or a malfunction in the pod from consuming all the bandwidth in the cluster.
A network bandwidth limitation at the pod level can mitigate the impact on the cluster.
@@ -55,4 +55,3 @@ template:
values:
- value: "true"
operation: "pattern match"
