Adjusted rules for BSI APP.4.4.A19 according to review
benruland committed Oct 4, 2024
1 parent b11504b commit bd743c5
Showing 12 changed files with 40 additions and 86 deletions.
@@ -17,37 +17,15 @@ description: |-
names: <tt>var_deployments_without_high_availability</tt>. This will ignore deployments matching
those names in all namespaces.
-    An example allowing all deployments named <tt>uncritical-service</tt> is as follows:
-    <pre>
-    apiVersion: compliance.openshift.io/v1alpha1
-    kind: TailoredProfile
-    metadata:
-      name: bsi-additional-deployments
-    spec:
-      description: Allows additional deployments to not be highly available and evenly spread
-      setValues:
-      - name: upstream-ocp4-var_deployments_without_high_availability
-        rationale: Ignore our uncritical service
-        value: ^uncritical-service$
-      extends: upstream-ocp4-bsi
-      title: Modified BSI allowing non-highly-available deployments
-    </pre>
-    Finally, reference this <tt>TailoredProfile</tt> in a <tt>ScanSettingBinding</tt>
-    For more information on Tailoring the Compliance Operator, please consult the
-    OpenShift documentation:
-    {{{ weblink(link="https://docs.openshift.com/container-platform/latest/security/compliance_operator/co-scans/compliance-operator-tailor.html") }}}
rationale: |-
Distributing Kubernetes pods across nodes and availability zones using pod topology spread
constraints and anti-affinity rules is essential for enhancing high availability, fault
tolerance, and security.
This approach ensures that a single node or AZ failure does not lead to total application
downtime, as workloads are balanced and resources are efficiently utilized.
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-89351-1

severity: medium

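The description in the diff above told users to reference the TailoredProfile from a <tt>ScanSettingBinding</tt>. A minimal sketch of such a binding follows; the binding name and the use of the `default` ScanSetting are assumptions, not part of this commit:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: bsi-tailored            # assumed name
profiles:
  # The TailoredProfile created in the example above
  - name: bsi-additional-deployments
    kind: TailoredProfile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default                 # assumed: the operator's default ScanSetting
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
```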
@@ -1,6 +1,6 @@
documentation_complete: true

-title: 'Ensure statefulsets have either anti-affinity rules or topology spread constraints'
+title: 'Ensure Statefulsets have either Anti-Affinity Rules or Topology Spread Constraints'

description: |-
Distributing Kubernetes pods across nodes and availability zones using pod topology spread
@@ -10,44 +10,22 @@ description: |-
There might be statefulsets, that do not require high availability or spreading across nodes.
To limit the number of false positives, this rule only checks statefulsets with a replica count
-of more than one. For statefulsets with one replica neither anti-affinity rules nor topology
+of more than one. For statefulsets with one replica, neither anti-affinity rules nor topology
spread constraints provide any value.
To exclude other statefulsets from this rule, you can create a regular expression for statefulset
names: <tt>var_statefulsets_without_high_availability</tt>. This will ignore statefulsets matching
those names in all namespaces.
-    An example allowing all statefulsets named <tt>uncritical-service</tt> is as follows:
-    <pre>
-    apiVersion: compliance.openshift.io/v1alpha1
-    kind: TailoredProfile
-    metadata:
-      name: bsi-additional-statefulsets
-    spec:
-      description: Allows additional statefulsets to not be highly available and evenly spread
-      setValues:
-      - name: upstream-ocp4-var_statefulsets_without_high_availability
-        rationale: Ignore our uncritical service
-        value: ^uncritical-service$
-      extends: upstream-ocp4-bsi
-      title: Modified BSI allowing non-highly-available statefulsets
-    </pre>
-    Finally, reference this <tt>TailoredProfile</tt> in a <tt>ScanSettingBinding</tt>
-    For more information on Tailoring the Compliance Operator, please consult the
-    OpenShift documentation:
-    {{{ weblink(link="https://docs.openshift.com/container-platform/4.16/security/compliance_operator/co-scans/compliance-operator-tailor.html") }}}
rationale: |-
Distributing Kubernetes pods across nodes and availability zones using pod topology spread
constraints and anti-affinity rules is essential for enhancing high availability, fault
tolerance, and security.
This approach ensures that a single node or AZ failure does not lead to total application
downtime, as workloads are balanced and resources are efficiently utilized.
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-89908-8

severity: medium

@@ -1,6 +1,6 @@
documentation_complete: true

-title: 'Ensure control plane / master nodes are distribute across three failure zones'
+title: 'Ensure Control Plane / Master Nodes are Distributed Across Three Failure Zones'

description: |-
Distributing Kubernetes control plane nodes across failure zones enhances security by mitigating
@@ -21,7 +21,8 @@ rationale: |-
This label is automatically assigned to each node by cloud providers but might need to be managed
manually in other environments
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-88713-3

severity: medium

@@ -1,6 +1,6 @@
documentation_complete: true

-title: 'Ensure infrastructure nodes are distribute across three failure zones'
+title: 'Ensure Infrastructure Nodes are Distributed Across Three Failure Zones'

description: |-
Distributing Kubernetes infrastructure nodes across failure zones enhances security by mitigating
@@ -20,7 +20,8 @@ rationale: |-
This label is automatically assigned to each node by cloud providers but might need to be managed
manually in other environments
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-87050-1

severity: medium

@@ -1,18 +1,18 @@
documentation_complete: true

-title: 'Ensure every MachineConfigPool consists of more than one node'
+title: 'Ensure every MachineConfigPool consists of More Than One Node'

description: |-
-  To ensure, that workloads are able to be provisioned highly available, every node role should
-  consist of more than one node. This enables workloads to be scheduled across multiple nodes and
-  stay available in case one node of a role is unavailable. Different node roles may exist to isolate
-  control plane, infrastructure and application workload. There might be additional use cases to
-  create additional node roles for further isolation.
+  To ensure, that workloads are able to be provisioned highly available, every node MachineConfigPool
+  should consist of more than one node. This enables workloads to be scheduled across multiple nodes and
+  stay available in case one node of a MachineConfigPool is unavailable. Different MachineConfigPools
+  may exist to isolate control plane, infrastructure and application workload. There might be additional
+  use cases to create additional MachineConfigPools for further isolation.
rationale: |-
-  To ensure, that workloads are able to be provisioned highly available, every node role should
+  To ensure, that workloads are able to be provisioned highly available, every MachineConfigPool should
consist of more than one node. This enables workloads to be scheduled across multiple nodes and
-  stay available in case one node of a role is unavailable.
+  stay available in case one node of a MachineConfigPool is unavailable.
{{% set jqfilter = '[.items[] | select(.status.machineCount == 1 or .status.machineCount == 0) | .metadata.name]' %}}

@@ -23,7 +23,8 @@ ocil: |-
<pre>$ oc get machineconfigpools -o json | jq '{{{ jqfilter }}}'</pre>
Make sure that there is output nothing in the result.
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-90465-6

severity: medium

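The ocil snippet above checks MachineConfigPools with the jq filter defined in this rule. A rough Python equivalent of that filter, run against made-up API data (the pool names and counts below are illustrative, not from a real cluster), shows the expected behavior:

```python
# Python sketch of the rule's jq filter:
# [.items[] | select(.status.machineCount == 1 or .status.machineCount == 0)
#  | .metadata.name]
mcps = {
    "items": [
        {"metadata": {"name": "master"}, "status": {"machineCount": 3}},
        {"metadata": {"name": "worker"}, "status": {"machineCount": 2}},
        {"metadata": {"name": "infra"},  "status": {"machineCount": 1}},
    ]
}

# Collect pools that fail the "more than one node" check
flagged = [
    item["metadata"]["name"]
    for item in mcps["items"]
    if item["status"]["machineCount"] in (0, 1)
]
print(flagged)  # ['infra'] -- an empty list means the check passes
```

An empty result corresponds to the "output nothing" condition in the ocil text: every MachineConfigPool then has more than one machine.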
@@ -1,16 +1,17 @@
documentation_complete: true

-title: 'Ensure machine count of MachineConfigPool master is 3'
+title: 'Ensure there are Three Machines in the Master MachineConfigPool'

description: |-
-  To ensure, that the OpenShift control plane stays accessible on outage of a single master node, a
-  number of 3 control plane nodes is required.
+  To ensure, that the OpenShift control plane stays accessible on outage of a single master node,
+  three control plane nodes are required.
rationale: |-
-  A highly-available OpenShift control plane requires 3 control plane nodes. This allows etcd to have
-  a functional quorum state, when a single control plane node is unavailable.
+  A high available OpenShift control plane requires three control plane nodes. This allows etcd
+  to have a functional quorum state, when a single control plane node is unavailable.
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-87551-8

severity: medium

@@ -26,11 +27,11 @@ warnings:
{{{ openshift_cluster_setting("/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master") | indent(4) }}}
template:
-  name: yamlfile_value
-  vars:
-    ocp_data: 'true'
-    filepath: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/master
-    yamlpath: .status.machineCount
-    entity_check: at least one
-    values:
-    - value: '3'
+    name: yamlfile_value
+    vars:
+      ocp_data: 'true'
+      filepath: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/master
+      yamlpath: .status.machineCount
+      entity_check: at least one
+      values:
+      - value: '3'
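As background for the etcd quorum rationale in this diff: etcd (Raft) needs a strict majority of members to stay writable, which is why three control plane nodes tolerate exactly one outage while two nodes tolerate none. A small sketch of the general majority rule (not code from this repository):

```python
# Raft/etcd majority rule: a cluster of n members needs floor(n/2) + 1
# members alive to keep quorum.
def quorum(members: int) -> int:
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    # How many members can fail while quorum is still reachable
    return members - quorum(members)

print(quorum(3), tolerated_failures(3))  # 2 1 -> three nodes survive one outage
print(tolerated_failures(2))             # 0   -> two nodes tolerate no failure
```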
@@ -18,7 +18,8 @@ rationale: |-
This label is automatically assigned to each node by cloud providers but might need to be managed
manually in other environments
-identifiers: {}
+identifiers:
+  cce@ocp4: CCE-88863-6

severity: medium

2 changes: 1 addition & 1 deletion controls/bsi_app_4_4.yml
@@ -479,7 +479,7 @@ controls:

rules:
# Section 1, 3
-  - multiple_nodes_in_every_role
+  - multiple_nodes_in_every_mcp
- control_plane_nodes_in_three_zones
- worker_nodes_in_two_zones_or_more
- infra_nodes_in_two_zones_or_more
7 changes: 0 additions & 7 deletions shared/references/cce-redhat-avail.txt
@@ -295,7 +295,6 @@ CCE-87020-4
CCE-87044-4
CCE-87048-5
CCE-87049-3
-CCE-87050-1
CCE-87051-9
CCE-87054-3
CCE-87058-4
@@ -604,7 +603,6 @@ CCE-87547-6
CCE-87548-4
CCE-87549-2
CCE-87550-0
-CCE-87551-8
CCE-87553-4
CCE-87554-2
CCE-87556-7
@@ -1336,7 +1334,6 @@ CCE-88708-3
CCE-88709-1
CCE-88710-9
CCE-88711-7
-CCE-88713-3
CCE-88715-8
CCE-88716-6
CCE-88719-0
@@ -1419,7 +1416,6 @@ CCE-88859-4
CCE-88860-2
CCE-88861-0
CCE-88862-8
-CCE-88863-6
CCE-88864-4
CCE-88867-7
CCE-88869-3
@@ -1707,7 +1703,6 @@ CCE-89343-8
CCE-89347-9
CCE-89348-7
CCE-89349-5
-CCE-89351-1
CCE-89352-9
CCE-89353-7
CCE-89354-5
@@ -2075,7 +2070,6 @@ CCE-89899-9
CCE-89901-3
CCE-89905-4
CCE-89907-0
-CCE-89908-8
CCE-89909-6
CCE-89910-4
CCE-89911-2
@@ -2450,7 +2444,6 @@ CCE-90461-5
CCE-90462-3
CCE-90463-1
CCE-90464-9
-CCE-90465-6
CCE-90467-2
CCE-90468-0
CCE-90470-6
