Add rules and controls for APP.4.4.A14
sluetze committed Jul 16, 2024
1 parent accd730 commit 2174217
Showing 4 changed files with 90 additions and 8 deletions.
@@ -13,7 +13,7 @@ rationale: |-
Assigning workloads with high protection requirements to specific nodes creates an additional boundary (the node) between workloads with high protection requirements and workloads which might follow less strict requirements. An adversary who has compromised a less protected workload now faces additional obstacles when moving towards the more highly protected workloads.
references:
bsi: APP.4.4.A15
bsi: APP.4.4.A14,APP.4.4.A15

severity: medium

61 changes: 61 additions & 0 deletions applications/openshift/master/master_taint_noschedule/rule.yml
@@ -0,0 +1,61 @@
documentation_complete: true

title: Verify that Control Plane Nodes are not schedulable for workloads

description: |-
<p>
User workloads should not be colocated with control plane workloads. To ensure that the scheduler won't
schedule workloads on the master nodes, the taint "node-role.kubernetes.io/master" with the "NoSchedule"
effect is set by default in most cluster configurations (excluding SNO and Compact Clusters).
</p>
<p>
Scheduling on the master nodes is centrally configurable without a reboot via
<pre>oc edit schedulers.config.openshift.io cluster</pre>. For details, see the Red Hat Solution
{{{ weblink(link="https://access.redhat.com/solutions/4564851") }}}
</p>
<p>
If you run a setup that requires colocating control plane and user workloads, you need to
exclude this rule.
</p>

rationale: |-
By separating user workloads from control plane workloads, we can better ensure that
workload bursts have no ill effects on each other. Furthermore, we ensure that an adversary
who gains control over a poorly secured workload container is not colocated with critical
components of the control plane.
In some setups, e.g. Single Node OpenShift (SNO) or Compact Cluster (three-node cluster)
setups, it might be necessary to make the control plane schedulable for workloads.

{{% set jqfilter = '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule")' %}}

identifiers:
cce@ocp4: CCE-88731-5

severity: medium

ocil_clause: 'Control Plane is schedulable'

ocil: |-
Run the following command to see if the control plane nodes are schedulable:
<pre>$oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'</pre>
For each master node, there should be an output of a key with the NoSchedule effect.
By editing the cluster scheduler, you can centrally configure whether the masters are
schedulable by setting .spec.mastersSchedulable to true or false.
Use <pre>$ oc edit schedulers.config.openshift.io cluster</pre> to configure the scheduling.
warnings:
- general: |-
{{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}
template:
name: yamlfile_value
vars:
ocp_data: "true"
filepath: |-
{{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
yamlpath: ".effect"
check_existence: "at_least_one_exists"
entity_check: "at least one"
values:
- value: "NoSchedule"
operation: "pattern match"
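For reference, the jq filter above selects nodes labelled as master or control plane and then keeps only taints with the master key and the NoSchedule effect. A minimal Python sketch of the same selection logic, using illustrative sample node data rather than output from a real cluster:

```python
# Sketch of the check's selection logic (illustrative, not part of the rule).
# Sample data is hypothetical; a real check reads /api/v1/nodes from the cluster.

def master_noschedule_taints(nodes):
    """Return NoSchedule master taints found on control plane nodes."""
    found = []
    for node in nodes:
        labels = node["metadata"].get("labels", {})
        # Role labels on OpenShift/Kubernetes nodes carry an empty string value.
        is_control_plane = (
            labels.get("node-role.kubernetes.io/master") == ""
            or labels.get("node-role.kubernetes.io/control-plane") == ""
        )
        if not is_control_plane:
            continue
        for taint in node["spec"].get("taints", []):
            if (taint.get("key") == "node-role.kubernetes.io/master"
                    and taint.get("effect") == "NoSchedule"):
                found.append(taint)
    return found

sample_nodes = [
    {"metadata": {"labels": {"node-role.kubernetes.io/master": ""}},
     "spec": {"taints": [{"key": "node-role.kubernetes.io/master",
                          "effect": "NoSchedule"}]}},
    {"metadata": {"labels": {"node-role.kubernetes.io/worker": ""}},
     "spec": {"taints": []}},
]

# One taint found: the single master node in the sample carries NoSchedule.
print(len(master_noschedule_taints(sample_nodes)))  # 1
```

The check passes only when every control plane node yields such a taint; a schedulable master produces no match and fails the rule.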
@@ -0,0 +1,2 @@
---
default_result: PASS
33 changes: 26 additions & 7 deletions controls/bsi_app_4_4.yml
@@ -375,18 +375,37 @@ controls:
levels:
- elevated
description: >-
In a Kubernetes cluster, nodes SHOULD be assigned dedicated tasks and only run pods that are
(1) In a Kubernetes cluster, nodes SHOULD be assigned dedicated tasks and only run pods that are
assigned to each task.
Bastion nodes SHOULD handle all incoming and outgoing data connections of between
(2) Bastion nodes SHOULD handle all incoming and outgoing data connections between
applications and other networks.
Management nodes SHOULD operate control plane pods and only handle control plane data
(3) Management nodes SHOULD operate control plane pods and only handle control plane data
connections.
If deployed, storage nodes SHOULD only operate the fixed persistent volume services pods in
(4) If deployed, storage nodes SHOULD only operate the fixed persistent volume services pods in
a cluster.
notes: >-
TBD
status: pending
rules: []
Section 1:
This requirement must be solved organizationally. OpenShift can bind applications to specific
nodes or node groups (via labels and node selectors). ACM can take over the labeling of nodes
and ensure that the nodes are labeled accordingly.
Section 2:
OpenShift uses the concept of infra nodes. Incoming connections can be bound to these
nodes and, by using Egress IP, outgoing connections can be bound as well.
Section 3:
OpenShift uses control plane nodes for management; no application workloads run on them.
By default, data connections between applications, to the outside world, and to one
another are not routed via the control plane. The necessary requirements must be taken
into account during planning.
Section 4:
OpenShift Data Foundation (ODF) can be bound to dedicated infra nodes that run only
storage services, using the OpenShift mechanisms described above. This can be implemented
equivalently with other storage solutions.
status: partial
rules:
# Section 1,2,3,4
- general_node_separation
# Section 3
- master_taint_noschedule
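The label and node-selector binding mentioned in Section 1 follows subset semantics: a pod is eligible for a node only if every key/value pair of its nodeSelector appears in the node's labels. A hedged sketch of that matching rule, with illustrative label names:

```python
# Sketch of Kubernetes nodeSelector matching semantics (Section 1 above).
# Label names like "workload-class" are hypothetical examples, not a convention
# required by OpenShift.

def matches(node_labels, node_selector):
    """True if every selector key/value pair is present in the node labels."""
    return all(node_labels.get(key) == value
               for key, value in node_selector.items())

storage_node = {"node-role.kubernetes.io/infra": "", "workload-class": "storage"}
storage_pod_selector = {"workload-class": "storage"}
frontend_pod_selector = {"workload-class": "frontend"}

print(matches(storage_node, storage_pod_selector))   # True
print(matches(storage_node, frontend_pod_selector))  # False
```

This is the mechanism ACM can enforce organizationally: keep the node labels consistent, and the selectors in each workload pin it to its dedicated node group.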

- id: APP.4.4.A15
title: Separation of Applications at Node and Cluster Level
