BSI APP.4.4.A14+A15 #12158

Open · wants to merge 10 commits into master
@@ -18,3 +18,5 @@ ocil_clause: 'Network separation needs review'

ocil: |-
Create separate Ingress Controllers for the API and your applications. Also set up your environment so that control plane nodes are in a different network than your worker nodes. If you implement multiple nodes for different purposes, evaluate whether these should be in different network segments (e.g. infra nodes, storage nodes, ...).
Also evaluate how you handle outgoing connections and whether they have to be pinned to
specific nodes or IPs.
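For illustration, application traffic can be moved to a dedicated Ingress Controller sharded onto infra nodes; a minimal sketch, assuming a hypothetical controller name (apps-internal) and domain:
<pre>
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: apps-internal                    # hypothetical name
  namespace: openshift-ingress-operator
spec:
  domain: apps-internal.example.com      # hypothetical application domain
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
</pre>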
18 changes: 13 additions & 5 deletions applications/openshift/general/general_node_separation/rule.yml
@@ -3,17 +3,25 @@ documentation_complete: true
title: 'Create Boundaries between Resources using Nodes or Clusters'

description: |-
Use Nodes or Clusters to isolate Workloads with high protection requirements.

Run the following command and review how the pods are deployed on Nodes.
<pre>$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" </pre>
You can use labels or other data as custom fields to help you identify the parts of an application.
Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters
with workloads of lower protection requirements.

rationale: |-
Assigning workloads with high protection requirements to specific nodes creates an additional
boundary (the node) between workloads of high protection requirements and workloads which might
follow less strict requirements. An adversary who compromised a less protected workload now has
additional obstacles in their movement towards the higher protected workloads.

severity: medium

identifiers:
cce@ocp4: CCE-88903-0

ocil_clause: 'Application placement on Nodes and Clusters needs review'

ocil: |-
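For illustration, binding a workload with high protection requirements to dedicated nodes usually combines a label and a taint on the nodes with a matching nodeSelector and toleration on the workload; a minimal sketch, assuming a hypothetical node name (worker-sec-1) and label (protection=high):
<pre>
$ oc label node worker-sec-1 protection=high
$ oc adm taint nodes worker-sec-1 protection=high:NoSchedule
</pre>
and, in the workload's pod template:
<pre>
spec:
  nodeSelector:
    protection: high            # schedule only onto the dedicated nodes
  tolerations:
    - key: "protection"
      operator: "Equal"
      value: "high"
      effect: "NoSchedule"      # tolerate the taint that keeps other pods away
</pre>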
61 changes: 61 additions & 0 deletions applications/openshift/master/master_taint_noschedule/rule.yml
@@ -0,0 +1,61 @@
documentation_complete: true

title: Verify that Control Plane Nodes are not schedulable for workloads

description: |-
<p>
User workloads should not be colocated with control plane workloads. To ensure that the scheduler won't
schedule workloads on the master nodes, the taint "node-role.kubernetes.io/master" with the "NoSchedule"
effect is set by default in most cluster configurations (excluding SNO and Compact Clusters).
</p>
<p>
The scheduling of the master nodes is centrally configurable without reboot via
<pre>oc edit schedulers.config.openshift.io cluster</pre> For details, see the Red Hat Solution
{{{ weblink(link="https://access.redhat.com/solutions/4564851") }}}
</p>
<p>
If you run a setup which requires colocation of control plane and user workloads, you need to
exclude this rule.
</p>

rationale: |-
By separating user workloads from the control plane workloads, we can better ensure that there
are no ill effects from workload bursts on each other. Furthermore, we ensure that an adversary
who gains control over a badly secured workload container is not colocated with critical
components of the control plane.
In some setups it might be necessary to make the control plane schedulable for workloads, e.g.
Single Node OpenShift (SNO) or Compact Cluster (Three Node Cluster) setups.

{{% set jqfilter = '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule")' %}}

identifiers:
cce@ocp4: CCE-88731-5

severity: medium

ocil_clause: 'Control Plane is schedulable'

ocil: |-
Run the following command to see if the control plane nodes are schedulable:
<pre>$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'</pre>
For each master node, there should be an output of a taint key with the NoSchedule effect.

By editing the cluster scheduler you can centrally configure whether the masters are schedulable
by setting .spec.mastersSchedulable to true or false.
Use <pre>$ oc edit schedulers.config.openshift.io cluster</pre> to configure the scheduling.

warnings:
- general: |-
{{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}

template:
name: yamlfile_value
vars:
ocp_data: "true"
filepath: |-
{{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
yamlpath: ".effect"
check_existence: "at_least_one_exists"
entity_check: "at least one"
values:
- value: "NoSchedule"
operation: "pattern match"
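For reference, keeping the control plane unschedulable corresponds to the following fields in the cluster scheduler configuration; a sketch of the relevant object:
<pre>
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false    # keep user workloads off the control plane nodes
</pre>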
@@ -0,0 +1,2 @@
---
default_result: PASS
@@ -0,0 +1,57 @@
documentation_complete: true

title: Check Egress IPs Assignable to Nodes

description: |-
<p>
The OpenShift Container Platform egress IP address functionality allows you to ensure that the
traffic from one or more pods in one or more namespaces has a consistent source IP address for
services outside the cluster network.
</p>
<p>
The necessary labeling on the designated nodes is configurable without reboot via
<pre>$ oc label nodes $NODENAME k8s.ovn.org/egress-assignable=""</pre> For details, see the
Red Hat Documentation
{{{ weblink(link="https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/ovn-kubernetes-network-plugin#nw-egress-ips-about_configuring-egress-ips-ovn") }}}
</p>

rationale: |-
By using egress IPs you can present a consistent source IP to external services and configure
firewall rules which precisely match this IP. This allows for more control on external systems.
Furthermore, you can bind the egress IPs to specific nodes, which then handle all these outgoing
network connections, to achieve a better separation of duties between the different nodes.

identifiers:
cce@ocp4: CCE-86787-9

severity: medium

ocil_clause: 'Check Egress IPs Assignable to Nodes'

ocil: |-
Run the following command to see if nodes are assignable for egress IPs:
<pre>$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."k8s.ovn.org/egress-assignable" != null) | .metadata.name'</pre>
This command prints the name of each node which is configured to get egress IPs assigned. If
the output is empty, no nodes are assignable.

{{% set jqfilter = '[ .items[] | .metadata.labels["k8s.ovn.org/egress-assignable"] != null ]' %}}

warnings:
- general: |-
{{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}

template:
name: yamlfile_value
vars:
ocp_data: "true"
filepath: |-
{{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
yamlpath: '[:]'
check_existence: at_least_one_exists
entity_check: "at least one"
values:
- value: 'true'
type: "string"
entity_check: "at least one"
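Once nodes carry the egress-assignable label, the egress IPs themselves are requested through an EgressIP object; a minimal sketch, assuming a hypothetical name, address, and namespace label:
<pre>
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-prod            # hypothetical name
spec:
  egressIPs:
    - 192.0.2.10                 # example address, must be routable from the node subnet
  namespaceSelector:
    matchLabels:
      env: prod                  # hypothetical namespace label
</pre>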
@@ -0,0 +1,9 @@
#!/bin/bash
set -xe

echo "Labeling Node for egress IP"

# Pick one node and mark it as assignable for egress IPs
NODENAME=$(oc get node --no-headers | tail -1 | cut -d" " -f1)
oc label node "$NODENAME" k8s.ovn.org/egress-assignable=""

sleep 5
@@ -0,0 +1,3 @@
---
default_result: FAIL
result_after_remediation: PASS
62 changes: 47 additions & 15 deletions controls/bsi_app_4_4.yml
@@ -356,11 +356,16 @@ controls:
levels:
- elevated
description: >-
(1) There SHOULD be an automated audit that checks the settings of nodes, of Kubernetes, and
of the pods of applications against a defined list of allowed settings and standardised
benchmarks.
(2) Kubernetes SHOULD enforce these established rules in each cluster by connecting
appropriate tools.
notes: >-
Section 1 is addressed by the Compliance Operator itself. The standardised benchmarks can be
just the BSI profile, or additionally a hardening standard like the CIS Benchmark.
Section 2 can be addressed by using the auto-remediation of the Compliance Operator or, for
workloads, by using Advanced Cluster Security or similar tools.
status: automated
rules:
- scansettingbinding_exists
@@ -372,25 +377,48 @@ controls:
levels:
- elevated
description: >-
(1) In a Kubernetes cluster, nodes SHOULD be assigned dedicated tasks and only run pods that are
assigned to each task.
(2) Bastion nodes SHOULD handle all incoming and outgoing data connections between
applications and other networks.
(3) Management nodes SHOULD operate control plane pods and only handle control plane data
connections.
(4) If deployed, storage nodes SHOULD only operate the fixed persistent volume services pods in
a cluster.
notes: >-
Section 1:
This requirement must be solved organizationally. OpenShift can bind applications to specific
nodes or node groups (via labels and node selectors). ACM can take over the labeling of nodes
and ensure that the nodes are labeled accordingly.
Section 2:
OpenShift uses the concept of infra-nodes. The incoming connections can be bound to these and,
by using Egress-IP, the outgoing connections can also be bound.
Section 3:
OpenShift uses control plane nodes for management, on which no user applications are running.
Data connections between applications, to the outside world and to one another, are not routed
via the control plane by default. The necessary requirements must be taken into account as
part of the planning.
Section 4:
OpenShift Data Foundation (ODF) can be bound to dedicated infra nodes using the OpenShift
mechanisms, so that these nodes only run storage services. This can be implemented equivalently
with other storage solutions.
status: partial
rules:
# Section 1,2,3,4
- general_node_separation
- general_network_separation
# Section 2
- configure_egress_ip_node_assignable
# Section 3
- master_taint_noschedule

- id: APP.4.4.A15
title: Separation of Applications at Node and Cluster Level
levels:
- elevated
description: >-
(1) Applications with very high protection needs SHOULD each use their own Kubernetes clusters
or dedicated nodes that are not available for other applications.
notes: ''
status: manual
rules:
@@ -401,11 +429,15 @@
levels:
- elevated
description: >-
(1) The automation of operational tasks in operators SHOULD be used for particularly critical
applications and control plane programs.
notes: >-
OpenShift relies consistently on the application of the concept of operators. The platform
itself is operated and managed 100% by operators, meaning that all internal components of
the platform are rolled out and managed by operators.

Application-specific operators must be considered as part of application development and
deployment.
status: inherently met
rules: []

3 changes: 0 additions & 3 deletions shared/references/cce-redhat-avail.txt
@@ -173,7 +173,6 @@ CCE-86780-4
CCE-86781-2
CCE-86784-6
CCE-86785-3
CCE-86787-9
CCE-86788-7
CCE-86789-5
CCE-86790-3
@@ -1347,7 +1346,6 @@ CCE-88725-7
CCE-88727-3
CCE-88728-1
CCE-88729-9
CCE-88731-5
CCE-88734-9
CCE-88735-6
CCE-88736-4
@@ -1443,7 +1441,6 @@ CCE-88898-2
CCE-88899-0
CCE-88900-6
CCE-88902-2
CCE-88903-0
CCE-88904-8
CCE-88905-5
CCE-88906-3