diff --git a/applications/openshift/general/general_network_separation/rule.yml b/applications/openshift/general/general_network_separation/rule.yml
index 8144cfc3ffa..87ccb3c8b76 100644
--- a/applications/openshift/general/general_network_separation/rule.yml
+++ b/applications/openshift/general/general_network_separation/rule.yml
@@ -18,3 +18,5 @@ ocil_clause: 'Network separation needs review'
 ocil: |-
     Create separate Ingress Controllers for the API and your Applications. Also setup your environment in a way, that Control Plane Nodes are in another network than your worker nodes.
     If you implement multiple Nodes for different purposes evaluate if these should be in different network segments (i.e. Infra-Nodes, Storage-Nodes, ...).
+    Also evaluate how you handle outgoing connections and if they have to be pinned to
+    specific nodes or IPs.
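+    For a first overview of the Ingress Controllers that already exist in the cluster, you can,
+    for example, run:
+    <pre>$ oc get ingresscontrollers -n openshift-ingress-operator</pre>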
diff --git a/applications/openshift/general/general_node_separation/rule.yml b/applications/openshift/general/general_node_separation/rule.yml
index 1e2e49bd723..1e012d86eb7 100644
--- a/applications/openshift/general/general_node_separation/rule.yml
+++ b/applications/openshift/general/general_node_separation/rule.yml
@@ -3,17 +3,25 @@ documentation_complete: true
 title: 'Create Boundaries between Resources using Nodes or Clusters'
 
 description: |-
-    Use Nodes or Clusters to isolate Workloads with high protection requirements.
-    Run the following command and review the pods and how they are deployed on Nodes.
-    <pre>$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"</pre>
-    You can use labels or other data as custom field which helps you to identify parts of an application.
-    Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters with workloads of lower protection requirements.
+    Use Nodes or Clusters to isolate Workloads with high protection requirements.
+
+    Run the following command and review the pods and how they are deployed on Nodes.
+    <pre>$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"</pre>
+    You can use labels or other data as custom fields to help you identify the parts of an application.
+    Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters
+    with workloads of lower protection requirements.
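+
+    For example, you can bind a Deployment to such dedicated nodes with a node selector (the
+    label used here is only an illustration and has to exist on your nodes):
+    <pre>spec:
+  template:
+    spec:
+      nodeSelector:
+        workload-class: high-protection  # example label</pre>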
 
 rationale: |-
-    Assigning workloads with high protection requirements to specific nodes creates and additional boundary (the node) between workloads of high protection requirements and workloads which might follow less strict requirements. An adversary which attacked a lighter protected workload now has additional obstacles for their movement towards the higher protected workloads.
+    Assigning workloads with high protection requirements to specific nodes creates an additional
+    boundary (the node) between workloads with high protection requirements and workloads which might
+    follow less strict requirements. An adversary who compromised a less protected workload now has
+    additional obstacles for their movement towards the higher protected workloads.
 
 severity: medium
 
+identifiers:
+    cce@ocp4: CCE-88903-0
+
 ocil_clause: 'Application placement on Nodes and Clusters needs review'
 
 ocil: |-
diff --git a/applications/openshift/master/master_taint_noschedule/rule.yml b/applications/openshift/master/master_taint_noschedule/rule.yml
new file mode 100644
index 00000000000..d6b78f9d10c
--- /dev/null
+++ b/applications/openshift/master/master_taint_noschedule/rule.yml
@@ -0,0 +1,61 @@
+documentation_complete: true
+
+title: Verify that Control Plane Nodes are not schedulable for workloads
+
+description: |-
+    User workloads should not be colocated with control plane workloads. To ensure that the scheduler won't
+    schedule workloads on the master nodes, the taint "node-role.kubernetes.io/master" with the "NoSchedule"
+    effect is set by default in most cluster configurations (excluding SNO and Compact Clusters).
+
+    The scheduling of the master nodes is centrally configurable without reboot via
+    <pre>oc edit schedulers.config.openshift.io cluster</pre>. For details, see the Red Hat Solution
+    {{{ weblink(link="https://access.redhat.com/solutions/4564851") }}}
+
+    If you run a setup that requires the colocation of control plane and user workloads, you need to
+    exclude this rule.
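+
+    To review the taints that are currently set on the control plane nodes, you can, for example, run:
+    <pre>$ oc describe nodes -l node-role.kubernetes.io/master | grep Taints</pre>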
+
+rationale: |-
+    By separating user workloads and the control plane workloads, we can better ensure that there are
+    no ill effects from workload bursts on each other. Furthermore, we ensure that an adversary who gets
+    control over a badly secured workload container is not colocated with critical components of the control plane.
+    In some setups it might be necessary to make the control plane schedulable for workloads, i.e. in
+    Single Node OpenShift (SNO) or Compact Cluster (Three Node Cluster) setups.
+
+{{% set jqfilter = '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule")' %}}
+
+identifiers:
+    cce@ocp4: CCE-88731-5
+
+severity: medium
+
+ocil_clause: 'Control Plane is schedulable'
+
+ocil: |-
+    Run the following command to see if the control plane nodes are schedulable:
+    <pre>$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'</pre>
+    For each master node, there should be an output of a key with the NoSchedule effect.
+
+    By editing the cluster scheduler you can centrally configure the masters as schedulable or not
+    by setting .spec.mastersSchedulable to true or false.
+    Use <pre>$ oc edit schedulers.config.openshift.io cluster</pre> to configure the scheduling.
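+    A sketch of the corresponding Scheduler resource (only .spec.mastersSchedulable is relevant
+    for this rule):
+    <pre>apiVersion: config.openshift.io/v1
+kind: Scheduler
+metadata:
+  name: cluster
+spec:
+  mastersSchedulable: false</pre>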
+
+warnings:
+    - general: |-
+        {{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}
+
+template:
+    name: yamlfile_value
+    vars:
+        ocp_data: "true"
+        filepath: |-
+            {{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
+        yamlpath: ".effect"
+        check_existence: "at_least_one_exists"
+        entity_check: "at least one"
+        values:
+        - value: "NoSchedule"
+          operation: "pattern match"
diff --git a/applications/openshift/master/master_taint_noschedule/tests/ocp4/e2e.yml b/applications/openshift/master/master_taint_noschedule/tests/ocp4/e2e.yml
new file mode 100644
index 00000000000..b49fd368b98
--- /dev/null
+++ b/applications/openshift/master/master_taint_noschedule/tests/ocp4/e2e.yml
@@ -0,0 +1,2 @@
+---
+default_result: PASS
diff --git a/applications/openshift/networking/configure_egress_ip_node_assignable/rule.yml b/applications/openshift/networking/configure_egress_ip_node_assignable/rule.yml
new file mode 100644
index 00000000000..6249f48548b
--- /dev/null
+++ b/applications/openshift/networking/configure_egress_ip_node_assignable/rule.yml
@@ -0,0 +1,57 @@
+documentation_complete: true
+
+title: Check Egress IPs Assignable to Nodes
+
+description: |-
+    The OpenShift Container Platform egress IP address functionality allows you to ensure that the
+    traffic from one or more pods in one or more namespaces has a consistent source IP address for
+    services outside the cluster network.
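+
+    As an illustration, an EgressIP resource assigns egress IPs to the pods of namespaces selected
+    by labels (the name, IP address, and label below are placeholders):
+    <pre>apiVersion: k8s.ovn.org/v1
+kind: EgressIP
+metadata:
+  name: egressip-prod  # example name
+spec:
+  egressIPs:
+  - 192.0.2.10         # placeholder IP
+  namespaceSelector:
+    matchLabels:
+      env: prod        # example label</pre>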
+
+    The necessary labeling of the designated nodes is configurable without reboot via
+    <pre>$ oc label nodes $NODENAME k8s.ovn.org/egress-assignable=""</pre>. For details, see the
+    Red Hat Documentation
+    {{{ weblink(link="https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/ovn-kubernetes-network-plugin#nw-egress-ips-about_configuring-egress-ips-ovn") }}}
+
+rationale: |-
+    By using egress IPs you can provide a consistent IP to external services and configure special
+    firewall rules which precisely match this IP. This allows for more control on the external systems.
+    Furthermore, you can bind the IPs to specific nodes, which then handle all these network
+    connections, to achieve a better separation of duties between the different nodes.
+
+identifiers:
+    cce@ocp4: CCE-86787-9
+
+severity: medium
+
+ocil_clause: 'Check Egress IPs Assignable to Nodes'
+
+ocil: |-
+    Run the following command to see if nodes are assignable for egress IPs:
+    <pre>$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."k8s.ovn.org/egress-assignable" != null) | .metadata.name'</pre>
+    This command prints the name of each node which is configured to get egress IPs assigned. If
+    the output is empty, there are no such nodes available.
+
+{{% set jqfilter = '[ .items[] | .metadata.labels["k8s.ovn.org/egress-assignable"] != null ]' %}}
+
+warnings:
+    - general: |-
+        {{{ openshift_filtered_cluster_setting({'/api/v1/nodes': jqfilter}) | indent(8) }}}
+
+template:
+    name: yamlfile_value
+    vars:
+        ocp_data: "true"
+        filepath: |-
+            {{{ openshift_filtered_path('/api/v1/nodes', jqfilter) }}}
+        yamlpath: '[:]'
+        check_existence: at_least_one_exists
+        entity_check: "at least one"
+        values:
+        - value: 'true'
+          type: "string"
+          entity_check: "at least one"
diff --git a/applications/openshift/networking/configure_egress_ip_node_assignable/tests/ocp4/e2e-remediation.sh b/applications/openshift/networking/configure_egress_ip_node_assignable/tests/ocp4/e2e-remediation.sh
new file mode 100755
index 00000000000..90a19e0dca9
--- /dev/null
+++ b/applications/openshift/networking/configure_egress_ip_node_assignable/tests/ocp4/e2e-remediation.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+set -xe
+
+echo "Labeling Node for egress IP"
+
+# Pick the last node in the listing and mark it as assignable for egress IPs.
+NODENAME=$(oc get node --no-headers | tail -1 | cut -d" " -f1)
+oc label node "$NODENAME" k8s.ovn.org/egress-assignable=""
+
+sleep 5
diff --git a/applications/openshift/networking/configure_egress_ip_node_assignable/tests/ocp4/e2e.yml b/applications/openshift/networking/configure_egress_ip_node_assignable/tests/ocp4/e2e.yml
new file mode 100644
index 00000000000..fd9b313e87b
--- /dev/null
+++ b/applications/openshift/networking/configure_egress_ip_node_assignable/tests/ocp4/e2e.yml
@@ -0,0 +1,3 @@
+---
+default_result: FAIL
+result_after_remediation: PASS
diff --git a/controls/bsi_app_4_4.yml b/controls/bsi_app_4_4.yml
index 98e4e9b9075..d8ebd323833 100644
--- a/controls/bsi_app_4_4.yml
+++ b/controls/bsi_app_4_4.yml
@@ -356,11 +356,16 @@ controls:
     levels:
     - elevated
     description: >-
-      (1) There SHOULD be an automated audit that checks the settings of nodes, of Kubernetes, and of the pods of applications against a defined list of allowed settings and standardised benchmarks.
-      (2) Kubernetes SHOULD enforce these established rules in each cluster by connecting appropriate tools.
+      (1) There SHOULD be an automated audit that checks the settings of nodes, of Kubernetes, and
+      of the pods of applications against a defined list of allowed settings and standardised
+      benchmarks.
+      (2) Kubernetes SHOULD enforce these established rules in each cluster by connecting
+      appropriate tools.
     notes: >-
-      Section 1 is addressed by the compliance operator itself. The standardized Benchmarks can be just the BSI Profile, or additionally a hardening standard like the CIS Benchmark.
-      Section 2 can be addressed by using auto-remediation of compliance-operator or for workloads by using Advanced Cluster Security or similar tools.
+      Section 1 is addressed by the Compliance Operator itself. The standardized benchmarks can be
+      just the BSI profile, or additionally a hardening standard like the CIS benchmark.
+      Section 2 can be addressed by using the auto-remediation of the Compliance Operator or, for
+      workloads, by using Advanced Cluster Security or similar tools.
     status: automated
     rules:
     - scansettingbinding_exists
@@ -372,25 +377,48 @@ controls:
     levels:
     - elevated
     description: >-
-      In a Kubernetes cluster, nodes SHOULD be assigned dedicated tasks and only run pods that are
+      (1) In a Kubernetes cluster, nodes SHOULD be assigned dedicated tasks and only run pods that are
       assigned to each task.
-      Bastion nodes SHOULD handle all incoming and outgoing data connections of between
+      (2) Bastion nodes SHOULD handle all incoming and outgoing data connections between
       applications and other networks.
-      Management nodes SHOULD operate control plane pods and only handle control plane data
+      (3) Management nodes SHOULD operate control plane pods and only handle control plane data
       connections.
-      If deployed, storage nodes SHOULD only operate the fixed persistent volume services pods in
+      (4) If deployed, storage nodes SHOULD only operate the fixed persistent volume services pods in
       a cluster.
     notes: >-
-      TBD
-    status: pending
-    rules: []
+      Section 1:
+      This requirement must be solved organizationally. OpenShift can bind applications to specific
+      nodes or node groups (via labels and node selectors). ACM can take over the labeling of nodes
+      and ensure that the nodes are labeled accordingly.
+      Section 2:
+      OpenShift uses the concept of infra-nodes. The incoming connections can be bound to these and,
+      by using egress IPs, the outgoing connections can also be bound to them.
+      Section 3:
+      OpenShift uses control plane nodes for management, on which no applications are running.
+      Data connections between applications to the outside world and to one another are not routed
+      via the control plane by default. The necessary requirements must be taken into account as
+      part of the planning.
+      Section 4:
+      OpenShift Data Foundation (ODF) can be bound to its own infra nodes using the OpenShift
+      mechanisms, so that these nodes only run storage services. This can be implemented
+      equivalently with other storage solutions.
+    status: partial
+    rules:
+    # Section 1,2,3,4
+    - general_node_separation
+    - general_network_separation
+    # Section 2
+    - configure_egress_ip_node_assignable
+    # Section 3
+    - master_taint_noschedule
 
   - id: APP.4.4.A15
     title: Separation of Applications at Node and Cluster Level
     levels:
     - elevated
     description: >-
-      Applications with very high protection needs SHOULD each use their own Kubernetes clusters or dedicated nodes that are not available for other applications
+      (1) Applications with very high protection needs SHOULD each use their own Kubernetes clusters
+      or dedicated nodes that are not available for other applications
     notes: ''
     status: manual
     rules:
@@ -401,11 +429,15 @@
     levels:
     - elevated
     description: >-
-      The automation of operational tasks in operators SHOULD be used for particularly critical applications and control plane programs.
+      (1) The automation of operational tasks in operators SHOULD be used for particularly critical
+      applications and control plane programs.
     notes: >-
-      OpenShift relies consistently on the application of the concept of operators. The platform itself is operated and managed 100% by operators, meaning that all internal components of the platform are rolled out and managed by operators.
+      OpenShift relies consistently on the application of the concept of operators. The platform
+      itself is operated and managed 100% by operators, meaning that all internal components of
+      the platform are rolled out and managed by operators.
 
-      Application-specific operators must be considered as part of application development and deployment.
+      Application-specific operators must be considered as part of application development and
+      deployment.
     status: inherently met
     rules: []
diff --git a/shared/references/cce-redhat-avail.txt b/shared/references/cce-redhat-avail.txt
index 33debb230d4..52a376cd244 100644
--- a/shared/references/cce-redhat-avail.txt
+++ b/shared/references/cce-redhat-avail.txt
@@ -173,7 +173,6 @@ CCE-86780-4
 CCE-86781-2
 CCE-86784-6
 CCE-86785-3
-CCE-86787-9
 CCE-86788-7
 CCE-86789-5
 CCE-86790-3
@@ -1347,7 +1346,6 @@ CCE-88725-7
 CCE-88727-3
 CCE-88728-1
 CCE-88729-9
-CCE-88731-5
 CCE-88734-9
 CCE-88735-6
 CCE-88736-4
@@ -1443,7 +1441,6 @@ CCE-88898-2
 CCE-88899-0
 CCE-88900-6
 CCE-88902-2
-CCE-88903-0
 CCE-88904-8
 CCE-88905-5
 CCE-88906-3