diff --git a/controls/bsi_app_4_4.yml b/controls/bsi_app_4_4.yml
index 7ba3a7e2e444..5a1dfba7c0d8 100644
--- a/controls/bsi_app_4_4.yml
+++ b/controls/bsi_app_4_4.yml
@@ -36,41 +36,22 @@ controls:
       programs of the application. Only applications with similar protection needs and similar
       possible attack vectors SHOULD share a Kubernetes cluster.
     notes: >-
-      These requirements must be implemented organizationally. OpenShift fully supports them.
-      OpenShift simplifies the implementation of the stated requirements for separating applications
-      as well as development and production environments by setting up projects (tenants).
-      Namespaces, networks/network separation, meta tags as well as CPU and memory separation are already
-      configured by OpenShift as required (security-by-design). Special requirements for protection and
-      network zone concepts can also be flexibly and easily mapped using additional measures.
-      This particularly includes the ability to define application classes, operate in multiple,
-      separate clusters, and automatically distribute workloads to protection zones and fire compartments.
-      Particularly in the case of separate clusters, ACM can support rule-based distribution of applications using labels.
-    status: manual
-    rules:
-      - general_namespace_separation
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A2
     title: Planning Automation with CI/CD
     levels:
       - basic
     description: >-
-      (1) Automating the operation of applications in Kubernetes using CI/CD MUST ONLY take place
-      after appropriate planning. (2) The planning MUST cover the entire lifecycle from commissioning
-      to decommissioning, including development, testing, operation, monitoring, and updates. (3) A
+      Automating the operation of applications in Kubernetes using CI/CD MUST ONLY take place
+      after appropriate planning. The planning MUST cover the entire lifecycle from commissioning
+      to decommissioning, including development, testing, operation, monitoring, and updates. A
       roles and rights concept and the securing of Kubernetes Secrets MUST be part of the planning
     notes: >-
-      Since this requirement is a plan only, we cannot test this with compliance checks.
-      Section 1: This requirement must be implemented organizationally.
-      The documentation at https://docs.openshift.com/container-platform/latest/cicd/pipelines/understanding-openshift-pipelines.html
-      provides information on the planning
-      Section 2: The protective measure is primarily of an organizational nature. OpenShift fully supports them.
-      With the integrated CI/CD technologies Jenkins, Tekton and OpenShift GitOps, OpenShift already offers preconfigured solutions
-      for automated CI/CD pipelines. Of course, other technologies such as Gitlab CI and GitHub Actions can also be integrated.
-      Section 3: Kubernetes secrets are secured by a Role Based Access Control (RBAC) system.
-      Depending on the protection requirement, Kubernetes secrets can be secured via an (encrypted) etcd metadata store or
-      additionally via an integration of Vault components or "sealed secrets" for CD and GitOps mechanisms.
-      Secrets and roles can also be managed centrally using ACM and rolled out consistently to the managed clusters using policies.
-    status: documentation
+      TBD
+    status: pending
     rules: []

   - id: APP.4.4.A3
@@ -78,31 +59,22 @@ controls:
     levels:
       - basic
     description: >-
-      (1) Kubernetes and all other control plane applications MUST authenticate and authorise each
+      Kubernetes and all other control plane applications MUST authenticate and authorise each
       action taken by a user or, in automated mode, corresponding software. This applies whether
       the actions are taken via a client, a web interface, or a corresponding API. Administrative
       actions MUST NOT be performed anonymously.
-      (2) Each user MUST ONLY be granted the permissions they absolutely require. Unlimited access
+      Each user MUST ONLY be granted the permissions they absolutely require. Unlimited access
       rights MUST be granted in a very restrictive manner.
-      (3) Only a small group of people SHOULD be authorised to define automation processes. Only
+      Only a small group of people SHOULD be authorised to define automation processes. Only
       selected administrators SHOULD be given the right to create or change shares for persistent
       volumes in Kubernetes.
     notes: >-
-      Section 1: In the default configuration, OpenShift restricts the use of the web console and APIs only to authenticated and authorized users.|
-      Connection to external directory services (LDAP, OIDC and others) is possible.
-      Section 2: OpenShift already offers roles for a least privilege concept. The RBAC roles can be adapted or supplemented with new roles.
-      The preconfigured roles enable easy authorization assignment according to the least-privilege and need-to-know principles.
-      User actions can be tracked via the audit log.
-      Section 3: In the default configuration, persistent storage can only be integrated by cluster administrators.
-      For dynamically provisioned storage, the corresponding provisioners have the necessary authorizations.
-      These provisioners must be set up and configured by an admin. Storage requirements are controlled and restricted using quota mechanisms.
-    status: partial
+      TBD
+    status: pending
     rules:
-      # Section 1
       - api_server_anonymous_auth
       - kubelet_anonymous_auth
       - kubeadmin_removed
-      # Section 2 + 3
       - rbac_least_privilege

   - id: APP.4.4.A4
@@ -143,42 +115,32 @@ controls:
     levels:
       - basic
     description: >-
-      (1) A cluster MUST have a backup. The backup MUST include:
-      (2) • Persistent volumes
-      (3) • Configuration files for Kubernetes and the other programs of the control plane
-      (4) • The current state of the Kubernetes cluster, including extensions
-      (5) • Databases of the configuration (namely etcd in this case)
-      (6) • All infrastructure applications required to operate the cluster and the services within it
-      (7) • The data storage of the code and image registries
-      (8) Snapshots for the operation of the applications SHOULD also be considered. Snapshots MUST NOT be considered a substitute for backups.
+      A cluster MUST have a backup. The backup MUST include:
+      • Persistent volumes
+      • Configuration files for Kubernetes and the other programs of the control plane
+      • The current state of the Kubernetes cluster, including extensions
+      • Databases of the configuration (namely etcd in this case)
+      • All infrastructure applications required to operate the cluster and the services within it
+      • The data storage of the code and image registries
+      Snapshots for the operation of the applications SHOULD also be considered. Snapshots MUST
+      NOT be considered a substitute for backups.
     notes: >-
-      The data backup of a cluster must be individually defined as part of the system architecture as part of the operating model. The areas of responsibility for the container platform (cluster administration), the infrastructure services (system administration) and the application management (technical administration) should be considered separately.
-
-      For data backup as part of cluster administration (Kubernetes configuration, current state of the Kubernetes cluster, configuration database) the integrated functions or methods of OpenShift must be used. System administration and specialist administration must be carried out in accordance with the respective specifications.
-
-      Snapshots for persistent volumes are supported when using OpenShift's Container Storage Interface (CSI) drivers. OpenShift offers an easily configurable backup system with the OpenShift API for Data Protection (OADP).
-
-      Additional third-party solutions for backup are also available in the OperatorHub.
-
-      The checks are not checking the requirement in detail. They only setup a foundation to implement the configurations as described. For Section 3,4 and 6 a GitOps approach might achieve the best results. for 2 and 7 a sufficient backup solution is needed. 5 can be achieved with onboard utilities. 8 is dependend on the CSI provider and the available features
-    status: partial
-    rules:
-      # Section 2,7
-      - general_backup_solution_installed
-      # Section 5
-      - etcd_backup
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A6
     title: Initialisation of Pods
     levels:
       - standard
     description: >-
-      If an initialisation (e.g. of an application) takes place in a pod at start-up, this SHOULD take place in a separate Init container. It SHOULD be ensured that the initialisation terminates all processes that are already running. Kubernetes SHOULD ONLY start the other containers if the initialisation is successful.
+      If an initialisation (e.g. of an application) takes place in a pod at start-up, this SHOULD take
+      place in a separate Init container. It SHOULD be ensured that the initialisation terminates all
+      processes that are already running. Kubernetes SHOULD ONLY start the other containers if
+      the initialisation is successful.
     notes: >-
-      OpenShift provides the necessary resource configurations via Kubernetes. Kubernetes ensures the (process) dependencies between init containers and “normal” containers of a pod.
-
-      The requirement must be implemented by application development.
-    status: inherently met
+      TBD
+    status: pending
     rules: []

   - id: APP.4.4.A7
@@ -186,60 +148,35 @@ controls:
     levels:
       - standard
     description: >-
-      (1) Networks for the administration of nodes, the control plane, and the individual networks of application services SHOULD be separated.
-      (2) Only the network ports of the pods necessary for operation SHOULD be released into the designated networks. (3) If a Kubernetes cluster contains multiple applications, all the network connections between the Kubernetes namespaces SHOULD first be prohibited and only required network connections permitted (whitelisting). (4) The network ports necessary for the administration of the nodes, the runtime, and Kubernetes (including its extensions) SHOULD ONLY be accessible from the corresponding administration network and from pods that need them.
-      (5) Only selected administrators SHOULD be authorised in Kubernetes to manage the CNI and create or change rules for the network.
+      Networks for the administration of nodes, the control plane, and the individual networks of
+      application services SHOULD be separated.
+      Only the network ports of the pods necessary for operation SHOULD be released into the
+      designated networks. If a Kubernetes cluster contains multiple applications, all the network
+      connections between the Kubernetes namespaces SHOULD first be prohibited and only
+      required network connections permitted (whitelisting). The network ports necessary for the
+      administration of the nodes, the runtime, and Kubernetes (including its extensions) SHOULD
+      ONLY be accessible from the corresponding administration network and from pods that need
+      them.
+      Only selected administrators SHOULD be authorised in Kubernetes to manage the CNI and
+      create or change rules for the network.
     notes: >-
-      Section 1-3:
-      The requirements for restricting network ports and network connections between Kubernetes namespaces are already supported by OpenShift as standard using network policies and the option for default network policies (security by design).
-
-      The separation of the management network can also be implemented at the namespace level via network policies (incoming, the responsibility of the namespace administrator) and egress firewalls (outgoing, the responsibility of the cluster admins).
-
-      Externally exposed services can receive their own IP and thus data traffic can also be separated outside the platform. Inter-node communication is carried out via suitable tunnel protocols (VXLAN, GENEVE) and can also be encrypted using IPSec.
-
-      The determination of the necessary network policies for applications is supported by the network policy generator in ACS.
-      Section 4 is true by default
-      Section 5 maps to principle of least privilege
-    status: partial
-    rules:
-      # Section 1
-      - general_network_separation
-      # Section 2
-      - configure_network_policies
-      - configure_network_policies_namespaces
-      # Section 3
-      - project_config_and_template_network_policy
-      # Section 4, default
-      # Section 5
-      - rbac_least_privilege
-
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A8
     title: Securing Configuration Files on Kubernetes
     levels:
       - standard
     description: >-
-      (1) The configuration files of a Kubernetes cluster, including all its extensions and applications,
+      The configuration files of a Kubernetes cluster, including all its extensions and applications,
       SHOULD be versioned and annotated.
-      (2) Access rights to configuration file management software SHOULD be granted in a restrictive
-      manner. (3) Read and write access rights to the configuration files of the control plane SHOULD
+      Access rights to configuration file management software SHOULD be granted in a restrictive
+      manner. Read and write access rights to the configuration files of the control plane SHOULD
       be assigned and restricted with particular care.
     notes: >-
-      OpenShift is fully configured using Kubernetes resources including CustomResources (CR). All
-      resources that are created after the initial cluster installation can be considered configuration
-      files as described in this control.
-
-      Section 1: This control needs to be adressed on an organizational level. To achieve versioning,
-      the configuration files should be stored in a Git repository. The Git repository is considered
-      the only source of truth and provides a visible and auditable trail of changes. To automatically
-      apply the configuration, GitOps processes and tools like OpenShift GitOps can be used.
-
-      Section 2: This control needs to be adressed in the respective external systems. Access rights
-      to the Git repository and GitOps controller should be granted in a restrictive manner.
-
-      Section 3: The relevant Kubernetes resources for configuring the control plane are inherently
-      protected by Kubernetes RBAC and can only be modified by cluster administrators.
-    status: manual
+      TBD
+    status: pending
     rules: []

   - id: APP.4.4.A9
@@ -247,61 +184,32 @@ controls:
     levels:
       - standard
     description: >-
-      (1) Pods SHOULD NOT use the "default" service account. (2) Rights SHOULD NOT be granted to the
-      "default" service account. (3) Pods for different applications SHOULD run under their own service
-      accounts. (4) Access rights for the service accounts of the applications' pods SHOULD be limited
+      Pods SHOULD NOT use the "default" service account. Rights SHOULD NOT be granted to the
+      "default" service account. Pods for different applications SHOULD run under their own service
+      accounts. Access rights for the service accounts of the applications' pods SHOULD be limited
       to those that are strictly necessary.
-      (5) Pods that do not require a service account SHOULD not be able to view it or have access to
+      Pods that do not require a service account SHOULD not be able to view it or have access to
       corresponding tokens.
-      (6) Only control plane pods and pods that absolutely need them SHOULD use privileged service
+      Only control plane pods and pods that absolutely need them SHOULD use privileged service
       accounts.
-      (7) Automation programs SHOULD each receive their own tokens, even if they share a common
+      Automation programs SHOULD each receive their own tokens, even if they share a common
       service account due to similar tasks.
     notes: >-
-      Section 1-5: This needs to be adressed in the individual application deployments. The
-      associated rules provide additional guidance.
-
-      Section 6: The usage of privileged service accounts is controlled by Security Context
-      Constraints (SCC), which should be configured and granted according to the principle of least
-      privilege.
-
-      Section 7: This control needs to be adressed on an organizational level.
-    status: partial
-    rules:
-      # Section 1-3:
-      - accounts_unique_service_account
-      # Section 2:
-      - accounts_no_rolebindings_default_service_account
-      - accounts_no_clusterrolebindings_default_service_account
-      # Section 4:
-      - rbac_least_privilege
-      - rbac_wildcard_use
-      # Section 5:
-      - accounts_restrict_service_account_tokens
-      # Section 6:
-      - scc_drop_container_capabilities
-      - scc_limit_container_allowed_capabilities
-      - scc_limit_host_dir_volume_plugin
-      - scc_limit_host_ports
-      - scc_limit_ipc_namespace
-      - scc_limit_net_raw_capability
-      - scc_limit_network_namespace
-      - scc_limit_privilege_escalation
-      - scc_limit_privileged_containers
-      - scc_limit_process_id_namespace
-      - scc_limit_root_containers
+      TBD
+    status: pending
+    rules: []
   - id: APP.4.4.A10
     title: Securing Automation Processes
     levels:
       - standard
     description: >-
-      (1) All automation software processes, such as CI/CD and their pipelines, SHOULD only operate
-      with the rights that are strictly necessary. (2) If different user groups can change
-      configurations or start pods via automation software, this SHOULD be done for each group
-      through separate processes that only have the rights necessary for the respective user group.
+      All automation software processes, such as CI/CD and their pipelines, SHOULD only operate
+      with the rights that are strictly necessary. If different user groups can change configurations or
+      start pods via automation software, this SHOULD be done for each group through separate
+      processes that only have the rights necessary for the respective user group.
     notes: >-
-      This control needs to be adressed on an organizational level. All service accounts used by
+      This control needs to be addressed on an organizational level. All service accounts used by
       automation software need to adhere to the principle of least privilege.
     status: not applicable
     rules: []
@@ -311,22 +219,16 @@
     levels:
       - standard
     description: >-
-      (1) In pods, each container SHOULD define a health check for start-up and operation ("readiness"
-      and "liveness"). (2) These checks SHOULD provide information about the availability of the
-      software running in a pod. (3) The checks SHOULD fail if the monitored software cannot perform
-      its tasks properly. (4) For each of these checks, a time period SHOULD be defined that is
-      appropriate for the service running in the pod. (5) Based on these checks, Kubernetes SHOULD
+      In pods, each container SHOULD define a health check for start-up and operation ("readiness"
+      and "liveness"). These checks SHOULD provide information about the availability of the
+      software running in a pod. The checks SHOULD fail if the monitored software cannot perform
+      its tasks properly. For each of these checks, a time period SHOULD be defined that is
+      appropriate for the service running in the pod. Based on these checks, Kubernetes SHOULD
       delete or restart the pods.
     notes: >-
-      Section 1-3: The existance of readiness und liveness probes can be validated technically. This
-      check needs to be performed for each container in every pod individually.
-      Section 4: The adequacy of the checks and the configured time periods needs to be ensured by
-      the application owner.
-      Section 5: This functionality is inherently met by OpenShift.
-    status: manual
-    rules:
-      # Section 1-4:
-      - liveness_readiness_probe_in_workload
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A12
     title: Securing Infrastructure Applications
@@ -354,16 +256,15 @@
     levels:
       - elevated
     description: >-
-      (1) There SHOULD be an automated audit that checks the settings of nodes, of Kubernetes, and of the pods of applications against a defined list of allowed settings and standardised benchmarks.
-      (2) Kubernetes SHOULD enforce these established rules in each cluster by connecting appropriate tools.
+      There SHOULD be an automated audit that checks the settings of nodes, of Kubernetes, and of
+      the pods of applications against a defined list of allowed settings and standardised
+      benchmarks.
+      Kubernetes SHOULD enforce these established rules in each cluster by connecting appropriate
+      tools.
     notes: >-
-      Section 1 is addressed by the compliance operator itself. The standardized Benchmarks can be just the BSI Profile, or additionally a hardening standard like the CIS Benchmark.
-      Section 2 can be addressed by using auto-remediation of compliance-operator or for workloads by using Advanced Cluster Security or similar tools.
-    status: automated
-    rules:
-      - scansettingbinding_exists
-      - scansettings_have_schedule
-      - scansetting_has_autoapplyremediations
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A14
     title: Use of Dedicated Nodes
@@ -388,23 +289,23 @@
     levels:
       - elevated
     description: >-
-      Applications with very high protection needs SHOULD each use their own Kubernetes clusters or dedicated nodes that are not available for other applications
-    notes: ''
-    status: manual
-    rules:
-      - general_node_separation
+      Applications with very high protection needs SHOULD each use their own Kubernetes clusters
+      or dedicated nodes that are not available for other applications
+    notes: >-
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A16
     title: Use of Operators
     levels:
       - elevated
     description: >-
-      The automation of operational tasks in operators SHOULD be used for particularly critical applications and control plane programs.
+      The automation of operational tasks in operators SHOULD be used for particularly critical
+      applications and control plane programs.
     notes: >-
-      OpenShift relies consistently on the application of the concept of operators. The platform itself is operated and managed 100% by operators, meaning that all internal components of the platform are rolled out and managed by operators.
-
-      Application-specific operators must be considered as part of application development and deployment.
-    status: inherently met
+      TBD
+    status: pending
     rules: []

   - id: APP.4.4.A17
@@ -416,58 +317,9 @@
       message to the control plane. The control plane SHOULD ONLY accept nodes into a cluster that
       have successfully proven their integrity.
     notes: >-
-      OpenShift Nodes are using Red Hat CoreOS (RHCOS) by default, an immutable operating system.
-      While RHEL is also supported for Compute Nodes, RHCOS is mandatory for Control Plane Nodes and
-      recommended for all nodes. The correct version and configuration of RHCOS is verified
-      cryptographically with the desired state, that is managed by the Control Plane using MachineConfigs.
-      Any manual change on managed files is overwritten to ensure the desired state. Therefore, the
-      control is mostly inheretly met when using CoreOS for all nodes.
-
-      Section 1: OpenShift uses an internal Certificate Authority (CA). The nodes (kubelet to API server
-      and MachineConfig daemon to MachineConfig server) are communicating using node-specific certificates,
-      signed by this CA. Correct permissions of relevant files and secure TLS configuration are verified
-      using the referenced rules. A TPM-verified status is not present with currently built-in mechanisms
-      of OpenShift.
-
-      Section 2: Using the Red Hat File Integrity Operator, all files on the RHCOS nodes can be
-      cryptographically checked for integrity using Advanced Intrusion Detection Environment (AIDE).
-    status: partial
-    rules:
-      # Section 1 (worker / kubelet)
-      - file_groupowner_kubelet_conf
-      - file_groupowner_worker_ca
-      - file_groupowner_worker_kubeconfig
-      - file_groupowner_worker_service
-      - file_owner_kubelet
-      - file_owner_kubelet_conf
-      - file_owner_worker_ca
-      - file_owner_worker_kubeconfig
-      - file_owner_worker_service
-      - file_permissions_kubelet
-      - file_permissions_kubelet_conf
-      - file_permissions_worker_ca
-      - file_permissions_worker_kubeconfig
-      - file_permissions_worker_service
-      - kubelet_configure_client_ca
-      - kubelet_configure_tls_cert
-      - kubelet_configure_tls_cipher_suites
-      - kubelet_configure_tls_key
-      - kubelet_configure_tls_min_version
-      # Section 1 (API Server)
-      - api_server_client_ca
-      - api_server_kubelet_client_cert
-      - api_server_kubelet_client_key
-      - api_server_https_for_kubelet_conn
-      - api_server_tls_cert
-      - api_server_tls_cipher_suites
-      - api_server_tls_private_key
-      - api_server_tls_security_profile_not_old
-      - tls_version_check_apiserver
-      # Section 2
-      - cluster_version_operator_exists
-      - cluster_version_operator_verify_integrity
-      - file_integrity_exists
-      - file_integrity_notification_enabled
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A18
     title: Use of Micro-Segmentation
@@ -480,37 +332,12 @@
       rules SHOULD precisely define the source and destination of the allowed connections using at
       least one of the following criteria: service name, metadata (“labels”), Kubernetes service
       accounts, or certificate-based authentication.
-      (4) All the criteria used as labels for a connection SHOULD be secured in such a way that they
-      can only be changed by authorised persons and management services.
+      All the criteria used as labels for a connection SHOULD be secured in such a way that they can
+      only be changed by authorised persons and management services.
     notes: >-
-      In a cluster using a network plugin that supports Kubernetes network policy, network isolation
-      is controlled entirely by NetworkPolicy objects. In OpenShift, the default plugins (OpenShift SDN,
-      OVN Kubernetes) supports using network policy. Support for NetworkPolicy objects is verified
-      using rules.
-
-      Section 1-3: By default, all pods in a project are accessible from other pods and network endpoints.
-      To isolate one or more pods in a project, you need to create NetworkPolicy objects in that project
-      to indicate the allowed incoming connections. If a pod is matched by selectors in one or more
-      NetworkPolicy objects, then the pod will accept only connections that are allowed by at least
-      one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects
-      is fully accessible.
-
-      It is useful to create default policies for each application namespace e.g. to deny all ingress
-      traffic by default. The existance of at least one network policy and the automatic creation
-      as part of a namespace template is checked using rules.
-
-      The creation of suitable NetworkPolicy objects that satisfy the requirements from sections 1 to 3,
-      however, needs to be ensured by the application owner.
-
-      Section 4: It needs to be ensured organizationally, that only required subjects are granted
-      RBAC to change the relevant Kubernetes objects.
-    status: partial
-    rules:
-      # General support of network policies
-      - configure_network_policies
-      # Section 1-2
-      - configure_network_policies_namespaces
-      - project_config_and_template_network_policy
+      TBD
+    status: pending
+    rules: []

   - id: APP.4.4.A19
     title: High Availability of Kubernetes