---

copyright:

lastupdated: "2024-10-09"

keywords: kubernetes, docker, containers

subcollection: containers

---
{{site.data.keyword.attribute-definition-list}}
# Cluster architecture
{: #service-arch}
Review sample cluster architectures and the components that are created in your classic or VPC cluster.
{: shortdesc}
## Classic cluster architecture
{: #architecture_classic}
The following architectural overviews are specific to the classic infrastructure provider. For an architectural overview for the VPC infrastructure provider, see [VPC cluster architecture](#architecture_vpc).
{: note}
### Non-VRF or VRF-enabled account with the public cloud service endpoint only
{: #no-vrf-public-endpoint}
The following image shows the components of your cluster and how they interact in a non-VRF or VRF-enabled account when only the public cloud service endpoint is enabled.
{: shortdesc}
{: caption="Cluster architecture when only the public cloud service endpoint is enabled" caption-side="bottom"}
### VRF-enabled account with the public and private cloud service endpoints
{: #vrf-both-endpoints}
The following image shows the components of your cluster and how they interact in a VRF-enabled account when the public and private cloud service endpoints are enabled.
{: shortdesc}
{: caption="Cluster architecture when public and private cloud service endpoints are enabled" caption-side="bottom"}
### Kubernetes master components
{: #master-components}
The Kubernetes master manages all compute, network, and storage resources in the cluster, and ensures that your containerized apps and services are evenly deployed to the worker nodes in the cluster. Depending on how you configure your apps and services, the master determines the worker node that has sufficient resources to fulfill the app's requirements.
{: shortdesc}
The Kubernetes master and all the master components are dedicated only to you, and are not shared with other IBM customers.
{: note}
The following list describes the components of the Kubernetes master.
`kube-apiserver`
: The Kubernetes API server serves as the main entry point for all cluster management requests from the worker node to the Kubernetes master. The API server validates and processes requests that change the state of Kubernetes resources, such as pods or services, and stores this state in etcd.
`konnectivity-server`
: The Konnectivity server works with the Konnectivity agent to securely connect the master to the worker node. This connection supports `apiserver proxy` calls to your pods and services, and `kubectl exec`, `attach`, and `logs` calls to the kubelet.
`etcd`
: `etcd` is a highly available key-value store that stores the state of all Kubernetes resources in a cluster, such as services, deployments, and pods. Data in `etcd` is backed up to an encrypted storage instance that IBM manages.
`kube-scheduler`
: The Kubernetes scheduler watches for newly created pods and decides where to deploy them based on capacity, performance needs, policy constraints, anti-affinity specifications, and workload requirements. If no worker node matches the requirements, the pod is not deployed in the cluster. For an illustration of these scheduling inputs, see the sketch after this list.
`kube-controller-manager`
: The Kubernetes controller manager is a daemon that watches the state of cluster resources, such as replica sets. When the state of a resource changes, for example when a pod in a replica set goes down, the controller manager initiates corrective action to achieve the required state.
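As a minimal sketch of the inputs that the scheduler evaluates, the following deployment declares resource requests and a soft anti-affinity rule that asks the scheduler to spread replicas across worker nodes. The names (`my-app`, the `nginx` image) are illustrative placeholders, not defaults that the service creates.

```yaml
# Sketch only: illustrates scheduler inputs; all names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Soft constraint: prefer not to place two replicas on one node.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: nginx
        resources:
          requests:
            cpu: 100m      # the scheduler places the pod only on a node
            memory: 128Mi  # with this much unreserved capacity
```

Because the anti-affinity rule is `preferred` rather than `required`, the scheduler can still co-locate replicas if no other worker node has capacity.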
### Worker node components
{: #worker-components}
Each worker node is a physical machine (bare metal) or a virtual machine that runs on physical hardware in the cloud environment. When you provision a worker node, you determine the resources that are available to the containers that are hosted on that worker node. Out of the box, your worker nodes are set up with an {{site.data.keyword.IBM_notm}}-managed container runtime, separate compute resources, networking, and a volume service. The built-in security features provide isolation, resource management capabilities, and worker node security compliance.
{: shortdesc}
The worker nodes and all the worker node components are dedicated only to you, and are not shared with other IBM customers. However, if you use a worker node virtual machine, the underlying hardware might be shared with other customers, depending on the level of hardware isolation that you choose.
{: note}
Modifying default worker node components, such as the `kubelet`, is not supported and might cause unexpected results.
{: note}
The following sections describe the components of a worker node.
#### Resources in the `kube-system` namespace
{: #kube-system-ns-resources}
`ibm-master-proxy`
: The `ibm-master-proxy` forwards requests from the worker node to the IP addresses of the highly available master replicas. In single-zone clusters, the master has three replicas on separate hosts, with one master IP address and domain name. For clusters that are in a multizone-capable zone, the master has three replicas that are spread across zones. As such, each replica has its own IP address that is registered with DNS, with one domain name for the entire cluster master.
`konnectivity-agent`
: The Konnectivity agent works with the Konnectivity server to securely connect the master to the worker node. This connection supports `apiserver proxy` calls to your pods and services, and `kubectl exec`, `attach`, and `logs` calls to the kubelet.
`kubelet`
: The kubelet is a pod that runs on every worker node and is responsible for monitoring the health of the pods that run on the worker node and for watching the events that the Kubernetes API server sends. Based on the events, the kubelet creates or removes pods, runs liveness and readiness probes, and reports the status of the pods back to the Kubernetes API server.
`coredns`
: By default, Kubernetes schedules a CoreDNS pod (or a KubeDNS pod in version 1.12 and earlier) and service on the cluster. Containers automatically use the DNS service's IP address to resolve DNS names in their searches for other pods and services.
`calico`
: Calico manages network policies for your cluster, and consists of the following components.
: `calico-cni`
: The Calico container network interface (CNI) manages the network connectivity of containers and removes allocated resources when a container is deleted.
: `calico-ipam`
: The Calico IP address management (IPAM) component manages IP address assignment for containers.
: `calico-node`
: The Calico node is a container that bundles together the various components that are required for networking containers with Calico.
: `calico-policy-controller`
: The Calico policy controller watches inbound and outbound network traffic for compliance with the network policies that are set for the cluster. Traffic that the policies do not allow is blocked. You can also use the Calico policy controller to create and set network policies for a cluster.
`kube-proxy`
: The Kubernetes network proxy is a daemon that runs on every worker node and forwards or load balances TCP and UDP network traffic for services that run in the cluster.
`kube-dashboard`
: The Kubernetes dashboard is a web-based GUI that you can use to manage and troubleshoot the cluster and the applications that run in it.
`heapster`
: Heapster is a cluster-wide aggregator of monitoring and event data. The Heapster pod discovers all nodes in the cluster and queries usage information from each node's kubelet. You can find utilization graphs in the Kubernetes dashboard.
Ingress ALB
: Ingress is a Kubernetes service that you can use to balance network traffic workloads in your cluster by forwarding public or private requests to multiple apps in your cluster. To expose your apps over the public or private network, you must create an Ingress resource to register your apps with the Ingress application load balancer (ALB). Multiple apps can then be accessed by using a single URL or IP address. For a sketch of an Ingress resource, see the example after this list.
Storage provider
: Every cluster is set up with a plug-in to provision file storage. You can choose to install other add-ons, such as block storage.
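As a minimal sketch, the following Ingress resource registers an app with the ALB under a single host name. The host, service name, and port are hypothetical placeholders, and the ingress class or annotations that your cluster expects depend on your setup.

```yaml
# Sketch only: host, service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service  # the service that fronts your app's pods
            port:
              number: 8080
```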
#### Resources in the `ibm-system` namespace
{: #ibm-system-ns-resources}
Logging and metrics
: You can use the {{site.data.keyword.la_full}} and {{site.data.keyword.mon_full}} services to expand your collection and retention capabilities when working with logs and metrics.

Load balancer
: A load balancer is a Kubernetes service that you can use to balance network traffic workloads in your cluster by forwarding public or private requests to an app, as shown in the following sketch.
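In this minimal sketch, the names and ports are placeholders; setting `type: LoadBalancer` is what causes a load balancer to be provisioned for the selected pods.

```yaml
# Sketch only: names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  selector:
    app: my-app        # forwards traffic to pods with this label
  ports:
  - protocol: TCP
    port: 80           # port that the load balancer exposes
    targetPort: 8080   # port that the app's pods listen on
```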
#### Resources in the `default` namespace
{: #default-ns-resources}
App pods and services
: In the `default` namespace, or in namespaces that you create, you can deploy apps in pods and create services to communicate with those pods, as in the following sketch.
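In this minimal sketch, the pod and service names and the `nginx` image are placeholders. Other pods can reach the service through the cluster DNS name that CoreDNS resolves, such as `my-app-service.default.svc.cluster.local`.

```yaml
# Sketch only: names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  namespace: default
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  namespace: default
spec:
  selector:
    app: my-app   # matches the pod's label
  ports:
  - port: 80
    targetPort: 80
```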
## VPC cluster architecture
{: #architecture_vpc}
The following diagram and table describe the default components that are set up in an {{site.data.keyword.containerlong_notm}} VPC cluster architecture.
{: shortdesc}
The following architectural overviews are specific to the VPC infrastructure provider. For an architectural overview for the classic infrastructure provider, see [Classic cluster architecture](#architecture_classic).
{: note}
{: caption="Kubernetes cluster in a VPC" caption-side="bottom"}
| Component | Description |
|---|---|
| Master | Master components, including the API server and etcd, have three replicas and are spread across zones for even higher availability. Masters include the same components as described in the community Kubernetes architecture. The master and all the master components are dedicated only to you, and are not shared with other IBM customers. |
| Worker node | With {{site.data.keyword.containerlong_notm}}, the virtual machines that your cluster manages are instances that are called worker nodes. These worker node virtual machines and all the worker node components are dedicated only to you, and are not shared with other IBM customers. However, the underlying hardware is shared with other IBM customers. You manage the worker nodes through the automation tools that are provided by {{site.data.keyword.containerlong_notm}}, such as the API, CLI, or console. Unlike classic clusters, you don't see VPC compute worker nodes in your infrastructure portal or a separate infrastructure bill. Instead, you manage all maintenance and billing activity for the worker nodes through {{site.data.keyword.containerlong_notm}}. Worker nodes include the same components as described in the classic cluster architecture. |
| Cluster networking | Your worker nodes are created in a VPC subnet in the zone that you specify. By default, the public and private cloud service endpoints for your cluster are enabled. Communication between the master and worker nodes is over the private network. Authenticated external users can communicate with the master over the public network, such as to run `kubectl` commands. You can optionally set up your cluster to communicate with on-premises services by setting up a VPC VPN on the private network. |
| App networking | You can create a Kubernetes `LoadBalancer` service for your apps in the cluster, which automatically provisions a VPC load balancer in your VPC outside the cluster. The load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. For more information, see Exposing apps with VPC load balancers. Calico is used as the cluster networking policy fabric. |
| Storage | You can set up only block persistent storage. Block storage is available as a cluster add-on. For more information, see Setting up IBM Block Storage for {{site.data.keyword.cloud_notm}}. For a sketch of a block storage claim, see the example after this table. |
{: caption="Kubernetes cluster in a VPC" caption-side="bottom"}