---
copyright:
  years: 2014, 2019
lastupdated: "2019-04-16"
keywords: kubernetes, iks
subcollection: containers
---

{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:note: .note}
{:important: .important}
{:deprecated: .deprecated}

# Version information and update actions

{: #cs_versions}

## Kubernetes version types

{: #version_types}

{{site.data.keyword.containerlong}} concurrently supports multiple versions of Kubernetes. When the latest version (n) is released, versions up to two behind the latest (n-2) remain supported. Versions more than two behind the latest (n-3 and earlier) are first deprecated and then unsupported. {:shortdesc}

Supported Kubernetes versions:

  • Latest: 1.13.5
  • Default: 1.12.7
  • Other: 1.11.9

Deprecated and unsupported Kubernetes versions:

  • Deprecated: 1.10
  • Unsupported: 1.5, 1.7, 1.8, 1.9
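To see which Kubernetes versions are currently available for your clusters, you can list them with the CLI. If your `ibmcloud ks` plug-in version does not include this command, check `ibmcloud ks help` for the equivalent.

ibmcloud ks versions

{: pre}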

Deprecated versions: When clusters are running on a deprecated Kubernetes version, you have a minimum of 30 days to review and update to a supported Kubernetes version before the version becomes unsupported. During the deprecation period, your cluster is still functional, but might require updates to a supported release to fix security vulnerabilities. For example, you can add and reload worker nodes, but you cannot create new clusters that use the deprecated version.

Unsupported versions: If your clusters run a Kubernetes version that is not supported, review the following potential update impacts and then immediately update the cluster to continue receiving important security updates and support. Unsupported clusters cannot add or reload existing worker nodes. You can find out whether your cluster is unsupported by reviewing the **State** field in the output of the `ibmcloud ks clusters` command or in the {{site.data.keyword.containerlong_notm}} console.

If you wait until your cluster is three or more minor versions behind a supported version, you must force the update, which might cause unexpected results or failure. Updating fails from version 1.7 or 1.8 to version 1.11 or later. For other versions, such as a cluster that runs Kubernetes version 1.9, if you update the master directly to 1.12 or later, most pods fail by entering a state such as MatchNodeSelector, CrashLoopBackOff, or ContainerCreating until you update the worker nodes to the same version. To avoid this issue, update the cluster to a supported version that is less than three minor versions ahead of the current version, such as from 1.9 to 1.11, and then update to 1.12.

After you update the cluster to a supported version, your cluster can resume normal operations and continue receiving support. {: important}
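To find clusters that run a deprecated or unsupported version, list your clusters and review the **Version** and **State** columns, or get the details for a single cluster with the commands that are referenced earlier in this topic.

ibmcloud ks clusters
ibmcloud ks cluster-get --cluster <cluster_name_or_ID>

{: pre}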


To check the server version of a cluster, run the following command.

kubectl version --short | grep -i server

{: pre}

Example output:

Server Version: v1.12.7+IKS

{: screen}

## Update types

{: #update_types}

Your Kubernetes cluster has three types of updates: major, minor, and patch. {:shortdesc}

| Update type | Examples of version labels | Updated by | Impact |
|---|---|---|---|
| Major | 1.x.x | You | Operation changes for clusters, including scripts or deployments. |
| Minor | x.9.x | You | Operation changes for clusters, including scripts or deployments. |
| Patch | x.x.4_1510 | IBM and you | Kubernetes patches, as well as other {{site.data.keyword.Bluemix_notm}} Provider component updates such as security and operating system patches. IBM updates masters automatically, but you apply patches to worker nodes. See more about patches in the following section. |
{: caption="Impacts of Kubernetes updates" caption-side="top"}

As updates become available, you are notified when you view information about the worker nodes, such as with the `ibmcloud ks workers --cluster <cluster>` or `ibmcloud ks worker-get --cluster <cluster> --worker <worker>` commands.

  • Major and minor updates (1.x): First, update your master node and then update the worker nodes. Worker nodes cannot run a Kubernetes major or minor version that is greater than the masters.
    • By default, you cannot update a Kubernetes master three or more minor versions ahead. For example, if your current master is version 1.9 and you want to update to 1.12, you must update to 1.10 first. You can force the update to continue, but updating more than two minor versions might cause unexpected results or failure.
    • If you use a `kubectl` CLI version that does not match at least the major.minor version of your clusters, you might experience unexpected results. Make sure to keep your Kubernetes cluster and CLI versions up-to-date.
  • Patch updates (x.x.4_1510): Changes across patches are documented in the Version changelog. Master patches are applied automatically, but you initiate worker node patch updates. Worker nodes can also run patch versions that are greater than the master version. As updates become available, you are notified when you view information about the master and worker nodes in the {{site.data.keyword.Bluemix_notm}} console or CLI, such as with the following commands: `ibmcloud ks clusters`, `cluster-get`, `workers`, or `worker-get`.
    • Worker node patches: Check monthly to see whether an update is available, and use the `ibmcloud ks worker-update` command or the `ibmcloud ks worker-reload` command to apply these security and operating system patches, as shown in the example after this list. Note that during an update or reload, your worker node machine is reimaged, and data that is not stored outside the worker node is deleted.
    • Master patches: Master patches are applied automatically over the course of several days, so a master patch version might show up as available before it is applied to your master. The update automation also skips clusters that are in an unhealthy state or have operations currently in progress. Occasionally, IBM might disable automatic updates for a specific master fix pack, as noted in the changelog, such as a patch that is needed only if a master is updated from one minor version to another. In any of these cases, you can safely run the `ibmcloud ks cluster-update` command yourself without waiting for the update automation to apply the patch.
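The following commands show one way to check for and apply a worker node patch update. This is a sketch only: confirm the exact flags with `ibmcloud ks help` for your CLI plug-in version, and remember that an update or reload reimages the worker node.

# List worker nodes and compare the Version column with the latest patch.
ibmcloud ks workers --cluster <cluster_name_or_ID>

# Apply the update to one or more worker nodes.
ibmcloud ks worker-update --cluster <cluster_name_or_ID> --workers <worker_ID>

{: pre}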

{: #prep-up} This information summarizes updates that are likely to have an impact on deployed apps when you update a cluster to a new version from the previous version.


For a complete list of changes between versions, review the Version changelog.


## Release history

{: #release-history}

The following table records {{site.data.keyword.containerlong_notm}} version release history. You can use this information for planning purposes, such as to estimate general time frames when a certain release might become unsupported. After the Kubernetes community releases a version update, the IBM team begins a process of hardening and testing the release for {{site.data.keyword.containerlong_notm}} environments. Availability and unsupported release dates depend on the results of these tests, community updates, security patches, and technology changes between versions. Plan to keep your cluster master and worker node version up to date according to the n-2 version support policy. {: shortdesc}

{{site.data.keyword.containerlong_notm}} was first generally available with Kubernetes version 1.5. Projected release or unsupported dates are subject to change. To go to the version update preparation steps, click the version number.

Dates that are marked with a dagger (`†`) are tentative and subject to change. {: important}

| Supported? | Version | {{site.data.keyword.containerlong_notm}} release date | {{site.data.keyword.containerlong_notm}} unsupported date |
|---|---|---|---|
| This version is supported. | [1.13](#cs_v113) | 05 Feb 2019 | Dec 2019 `†` |
| This version is supported. | [1.12](#cs_v112) | 07 Nov 2018 | Sep 2019 `†` |
| This version is supported. | [1.11](#cs_v111) | 14 Aug 2018 | Jun 2019 `†` |
| This version is deprecated. | [1.10](#cs_v110) | 01 May 2018 | 15 May 2019 |
| This version is unsupported. | [1.9](#cs_v19) | 08 Feb 2018 | 27 Dec 2018 |
| This version is unsupported. | [1.8](#cs_v18) | 08 Nov 2017 | 22 Sep 2018 |
| This version is unsupported. | [1.7](#cs_v17) | 19 Sep 2017 | 21 Jun 2018 |
| This version is unsupported. | 1.6 | N/A | N/A |
| This version is unsupported. | [1.5](#cs_v1-5) | 23 May 2017 | 04 Apr 2018 |
{: caption="Release history for {{site.data.keyword.containerlong_notm}}" caption-side="top"}

## Version 1.13

{: #cs_v113}

This badge indicates Kubernetes version 1.13 certification for IBM Cloud Container Service. {{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.13 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._

Review changes that you might need to make when you update from the previous Kubernetes version to 1.13. {: shortdesc}

### Update before master

{: #113_before}

The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}

Changes to make before you update the master to Kubernetes 1.13
Type Description
N/A

### Update after master

{: #113_after}

The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}

Changes to make after you update the master to Kubernetes 1.13
Type Description
CoreDNS available as the new default cluster DNS provider CoreDNS is now the default cluster DNS provider for new clusters in Kubernetes 1.13 and later. If you update an existing cluster to 1.13 that uses KubeDNS as the cluster DNS provider, KubeDNS continues to be the cluster DNS provider. However, you can choose to [use CoreDNS instead](/docs/containers?topic=containers-cluster_dns#dns_set).

CoreDNS supports [cluster DNS specification ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/kubernetes/dns/blob/master/docs/specification.md#25---records-for-external-name-services) to enter a domain name as the Kubernetes service `ExternalName` field. The previous cluster DNS provider, KubeDNS, does not follow the cluster DNS specification, and as such, allows IP addresses for `ExternalName`. If any Kubernetes services are using IP addresses instead of DNS, you must update the `ExternalName` to DNS for continued functionality.
`kubectl` output for `Deployment` and `StatefulSet` The `kubectl` output for `Deployment` and `StatefulSet` now includes a `Ready` column and is more human-readable. If your scripts rely on the previous behavior, update them.
`kubectl` output for `PriorityClass` The `kubectl` output for `PriorityClass` now includes a `Value` column. If your scripts rely on the previous behavior, update them.
`kubectl get componentstatuses` The `kubectl get componentstatuses` command does not properly report the health of some Kubernetes master components because these components are no longer accessible from the Kubernetes API server now that `localhost` and insecure (HTTP) ports are disabled. With the introduction of highly available (HA) masters in Kubernetes version 1.10, each Kubernetes master is set up with multiple `apiserver`, `controller-manager`, `scheduler`, and `etcd` instances. Instead, review the cluster health by checking the [{{site.data.keyword.Bluemix_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/kubernetes/landing) or by using the `ibmcloud ks cluster-get` [command](/docs/containers?topic=containers-cs_cli_reference#cs_cluster_get).
Unsupported: `kubectl run-container` The `kubectl run-container` command is removed. Instead, use the `kubectl run` command.
`kubectl rollout undo` When you run `kubectl rollout undo` for a revision that does not exist, an error is returned. If your scripts rely on the previous behavior, update them.
Deprecated: `scheduler.alpha.kubernetes.io/critical-pod` annotation The `scheduler.alpha.kubernetes.io/critical-pod` annotation is now deprecated. Change any pods that rely on this annotation to use [pod priority](/docs/containers?topic=containers-pod_priority#pod_priority) instead.
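For example, if a Kubernetes service previously set `ExternalName` to an IP address under KubeDNS, point it at a resolvable DNS name instead so that it keeps working with CoreDNS. The names in this sketch are hypothetical.

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  # Use a DNS name; CoreDNS does not accept an IP address here.
  externalName: db.example.com

{: codeblock}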

### Update after worker nodes

{: #113_after_workers}

The following table shows the actions that you must take after you update your worker nodes. {: shortdesc}

Changes to make after you update your worker nodes to Kubernetes 1.13
Type Description
containerd `cri` stream server In containerd version 1.2, the `cri` plug-in stream server now serves on a random port, `http://localhost:0`. This change supports the `kubelet` streaming proxy and provides a more secure streaming interface for container `exec` and `logs` operations. Previously, the `cri` stream server listened on the worker node's private network interface by using port 10010. If your apps use the containerd `cri` plug-in stream server and rely on the previous behavior, update them.

## Version 1.12

{: #cs_v112}

This badge indicates Kubernetes version 1.12 certification for IBM Cloud Container Service. {{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.12 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._

Review changes that you might need to make when you update from the previous Kubernetes version to 1.12. {: shortdesc}

### Update before master

{: #112_before}

The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}

Changes to make before you update the master to Kubernetes 1.12
Type Description
Kubernetes Metrics Server If you currently have the Kubernetes `metric-server` deployed in your cluster, you must remove the `metric-server` before you update the cluster to Kubernetes 1.12. This removal prevents conflicts with the `metric-server` that is deployed during the update.
Role bindings for `kube-system` `default` service account The `kube-system` `default` service account no longer has **cluster-admin** access to the Kubernetes API. If you deploy features or add-ons such as [Helm](/docs/containers?topic=containers-helm#public_helm_install) that require access to processes in your cluster, set up a [service account ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/). If you need time to create and set up individual service accounts with the appropriate permissions, you can temporarily grant the **cluster-admin** role with the following cluster role binding: `kubectl create clusterrolebinding kube-system:default --clusterrole=cluster-admin --serviceaccount=kube-system:default`
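If you prefer not to grant **cluster-admin** to the `kube-system` `default` service account, you can create a dedicated service account for a tool and bind the role to it instead. The `tiller` name here is only an example for a Helm installation; adjust it for your add-on.

kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

{: pre}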

### Update after master

{: #112_after}

The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}

Changes to make after you update the master to Kubernetes 1.12
Type Description
APIs for Kubernetes The Kubernetes API replaces deprecated APIs as follows:
  • apps/v1: The `apps/v1` Kubernetes API replaces the `apps/v1beta1` and `apps/v1alpha` APIs. The `apps/v1` API also replaces the `extensions/v1beta1` API for `daemonset`, `deployment`, `replicaset`, and `statefulset` resources. The Kubernetes project is deprecating and phasing out support for the previous APIs from the Kubernetes `apiserver` and the `kubectl` client.
  • networking.k8s.io/v1: The `networking.k8s.io/v1` API replaces the `extensions/v1beta1` API for NetworkPolicy resources.
  • policy/v1beta1: The `policy/v1beta1` API replaces the `extensions/v1beta1` API for `podsecuritypolicy` resources.


Update all your YAML `apiVersion` fields to use the appropriate Kubernetes API before the deprecated APIs become unsupported. Also, review the [Kubernetes docs ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) for changes related to `apps/v1`, such as the following.
  • After creating a deployment, the `.spec.selector` field is immutable.
  • The `.spec.rollbackTo` field is deprecated. Instead, use the `kubectl rollout undo` command.
CoreDNS available as cluster DNS provider The Kubernetes project is in the process of transitioning to support CoreDNS instead of the current Kubernetes DNS (KubeDNS). In version 1.12, the default cluster DNS remains KubeDNS, but you can [choose to use CoreDNS](/docs/containers?topic=containers-cluster_dns#dns_set).
`kubectl apply --force` Now, when you force an apply action (`kubectl apply --force`) on resources that cannot be updated, such as immutable fields in YAML files, the resources are recreated instead. If your scripts rely on the previous behavior, update them.
`kubectl get componentstatuses` The `kubectl get componentstatuses` command does not properly report the health of some Kubernetes master components because these components are no longer accessible from the Kubernetes API server now that `localhost` and insecure (HTTP) ports are disabled. With the introduction of highly available (HA) masters in Kubernetes version 1.10, each Kubernetes master is set up with multiple `apiserver`, `controller-manager`, `scheduler`, and `etcd` instances. Instead, review the cluster health by checking the [{{site.data.keyword.Bluemix_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/kubernetes/landing) or by using the `ibmcloud ks cluster-get` [command](/docs/containers?topic=containers-cs_cli_reference#cs_cluster_get).
`kubectl logs --interactive` The `--interactive` flag is no longer supported for `kubectl logs`. Update any automation that uses this flag.
`kubectl patch` If the `patch` command results in no changes (a redundant patch), the command no longer exits with a `1` return code. If your scripts rely on the previous behavior, update them.
`kubectl version -c` The `-c` shorthand flag is no longer supported. Instead, use the full `--client` flag. Update any automation that uses this flag.
`kubectl wait` If no matching selectors are found, the command now prints an error message and exits with a `1` return code. If your scripts rely on the previous behavior, update them.
kubelet cAdvisor port The [Container Advisor (cAdvisor) ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/google/cadvisor) web UI that the kubelet used by starting the `--cadvisor-port` is removed from Kubernetes 1.12. If you still need to run cAdvisor, [deploy cAdvisor as a daemon set ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/google/cadvisor/tree/master/deploy/kubernetes).

In the daemon set, specify the ports section so that cAdvisor can be reached via `http://node-ip:4194`, as in the following example. Note that the cAdvisor pods fail until the worker nodes are updated to 1.12, because earlier versions of the kubelet use host port 4194 for cAdvisor.
ports:
- name: http
  containerPort: 8080
  hostPort: 4194
  protocol: TCP
Kubernetes dashboard If you access the dashboard via `kubectl proxy`, the **SKIP** button on the login page is removed. Instead, [use a **Token** to log in](/docs/containers?topic=containers-app#cli_dashboard).
Kubernetes Metrics Server Kubernetes Metrics Server replaces Kubernetes Heapster (deprecated since Kubernetes version 1.8) as the cluster metrics provider. If you run more than 30 pods per worker node in your cluster, [adjust the `metrics-server` configuration for performance](/docs/containers?topic=containers-kernel#metrics).

The Kubernetes dashboard does not work with the `metrics-server`. If you want to display metrics in a dashboard, choose from the following options.

`rbac.authorization.k8s.io/v1` Kubernetes API The `rbac.authorization.k8s.io/v1` Kubernetes API (supported since Kubernetes 1.8) is replacing the `rbac.authorization.k8s.io/v1alpha1` and `rbac.authorization.k8s.io/v1beta1` API. You can no longer create RBAC objects such as roles or role bindings with the unsupported `v1alpha` API. Existing RBAC objects are converted to the `v1` API.
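As a sketch of the `apps/v1` changes, the following hypothetical Deployment uses the `apps/v1` API and sets the `spec.selector` field, which is required and becomes immutable after the Deployment is created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    # Required in apps/v1 and immutable after creation.
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <registry>/<namespace>/my-app:1.0

{: codeblock}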

## Version 1.11

{: #cs_v111}

This badge indicates Kubernetes version 1.11 certification for IBM Cloud Container Service. {{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.11 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._

Review changes that you might need to make when you update from the previous Kubernetes version to 1.11. {: shortdesc}

Before you can successfully update a cluster from Kubernetes version 1.9 or earlier to version 1.11, you must follow the steps listed in Preparing to update to Calico v3. {: important}

### Update before master

{: #111_before}

The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}

Changes to make before you update the master to Kubernetes 1.11
Type Description
Cluster master high availability (HA) configuration The cluster master configuration is updated to increase high availability (HA). Clusters now have three Kubernetes master replicas, with each master deployed on a separate physical host. Further, if your cluster is in a multizone-capable zone, the masters are spread across zones.

For actions that you must take, see [Updating to highly available cluster masters](#ha-masters). These preparation actions apply:
  • If you have a firewall or custom Calico network policies.
  • If you are using host ports `2040` or `2041` on your worker nodes.
  • If you used the cluster master IP address for in-cluster access to the master.
  • If you have automation that calls the Calico API or CLI (`calicoctl`), such as to create Calico policies.
  • If you use Kubernetes or Calico network policies to control pod egress access to the master.
`containerd` new Kubernetes container runtime

`containerd` replaces Docker as the new container runtime for Kubernetes. For actions that you must take, see [Updating to `containerd` as the container runtime](#containerd).

Encrypting data in etcd Previously, etcd data was stored on a master’s NFS file storage instance that was encrypted at rest. Now, etcd data is stored on the master’s local disk and backed up to {{site.data.keyword.cos_full_notm}}. Data is encrypted during transit to {{site.data.keyword.cos_full_notm}} and at rest. However, the etcd data on the master’s local disk is not encrypted. If you want your master’s local etcd data to be encrypted, [enable {{site.data.keyword.keymanagementservicelong_notm}} in your cluster](/docs/containers?topic=containers-encryption#keyprotect).
Kubernetes container volume mount propagation The default value for the [`mountPropagation` field ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation) for a container `VolumeMount` changed from `HostToContainer` to `None`. This change restores the behavior that existed in Kubernetes version 1.9 and earlier. If your pod specs rely on `HostToContainer` being the default, update them.
Kubernetes API server JSON deserializer The Kubernetes API server JSON deserializer is now case-sensitive. This change restores the behavior that existed in Kubernetes version 1.7 and earlier. If your JSON resource definitions use the incorrect case, update them.

Only direct Kubernetes API server requests are impacted. The `kubectl` CLI continued to enforce case-sensitive keys in Kubernetes version 1.7 and later, so if you strictly manage your resources with `kubectl`, you are not impacted.
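If a pod relied on the previous `HostToContainer` default for volume mount propagation, set the field explicitly in the container spec. A minimal sketch with hypothetical names:

containers:
- name: my-app
  image: <registry>/<namespace>/my-app:1.0
  volumeMounts:
  - name: host-mnt
    mountPath: /host/mnt
    # Set explicitly; the default is now None.
    mountPropagation: HostToContainer

{: codeblock}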

### Update after master

{: #111_after}

The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}

Changes to make after you update the master to Kubernetes 1.11
Type Description
Cluster logging configuration The `fluentd` cluster add-on is automatically updated with version 1.11, even when `logging-autoupdate` is disabled.

The container log directory changed from `/var/lib/docker/` to `/var/log/pods/`. If you use your own logging solution that monitors the previous directory, update accordingly.
{{site.data.keyword.Bluemix_notm}} Identity and Access Management (IAM) support Clusters that run Kubernetes version 1.11 or later support IAM [access groups](/docs/iam?topic=iam-groups#groups) and [service IDs](/docs/iam?topic=iam-serviceids#serviceids). You can now use these features to [authorize access to your cluster](/docs/containers?topic=containers-users#users).
Refresh Kubernetes configuration The OpenID Connect configuration for the cluster's Kubernetes API server is updated to support {{site.data.keyword.Bluemix_notm}} Identity and Access Management (IAM) access groups. As a result, you must refresh your cluster's Kubernetes configuration after the master Kubernetes v1.11 update by running `ibmcloud ks cluster-config --cluster <cluster_name_or_ID>`. With this command, the configuration is applied to role bindings in the `default` namespace.

If you do not refresh the configuration, cluster actions fail with the following error message: `You must be logged in to the server (Unauthorized).`
Kubernetes dashboard If you access the dashboard via `kubectl proxy`, the **SKIP** button on the login page is removed. Instead, [use a **Token** to log in](/docs/containers?topic=containers-app#cli_dashboard).
`kubectl` CLI The `kubectl` CLI for Kubernetes version 1.11 requires the `apps/v1` APIs. As a result, the v1.11 `kubectl` CLI does not work for clusters that run Kubernetes version 1.8 or earlier. Use the version of the `kubectl` CLI that matches the Kubernetes API server version of your cluster.
`kubectl auth can-i` Now, when a user is not authorized, the `kubectl auth can-i` command fails with `exit code 1`. If your scripts rely on the previous behavior, update them.
`kubectl delete` Now, when deleting resources by using selection criteria such as labels, the `kubectl delete` command ignores `not found` errors by default. If your scripts rely on the previous behavior, update them.
Kubernetes `sysctls` feature The `security.alpha.kubernetes.io/sysctls` annotation is now ignored. Instead, Kubernetes added fields to the `PodSecurityPolicy` and `Pod` objects for specifying and controlling `sysctls`. For more information, see [Using sysctls in Kubernetes ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/).

After you update the cluster master and workers, update your `PodSecurityPolicy` and `Pod` objects to use the new `sysctls` fields.
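For example, instead of the deprecated `security.alpha.kubernetes.io/sysctls` annotation, a pod can request safe `sysctls` in its security context. This is a minimal sketch with hypothetical names and values.

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    # Safe sysctls can be set directly; unsafe sysctls must also be allowed by the pod security policy.
    - name: net.ipv4.tcp_syncookies
      value: "1"
  containers:
  - name: app
    image: <registry>/<namespace>/my-app:1.0

{: codeblock}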

### Updating to highly available cluster masters in Kubernetes 1.11

{: #ha-masters}

For clusters that run Kubernetes version 1.10.8_1530, 1.11.3_1531, or later, the cluster master configuration is updated to increase high availability (HA). Clusters now have three Kubernetes master replicas, with each master deployed on a separate physical host. Further, if your cluster is in a multizone-capable zone, the masters are spread across zones. {: shortdesc}

You can check whether your cluster has an HA master configuration by checking the cluster's master URL in the console or by running `ibmcloud ks cluster-get --cluster <cluster_name_or_ID>`. If the master URL has a host name such as `https://c2.us-south.containers.cloud.ibm.com:xxxxx` and not an IP address such as `https://169.xx.xx.xx:xxxxx`, the cluster has an HA master configuration. You might get an HA master configuration because of an automatic master patch update or by applying an update manually. In either case, you still must review the following items to ensure that your cluster network is set up to take full advantage of the configuration.

  • If you have a firewall or custom Calico network policies.
  • If you are using host ports 2040 or 2041 on your worker nodes.
  • If you used the cluster master IP address for in-cluster access to the master.
  • If you have automation that calls the Calico API or CLI (calicoctl), such as to create Calico policies.
  • If you use Kubernetes or Calico network policies to control pod egress access to the master.

**Updating your firewall or custom Calico host network policies for HA masters**:
{: #ha-firewall}

If you use a firewall or custom Calico host network policies to control egress from your worker nodes, allow outgoing traffic to the ports and IP addresses for all the zones within the region that your cluster is in. See [Allowing the cluster to access infrastructure resources and other services](/docs/containers?topic=containers-firewall#firewall_outbound).

**Reserving host ports `2040` and `2041` on your worker nodes**:
{: #ha-ports}

To allow access to the cluster master in an HA configuration, you must leave host ports `2040` and `2041` available on all worker nodes.
  • Update any pods with `hostPort` set to `2040` or `2041` to use different ports.
  • Update any pods with `hostNetwork` set to `true` that listen on ports `2040` or `2041` to use different ports.

To check if your pods are currently using ports 2040 or 2041, target your cluster and run the following command.

kubectl get pods --all-namespaces -o yaml | grep -B 3 "hostPort: 204[0,1]"

{: pre}

If you already have an HA master configuration, you see results for ibm-master-proxy-* in the kube-system namespace, such as in the following example. If other pods are returned, update their ports.

name: ibm-master-proxy-static
ports:
- containerPort: 2040
  hostPort: 2040
  name: apiserver
  protocol: TCP
- containerPort: 2041
  hostPort: 2041
...

{: screen}


**Using `kubernetes` service cluster IP or domain for in-cluster access to the master**:
{: #ha-incluster}

To access the cluster master in an HA configuration from within the cluster, use one of the following:
  • The `kubernetes` service cluster IP address, which by default is `https://172.21.0.1`.
  • The `kubernetes` service domain name, which by default is `https://kubernetes.default.svc.cluster.local`.

If you previously used the cluster master IP address, this method continues to work. However, for improved availability, update to use the kubernetes service cluster IP address or domain name.


**Configuring Calico for out-of-cluster access to master with HA configuration**:
{: #ha-outofcluster}

The data that is stored in the `calico-config` configmap in the `kube-system` namespace is changed to support HA master configuration. In particular, the `etcd_endpoints` value now supports in-cluster access only. Using this value to configure Calico CLI for access from outside the cluster no longer works.

Instead, use the data that is stored in the `cluster-info` configmap in the `kube-system` namespace. In particular, use the `etcd_host` and `etcd_port` values to configure the endpoint for the Calico CLI to access the master with HA configuration from outside the cluster.
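To look up these values, you can read the configmap directly. A minimal check, using only the configmap and key names that are described above:

kubectl get configmap cluster-info -n kube-system -o yaml | grep etcd_

{: pre}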


**Updating Kubernetes or Calico network policies**:
{: #ha-networkpolicies}

You need to take additional actions if you use [Kubernetes or Calico network policies](/docs/containers?topic=containers-network_policies#network_policies) to control pod egress access to the cluster master and you are currently using:
  • The Kubernetes service cluster IP, which you can get by running `kubectl get service kubernetes -o yaml | grep clusterIP`.
  • The Kubernetes service domain name, which by default is `https://kubernetes.default.svc.cluster.local`.
  • The cluster master IP, which you can get by running `kubectl cluster-info | grep Kubernetes`.

The following steps describe how to update your Kubernetes network policies. To update Calico network policies, repeat these steps with some minor policy syntax changes and use `calicoctl` to search policies for impacts. {: note}

Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

  1. Get your cluster master IP address.

    kubectl cluster-info | grep Kubernetes
    

    {: pre}

  2. Search your Kubernetes network policies for impacts. If no YAML is returned, your cluster is not impacted and you do not need to make additional changes.

    kubectl get networkpolicies --all-namespaces -o yaml | grep <cluster-master-ip>
    

    {: pre}

  3. Review the YAML. For example, if your cluster uses the following Kubernetes network policy to allow pods in the default namespace to access the cluster master via the kubernetes service cluster IP or the cluster master IP, then you must update the policy.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: all-master-egress
      namespace: default
    spec:
      egress:
      # Allow access to cluster master using kubernetes service cluster IP address
      # or domain name or cluster master IP address.
      - ports:
        - protocol: TCP
        to:
        - ipBlock:
            cidr: 161.202.126.210/32
      # Allow access to Kubernetes DNS in order to resolve the kubernetes service
      # domain name.
      - ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
      podSelector: {}
      policyTypes:
      - Egress
    

    {: screen}

  4. Revise the Kubernetes network policy to allow egress to the in-cluster master proxy IP address 172.20.0.1. For now, keep the cluster master IP address. For example, the previous network policy example changes to the following.

    If you previously set up your egress policies to open up only the single IP address and port for the single Kubernetes master, now use the in-cluster master proxy IP address range 172.20.0.1/32 and port 2040. {: tip}

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: all-master-egress
      namespace: default
    spec:
      egress:
      # Allow access to cluster master using kubernetes service cluster IP address
      # or domain name.
      - ports:
        - protocol: TCP
        to:
        - ipBlock:
            cidr: 172.20.0.1/32
        - ipBlock:
            cidr: 161.202.126.210/32
      # Allow access to Kubernetes DNS in order to resolve the kubernetes service domain name.
      - ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
      podSelector: {}
      policyTypes:
      - Egress
    

    {: screen}

  5. Apply the revised network policy to your cluster.

    kubectl apply -f all-master-egress.yaml
    

    {: pre}

  6. After you complete all the preparation actions (including these steps), update your cluster master to the HA master fix pack.

  7. After the update is complete, remove the cluster master IP address from the network policy. For example, from the previous network policy, remove the following lines, and then reapply the policy.

    - ipBlock:
        cidr: 161.202.126.210/32
    

    {: screen}

    kubectl apply -f all-master-egress.yaml
    

    {: pre}

### Updating to containerd as the container runtime

{: #containerd}

For clusters that run Kubernetes version 1.11 or later, containerd replaces Docker as the new container runtime for Kubernetes to enhance performance. If your pods rely on Docker as the Kubernetes container runtime, you must update them to handle containerd as the container runtime. For more information, see the Kubernetes containerd announcement. {: shortdesc}

**How do I know if my apps rely on Docker instead of containerd?**
Examples of times that you might rely on Docker as the container runtime:

  • If you access the Docker engine or API directly by using privileged containers, update your pods to support containerd as the runtime. For example, you might call the Docker socket directly to launch containers or perform other Docker operations. The Docker socket changed from `/var/run/docker.sock` to `/run/containerd/containerd.sock`. The protocol that the containerd socket uses is slightly different from the Docker protocol. Try to update your app to use the containerd socket. If you want to continue using the Docker socket, look into using Docker-inside-Docker (DinD).
  • Some third-party add-ons, such as logging and monitoring tools, that you install in your cluster might rely on the Docker engine. Check with your provider to make sure the tools are compatible with containerd. Possible use cases include:
    • Your logging tool might use the container stderr/stdout directory `/var/log/pods/<pod_uuid>/<container_name>/*.log` to access logs. In Docker, this directory is a symlink to `/var/data/cripersistentstorage/containers/<container_uuid>/<container_uuid>-json.log`, whereas in containerd you access the directory directly without a symlink.
    • Your monitoring tool might access the Docker socket directly. The Docker socket changed from `/var/run/docker.sock` to `/run/containerd/containerd.sock`.
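After your worker nodes are updated to a containerd-based version, you can confirm which runtime each node reports by checking the `CONTAINER-RUNTIME` column in the wide node output.

kubectl get nodes -o wide

{: pre}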

**Besides reliance on the runtime, do I need to take other preparation actions?**

Manifest tool: If you have multi-platform images that are built with the experimental docker manifest tool before Docker version 18.06, you cannot pull the image from DockerHub by using containerd.

When you check the pod events, you might see an error such as the following.

failed size validation

{: screen}

To use an image that is built by using the manifest tool with containerd, choose from the following options.

  • Rebuild the image with the manifest tool.
  • Rebuild the image with the `docker manifest` tool after you update to Docker version 18.06 or later.

**What is not affected? Do I need to change how I deploy my containers?**
In general, your container deployment processes do not change. You can still use a Dockerfile to define a Docker image and build a Docker container for your apps. If you use `docker` commands to build and push images to a registry, you can continue to use `docker` or use `ibmcloud cr` commands instead.
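For example, a typical build-and-push flow is unchanged. The registry and image names here are placeholders, and `ibmcloud cr build` is one alternative if your {{site.data.keyword.Bluemix_notm}} Container Registry plug-in provides it.

# Build and push with Docker, or build remotely with the `ibmcloud cr build` command.
docker build -t <registry>/<namespace>/my-app:1.0 .
docker push <registry>/<namespace>/my-app:1.0

{: pre}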

### Preparing to update to Calico v3

{: #111_calicov3}

If you are updating a cluster from Kubernetes version 1.9 or earlier to version 1.11, prepare for the Calico v3 update before you update the master. During the master upgrade to Kubernetes v1.11, new pods are not scheduled and new Kubernetes or Calico network policies cannot be applied. The amount of time that the update prevents new scheduling varies. Small clusters can take a few minutes, with a few extra minutes for every 10 nodes. Existing network policies and pods continue to run. {: shortdesc}

If you are updating a cluster from Kubernetes version 1.10 to version 1.11, skip these steps because you completed these steps when you updated to 1.10. {: note}

Before you begin, your cluster master and all worker nodes must be running Kubernetes version 1.8 or 1.9, and must have at least one worker node.

  1. Verify that your Calico pods are healthy.

    kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
    

    {: pre}

  2. If any pod is not in a Running state, delete the pod and wait until it is in a Running state before you continue. If the pod does not return to a Running state:

    1. Check the State and Status of the worker node.
      ibmcloud ks workers --cluster <cluster_name_or_ID>
      
      {: pre}
    2. If the worker node state is not Normal, follow the Debugging worker nodes steps. For example, a Critical or Unknown state is often resolved by reloading the worker node.
  3. If you auto-generate Calico policies or other Calico resources, update your automation tooling to generate these resources with Calico v3 syntax.

  4. If you use strongSwan for VPN connectivity, the strongSwan 2.0.0 Helm chart does not work with Calico v3 or Kubernetes 1.11. Update strongSwan to the 2.1.0 Helm chart, which is backward compatible with Calico v2.6 and Kubernetes versions 1.7, 1.8, and 1.9. To check which chart version you currently run, see the example after these steps.

  5. Update your cluster master to Kubernetes v1.11.
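To check which strongSwan chart release and version is currently installed (for step 4), you can list your Helm releases. This assumes Helm v2 syntax and a release name that contains `strongswan`; adjust the filter if you named the release differently.

helm ls | grep strongswan

{: pre}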


## Deprecated: Version 1.10

{: #cs_v110}

This badge indicates Kubernetes version 1.10 certification for IBM Cloud Container Service. {{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.10 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._

Review changes that you might need to make when you update from the previous Kubernetes version to 1.10. {: shortdesc}

Kubernetes version 1.10 is deprecated and becomes unsupported on 15 May 2019. Review the potential impact of each Kubernetes version update, and then update your clusters immediately to at least 1.11. {: deprecated}

Before you can successfully update to Kubernetes 1.10, you must follow the steps listed in Preparing to update to Calico v3. {: important}


### Update before master

{: #110_before}

The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}

Changes to make before you update the master to Kubernetes 1.10
Type Description
Calico v3 Updating to Kubernetes version 1.10 also updates Calico from v2.6.5 to v3.1.1. Important: Before you can successfully update to Kubernetes v1.10, you must follow the steps listed in [Preparing to update to Calico v3](#110_calicov3).
Cluster master high availability (HA) configuration The cluster master configuration is updated to increase high availability (HA). Clusters now have three Kubernetes master replicas, with each master deployed on a separate physical host. Further, if your cluster is in a multizone-capable zone, the masters are spread across zones.

For actions that you must take, see [Updating to highly available cluster masters](#110_ha-masters). These preparation actions apply:
  • If you have a firewall or custom Calico network policies.
  • If you are using host ports `2040` or `2041` on your worker nodes.
  • If you used the cluster master IP address for in-cluster access to the master.
  • If you have automation that calls the Calico API or CLI (`calicoctl`), such as to create Calico policies.
  • If you use Kubernetes or Calico network policies to control pod egress access to the master.
Kubernetes Dashboard network policy In Kubernetes 1.10, the `kubernetes-dashboard` network policy in the `kube-system` namespace blocks all pods from accessing the Kubernetes dashboard. However, this policy does not impact the ability to access the dashboard from the {{site.data.keyword.Bluemix_notm}} console or by using `kubectl proxy`. If a pod requires access to the dashboard, you can add a `kubernetes-dashboard-policy: allow` label to a namespace and then deploy the pod to the namespace.
Kubelet API access Kubelet API authorization is now delegated to the Kubernetes API server. Access to the Kubelet API is based on ClusterRoles that grant permission to access node subresources. By default, Kubernetes Heapster has ClusterRole and ClusterRoleBinding. However, if the Kubelet API is used by other users or apps, you must grant them permission to use the API. Refer to the Kubernetes documentation on [Kubelet authorization![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/).
Cipher suites The supported cipher suites to the Kubernetes API server and Kubelet API are now restricted to a subset with high strength encryption (128 bits or more). If you have existing automation or resources that use weaker ciphers and rely on communicating with the Kubernetes API server or Kubelet API, enable stronger cipher support before you update the master.
strongSwan VPN If you use [strongSwan](/docs/containers?topic=containers-vpn#vpn-setup) for VPN connectivity, you must remove the chart before you update the cluster by running `helm delete --purge <release_name>`. After the cluster update is complete, reinstall the strongSwan Helm chart.
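For example, to let pods in a particular namespace reach the Kubernetes dashboard as described in the Kubernetes Dashboard network policy row, you can label the namespace. The namespace name here is hypothetical.

kubectl label namespace my-namespace kubernetes-dashboard-policy=allow

{: pre}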

### Update after master

{: #110_after}

The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}

Changes to make after you update the master to Kubernetes 1.10
Type Description
Calico v3 When the cluster is updated, all existing Calico data that is applied to the cluster is automatically migrated to use Calico v3 syntax. To view, add, or modify Calico resources with Calico v3 syntax, update your [Calico CLI configuration to version 3.1.1](#110_calicov3).
Node ExternalIP address The ExternalIP field of a node is now set to the public IP address value of the node. Review and update any resources that depend on this value.
Kubernetes dashboard If you access the dashboard via `kubectl proxy`, the **SKIP** button on the login page is removed. Instead, [use a **Token** to log in](/docs/containers?topic=containers-app#cli_dashboard).
`kubectl port-forward` The `kubectl port-forward` command no longer supports the `-p` flag. If your scripts rely on the previous behavior, update them to replace the `-p` flag with the pod name.
`kubectl --show-all, -a` flag The `--show-all, -a` flag, which applied only to human-readable pod commands (not API calls), is deprecated and is unsupported in future versions. The flag is used to display pods in a terminal state. To track information about terminated apps and containers, [set up log forwarding in your cluster](/docs/containers?topic=containers-health#health).
Read-only API data volumes Now `secret`, `configMap`, `downwardAPI`, and projected volumes are mounted read-only. Previously, apps were allowed to write data to these volumes that might be reverted automatically by the system. This change is required to fix security vulnerability [CVE-2017-1002102![External link icon](../icons/launch-glyph.svg "External link icon")](https://cve.mitre.org/cgi-bin/cvename.cgi?name=2017-1002102). If your apps rely on the previous insecure behavior, modify them accordingly.
strongSwan VPN If you use [strongSwan](/docs/containers?topic=containers-vpn#vpn-setup) for VPN connectivity and deleted your chart before updating your cluster, you can now re-install your strongSwan Helm chart.
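For example, where a script previously passed a pod with the `-p` flag to `kubectl port-forward`, pass the pod name as the first argument instead. The pod name and ports here are placeholders.

kubectl port-forward <pod_name> 8080:80

{: pre}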

### Updating to highly available cluster masters in Kubernetes 1.10

{: #110_ha-masters}

For clusters that run Kubernetes version 1.10.8_1530, 1.11.3_1531, or later, the cluster master configuration is updated to increase high availability (HA). Clusters now have three Kubernetes master replicas, with each master deployed on a separate physical host. Further, if your cluster is in a multizone-capable zone, the masters are spread across zones. {: shortdesc}

You can check whether your cluster has an HA master configuration by checking the cluster's master URL in the console or by running `ibmcloud ks cluster-get --cluster <cluster_name_or_ID>`. If the master URL has a host name such as `https://c2.us-south.containers.cloud.ibm.com:xxxxx` and not an IP address such as `https://169.xx.xx.xx:xxxxx`, the cluster has an HA master configuration. You might get an HA master configuration because of an automatic master patch update or by applying an update manually. In either case, you still must review the following items to ensure that your cluster network is set up to take full advantage of the configuration.

  • If you have a firewall or custom Calico network policies.
  • If you are using host ports 2040 or 2041 on your worker nodes.
  • If you used the cluster master IP address for in-cluster access to the master.
  • If you have automation that calls the Calico API or CLI (calicoctl), such as to create Calico policies.
  • If you use Kubernetes or Calico network policies to control pod egress access to the master.

**Updating your firewall or custom Calico host network policies for HA masters**:
{: #110_ha-firewall}

If you use a firewall or custom Calico host network policies to control egress from your worker nodes, allow outgoing traffic to the ports and IP addresses for all the zones within the region that your cluster is in. See [Allowing the cluster to access infrastructure resources and other services](/docs/containers?topic=containers-firewall#firewall_outbound).

**Reserving host ports `2040` and `2041` on your worker nodes**:
{: #110_ha-ports}

To allow access to the cluster master in an HA configuration, you must leave host ports `2040` and `2041` available on all worker nodes.
  • Update any pods with `hostPort` set to `2040` or `2041` to use different ports.
  • Update any pods with `hostNetwork` set to `true` that listen on ports `2040` or `2041` to use different ports.

To check if your pods are currently using ports 2040 or 2041, target your cluster and run the following command.

kubectl get pods --all-namespaces -o yaml | grep -B 3 "hostPort: 204[0,1]"

{: pre}

If you already have an HA master configuration, you see results for ibm-master-proxy-* in the kube-system namespace, such as in the following example. If other pods are returned, update their ports.

name: ibm-master-proxy-static
ports:
- containerPort: 2040
  hostPort: 2040
  name: apiserver
  protocol: TCP
- containerPort: 2041
  hostPort: 2041
...

{: screen}


**Using `kubernetes` service cluster IP or domain for in-cluster access to the master**:
{: #110_ha-incluster}

To access the cluster master in an HA configuration from within the cluster, use one of the following:
  • The `kubernetes` service cluster IP address, which by default is `https://172.21.0.1`.
  • The `kubernetes` service domain name, which by default is `https://kubernetes.default.svc.cluster.local`.

If you previously used the cluster master IP address, this method continues to work. However, for improved availability, update to use the kubernetes service cluster IP address or domain name.


**Configuring Calico for out-of-cluster access to master with HA configuration**:
{: #110_ha-outofcluster}

The data that is stored in the `calico-config` configmap in the `kube-system` namespace is changed to support HA master configuration. In particular, the `etcd_endpoints` value now supports in-cluster access only. Using this value to configure Calico CLI for access from outside the cluster no longer works.

Instead, use the data that is stored in the `cluster-info` configmap in the `kube-system` namespace. In particular, use the `etcd_host` and `etcd_port` values to configure the endpoint for the Calico CLI to access the master with HA configuration from outside the cluster.


**Updating Kubernetes or Calico network policies**:
{: #110_ha-networkpolicies}

You need to take additional actions if you use [Kubernetes or Calico network policies](/docs/containers?topic=containers-network_policies#network_policies) to control pod egress access to the cluster master and you are currently using:
  • The Kubernetes service cluster IP, which you can get by running `kubectl get service kubernetes -o yaml | grep clusterIP`.
  • The Kubernetes service domain name, which by default is `https://kubernetes.default.svc.cluster.local`.
  • The cluster master IP, which you can get by running `kubectl cluster-info | grep Kubernetes`.

The following steps describe how to update your Kubernetes network policies. To update Calico network policies, repeat these steps with some minor policy syntax changes and use calicoctl to search policies for impacts. {: note}

Before you begin: Log in to your account. Target the appropriate region and, if applicable, resource group. Set the context for your cluster.

  1. Get your cluster master IP address.

    kubectl cluster-info | grep Kubernetes
    

    {: pre}

  2. Search your Kubernetes network policies for impacts. If no YAML is returned, your cluster is not impacted and you do not need to make additional changes.

    kubectl get networkpolicies --all-namespaces -o yaml | grep <cluster-master-ip>
    

    {: pre}

  3. Review the YAML. For example, if your cluster uses the following Kubernetes network policy to allow pods in the default namespace to access the cluster master via the kubernetes service cluster IP or the cluster master IP, then you must update the policy.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: all-master-egress
      namespace: default
    spec:
      egress:
      # Allow access to cluster master using kubernetes service cluster IP address
      # or domain name or cluster master IP address.
      - ports:
        - protocol: TCP
        to:
        - ipBlock:
            cidr: 161.202.126.210/32
      # Allow access to Kubernetes DNS in order to resolve the kubernetes service
      # domain name.
      - ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
      podSelector: {}
      policyTypes:
      - Egress
    

    {: screen}

  4. Revise the Kubernetes network policy to allow egress to the in-cluster master proxy IP address 172.20.0.1. For now, keep the cluster master IP address. For example, the previous network policy example changes to the following.

    If you previously set up your egress policies to open up only the single IP address and port for the single Kubernetes master, now use the in-cluster master proxy IP address range 172.20.0.1/32 and port 2040. {: tip}

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: all-master-egress
      namespace: default
    spec:
      egress:
      # Allow access to cluster master using kubernetes service cluster IP address
      # or domain name.
      - ports:
        - protocol: TCP
        to:
        - ipBlock:
            cidr: 172.20.0.1/32
        - ipBlock:
            cidr: 161.202.126.210/32
      # Allow access to Kubernetes DNS in order to resolve the kubernetes service domain name.
      - ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
      podSelector: {}
      policyTypes:
      - Egress
    

    {: screen}

  5. Apply the revised network policy to your cluster.

    kubectl apply -f all-master-egress.yaml
    

    {: pre}

  6. After you complete all the preparation actions (including these steps), update your cluster master to the HA master fix pack.

  7. After the update is complete, remove the cluster master IP address from the network policy. For example, from the previous network policy, remove the following lines, and then reapply the policy.

    - ipBlock:
        cidr: 161.202.126.210/32
    

    {: screen}

    kubectl apply -f all-master-egress.yaml
    

    {: pre}

### Preparing to update to Calico v3

{: #110_calicov3}

Before you begin, your cluster master and all worker nodes must be running Kubernetes version 1.8 or later, and must have at least one worker node. {: shortdesc}

Prepare for the Calico v3 update before you update the master. During the master upgrade to Kubernetes v1.10, new pods are not scheduled and new Kubernetes or Calico network policies cannot be applied. The amount of time that the update prevents new scheduling varies. Small clusters can take a few minutes, with a few extra minutes for every 10 nodes. Existing network policies and pods continue to run. {: important}

  1. Verify that your Calico pods are healthy.

    kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
    

    {: pre}

  2. If any pod is not in a Running state, delete the pod and wait until it is in a Running state before you continue. If the pod does not return to a Running state:

    1. Check the State and Status of the worker node.
      ibmcloud ks workers --cluster <cluster_name_or_ID>
      
      {: pre}
    2. If the worker node state is not Normal, follow the Debugging worker nodes steps. For example, a Critical or Unknown state is often resolved by reloading the worker node.
  3. If you auto-generate Calico policies or other Calico resources, update your automation tooling to generate these resources with Calico v3 syntax.

  4. If you use strongSwan for VPN connectivity, the strongSwan 2.0.0 Helm chart does not work with Calico v3 or Kubernetes 1.10. Update strongSwan to the 2.1.0 Helm chart, which is backward compatible with Calico v2.6 and Kubernetes versions 1.7, 1.8, and 1.9.

  5. Update your cluster master to Kubernetes v1.10.


## Archive

{: #k8s_version_archive}

Find an overview of Kubernetes versions that are unsupported in {{site.data.keyword.containerlong_notm}}. {: shortdesc}

### Version 1.9 (Unsupported)

{: #cs_v19}

As of 27 December 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.9 are unsupported. Version 1.9 clusters cannot receive security updates or support unless they are updated to the next most recent version (Kubernetes 1.10). {: shortdesc}

Review the potential impact of each Kubernetes version update, and then update your clusters immediately to at least 1.10.

### Version 1.8 (Unsupported)

{: #cs_v18}

As of 22 September 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.8 are unsupported. Version 1.8 clusters cannot receive security updates or support unless they are updated to the next most recent version (Kubernetes 1.10). {: shortdesc}

Review the potential impact of each Kubernetes version update, and then update your clusters immediately to 1.10. Updates fail from version 1.8 to version 1.11 or later.

### Version 1.7 (Unsupported)

{: #cs_v17}

As of 21 June 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.7 are unsupported. Version 1.7 clusters cannot receive security updates or support unless they are updated to the next most recently supported version (Kubernetes 1.10). {: shortdesc}

Review the potential impact of each Kubernetes version update, and then update your clusters immediately to version 1.10. Updates fail from version 1.7 to version 1.11 or later.

### Version 1.5 (Unsupported)

{: #cs_v1-5}

As of 4 April 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.5 are unsupported. Version 1.5 clusters cannot receive security updates or support. {: shortdesc}

To continue running your apps in {{site.data.keyword.containerlong_notm}}, create a new cluster and deploy your apps to the new cluster.