---

copyright:
  years: 2014, 2019
lastupdated: "2019-09-18"

keywords: kubernetes, iks, vpc, classic

subcollection: containers

---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:note: .note}
{:important: .important}
{:deprecated: .deprecated}
{:download: .download}
{:preview: .preview}
# Overview of Classic and VPC on Classic infrastructure providers
{: #infrastructure_providers}
With {{site.data.keyword.containerlong}}, you can create a cluster from a variety of infrastructure providers. All the worker nodes in a cluster must be from the same provider.
{: shortdesc}

Originally, {{site.data.keyword.containerlong_notm}} provisioned your worker nodes from a single provider, classic infrastructure. As of 19 August 2019, you can choose between two infrastructure providers: Classic and Virtual Private Cloud (VPC) on Classic.
Area | Classic | VPC on Classic
---|---|---
Overview | Generally available now, clusters on IBM Cloud Classic infrastructure include all of the mature and robust {{site.data.keyword.containerlong_notm}} features for compute, networking, and storage. To get started, create a [Kubernetes](/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial) or [OpenShift](/docs/openshift?topic=openshift-openshift_tutorial) cluster. | With VPC on Classic, you can create your cluster in the next generation of the {{site.data.keyword.cloud_notm}} platform, in your [Virtual Private Cloud](/docs/vpc-on-classic?topic=vpc-on-classic-about). VPC gives you the security of a private cloud environment with the dynamic scalability of a public cloud. To get started, try out the [VPC on Classic cluster tutorial](/docs/containers?topic=containers-vpc_ks_tutorial#vpc_ks_tutorial).
Doc icons | Throughout the documentation, pages and topics might include a note that the content applies only to the classic infrastructure provider. | Throughout the documentation, pages and topics might include a note that the content applies only to the VPC infrastructure provider.
Compute and worker node resources | [Virtual](/docs/containers?topic=containers-planning_worker_nodes#vm), [bare metal](/docs/containers?topic=containers-planning_worker_nodes#bm), and [software-defined storage](/docs/containers?topic=containers-planning_worker_nodes#sds) machines are available for your worker nodes. Your worker node instances reside in your IBM Cloud infrastructure account, but you can manage them through {{site.data.keyword.containerlong_notm}}. You own the worker node instances. | VPC on Classic offers a [select group of virtual machines](/docs/containers?topic=containers-planning_worker_nodes#vm) only for your worker nodes. Unlike classic clusters, your VPC cluster worker nodes do not appear in your infrastructure portal or on a separate infrastructure bill. Instead, you manage all maintenance and billing activity for the worker nodes through {{site.data.keyword.containerlong_notm}}. Your worker node instances are connected to certain VPC instances that do reside in your infrastructure account, such as the VPC subnet or storage volumes. To compare the flavors that each provider offers, see the CLI example after this table.
Security | Classic clusters have many built-in security features that help you protect your cluster infrastructure, isolate resources, and ensure security compliance. For more information, see [Security for {{site.data.keyword.containerlong_notm}}](/docs/containers?topic=containers-security). | With VPC on Classic, your cluster runs in an isolated environment in the public cloud. Security groups act as virtual firewalls for instance-level protection, and network access control lists protect the subnets that provide the private IP addresses for your worker nodes. For more information, see [Security in your Virtual Private Cloud](/docs/vpc-on-classic-network?topic=vpc-on-classic-network-security-in-your-ibm-cloud-vpc).
High availability | For both classic and VPC on Classic clusters, the master includes three replicas for high availability. Further, if you create your cluster in a multizone metro, the master replicas are spread across zones and you can also spread your worker pools across zones. For more information, see [High availability for {{site.data.keyword.containerlong_notm}}](/docs/containers?topic=containers-ha). | Same as classic clusters.
Cluster administration | Classic clusters support the entire set of v1 API automation, such as resizing worker pools, reloading worker nodes, and updating masters and worker nodes across major, minor, and patch versions. When you delete a cluster, you can choose to remove any attached subnet or storage instances. | Worker nodes in VPC on Classic clusters cannot be reloaded or updated. Instead, use the [`worker replace --update` operation](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_worker_replace) to replace worker nodes that are outdated or in a troubled state, as shown in the sketch after this table.
Cluster networking | Your worker nodes are provisioned on private VLANs that provide private IP addresses to communicate on the private IBM Cloud infrastructure network. For communication on the public network, you can also provision the worker nodes on a public VLAN. Communication to the cluster master can be on the public or private service endpoint. For more information, see [Understanding cluster network basics](/docs/containers?topic=containers-plan_clusters#plan_basics). | Unlike classic infrastructure, your VPC on Classic cluster's worker nodes are attached to VPC subnets and assigned private IP addresses. The worker nodes are not connected to the public network; instead, traffic to and from the public network goes through a public gateway, floating IP, or VPN gateway. For more information, see [About Networking for VPC](/docs/vpc-on-classic-network?topic=vpc-on-classic-network-about-networking-for-vpc).
Apps and container platform | You can choose to create [community Kubernetes or OpenShift clusters](/docs/containers?topic=containers-faqs#container_platforms) to manage your containerized apps. Your app build processes do not differ because of the infrastructure provider, but how you expose the app does, as described in the next _App networking_ entry. | You can create only community Kubernetes clusters, not OpenShift clusters.
App networking | All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range and are routed between worker nodes on the worker node private IP address of the private VLAN. To expose the app on the public network, your cluster must have worker nodes on the public VLAN. Then, you can create a NodePort, LoadBalancer (NLB), or Ingress (ALB) service. For more information, see [Planning in-cluster and external networking for apps](/docs/containers?topic=containers-cs_network_planning). | All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range and are routed between worker nodes on the worker node private IP address of the private VPC subnet. To expose the app on the public network, you can create a Kubernetes `LoadBalancer` service, which provisions a VPC load balancer and public hostname address for your worker nodes. For more information, see [Setting up a VPC load balancer to expose your app publicly](/docs/containers?topic=containers-vpc_ks_tutorial#vpc_ks_vpc_lb). A minimal service sketch follows this table.
Storage | You can choose from non-persistent and persistent storage solutions such as file, block, object, and software-defined storage. For more information, see [Planning highly available persistent storage](/docs/containers?topic=containers-storage_planning). | For persistent storage, use [block storage](/docs/containers?topic=containers-vpc-block). A maximum of 5 volumes can be attached per worker node, and each volume is limited to 2 TB of capacity and 20,000 IOPS. For non-persistent storage, secondary storage on the local worker node is not available. For an example persistent volume claim, see the sketch after this table.
User access | To create classic infrastructure clusters, you must set up [infrastructure credentials](/docs/containers?topic=containers-users#api_key) for each region and resource group. To let users manage the cluster, use [{{site.data.keyword.cloud_notm}} IAM platform roles](/docs/containers?topic=containers-access_reference#iam_platform). To grant users access to Kubernetes resources within the cluster, use [{{site.data.keyword.cloud_notm}} IAM service roles](/docs/containers?topic=containers-access_reference#service), which correspond with Kubernetes RBAC roles. | Unlike classic infrastructure, VPC on Classic does not require separate infrastructure credentials. You use only [{{site.data.keyword.cloud_notm}} IAM access policies](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc) to authorize users to create infrastructure, manage your cluster, and access Kubernetes resources. An example policy command follows this table.
Integrations | You can extend your cluster and app capabilities with a variety of {{site.data.keyword.cloud_notm}} services, add-ons, and third-party integrations. For a list, see [Supported {{site.data.keyword.cloud_notm}} and third-party integrations](/docs/containers?topic=containers-supported_integrations). | VPC on Classic supports a subset of {{site.data.keyword.cloud_notm}} services, add-ons, and third-party integrations. For a list, see [Supported {{site.data.keyword.cloud_notm}} and third-party integrations](/docs/containers?topic=containers-supported_integrations).
Available locations and versions | Classic clusters are [available worldwide](/docs/containers?topic=containers-regions-and-zones#locations), including all six {{site.data.keyword.cloud_notm}} multizone metros and 20 single zone locations in more than a dozen countries. | VPC on Classic clusters are available in the following multizone metros: Dallas, Frankfurt, London, Sydney, and Tokyo.
Service interface (API, CLI, UI) | Classic clusters are fully supported in the {{site.data.keyword.containershort_notm}} [v1 API ![External link icon](../icons/launch-glyph.svg "External link icon")](https://containers.cloud.ibm.com/global/swagger-global-api/), [CLI](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli), and [console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/kubernetes/clusters). | VPC on Classic clusters are supported by the [next version (`v2`) of the {{site.data.keyword.containerlong_notm}} API](/docs/containers?topic=containers-cs_api_install#about_api), and you can manage your VPC on Classic clusters through the same CLI and console as classic clusters. For an example `v2`-backed CLI command, see the sketch after this table.
Service limitations | See [Service limitations](/docs/containers?topic=containers-ibm-cloud-kubernetes-service-technology#tech_limits). Feature-specific limitations are documented in their respective sections. | See [Service limitations](/docs/containers?topic=containers-ibm-cloud-kubernetes-service-technology#tech_limits). For VPC on Classic cluster limitations, see [VPC cluster limitations](/docs/containers?topic=containers-ibm-cloud-kubernetes-service-technology#vpc_ks_limits). For VPC-specific limitations, see [Known limitations](/docs/vpc-on-classic?topic=vpc-on-classic-known-limitations).
Troubleshooting and support | Both classic and VPC on Classic clusters are supported through the same {{site.data.keyword.cloud_notm}} Support processes. For cluster issues, check out the [Debugging your clusters](/docs/containers?topic=containers-cs_troubleshoot) guide. For questions, try posting in the [internal](https://ibm-argonauts.slack.com/messages/C4S4NUCB1) or [external](https://ibm-container-service.slack.com/messages/C4G6362ER) Slack channels. | Both classic and VPC on Classic clusters are supported through the same {{site.data.keyword.cloud_notm}} Support processes. For VPC on Classic cluster issues, see the [VPC troubleshooting](/docs/containers?topic=containers-vpc_troubleshoot#vpc_troubleshoot) information. For questions, try posting in the [internal](https://ibm-argonauts.slack.com/messages/CJ58JHD9C) or [external](https://ibm-container-service.slack.com/messages/C4G6362ER) Slack channels.
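
For a side-by-side look at the compute options in the _Compute and worker node resources_ row, you can list the worker node flavors that each provider offers in a zone. A minimal sketch, assuming that the {{site.data.keyword.containershort_notm}} CLI plug-in is installed; `<zone>` is a placeholder for a zone name, such as one returned by `ibmcloud ks zones --provider classic` or `--provider vpc-classic`.

```
# List the machine flavors for classic worker nodes in a zone.
ibmcloud ks flavors --zone <zone> --provider classic

# List the select group of virtual flavors for VPC on Classic worker nodes.
ibmcloud ks flavors --zone <zone> --provider vpc-classic
```
{: codeblock}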
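
As noted in the _Cluster administration_ row, you maintain VPC on Classic worker nodes by replacing them rather than reloading or updating them in place. A minimal sketch of that flow; the placeholders are yours to fill in, and exact flag names can vary across CLI plug-in versions.

```
# Find the ID of the worker node that is outdated or in a troubled state.
ibmcloud ks worker ls --cluster <cluster_name_or_ID>

# Replace the worker node. With --update, the replacement worker node also
# runs the latest patch of the cluster's major.minor Kubernetes version.
ibmcloud ks worker replace --cluster <cluster_name_or_ID> --worker <worker_ID> --update
```
{: codeblock}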
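
The _App networking_ row notes that a Kubernetes `LoadBalancer` service provisions a VPC load balancer and public hostname for your app. A minimal sketch, assuming an existing deployment whose pods are labeled `app: myapp` and listen on port 8080; all names and ports are placeholders for illustration.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-vpc-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF

# After a few minutes, the EXTERNAL-IP column shows the public hostname
# that the VPC load balancer assigns.
kubectl get service myapp-vpc-lb
```
{: codeblock}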
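
To provision the block storage that the _Storage_ row describes, you create a persistent volume claim (PVC). A minimal sketch; `<storage_class>` is a placeholder because the installed VPC block storage classes depend on your cluster, so list them first.

```
# List the storage classes that are installed in your cluster.
kubectl get storageclasses

# Claim a volume with one of the listed VPC block storage classes. Keep the
# limits in mind: at most 5 attached volumes per worker node, and at most
# 2 TB of capacity and 20,000 IOPS per volume.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vpc-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storage_class>
EOF
```
{: codeblock}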
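
Because user access for VPC on Classic clusters is handled entirely through {{site.data.keyword.cloud_notm}} IAM, as the _User access_ row explains, a single access policy can carry both a platform role and a service role. A minimal sketch; the user email and resource group are placeholders.

```
# Grant a platform role (Operator) and a service role (Writer) for
# Kubernetes Service clusters in one resource group.
ibmcloud iam user-policy-create user@example.com \
  --service-name containers-kubernetes \
  --roles Operator,Writer \
  --resource-group-name <resource_group>
```
{: codeblock}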
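
The `v2` API and CLI that the _Service interface_ row mentions take VPC-specific parameters, such as the VPC and subnet IDs, that the classic `v1` commands don't have. A hedged sketch of creating a VPC on Classic cluster; every value is a placeholder, the `vpc-infrastructure` CLI plug-in is assumed for the `ibmcloud is` lookups, and you can verify the current flags with `ibmcloud ks cluster create vpc-classic --help`.

```
# Look up the IDs of the VPC and subnet to attach the worker nodes to.
ibmcloud is vpcs
ibmcloud is subnets

# Create the VPC on Classic cluster through the v2-backed CLI command.
ibmcloud ks cluster create vpc-classic \
  --name <cluster_name> \
  --zone <zone> \
  --flavor <flavor> \
  --workers 2 \
  --vpc-id <vpc_ID> \
  --subnet-id <subnet_ID> \
  --version <kube_version>
```
{: codeblock}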