This document also applies to using the `kops` API to customize a Kubernetes cluster with or without using YAML or JSON.

We like to think of `kops` as `kubectl` for clusters.

In that spirit, `kops` includes an API that lets users manage their `kops`-created Kubernetes installations with YAML or JSON manifests. In the same way that you can use a YAML manifest to deploy a Job, you can deploy and manage a `kops` Kubernetes cluster with a manifest. All of these values are also usable via the interactive editor with `kops edit`.
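For example, the interactive equivalents of editing the manifest directly look like the following sketch (assuming `$NAME` holds the cluster name and an instance group named `nodes`, as in the example later in this document):

kops edit cluster $NAME
kops edit ig nodes --name $NAME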
The following is a list of the benefits of using a file to manage your cluster:
- Capability to access API values that are not accessible via the command line, such as setting the max price for spot instances.
- Create, update, and delete clusters without entering an interactive editor. This feature is helpful when automating cluster creation.
- Ability to check files into source control that represent an installation.
- Run commands such as `kops delete -f mycluster.yaml`.
At this time you must run `kops create cluster` and then export the YAML from the state store. We plan to add the capability to generate `kops` YAML directly from the command line in the future. The following is an example of creating a cluster and exporting the YAML.
export NAME=k8s.example.com
export KOPS_STATE_STORE=s3://example-state-store
kops create cluster $NAME \
    --zones "us-east-2a,us-east-2b,us-east-2c" \
    --master-zones "us-east-2a,us-east-2b,us-east-2c" \
    --networking weave \
    --topology private \
    --bastion \
    --node-count 3 \
    --node-size m4.xlarge \
    --kubernetes-version v1.6.6 \
    --master-size m4.large \
    --vpc vpc-6335dd1a
The next step is to export the cluster configuration to a YAML document. `kops` can export the entire configuration as a single YAML document; because a JSON file cannot contain multiple documents, exporting to JSON requires a separate command for each object.
kops get $NAME -o yaml > $NAME.yaml
The above command exports a YAML document which contains the definition of the cluster (`kind: Cluster`) and the definitions of the instance groups (`kind: InstanceGroup`).
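If you prefer JSON, export each object separately, since a single JSON file cannot hold multiple documents. A sketch, assuming an instance group named `nodes` (flag support may vary slightly between kops versions):

kops get cluster $NAME -o json > cluster.json
kops get instancegroups nodes --name $NAME -o json > nodes.json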
The following is the contents of the exported YAML file.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  name: k8s.example.com
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://example-state-store/k8s.example.com
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-2d
      name: a
    - instanceGroup: master-us-east-2b
      name: b
    - instanceGroup: master-us-east-2c
      name: c
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-2d
      name: a
    - instanceGroup: master-us-east-2b
      name: b
    - instanceGroup: master-us-east-2c
      name: c
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.6.6
  masterPublicName: api.k8s.example.com
  networkCIDR: 172.20.0.0/16
  networkID: vpc-6335dd1a
  networking:
    weave: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-east-2d
    type: Private
    zone: us-east-2d
  - cidr: 172.20.64.0/19
    name: us-east-2b
    type: Private
    zone: us-east-2b
  - cidr: 172.20.96.0/19
    name: us-east-2c
    type: Private
    zone: us-east-2c
  - cidr: 172.20.0.0/22
    name: utility-us-east-2d
    type: Utility
    zone: us-east-2d
  - cidr: 172.20.4.0/22
    name: utility-us-east-2b
    type: Utility
    zone: us-east-2b
  - cidr: 172.20.8.0/22
    name: utility-us-east-2c
    type: Utility
    zone: us-east-2c
  topology:
    bastion:
      bastionPublicName: bastion.k8s.example.com
    dns:
      type: Public
    masters: private
    nodes: private
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: bastions
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  role: Bastion
  subnets:
  - utility-us-east-2d
  - utility-us-east-2b
  - utility-us-east-2c
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-2d
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-2d
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:47Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-2b
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-2b
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-2c
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-2c
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: nodes
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.xlarge
  maxSize: 3
  minSize: 3
  role: Node
  subnets:
  - us-east-2d
  - us-east-2b
  - us-east-2c
Next, delete the cluster from the state store.
kops delete -f $NAME.yaml
# validate that you want to remove the cluster
kops delete -f $NAME.yaml --yes
With the above YAML file, a user can add configurations that are not available via the command line. For instance, you can add a `maxPrice` value to a new instance group to use spot instances, and add node and cloud labels to that instance group.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: my-crazy-big-nodes
spec:
  nodeLabels:
    spot: "true"
  cloudLabels:
    team: example
    project: ion
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m4.10xlarge
  maxSize: 42
  minSize: 42
  maxPrice: "0.35"
  role: Node
  subnets:
  - us-east-2c
This configuration will create an Auto Scaling group containing 42 m4.10xlarge nodes running as spot instances with custom labels.
To create the cluster, execute:
kops create -f $NAME.yaml
kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
Please refer to the rolling-update documentation.
To update the cluster, edit the cluster spec YAML file and run:
kops replace -f $NAME.yaml
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
Please refer to the rolling-update documentation.
`kops` implements a full API that defines the various elements in the YAML file exported above. Two top-level components exist: `ClusterSpec` and `InstanceGroup`.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
creationTimestamp: 2017-05-04T23:21:47Z
name: k8s.example.com
spec:
api:
Full documentation is accessible via [godoc](https://godoc.org/k8s.io/kops/pkg/apis/kops#ClusterSpec).
The `ClusterSpec` allows a user to set configurations for values such as the Docker log driver, the Kubernetes API server log level, the VPC to reuse (`networkID`), and the Kubernetes version.
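For example, a minimal sketch of those kinds of settings inside the cluster spec; the field names follow the kops v1alpha2 API, and the values shown are illustrative only:

spec:
  docker:
    logDriver: json-file
  kubeAPIServer:
    logLevel: 4
  kubernetesVersion: 1.6.6
  networkID: vpc-6335dd1a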
More information about some of the elements in the `ClusterSpec` is available in the following:
- Cluster Spec document which outlines some of the values in the Cluster Specification.
- Etcd Encryption
- GPU setup
- IAM Roles - adding additional IAM roles.
- Labels
- Run In Existing VPC
To access the full configuration that a `kops` installation is running, execute:
kops get cluster $NAME --full -o yaml
This command prints the entire YAML configuration, but do not use the full document as the basis for your own manifest; you may experience strange and unwanted behaviors.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-05-04T23:21:48Z
  name: foo
spec:
Full documentation is accessible via [godoc](https://godoc.org/k8s.io/kops/pkg/apis/kops#InstanceGroupSpec).
Instance Groups map to Auto Scaling groups in AWS and to instance groups in GCE. They are an API-level description of a group of compute instances used as masters or nodes.
More documentation is available in the Instance Group document.
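Individual instance groups can also be created from their own manifest files. A sketch, assuming the spot instance group above was saved to its own file (the file name is illustrative):

kops create -f my-crazy-big-nodes.yaml
kops update cluster $NAME --yes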
Using YAML or JSON-based configuration for building and managing kops clusters is powerful, but use it with caution.
- If you do not need to define or customize a value, let kops set that value. Setting too many values prevents kops from doing its job of setting up the cluster, and you may end up with strange bugs.
- If you end up with strange bugs, try letting kops do more.
- Be cautious, take care, and test, test, test outside of production!
If you need to run a custom version of the Kubernetes Controller Manager, set `kubeControllerManager.image` and update your cluster; that is the beauty of using a manifest for your cluster.
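A minimal sketch of that change in the cluster spec (the image reference is illustrative only):

spec:
  kubeControllerManager:
    image: registry.example.com/kube-controller-manager:v1.6.6-custom

After editing the file, apply it with `kops replace` and `kops update` as shown earlier.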