Merge pull request #39 from jaypipes/cleanup-nth
Fix up node termination handler chart
nckturner authored Nov 28, 2019
2 parents 06b0a3e + cf975bd commit d31fcba
Showing 7 changed files with 253 additions and 68 deletions.
26 changes: 24 additions & 2 deletions stable/aws-node-termination-handler/Chart.yaml
Original file line number Diff line number Diff line change
@@ -1,5 +1,27 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: aws-node-termination-handler
description: A Helm chart for the AWS Node Termination Handler
version: 0.1.0
appVersion: 1.0.0
home: https://github.com/aws/eks-charts
icon: https://raw.githubusercontent.com/aws/eks-charts/master/docs/logo/aws.png
sources:
- https://github.com/aws/eks-charts
maintainers:
- name: Nicholas Turner
url: https://github.com/nckturner
email: [email protected]
- name: Stefan Prodan
url: https://github.com/stefanprodan
email: [email protected]
- name: Jillian Montalvo
url: https://github.com/jillmon
email: [email protected]
- name: Matthew Becker
url: https://github.com/mattrandallbecker
email: [email protected]
keywords:
- eks
- ec2
- node-termination
- spot
64 changes: 48 additions & 16 deletions stable/aws-node-termination-handler/README.md
@@ -1,37 +1,69 @@
# AWS Node Termination Handler Chart
# AWS Node Termination Handler

AWS Node Termination Handler Helm chart for Kubernetes. For more information, see the project repository at https://github.com/aws/aws-node-termination-handler.
## Prerequisite

## Prerequisites

* Kubernetes >= 1.11

## Installing the Chart

Add the EKS repository to Helm:
```sh
helm repo add eks https://aws.github.io/eks-charts
```
Install AWS Node Termination Handler:
To install the chart with the release name `aws-node-termination-handler` and default configuration:

```sh
helm upgrade -i aws-node-termination-handler eks/aws-node-termination-handler
helm install --name aws-node-termination-handler \
--namespace kube-system eks/aws-node-termination-handler
```

To install into an EKS cluster where the Node Termination Handler is already installed, you can run:

```sh
helm upgrade --install --recreate-pods --force \
aws-node-termination-handler --namespace kube-system eks/aws-node-termination-handler
```

If you receive an error similar to `Error: release aws-node-termination-handler
failed: <resource> "aws-node-termination-handler" already exists`, simply rerun
the above command.

The [configuration](#configuration) section lists the parameters that can be configured during installation.

## Uninstalling the Chart

To uninstall/delete the `aws-node-termination-handler` deployment:

```sh
helm delete --purge aws-node-termination-handler
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the chart and their default values.

Parameter | Description | Default
--- | --- | ---
`deleteLocalData` | Tells kubectl to continue even if there are pods using emptyDir (local data that will be deleted when the node is drained). | `false`
`fullnameOverride` | Override the full name of the chart | `"node-termination-handler"`
`gracePeriod` | The time in seconds given to each pod to terminate gracefully. If negative, the default value specified in the pod will be used. | `30`
`ignoreDaemonSets` | Causes kubectl to skip daemon set managed pods | `true`
`imageName` | Refers to docker image located [here](https://hub.docker.com/r/amazon/aws-node-termination-handler). | `"amazon/aws-node-termination-handler"`
`imageVersion` | Refers to current docker image version found [here](https://hub.docker.com/r/amazon/aws-node-termination-handler/tags). | `"v1.0.0"`
`nameOverride` | Override the name of the chart | `"node-termination-handler"`
`namespace` | The [kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) | `"kube-system"`
`nodeSelector` | Tells the daemon set where to place the node-termination-handler pods. For example: `lifecycle: "Ec2Spot"`, `on-demand: "false"`, `aws.amazon.com/purchaseType: "spot"`, etc. Value must be a valid yaml expression. | `{}`
`serviceAccount.name` | The name of the ServiceAccount to use | `nil`
`serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true`
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `amazon/aws-node-termination-handler`
`image.tag` | image tag | `<VERSION>`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`deleteLocalData` | Tells kubectl to continue even if there are pods using emptyDir (local data that will be deleted when the node is drained). | `false`
`gracePeriod` | The time in seconds given to each pod to terminate gracefully. If negative, the default value specified in the pod will be used. | `30`
`ignoreDaemonSets` | Causes kubectl to skip daemon set managed pods | `true`
`affinity` | node/pod affinities | None
`podSecurityContext` | Pod Security Context | `{}`
`podAnnotations` | annotations to add to each pod | `{}`
`priorityClassName` | Name of the priorityClass | `system-node-critical`
`resources` | Resources for the pods | `requests.cpu: 50m, requests.memory: 64Mi, limits.cpu: 100m, limits.memory: 128Mi`
`securityContext` | Container Security context | `privileged: true`
`nodeSelector` | Tells the daemon set where to place the node-termination-handler pods. For example: `lifecycle: "Ec2Spot"`, `on-demand: "false"`, `aws.amazon.com/purchaseType: "spot"`, etc. Value must be a valid yaml expression. | `{}`
`tolerations` | list of node taints to tolerate | `[]`
`rbac.create` | if `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`serviceAccount.create` | If `true`, create a new service account | `true`
`serviceAccount.name` | Service account to be used | None
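
Any of the parameters above can be overridden at install time, either with `--set` flags or with a custom values file passed via `-f`. A minimal sketch of such a file (the `custom-values.yaml` filename and the `lifecycle: "Ec2Spot"` node label are illustrative assumptions, not defaults):

```yaml
# custom-values.yaml -- hypothetical override file for illustration
image:
  tag: v1.0.0
gracePeriod: 60                # give pods a longer drain window
nodeSelector:
  lifecycle: "Ec2Spot"         # assumed node label; schedule handler pods on spot nodes only
```

Installed with, for example, `helm install --name aws-node-termination-handler --namespace kube-system -f custom-values.yaml eks/aws-node-termination-handler`.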
14 changes: 14 additions & 0 deletions stable/aws-node-termination-handler/templates/_helpers.tpl
@@ -24,6 +24,20 @@ If release name contains chart name it will be used as a full name.
{{- end -}}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "aws-node-termination-handler.labels" -}}
app.kubernetes.io/name: {{ include "aws-node-termination-handler.name" . }}
helm.sh/chart: {{ include "aws-node-termination-handler.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
k8s-app: aws-node-termination-handler
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
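
For a release named `aws-node-termination-handler` of this chart (version `0.1.0`, appVersion `1.0.0` per the Chart.yaml above), the `labels` helper would render roughly the following; under Helm 2, `.Release.Service` is `Tiller`. The exact values are illustrative:

```yaml
app.kubernetes.io/name: aws-node-termination-handler
helm.sh/chart: aws-node-termination-handler-0.1.0
app.kubernetes.io/instance: aws-node-termination-handler
k8s-app: aws-node-termination-handler
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: Tiller
```

Templates consume it with `{{ include "aws-node-termination-handler.labels" . | indent 4 }}`, as the DaemonSet and PSP manifests below do.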

{{/*
Create chart name and version as used by the chart label.
*/}}
110 changes: 72 additions & 38 deletions stable/aws-node-termination-handler/templates/daemonset.yaml
@@ -2,51 +2,85 @@ apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "aws-node-termination-handler.fullname" . }}
labels:
{{ include "aws-node-termination-handler.labels" . | indent 4 }}
spec:
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
selector:
matchLabels:
app: {{ include "aws-node-termination-handler.name" . }}
app.kubernetes.io/name: {{ include "aws-node-termination-handler.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app: {{ include "aws-node-termination-handler.name" . }}
app.kubernetes.io/name: {{ include "aws-node-termination-handler.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
k8s-app: aws-node-termination-handler
spec:
priorityClassName: "{{ .Values.priorityClassName }}"
affinity:
nodeAffinity:
# NOTE(jaypipes): Change when we complete
# https://github.com/aws/aws-node-termination-handler/issues/8
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "beta.kubernetes.io/os"
operator: In
values:
- linux
- key: "beta.kubernetes.io/arch"
operator: In
values:
- amd64
serviceAccountName: {{ template "aws-node-termination-handler.serviceAccountName" . }}
containers:
- name: {{ include "aws-node-termination-handler.name" . }}
image: {{ .Values.imageName }}:{{ .Values.imageVersion }}
imagePullPolicy: Always
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SPOT_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: DELETE_LOCAL_DATA
value: {{ .Values.deleteLocalData | quote }}
- name: IGNORE_DAEMON_SETS
value: {{ .Values.ignoreDaemonSets | quote }}
- name: GRACE_PERIOD
value: {{ .Values.gracePeriod | quote }}
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
{{- with .Values.nodeSelector }}
- name: {{ include "aws-node-termination-handler.name" . }}
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SPOT_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: DELETE_LOCAL_DATA
value: {{ .Values.deleteLocalData | quote }}
- name: IGNORE_DAEMON_SETS
value: {{ .Values.ignoreDaemonSets | quote }}
- name: GRACE_PERIOD
value: {{ .Values.gracePeriod | quote }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
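
The `{{- with }}` blocks at the end of the template emit `nodeSelector`, `affinity`, and `tolerations` stanzas only when the corresponding value is non-empty. For example, a values fragment like the following (the `lifecycle` label and `spotInstance` taint key are assumptions for illustration) would add both stanzas to the rendered pod spec:

```yaml
nodeSelector:
  lifecycle: "Ec2Spot"        # assumed node label; restrict the daemon set to spot nodes
tolerations:
- key: "spotInstance"         # hypothetical taint key applied to spot nodes
  operator: "Exists"
  effect: "NoSchedule"
```

With the defaults (`nodeSelector: {}`, `tolerations: []`, `affinity: {}`), none of the three stanzas is rendered and the DaemonSet schedules onto every Linux amd64 node matched by the hard-coded node affinity above.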
57 changes: 57 additions & 0 deletions stable/aws-node-termination-handler/templates/psp.yaml
@@ -0,0 +1,57 @@
{{- if .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}
labels:
{{ include "aws-node-termination-handler.labels" . | indent 4 }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
hostIPC: false
hostNetwork: false
hostPID: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}-psp
labels:
{{ include "aws-node-termination-handler.labels" . | indent 4 }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "aws-node-termination-handler.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "aws-node-termination-handler.fullname" . }}-psp
labels:
{{ include "aws-node-termination-handler.labels" . | indent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "aws-node-termination-handler.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "aws-node-termination-handler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
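
The entire manifest above — the PodSecurityPolicy plus its ClusterRole and RoleBinding — is gated on `rbac.pspEnabled`, which defaults to `false`. To render it, enable the flag in your values (or pass `--set rbac.pspEnabled=true` at install time):

```yaml
rbac:
  pspEnabled: true   # create the restricted PodSecurityPolicy and bind it to the service account
```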
@@ -3,4 +3,6 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "aws-node-termination-handler.serviceAccountName" . }}
labels:
{{ include "aws-node-termination-handler.labels" . | indent 4 }}
{{- end -}}
48 changes: 36 additions & 12 deletions stable/aws-node-termination-handler/values.yaml
@@ -2,21 +2,30 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nameOverride: "node-termination-handler"
fullnameOverride: "node-termination-handler"
image:
repository: amazon/aws-node-termination-handler
tag: 1.0.0
pullPolicy: IfNotPresent

namespace: "kube-system"
nameOverride: ""
fullnameOverride: ""

# image values
imageName: "amazon/aws-node-termination-handler"
imageVersion: "v1.0.0"
priorityClassName: system-node-critical

serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use. If name is not set and create is
# true, a name is generated using the fullname template.
name:
podSecurityContext: {}

podAnnotations: {}

securityContext:
privileged: true

resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"

# deleteLocalData tells kubectl to continue even if there are pods using
# emptyDir (local data that will be deleted when the node is drained).
@@ -32,3 +41,18 @@ gracePeriod: 30
# nodeSelector tells the daemonset where to place the node-termination-handler
# pods. By default, this value is empty and every node will receive a pod.
nodeSelector: {}

tolerations: []

affinity: {}

serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use. If name is not set and create is
# true, a name is generated using the fullname template.
name:

rbac:
# rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
pspEnabled: false
