Merge pull request #86 from bcgov/chore/84-create-tf-chart
Chore/84 create tf chart
joshgamache authored Feb 16, 2024
2 parents 0da2da5 + dba3acb commit edffa9c
Showing 13 changed files with 436 additions and 3 deletions.
31 changes: 31 additions & 0 deletions .github/workflows/release.yaml
@@ -0,0 +1,31 @@
name: Release Charts

on:
push:
branches:
- main

jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"

- name: Install Helm
uses: azure/[email protected]
with:
version: v3.6.2

- name: Run chart-releaser
uses: helm/[email protected]
with:
charts_dir: helm/terraform-job
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
26 changes: 23 additions & 3 deletions README.md
@@ -57,7 +57,27 @@ Prior to using Helm to deploy applications to the OpenShift cluster, the CAS tea

## Terraform in CAS repos

See an example of our containerized Terraform process in an OpenShift Job integrated into the 'cas-registration' Helm chart. It runs at the pre-install and pre-upgrade hooks. The Terraform scripts live in the `/terraform` subdirectory of the chart and are pulled in via a ConfigMap used by the Job at `/templates/backend/job/terraform-apply.yaml`.
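
The hook wiring on the Job looks like this (taken verbatim from the `terraform-apply.yaml` template later in this diff):

```yaml
annotations:
  "helm.sh/hook": pre-install, pre-upgrade
  "helm.sh/hook-weight": "10"
```
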
### Components

- `terraform-apply.yaml`: This file defines the Job that deploys a container to run Terraform. Secrets (deployed by `make provision`) contain the credentials and `.tfbackend` Terraform uses to access the GCP buckets where it stores state. The `terraform-modules.yaml` ConfigMap is what pulls in the Terraform scripts that will be run.
- `terraform-modules.yaml`: This file defines a ConfigMap that sources Terraform `.tf` files from a subdirectory in the chart. All `.tf` files in the subdirectory are pulled into the ConfigMap, which is then mounted as a volume on the container created in `terraform-apply.yaml`. Changes to these files are *automatically applied* when the Helm chart is installed or upgraded; currently the apply runs with `-auto-approve`.

#### `~/helm/terraform-bucket-provision/`

This directory contains a Helm chart with a Job that imports and runs Terraform files. The Job runs at the pre-install and pre-upgrade hooks. The chart references secrets and config that are deployed to a namespace when a project is provisioned by *`cas-pipeline`* (credentials, project_id, kubeconfig, Terraform backend config).
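
A minimal sketch of the secret the Job expects, inferred from the keys referenced in the `terraform-apply.yaml` template further down — the secret name and key names come from that template, while every value here is a placeholder (the real secret is created by `cas-pipeline`'s `make provision`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-credentials-secret
type: Opaque
stringData:
  gcp_project_id: example-gcp-project   # placeholder; exposed to Terraform as TF_VAR_project_id
  sa_json: |                            # placeholder; mounted at /etc/gcp/credentials.json
    {"type": "service_account", "project_id": "example-gcp-project"}
  tf_backend: |                         # placeholder; mounted at /etc/tf/gcs.tfbackend
    bucket = "example-terraform-state-bucket"
    prefix = "terraform/state"
```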

#### `~/helm/terraform-bucket-provision/terraform`

In tandem with the Helm chart is a Terraform module that creates GCP storage buckets and the service accounts (admin and viewer) that access them, and injects the resulting credentials into OpenShift for use. The module's files are pulled in via a ConfigMap that collects all files from the chart's `/terraform` directory. They are bundled with the chart because the way we use Terraform is currently identical across our CAS projects.
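
The module writes one secret per app back into the namespace, named `gcp-<namespace>-<app>-service-account-key` with `bucket_name`, `credentials.json`, and `viewer_credentials.json` keys (see `main.tf` further down). A hedged sketch of how a workload might consume one of these secrets — the namespace, app, and image names are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backup-example
spec:
  restartPolicy: Never
  containers:
    - name: backup
      image: example/backup-tool:latest   # placeholder image
      env:
        - name: GCS_BUCKET
          valueFrom:
            secretKeyRef:
              name: gcp-example-namespace-example-project-backups-service-account-key
              key: bucket_name
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcs/credentials.json
      volumeMounts:
        - name: gcs-credentials
          mountPath: /etc/gcs
          readOnly: true
  volumes:
    - name: gcs-credentials
      secret:
        secretName: gcp-example-namespace-example-project-backups-service-account-key
        items:
          - key: credentials.json         # admin key; viewer_credentials.json holds the read-only key
            path: credentials.json
```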

### Usage

1. Import the Helm chart into your project's main chart as a dependency (see the `Chart.yaml` sketch after this list).
2. Update your `values.yaml` (and any environment-specific values files) with the values required by the `terraform-bucket-provision` chart:

```yaml
terraform-bucket-provision:
terraform:
namespace_apps: '["example-project-backups", "example-project-uploads"]'
```
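
For step 1, the dependency entry in the parent chart's `Chart.yaml` would look roughly like the sketch below. The chart name and version come from the `Chart.yaml` added in this PR; the repository URL is a placeholder for wherever the release workflow publishes the packaged chart:

```yaml
# Parent chart's Chart.yaml (sketch) -- the repository URL is a placeholder
dependencies:
  - name: terraform-bucket-provision
    version: 0.1.0
    repository: https://example.github.io/helm-charts
```

The `namespace_apps` value above is a JSON-encoded list of app names; the Terraform module creates one bucket, one admin service account, and one viewer service account per entry.
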
---
23 changes: 23 additions & 0 deletions helm/terraform-bucket-provision/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
6 changes: 6 additions & 0 deletions helm/terraform-bucket-provision/Chart.yaml
@@ -0,0 +1,6 @@
apiVersion: v2
name: terraform-bucket-provision
description: A chart to deploy a job to run terraform that utilizes terraform modules to create storage buckets and associated service accounts.
type: application
version: 0.1.0
appVersion: "1.16.0"
62 changes: 62 additions & 0 deletions helm/terraform-bucket-provision/templates/_helpers.tpl
@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "terraform-job.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "terraform-job.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "terraform-job.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "terraform-job.labels" -}}
helm.sh/chart: {{ include "terraform-job.chart" . }}
{{ include "terraform-job.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "terraform-job.selectorLabels" -}}
app.kubernetes.io/name: {{ include "terraform-job.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "terraform-job.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "terraform-job.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
77 changes: 77 additions & 0 deletions helm/terraform-bucket-provision/templates/terraform-apply.yaml
@@ -0,0 +1,77 @@
apiVersion: batch/v1
kind: Job
metadata:
name: terraform-apply
labels:
component: infrastructure
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "10"
spec:
backoffLimit: 0
activeDeadlineSeconds: 900
template:
spec:
serviceAccountName: {{ .Values.serviceAccount.name }}
containers:
- name: terraform-apply
resources: {{ toYaml .Values.resources | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- mountPath: /etc/gcp
name: service-account-credentials-volume
readOnly: True
- mountPath: /etc/tf
name: terraform-backend-config-volume
readOnly: True
- name: tf-working-dir
mountPath: /working
readOnly: False
- name: terraform-modules
mountPath: /terraform
readOnly: False
env:
- name: TF_VAR_project_id
valueFrom:
secretKeyRef:
name: gcp-credentials-secret
key: gcp_project_id
- name: TF_VAR_openshift_namespace
value: {{ .Release.Namespace | quote }}
- name: TF_VAR_apps
value: {{ .Values.terraform.namespace_apps | quote }}
- name: kubernetes_host
value: {{ .Values.openShift.host }}
- name: GOOGLE_APPLICATION_CREDENTIALS
value: "/etc/gcp/credentials.json"
# Terraform was having an issue pulling kubernetes_host in as a TF_VAR, so we pass it as a -var argument on the command line instead
command:
- /bin/sh
- -c
- |
set -euo pipefail;
cp -r /terraform/. /working;
cd working;
export TF_VAR_kubernetes_token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token );
terraform init -backend-config=/etc/tf/gcs.tfbackend;
terraform apply -var="kubernetes_host=$kubernetes_host" -auto-approve;
restartPolicy: Never
volumes:
- name: service-account-credentials-volume
secret:
secretName: gcp-credentials-secret
items:
- key: sa_json
path: credentials.json
- name: terraform-backend-config-volume
secret:
secretName: gcp-credentials-secret
items:
- key: tf_backend
path: gcs.tfbackend
- name: tf-working-dir
emptyDir: {}
- name: terraform-modules
configMap:
name: terraform-modules
16 changes: 16 additions & 0 deletions helm/terraform-bucket-provision/templates/terraform-modules.yaml
@@ -0,0 +1,16 @@
{{/*
Creates a list of files with their base64-encoded values from the chart's "terraform" directory.
*/}}
apiVersion: v1
kind: ConfigMap
metadata:
name: terraform-modules
# Because terraform-apply.yaml runs at pre-install and pre-upgrade, this ConfigMap needs to be in place before it runs
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "-10"
binaryData:
{{- range $path, $data := .Files.Glob "terraform/**.tf" }}
{{ $path | base | indent 2 }}: >-
{{- $data | toString | b64enc | nindent 4 }}
{{- end }}
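
For illustration, assuming the chart ships a single `terraform/main.tf`, this template renders to roughly the following (the base64 payload is truncated):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: terraform-modules
  annotations:
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "-10"
binaryData:
  # base64 of the file contents, truncated here for readability
  main.tf: dGVycmFmb3JtIHsK...
```
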
@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ .Values.serviceAccount.name }}-secret-admin-binding"
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "-5"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Values.serviceAccount.roleName }}
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
20 changes: 20 additions & 0 deletions helm/terraform-bucket-provision/templates/terraform-role.yaml
@@ -0,0 +1,20 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Values.serviceAccount.roleName }}
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "-10"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: [
"create",
"delete",
"deletecollection",
"get",
"list",
"patch",
"update",
"watch",
]
@@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.serviceAccount.name }}
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "-10"
98 changes: 98 additions & 0 deletions helm/terraform-bucket-provision/terraform/main.tf
@@ -0,0 +1,98 @@
terraform {
required_version = ">=1.4.6"

required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.23"
}
google = {
source = "hashicorp/google"
version = "~> 5.2.0"
}
}

backend "gcs" {}
}

# Configure OCP infrastructure to setup the host and authentication token
provider "kubernetes" {
host = var.kubernetes_host
token = var.kubernetes_token
}

# Configure GCP infrastructure to setup the credentials, default project and location (zone and/or region) for your resources
provider "google" {
project = var.project_id
region = local.region
}

# Create GCS buckets
resource "google_storage_bucket" "bucket" {
for_each = { for v in var.apps : v => v }
name = "${var.openshift_namespace}-${each.value}"
location = local.region
}

# Create GCP service accounts for each GCS bucket
resource "google_service_account" "account" {
for_each = { for v in var.apps : v => v }
account_id = "sa-${var.openshift_namespace}-${each.value}"
display_name = "${var.openshift_namespace}-${each.value} Service Account"
depends_on = [google_storage_bucket.bucket]
}

# Assign Storage Admin role for the corresponding service accounts
resource "google_storage_bucket_iam_member" "admin" {
for_each = { for v in var.apps : v => v }
bucket = "${var.openshift_namespace}-${each.value}"
role = "roles/storage.admin"
member = "serviceAccount:${google_service_account.account[each.key].email}"
depends_on = [google_service_account.account]
}

# Create viewer GCP service accounts for each GCS bucket
resource "google_service_account" "viewer_account" {
for_each = { for v in var.apps : v => v }
account_id = "ro-${var.openshift_namespace}-${each.value}"
display_name = "${var.openshift_namespace}-${each.value} Viewer Service Account"
depends_on = [google_storage_bucket.bucket]
}

# Assign (manually created) Storage Viewer role for the corresponding service accounts
resource "google_storage_bucket_iam_member" "viewer" {
for_each = { for v in var.apps : v => v }
bucket = "${var.openshift_namespace}-${each.value}"
role = "projects/${var.project_id}/roles/${var.iam_storage_role_template_id}"
member = "serviceAccount:${google_service_account.viewer_account[each.key].email}"
depends_on = [google_service_account.viewer_account]
}

# Create keys for the service accounts
resource "google_service_account_key" "key" {
for_each = { for v in var.apps : v => v }
service_account_id = google_service_account.account[each.key].name
}

# Create keys for the viewer service accounts
resource "google_service_account_key" "viewer_key" {
for_each = { for v in var.apps : v => v }
service_account_id = google_service_account.viewer_account[each.key].name
}

resource "kubernetes_secret" "secret_sa" {
for_each = { for v in var.apps : v => v }
metadata {
name = "gcp-${var.openshift_namespace}-${each.value}-service-account-key"
namespace = var.openshift_namespace
labels = {
created-by = "Terraform"
}
}

data = {
"bucket_name" = "${var.openshift_namespace}-${each.value}"
"credentials.json" = base64decode(google_service_account_key.key[each.key].private_key)
"viewer_credentials.json" = base64decode(google_service_account_key.viewer_key[each.key].private_key)
}
}