Release/node app #7

Open · wants to merge 18 commits into base: master
13 changes: 13 additions & 0 deletions Dockerfile
@@ -0,0 +1,13 @@
# Stage 1: Install dependencies
FROM node:14 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install

# Stage 2: Copy files and run the application
FROM node:14
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
110 changes: 110 additions & 0 deletions README_DevOps.md
@@ -0,0 +1,110 @@
# DevOps Practical: Dockerizing and Deploying the Application with Security and Scalability

This repository provides a step-by-step guide to dockerizing the application from the [DevOps Practical GitHub repository](https://github.com/swimlane/devops-practical), deploying MongoDB as a Docker container, and deploying the application to a Kubernetes cluster using Helm charts. Additionally, it includes security and scalability enhancements.

## Prerequisites

Ensure you have the following installed on your local machine:
- Docker
- A Kubernetes cluster (provisioned here as an Amazon EKS cluster via Terraform)
- Helm (v3)
- kubectl
- Terraform
- AWS CLI

## Dockerizing the Application

**Dockerfile**
- A multi-stage Dockerfile containerizes the application: the first stage installs dependencies, the second copies them in alongside the source and runs the app.

Build the image:
docker build -t devops-practical:latest .

## Deploy MongoDB as a Docker Container

**Commands**
- Start MongoDB in its own container: docker run -d --name mongodb -p 27017:27017 mongo
- Run the application container against it (the app reads the MongoDB connection string from the MONGO_URI environment variable; see the network sketch below): docker run -p 3000:3000 devops-practical:latest
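A minimal sketch of wiring the two containers together on a user-defined bridge network (the network name `app-net` is illustrative, not part of the PR):

```bash
# shared network so the containers can resolve each other by name
docker network create app-net

# MongoDB container
docker run -d --name mongodb --network app-net -p 27017:27017 mongo

# application container, pointing MONGO_URI at the MongoDB container
docker run --network app-net -p 3000:3000 \
  -e MONGO_URI=mongodb://mongodb:27017/mydatabase \
  devops-practical:latest
```

The docker-compose.yml added in this PR wires the same pieces together (app, MongoDB, mongo-express); `docker compose up --build` brings everything up with one command.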


## Creating a Kubernetes Cluster Using Terraform

Navigate to the `terraform` directory, then initialize and apply the Terraform configuration.

### Directory Structure for Terraform

**main.tf**
- Contains the main configuration for provisioning the infrastructure, including VPC, subnets, security groups, EKS cluster, IAM roles, and node groups.

**outputs.tf**
- Defines the outputs for your infrastructure, such as the kubeconfig file and cluster endpoint.

**providers.tf**
- Configures the AWS and Kubernetes providers.

**variables.tf**
- Contains variables used in the Terraform configuration.

**vpc.tf**
- Defines the VPC and subnet configurations.

### Commands

1. Initialize Terraform:
terraform init

2. Apply the Terraform configuration:
terraform apply
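
It is usually worth reviewing the plan before applying; a typical sequence (a sketch, not required by the guide) is:

```bash
terraform plan -out=tfplan   # review the VPC, EKS, and IAM changes
terraform apply tfplan       # apply exactly what was reviewed
```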

### Configure kubectl

Configure `kubectl` to use the new EKS cluster:
aws eks --region <your-region> update-kubeconfig --name eks-cluster
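
A quick way to confirm that `kubectl` is pointed at the new cluster (assuming the node group is up):

```bash
kubectl cluster-info
kubectl get nodes   # the EKS node group workers should report Ready
```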

## Deploying the Application Using Helm

Navigate to the `swimlane-app` directory and deploy the application using Helm.

### Directory Structure for Helm

**Chart.yaml**
- Contains Helm chart metadata.

**values.yaml**
- Defines default values for the Helm chart.

**templates/_helpers.tpl**
- Placeholder for template helper functions.

**templates/deployment.yaml**
- Defines the Kubernetes deployment for the application.

**templates/hpa.yaml**
- Configures Horizontal Pod Autoscaler for automatic scaling based on CPU utilization.

**templates/mongodb-deployment.yaml**
- Defines the Kubernetes deployment for MongoDB.

**templates/mongodb-service.yaml**
- Defines the Kubernetes service for MongoDB.

**templates/networkpolicy.yaml**
- Configures network policies to control traffic between pods for better security.

**templates/service.yaml**
- Defines the Kubernetes service for the application.

### Commands

#### Deploy the application and MongoDB using Helm:
helm install devops-practical ./swimlane-app
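
To confirm the release and the workloads it created (standard Helm/kubectl commands, shown as a sketch):

```bash
helm list              # the devops-practical release should show as deployed
kubectl get pods,svc   # app and MongoDB pods plus their services
```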

#### Apply Horizontal Pod Autoscaler:
kubectl apply -f ./swimlane-app/templates/hpa.yaml
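
Note that a CPU-based HPA only starts scaling once the metrics pipeline is available; on a fresh cluster this typically means installing metrics-server and giving the app container CPU requests. To check its state:

```bash
kubectl get hpa devops-practical-hpa   # TARGETS should show a percentage, not <unknown>
kubectl top pods                       # requires metrics-server
```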

#### Apply Network Policy:
kubectl apply -f ./swimlane-app/templates/networkpolicy.yaml

#### Accessing the Application:
To access the application, use the service exposed by Kubernetes. The chart's default is a NodePort service (port 80, nodePort 30080); it can be switched to a LoadBalancer by setting `service.type` in values.yaml.
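
With the chart defaults, one way to reach the app is via a node's address and the NodePort (a sketch; the exact address depends on your nodes and security group rules):

```bash
kubectl get svc             # note the NodePort mapped to port 80 (30080 by default)
kubectl get nodes -o wide   # pick a node's external IP
# then browse to http://<node-external-ip>:30080
```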


This guide provides steps to dockerize the application, deploy MongoDB as a Docker container, create an Amazon EKS cluster on AWS using Terraform, and deploy the application using Helm charts. It also includes security and scalability enhancements such as dedicated IAM roles for the EKS cluster and node group, KMS encryption of Kubernetes secrets, network policies, and horizontal pod autoscaling.
3 changes: 3 additions & 0 deletions config/config.js
@@ -0,0 +1,3 @@
module.exports = {
db: process.env.MONGO_URI || 'mongodb://localhost:27017/mydatabase'
};
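
Since server.js loads dotenv and docker-compose.yml references ${MONGO_ADMIN_USER}/${MONGO_ADMIN_PASS}, a local .env file along these lines (values are illustrative, not part of the PR) keeps settings out of the compose file:

```bash
# .env — illustrative values, do not commit real credentials
PORT=3000
MONGO_URI=mongodb://mongodb:27017/mydatabase
MONGO_ADMIN_USER=admin
MONGO_ADMIN_PASS=changeme
```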
47 changes: 47 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,47 @@
services:
  app:
    build: .
    container_name: node_app
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
    environment:
      - PORT=3000
      - MONGO_URI=mongodb://mongodb:27017/mydatabase
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: ["node", "server.js"]

  mongodb:
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_ADMIN_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_ADMIN_PASS}
    volumes:
      - mongo_data:/data/db
    healthcheck:
      test: ["CMD-SHELL", "mongosh --eval 'db.runCommand(\"ping\").ok'"]
      interval: 10s
      timeout: 10s
      retries: 5

  mongo-express:
    image: mongo-express
    container_name: mongo_express
    restart: always
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_ADMIN_USER}
      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_ADMIN_PASS}
      - ME_CONFIG_MONGODB_SERVER=mongodb
    depends_on:
      - mongodb

volumes:
  mongo_data:
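
The PR also adds scripts/wait-for-it.sh; one way to make the app wait for MongoDB before starting (a sketch — the script is available through the bind mount, and this assumes it has been made executable) is to override the app service's command:

```yaml
    command: ["./scripts/wait-for-it.sh", "mongodb:27017", "--timeout=30", "--strict", "--", "node", "server.js"]
```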
158 changes: 158 additions & 0 deletions scripts/wait-for-it.sh
@@ -0,0 +1,158 @@
#!/usr/bin/env bash
# Use this script to test if a given TCP host/port are available
# Source: https://github.com/vishnubob/wait-for-it

WAITFORIT_cmdname=${0##*/}

echoerr() {
if [[ $WAITFORIT_QUIET -ne 1 ]]; then
echo "$@" 1>&2;
fi
}

usage() {
cat << USAGE >&2
Usage:
$WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args]
-h HOST | --host=HOST Host or IP under test
-p PORT | --port=PORT TCP port under test
Alternatively, you specify the host and port as host:port
-s | --strict Only execute subcommand if the test succeeds
-q | --quiet Don't output any status messages
-t TIMEOUT | --timeout=TIMEOUT
Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
USAGE
exit 1
}

wait_for() {
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
else
echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
fi
WAITFORIT_start_ts=$(date +%s)
while :
do
if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
nc -z $WAITFORIT_HOST $WAITFORIT_PORT
WAITFORIT_result=$?
else
(echo > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
WAITFORIT_result=$?
fi
if [[ $WAITFORIT_result -eq 0 ]]; then
WAITFORIT_end_ts=$(date +%s)
echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
break
fi
sleep 1
done
return $WAITFORIT_result
}

wait_for_wrapper() {
# In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
if [[ $WAITFORIT_QUIET -eq 1 ]]; then
timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT bash $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
else
timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT bash $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
fi
WAITFORIT_PID=$!
trap "kill -INT -$WAITFORIT_PID" INT
wait $WAITFORIT_PID
WAITFORIT_RESULT=$?
if [[ $WAITFORIT_RESULT -ne 0 ]]; then
echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
fi
return $WAITFORIT_RESULT
}

# process arguments
while [[ $# -gt 0 ]]
do
case "$1" in
*:* )
WAITFORIT_hostport=(${1//:/ })
WAITFORIT_HOST=${WAITFORIT_hostport[0]}
WAITFORIT_PORT=${WAITFORIT_hostport[1]}
shift 1
;;
-h | --host)
WAITFORIT_HOST="$2"
if [[ $WAITFORIT_HOST == "" ]]; then break; fi
shift 2
;;
-p | --port)
WAITFORIT_PORT="$2"
if [[ $WAITFORIT_PORT == "" ]]; then break; fi
shift 2
;;
-q | --quiet)
WAITFORIT_QUIET=1
shift 1
;;
-s | --strict)
WAITFORIT_STRICT=1
shift 1
;;
-t | --timeout)
WAITFORIT_TIMEOUT="$2"
if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
shift 2
;;
--child)
WAITFORIT_CHILD=1
shift 1
;;
--)
shift
WAITFORIT_CLI=("$@")
break
;;
-*)
echoerr "Unknown flag: $1"
usage
;;
*)
echoerr "Unknown argument: $1"
usage
;;
esac
done

if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then
echoerr "Error: you need to provide a host and port to test."
usage
fi

WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}
WAITFORIT_STRICT=${WAITFORIT_STRICT:-0}
WAITFORIT_QUIET=${WAITFORIT_QUIET:-0}
WAITFORIT_ISBUSY=$(which busybox nc)
WAITFORIT_BUSYTIMEFLAG="-t"

if [[ $WAITFORIT_CHILD -gt 0 ]]; then
wait_for
WAITFORIT_RESULT=$?
exit $WAITFORIT_RESULT
else
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
wait_for_wrapper
WAITFORIT_RESULT=$?
else
wait_for
WAITFORIT_RESULT=$?
fi
fi

if [[ $WAITFORIT_CLI != "" ]]; then
if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
exit $WAITFORIT_RESULT
fi
exec "${WAITFORIT_CLI[@]}"
else
exit $WAITFORIT_RESULT
fi
46 changes: 23 additions & 23 deletions server.js
@@ -1,40 +1,24 @@
'use strict';

/*
* nodejs-express-mongoose-demo
* Copyright(c) 2013 Madhusudhan Srinivasa <madhums8@gmail.com>
* MIT Licensed
*/

/**
* Module dependencies
*/

require('dotenv').config();

const fs = require('fs');
const join = require('path').join;
const express = require('express');
const mongoose = require('mongoose');
const passport = require('passport');
const config = require('./config');
const config = require('./config/config'); // Ensure this path is correct

const models = join(__dirname, 'app/models');
const port = process.env.PORT || 3000;
const app = express();

/**
* Expose
*/

module.exports = app;

// Bootstrap models
fs.readdirSync(models)
.filter(file => ~file.search(/^[^.].*\.js$/))
.forEach(file => require(join(models, file)));

// Bootstrap routes
require('./config/passport')(passport);
require('./config/express')(app, passport);
require('./config/routes')(app, passport);
@@ -43,18 +27,34 @@ connect();

function listen() {
if (app.get('env') === 'test') return;
app.listen(port);
console.log('Express app started on port ' + port);
app.listen(port, () => {
console.log('Express app started on port ' + port);
});
}

function connect() {
function connect(retries = 5, delay = 5000) {
const mongoURI = process.env.MONGO_URI || config.db;
console.log(`Connecting to MongoDB at ${mongoURI}`);

mongoose.set('debug', true);

mongoose.connection
.on('error', console.log)
.on('error', (err) => {
console.error('MongoDB connection error:', err);
if (retries === 0) {
console.error('Exhausted all retries. Could not connect to MongoDB.');
process.exit(1);
}
setTimeout(() => connect(retries - 1, delay), delay);
})
.on('disconnected', connect)
.once('open', listen);
return mongoose.connect(config.db, {

mongoose.connect(mongoURI, {
keepAlive: 1,
useNewUrlParser: true,
useUnifiedTopology: true
}).catch(err => {
console.error('Initial MongoDB connection error:', err);
});
}
}
4 changes: 4 additions & 0 deletions swimlane-app/Chart.yaml
@@ -0,0 +1,4 @@
apiVersion: v2
name: swimlane-app
description: A Helm chart for deploying the Node.js swimlane-app with MongoDB
version: 0.1.0
45 changes: 45 additions & 0 deletions swimlane-app/templates/_helpers.tpl
@@ -0,0 +1,45 @@
{{/*
Return the name of the chart
*/}}
{{- define "swimlane-app.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Return the full name of the chart
*/}}
{{- define "swimlane-app.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}

{{/*
Create standard labels
*/}}
{{- define "swimlane-app.labels" -}}
helm.sh/chart: {{ include "swimlane-app.chart" . }}
{{ include "swimlane-app.selectorLabels" . }}
{{- with .Chart.AppVersion }}
app.kubernetes.io/version: {{ . | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}

{{/*
Create standard selector labels
*/}}
{{- define "swimlane-app.selectorLabels" -}}
app.kubernetes.io/name: {{ include "swimlane-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

{{/*
Create the name of the chart
*/}}
{{- define "swimlane-app.chart" -}}
{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
{{- end -}}
25 changes: 25 additions & 0 deletions swimlane-app/templates/deployment.yaml
@@ -0,0 +1,25 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "swimlane-app.fullname" . }}
  labels:
    {{- include "swimlane-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "swimlane-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "swimlane-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 3000
          env:
            - name: MONGO_URI
              value: mongodb://{{ .Release.Name }}-mongo:27017/mydatabase  # matches the MongoDB service name rendered by this chart
18 changes: 18 additions & 0 deletions swimlane-app/templates/hpa.yaml
@@ -0,0 +1,18 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: devops-practical-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: devops-practical-swimlane-app  # fullname rendered by the chart for release "devops-practical"
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
27 changes: 27 additions & 0 deletions swimlane-app/templates/mongodb-deployment.yaml
@@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-mongo
  labels:
    app: {{ .Release.Name }}-mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-mongo
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-mongo
    spec:
      containers:
        - name: mongo
          image: "{{ .Values.mongodb.image.repository }}:{{ .Values.mongodb.image.tag }}"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
      volumes:
        - name: mongo-data
          emptyDir: {}
11 changes: 11 additions & 0 deletions swimlane-app/templates/mongodb-service.yaml
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-mongo
  labels:
    app: {{ .Release.Name }}-mongo
spec:
  ports:
    - port: 27017
  selector:
    app: {{ .Release.Name }}-mongo
21 changes: 21 additions & 0 deletions swimlane-app/templates/networkpolicy.yaml
@@ -0,0 +1,21 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traffic
spec:
  podSelector:
    matchLabels:
      app: devops-practical
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: devops-practical
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: devops-practical
14 changes: 14 additions & 0 deletions swimlane-app/templates/service.yaml
@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "swimlane-app.fullname" . }}
  labels:
    {{- include "swimlane-app.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 3000
      nodePort: {{ .Values.service.nodePort | default 30080 }}
  selector:
    {{- include "swimlane-app.selectorLabels" . | nindent 4 }}
38 changes: 38 additions & 0 deletions swimlane-app/values.yaml
@@ -0,0 +1,38 @@
replicaCount: 1

image:
  repository: public.ecr.aws/t9i7n6c5/swimlane-app
  tag: latest
  pullPolicy: IfNotPresent

mongodb:
  image:
    repository: mongo
    tag: latest
    pullPolicy: IfNotPresent
  service:
    port: 27017

service:
  type: NodePort
  port: 80
  nodePort: 30080

ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}
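
Any of these defaults can be overridden at install time; for example (the tag and replica count here are illustrative):

```bash
helm upgrade --install devops-practical ./swimlane-app \
  --set replicaCount=2 \
  --set image.tag=latest \
  --set service.nodePort=30080
```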
124 changes: 124 additions & 0 deletions terraform/main.tf
@@ -0,0 +1,124 @@
provider "aws" {
region = var.aws_region
}

resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = element(data.aws_availability_zones.available.names, count.index)
}

resource "aws_security_group" "eks" {
vpc_id = aws_vpc.main.id

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_eks_cluster" "eks_cluster" {
name = "eks-cluster"
role_arn = aws_iam_role.eks_cluster_role.arn

vpc_config {
subnet_ids = aws_subnet.subnet[*].id
}

encryption_config {
provider {
key_arn = aws_kms_key.eks_key.arn
}
resources = ["secrets"]
}

depends_on = [aws_iam_role_policy_attachment.eks_cluster_AmazonEKSClusterPolicy]
}

resource "aws_iam_role" "eks_cluster_role" {
name = "eks_cluster_role"

assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "eks.amazonaws.com"
}
},
]
})
}

resource "aws_iam_role_policy_attachment" "eks_cluster_AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}

resource "aws_eks_node_group" "eks_node_group" {
cluster_name = aws_eks_cluster.eks_cluster.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_node_role.arn
subnet_ids = aws_subnet.subnet[*].id

scaling_config {
desired_size = 2
max_size = 5
min_size = 1
}

depends_on = [aws_eks_cluster.eks_cluster]
}

resource "aws_iam_role" "eks_node_role" {
name = "eks_node_role"

assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}

resource "aws_iam_role_policy_attachment" "eks_node_AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node_role.name
}

resource "aws_iam_role_policy_attachment" "eks_node_AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node_role.name
}

resource "aws_iam_role_policy_attachment" "eks_node_AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node_role.name
}

data "aws_availability_zones" "available" {}

resource "aws_kms_key" "eks_key" {
description = "KMS key for EKS secrets encryption"
enable_key_rotation = true
}

resource "local_file" "kubeconfig" {
content = aws_eks_cluster.eks_cluster.kubeconfig[0].value
filename = "kubeconfig_eks.yml"
}
11 changes: 11 additions & 0 deletions terraform/outputs.tf
@@ -0,0 +1,11 @@
output "kubeconfig" {
value = aws_eks_cluster.eks_cluster.kubeconfig[0].value
}

output "cluster_endpoint" {
value = aws_eks_cluster.eks_cluster.endpoint
}

output "cluster_name" {
value = aws_eks_cluster.eks_cluster.name
}
13 changes: 13 additions & 0 deletions terraform/providers.tf
@@ -0,0 +1,13 @@
provider "aws" {
region = var.aws_region
}

provider "kubernetes" {
host = aws_eks_cluster.eks_cluster.endpoint
token = data.aws_eks_cluster_auth.eks_cluster_auth.token
cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority.0.data)
}

data "aws_eks_cluster_auth" "eks_cluster_auth" {
name = aws_eks_cluster.eks_cluster.name
}
4 changes: 4 additions & 0 deletions terraform/variables.tf
@@ -0,0 +1,4 @@
variable "aws_region" {
description = "The AWS region to deploy the infrastructure in"
default = "us-west-2"
}
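
The region can also be overridden per run instead of editing the file, e.g.:

```bash
terraform apply -var="aws_region=us-east-1"
```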
12 changes: 12 additions & 0 deletions terraform/vpc.tf
@@ -0,0 +1,12 @@
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = element(data.aws_availability_zones.available.names, count.index)
}

data "aws_availability_zones" "available" {}