
Support for Mixed Instances ASG in worker_groups_launch_template variable (terraform-aws-modules#468)

* Create ASG tags via the `for` expression from Terraform 0.12

* Updated support for mixed ASG in worker_groups_launch_template variable

* Updated launch_template example to include spot and mixed ASG with worker_groups_launch_template variable

* Removed old config

* Removed workers_launch_template_mixed.tf file, added support for mixed/spot in workers_launch_template variable

* Updated examples/spot_instances/main.tf with mixed Spot and on-demand instances

* Removed launch_template_mixed from relevant files

* Updated README.md file

* Removed workers_launch_template.tf.bkp

* Fixed case with null on_demand_allocation_strategy and Spot allocation

* Fixed workers_launch_template.tf, covered spot instances via Launch Template
sppwf authored and max-rocket-internet committed Sep 13, 2019
1 parent a47f464 commit 461cf54
Showing 12 changed files with 97 additions and 485 deletions.
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -19,6 +19,9 @@ project adheres to [Semantic Versioning](http://semver.org/).
- Added support for initial lifecycle hooks for autoscaling groups (@barryib)
- Added option to recreate ASG when LT or LC changes (by @barryib)
- Ability to specify workers role name (by @ivanich)
- Added support for Mixed Instance ASG using `worker_groups_launch_template` variable (by @sppwf)
- Changed ASG tags generation to use the Terraform 0.12 `for` expression (by @sppwf)
- Removed `worker_groups_launch_template_mixed` variable (by @sppwf)

### Changed

13 changes: 6 additions & 7 deletions README.md
@@ -118,15 +118,15 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| cluster\_enabled\_log\_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | list(string) | `[]` | no |
| cluster\_endpoint\_private\_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | bool | `"false"` | no |
| cluster\_endpoint\_public\_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | bool | `"true"` | no |
| cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false. | string | `""` | no |
| cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage\_cluster\_iam\_resources is set to false. | string | `""` | no |
| cluster\_log\_kms\_key\_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | string | `""` | no |
| cluster\_log\_retention\_in\_days | Number of days to retain log events. Default retention - 90 days. | number | `"90"` | no |
| cluster\_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | n/a | yes |
| cluster\_security\_group\_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | string | `""` | no |
| cluster\_version | Kubernetes version to use for the EKS cluster. | string | `"1.14"` | no |
| config\_output\_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Should end in a forward slash `/` . | string | `"./"` | no |
| iam\_path | If provided, all IAM roles will be created on this path. | string | `"/"` | no |
| kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list(string) | `[]` | no |
| kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. \["-r", "MyEksRole"\]. | list(string) | `[]` | no |
| kubeconfig\_aws\_authenticator\_command | Command to use to fetch AWS EKS credentials. | string | `"aws-iam-authenticator"` | no |
| kubeconfig\_aws\_authenticator\_command\_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]. | list(string) | `[]` | no |
| kubeconfig\_aws\_authenticator\_env\_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map(string) | `{}` | no |
@@ -144,15 +144,14 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| tags | A map of tags to add to all resources. | map(string) | `{}` | no |
| vpc\_id | VPC where the cluster and workers will be deployed. | string | n/a | yes |
| worker\_additional\_security\_group\_ids | A list of additional security group ids to attach to worker instances | list(string) | `[]` | no |
| worker\_ami\_name\_filter | Additional name filter for AWS EKS worker AMI. Default behaviour will get latest for the cluster_version but could be set to a release from amazon-eks-ami, e.g. "v20190220" | string | `"v*"` | no |
| worker\_ami\_name\_filter | Additional name filter for AWS EKS worker AMI. Default behaviour will get latest for the cluster\_version but could be set to a release from amazon-eks-ami, e.g. "v20190220" | string | `"v*"` | no |
| worker\_create\_security\_group | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | bool | `"true"` | no |
| worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys. | any | `[]` | no |
| worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | any | `[]` | no |
| worker\_groups\_launch\_template\_mixed | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | any | `[]` | no |
| worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers\_group\_defaults for valid keys. | any | `[]` | no |
| worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers\_group\_defaults for valid keys. | any | `[]` | no |
| worker\_security\_group\_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `""` | no |
| worker\_sg\_ingress\_from\_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | number | `"1025"` | no |
| workers\_additional\_policies | Additional policies to be added to workers | list(string) | `[]` | no |
| workers\_group\_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys. | any | `{}` | no |
| workers\_group\_defaults | Override default values for target groups. See workers\_group\_defaults\_defaults in local.tf for valid keys. | any | `{}` | no |
| write\_aws\_auth\_config | Whether to write the aws-auth configmap file. | bool | `"true"` | no |
| write\_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | bool | `"true"` | no |
16 changes: 0 additions & 16 deletions aws_auth.tf
@@ -35,21 +35,6 @@ EOS
data "aws_caller_identity" "current" {
}

data "template_file" "launch_template_mixed_worker_role_arns" {
count = local.worker_group_launch_template_mixed_count
template = file("${path.module}/templates/worker-role.tpl")

vars = {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(
coalescelist(
aws_iam_instance_profile.workers_launch_template_mixed.*.role,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_name,
),
count.index,
)}"
}
}

data "template_file" "launch_template_worker_role_arns" {
count = local.worker_group_launch_template_count
template = file("${path.module}/templates/worker-role.tpl")
@@ -91,7 +76,6 @@ data "template_file" "config_map_aws_auth" {
concat(
data.template_file.launch_template_worker_role_arns.*.rendered,
data.template_file.worker_role_arns.*.rendered,
data.template_file.launch_template_mixed_worker_role_arns.*.rendered,
),
),
)
41 changes: 0 additions & 41 deletions data.tf
@@ -147,37 +147,6 @@ data "template_file" "launch_template_userdata" {
}
}

data "template_file" "workers_launch_template_mixed" {
count = local.worker_group_launch_template_mixed_count
template = file("${path.module}/templates/userdata.sh.tpl")

vars = {
cluster_name = aws_eks_cluster.this.name
endpoint = aws_eks_cluster.this.endpoint
cluster_auth_base64 = aws_eks_cluster.this.certificate_authority[0].data
pre_userdata = lookup(
var.worker_groups_launch_template_mixed[count.index],
"pre_userdata",
local.workers_group_defaults["pre_userdata"],
)
additional_userdata = lookup(
var.worker_groups_launch_template_mixed[count.index],
"additional_userdata",
local.workers_group_defaults["additional_userdata"],
)
bootstrap_extra_args = lookup(
var.worker_groups_launch_template_mixed[count.index],
"bootstrap_extra_args",
local.workers_group_defaults["bootstrap_extra_args"],
)
kubelet_extra_args = lookup(
var.worker_groups_launch_template_mixed[count.index],
"kubelet_extra_args",
local.workers_group_defaults["kubelet_extra_args"],
)
}
}

data "aws_iam_role" "custom_cluster_iam_role" {
count = var.manage_cluster_iam_resources ? 0 : 1
name = var.cluster_iam_role_name
@@ -200,13 +169,3 @@ data "aws_iam_instance_profile" "custom_worker_group_launch_template_iam_instanc
local.workers_group_defaults["iam_instance_profile_name"],
)
}

data "aws_iam_instance_profile" "custom_worker_group_launch_template_mixed_iam_instance_profile" {
count = var.manage_worker_iam_resources ? 0 : local.worker_group_launch_template_mixed_count
name = lookup(
var.worker_groups_launch_template_mixed[count.index],
"iam_instance_profile_name",
local.workers_group_defaults["iam_instance_profile_name"],
)
}

1 change: 1 addition & 0 deletions examples/launch_templates/pre_userdata.sh
@@ -0,0 +1 @@
yum update -y
2 changes: 1 addition & 1 deletion examples/spot_instances/main.tf
@@ -56,7 +56,7 @@ module "eks" {
subnets = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id

worker_groups_launch_template_mixed = [
worker_groups_launch_template = [
{
name = "spot-1"
override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
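The diff above only shows the first lines of the renamed group. For orientation, a fuller `worker_groups_launch_template` entry combining an on-demand base with Spot overflow might look like the following sketch; the capacities, instance types, and kubelet label are illustrative assumptions, not part of this commit:

```hcl
# Hypothetical mixed on-demand/Spot worker group. Key names follow
# workers_group_defaults in local.tf; all values here are examples.
worker_groups_launch_template = [
  {
    name                                     = "mixed-1"
    override_instance_types                  = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
    spot_instance_pools                      = 4
    on_demand_base_capacity                  = "1" # keep one on-demand node
    on_demand_percentage_above_base_capacity = "0" # everything above base is Spot
    asg_max_size                             = 5
    kubelet_extra_args                       = "--node-labels=kubernetes.io/lifecycle=spot"
  }
]
```

Setting `override_instance_types` is what routes this group through the new `mixed_instances_policy` branch instead of a plain launch template.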
16 changes: 11 additions & 5 deletions local.tf
@@ -1,5 +1,12 @@
locals {
asg_tags = null_resource.tags_as_list_of_maps.*.triggers
asg_tags = [
for item in keys(var.tags) :
map(
"key", item,
"value", element(values(var.tags), index(keys(var.tags), item)),
"propagate_at_launch", "true"
)
]

cluster_security_group_id = var.cluster_create_security_group ? aws_security_group.cluster[0].id : var.cluster_security_group_id
cluster_iam_role_name = var.manage_cluster_iam_resources ? aws_iam_role.cluster[0].name : var.cluster_iam_role_name
@@ -9,9 +16,8 @@ locals {
default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
kubeconfig_name = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name

worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)
worker_group_launch_template_mixed_count = length(var.worker_groups_launch_template_mixed)
worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)

workers_group_defaults_defaults = {
name = "count.index" # Name of the worker group. Literal count.index will never be used but if name is not set, the count.index interpolation will be used.
@@ -61,7 +67,7 @@ locals {
market_type = null
# Settings for launch templates with mixed instances policy
override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"] # A list of override instance types for mixed instances policy
on_demand_allocation_strategy = "prioritized" # Strategy to use when launching on-demand instances. Valid values: prioritized.
on_demand_allocation_strategy = null # Strategy to use when launching on-demand instances. Valid values: prioritized.
on_demand_base_capacity = "0" # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances
on_demand_percentage_above_base_capacity = "0" # Percentage split between on-demand and Spot instances above the base on-demand capacity
spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
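The new `asg_tags` expression in this file replaces the earlier `null_resource.tags_as_list_of_maps` workaround: for each entry in the `tags` map it builds one ASG tag map. A standalone sketch of the equivalent transformation, using the simpler `for k, v` object form rather than the module's `map()`/`index()` variant, can be evaluated in `terraform console` under Terraform 0.12 (the variable values are illustrative):

```hcl
# Standalone illustration of the asg_tags "for" expression added above.
variable "tags" {
  default = {
    Environment = "test"
    Team        = "platform"
  }
}

locals {
  asg_tags = [
    for k, v in var.tags : {
      key                 = k
      value               = v
      propagate_at_launch = "true"
    }
  ]
}

# local.asg_tags is then a list of maps such as:
# [{ key = "Environment", value = "test", propagate_at_launch = "true" }, ...]
```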
4 changes: 0 additions & 4 deletions outputs.tf
@@ -63,7 +63,6 @@ output "workers_asg_arns" {
value = concat(
aws_autoscaling_group.workers.*.arn,
aws_autoscaling_group.workers_launch_template.*.arn,
aws_autoscaling_group.workers_launch_template_mixed.*.arn,
)
}

@@ -72,7 +71,6 @@ output "workers_asg_names" {
value = concat(
aws_autoscaling_group.workers.*.id,
aws_autoscaling_group.workers_launch_template.*.id,
aws_autoscaling_group.workers_launch_template_mixed.*.id,
)
}

@@ -125,7 +123,6 @@ output "worker_iam_role_name" {
aws_iam_role.workers.*.name,
data.aws_iam_instance_profile.custom_worker_group_iam_instance_profile.*.role_name,
data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_name,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_name,
[""]
)[0]
}
@@ -136,7 +133,6 @@ output "worker_iam_role_arn" {
aws_iam_role.workers.*.arn,
data.aws_iam_instance_profile.custom_worker_group_iam_instance_profile.*.role_arn,
data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_arn,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_arn,
[""]
)[0]
}
6 changes: 0 additions & 6 deletions variables.tf
@@ -114,12 +114,6 @@ variable "worker_groups_launch_template" {
default = []
}

variable "worker_groups_launch_template_mixed" {
description = "A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys."
type = any
default = []
}

variable "worker_security_group_id" {
description = "If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster."
type = string
10 changes: 0 additions & 10 deletions workers.tf
@@ -359,16 +359,6 @@ resource "aws_iam_role_policy_attachment" "workers_additional_policies" {
policy_arn = var.workers_additional_policies[count.index]
}

resource "null_resource" "tags_as_list_of_maps" {
count = length(keys(var.tags))

triggers = {
key = keys(var.tags)[count.index]
value = values(var.tags)[count.index]
propagate_at_launch = "true"
}
}

resource "aws_iam_role_policy_attachment" "workers_autoscaling" {
count = var.manage_worker_iam_resources ? 1 : 0
policy_arn = aws_iam_policy.worker_autoscaling[0].arn
82 changes: 75 additions & 7 deletions workers_launch_template.tf
@@ -73,13 +73,81 @@ resource "aws_autoscaling_group" "workers_launch_template" {
local.workers_group_defaults["termination_policies"]
)

launch_template {
id = aws_launch_template.workers_launch_template.*.id[count.index]
version = lookup(
var.worker_groups_launch_template[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
dynamic "mixed_instances_policy" {
iterator = item
for_each = (lookup(var.worker_groups_launch_template[count.index], "override_instance_types", null) != null) || (lookup(var.worker_groups_launch_template[count.index], "on_demand_allocation_strategy", null) != null) ? list(var.worker_groups_launch_template[count.index]) : []

content {
instances_distribution {
on_demand_allocation_strategy = lookup(
item.value,
"on_demand_allocation_strategy",
"prioritized",
)
on_demand_base_capacity = lookup(
item.value,
"on_demand_base_capacity",
local.workers_group_defaults["on_demand_base_capacity"],
)
on_demand_percentage_above_base_capacity = lookup(
item.value,
"on_demand_percentage_above_base_capacity",
local.workers_group_defaults["on_demand_percentage_above_base_capacity"],
)
spot_allocation_strategy = lookup(
item.value,
"spot_allocation_strategy",
local.workers_group_defaults["spot_allocation_strategy"],
)
spot_instance_pools = lookup(
item.value,
"spot_instance_pools",
local.workers_group_defaults["spot_instance_pools"],
)
spot_max_price = lookup(
item.value,
"spot_max_price",
local.workers_group_defaults["spot_max_price"],
)
}

launch_template {
launch_template_specification {
launch_template_id = aws_launch_template.workers_launch_template.*.id[count.index]
version = lookup(
var.worker_groups_launch_template[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
}

dynamic "override" {
for_each = lookup(
var.worker_groups_launch_template[count.index],
"override_instance_types",
local.workers_group_defaults["override_instance_types"]
)

content {
instance_type = override.value
}
}

}
}
}
dynamic "launch_template" {
iterator = item
for_each = (lookup(var.worker_groups_launch_template[count.index], "override_instance_types", null) != null) || (lookup(var.worker_groups_launch_template[count.index], "on_demand_allocation_strategy", null) != null) ? [] : list(var.worker_groups_launch_template[count.index])

content {
id = aws_launch_template.workers_launch_template.*.id[count.index]
version = lookup(
var.worker_groups_launch_template[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
}
}
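The two `dynamic` blocks above are mutually exclusive: a worker group is rendered with a `mixed_instances_policy` when it sets `override_instance_types` or `on_demand_allocation_strategy`, and with a plain `launch_template` block otherwise. A hypothetical local (not part of the module) sketches the shared condition:

```hcl
# Sketch of the branch selection used by the two dynamic blocks above.
# True entries take the mixed_instances_policy branch; false entries
# keep an ordinary launch_template, which is how plain Spot groups
# (market_type = "spot", no overrides) are still covered.
locals {
  use_mixed = [
    for wg in var.worker_groups_launch_template :
    lookup(wg, "override_instance_types", null) != null ||
    lookup(wg, "on_demand_allocation_strategy", null) != null
  ]
}
```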

dynamic "initial_lifecycle_hook" {