Support spot-to-spot consolidation. #5377

Merged · 6 commits · Jan 13, 2024
1 change: 1 addition & 0 deletions .github/actions/e2e/install-karpenter/action.yaml
@@ -71,6 +71,7 @@ runs:
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::$ACCOUNT_ID:role/karpenter-irsa-$CLUSTER_NAME" \
--set settings.clusterName="$CLUSTER_NAME" \
--set settings.interruptionQueue="$CLUSTER_NAME" \
+ --set settings.featureGates.spotToSpotConsolidation=true \
--set controller.resources.requests.cpu=3 \
--set controller.resources.requests.memory=3Gi \
--set controller.resources.limits.cpu=3 \
1 change: 1 addition & 0 deletions Makefile
@@ -18,6 +18,7 @@ HELM_OPTS ?= --set serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn=${K
--set controller.resources.requests.memory=1Gi \
--set controller.resources.limits.cpu=1 \
--set controller.resources.limits.memory=1Gi \
+ --set settings.featureGates.spotToSpotConsolidation=true \
--create-namespace

# CR for local builds of Karpenter
7 changes: 4 additions & 3 deletions charts/karpenter/README.md
@@ -63,8 +63,8 @@ helm upgrade --install --namespace karpenter --create-namespace \
| podAnnotations | object | `{}` | Additional annotations for the pod. |
| podDisruptionBudget.maxUnavailable | int | `1` | |
| podDisruptionBudget.name | string | `"karpenter"` | |
- | podSecurityContext | object | `{"fsGroup":65536}` | SecurityContext for the pod. |
| podLabels | object | `{}` | Additional labels for the pod. |
+ | podSecurityContext | object | `{"fsGroup":65536}` | SecurityContext for the pod. |
| priorityClassName | string | `"system-cluster-critical"` | PriorityClass name for the pod. |
| replicas | int | `2` | Number of replicas. |
| revisionHistoryLimit | int | `10` | The number of old ReplicaSets to retain to allow rollback. |
@@ -74,16 +74,17 @@ helm upgrade --install --namespace karpenter --create-namespace \
| serviceMonitor.additionalLabels | object | `{}` | Additional labels for the ServiceMonitor. |
| serviceMonitor.enabled | bool | `false` | Specifies whether a ServiceMonitor should be created. |
| serviceMonitor.endpointConfig | object | `{}` | Endpoint configuration for the ServiceMonitor. |
- | settings | object | `{"assumeRoleARN":"","assumeRoleDuration":"15m","batchIdleDuration":"1s","batchMaxDuration":"10s","clusterCABundle":"","clusterEndpoint":"","clusterName":"","featureGates":{"drift":true},"interruptionQueue":"","isolatedVPC":false,"reservedENIs":"0","vmMemoryOverheadPercent":0.075}` | Global Settings to configure Karpenter |
+ | settings | object | `{"assumeRoleARN":"","assumeRoleDuration":"15m","batchIdleDuration":"1s","batchMaxDuration":"10s","clusterCABundle":"","clusterEndpoint":"","clusterName":"","featureGates":{"drift":true,"spotToSpotConsolidation":false},"interruptionQueue":"","isolatedVPC":false,"reservedENIs":"0","vmMemoryOverheadPercent":0.075}` | Global Settings to configure Karpenter |
| settings.assumeRoleARN | string | `""` | Role to assume for calling AWS services. |
| settings.assumeRoleDuration | string | `"15m"` | Duration of assumed credentials in minutes. Default value is 15 minutes. Not used unless assumeRoleARN set. |
| settings.batchIdleDuration | string | `"1s"` | The maximum amount of time with no new ending pods that if exceeded ends the current batching window. If pods arrive faster than this time, the batching window will be extended up to the maxDuration. If they arrive slower, the pods will be batched separately. |
| settings.batchMaxDuration | string | `"10s"` | The maximum length of a batch window. The longer this is, the more pods we can consider for provisioning at one time which usually results in fewer but larger nodes. |
| settings.clusterCABundle | string | `""` | Cluster CA bundle for TLS configuration of provisioned nodes. If not set, this is taken from the controller's TLS configuration for the API server. |
| settings.clusterEndpoint | string | `""` | Cluster endpoint. If not set, will be discovered during startup (EKS only) |
| settings.clusterName | string | `""` | Cluster name. |
- | settings.featureGates | object | `{"drift":true}` | Feature Gate configuration values. Feature Gates will follow the same graduation process and requirements as feature gates in Kubernetes. More information here https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features |
+ | settings.featureGates | object | `{"drift":true,"spotToSpotConsolidation":false}` | Feature Gate configuration values. Feature Gates will follow the same graduation process and requirements as feature gates in Kubernetes. More information here https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features |
| settings.featureGates.drift | bool | `true` | drift is in BETA and is enabled by default. Setting drift to false disables the drift disruption method to watch for drift between currently deployed nodes and the desired state of nodes set in nodepools and nodeclasses |
+ | settings.featureGates.spotToSpotConsolidation | bool | `false` | spotToSpotConsolidation is disabled by default. Setting this to true will enable spot replacement consolidation for both single and multi-node consolidation. |
| settings.interruptionQueue | string | `""` | interruptionQueue is disabled if not specified. Enabling interruption handling may require additional permissions on the controller service account. Additional permissions are outlined in the docs. |
| settings.isolatedVPC | bool | `false` | If true then assume we can't reach AWS services which don't have a VPC endpoint This also has the effect of disabling look-ups to the AWS pricing endpoint |
| settings.reservedENIs | string | `"0"` | Reserved ENIs are not included in the calculations for max-pods or kube-reserved This is most often used in the VPC CNI custom networking setup https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html |
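The new featureGates.spotToSpotConsolidation value is the only switch needed to opt in. As a minimal sketch of enabling the gate on an existing install (chart reference and the other --set values are illustrative; use the release and settings that match your cluster):

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter \
  --set settings.clusterName="${CLUSTER_NAME}" \
  --set settings.featureGates.spotToSpotConsolidation=true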
2 changes: 1 addition & 1 deletion charts/karpenter/templates/deployment.yaml
@@ -103,7 +103,7 @@ spec:
divisor: "0"
resource: limits.memory
- name: FEATURE_GATES
value: "Drift={{ .Values.settings.featureGates.drift }}"
value: "Drift={{ .Values.settings.featureGates.drift }},SpotToSpotConsolidation={{ .Values.settings.featureGates.spotToSpotConsolidation }}"
{{- with .Values.settings.batchMaxDuration }}
- name: BATCH_MAX_DURATION
value: "{{ . }}"
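With the chart defaults (drift: true, spotToSpotConsolidation: false), this template renders the controller env var as:

- name: FEATURE_GATES
  value: "Drift=true,SpotToSpotConsolidation=false"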
3 changes: 3 additions & 0 deletions charts/karpenter/values.yaml
@@ -202,3 +202,6 @@ settings:
# Setting drift to false disables the drift disruption method to watch for drift between currently deployed nodes
# and the desired state of nodes set in nodepools and nodeclasses
drift: true
+ # -- spotToSpotConsolidation is disabled by default.
+ # Setting this to true will enable spot replacement consolidation for both single and multi-node consolidation.
+ spotToSpotConsolidation: false
2 changes: 2 additions & 0 deletions go.mod
@@ -2,6 +2,8 @@ module github.com/aws/karpenter-provider-aws

go 1.21

+ replace sigs.k8s.io/karpenter => github.com/nikmohan123/karpenter v0.0.0-20240112055946-d36f95317847
+
require (
github.com/Pallinder/go-randomdata v1.2.0
github.com/PuerkitoBio/goquery v1.8.1
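The replace directive pins the core sigs.k8s.io/karpenter dependency to a contributor fork, a common interim step while a companion change is still pending upstream. In general, a go.mod replace rewrites module resolution without touching import paths (illustrative module paths):

replace example.com/upstream/module => github.com/fork/module v1.2.3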
4 changes: 2 additions & 2 deletions go.sum
@@ -272,6 +272,8 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
+ github.com/nikmohan123/karpenter v0.0.0-20240112055946-d36f95317847 h1:+XgS/KSXDdGDO6HZeRfRqeH+oWCHW3Xrwd43vu/H3z4=
+ github.com/nikmohan123/karpenter v0.0.0-20240112055946-d36f95317847/go.mod h1:h/O8acLmwFmYYmDD9b57+Fknlf7gQThuY19l7jpThYs=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/onsi/ginkgo/v2 v2.13.2 h1:Bi2gGVkfn6gQcjNjZJVO8Gf0FHzMPf2phUei9tejVMs=
@@ -763,8 +765,6 @@ sigs.k8s.io/controller-runtime v0.16.3 h1:2TuvuokmfXvDUamSx1SuAOO3eTyye+47mJCigw
sigs.k8s.io/controller-runtime v0.16.3/go.mod h1:j7bialYoSn142nv9sCOJmQgDXQXxnroFU4VnX/brVJ0=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
- sigs.k8s.io/karpenter v0.33.1-0.20240110172322-1fc448d0415d h1:xB/ckmh8WlR416uEI+NcgUR8+yPnEOIwjU19gvOuHZw=
- sigs.k8s.io/karpenter v0.33.1-0.20240110172322-1fc448d0415d/go.mod h1:h/O8acLmwFmYYmDD9b57+Fknlf7gQThuY19l7jpThYs=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
2 changes: 1 addition & 1 deletion pkg/cloudprovider/cloudprovider.go
@@ -240,7 +240,7 @@ func (c *CloudProvider) resolveInstanceTypes(ctx context.Context, nodeClaim *cor
reqs := scheduling.NewNodeSelectorRequirements(nodeClaim.Spec.Requirements...)
return lo.Filter(instanceTypes, func(i *cloudprovider.InstanceType, _ int) bool {
return reqs.Compatible(i.Requirements, scheduling.AllowUndefinedWellKnownLabels) == nil &&
- len(i.Offerings.Requirements(reqs).Available()) > 0 &&
+ len(i.Offerings.Compatible(reqs).Available()) > 0 &&
resources.Fits(nodeClaim.Spec.Resources.Requests, i.Allocatable())
}), nil
}
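The rename from Offerings.Requirements to Offerings.Compatible is behavior-preserving: an instance type stays a candidate only when at least one of its offerings is compatible with the NodeClaim's scheduling requirements and currently available. A minimal sketch of those semantics, using simplified stand-in types rather than the actual karpenter API:

package main

import "fmt"

// Offering is a simplified stand-in: one capacity-type/zone combination
// for an instance type, with current availability.
type Offering struct {
	CapacityType string // e.g. "spot" or "on-demand"
	Zone         string
	Available    bool
}

// compatible stands in for scheduling-requirement compatibility,
// reduced here to allowed capacity types and zones.
func compatible(o Offering, capacityTypes, zones map[string]bool) bool {
	return capacityTypes[o.CapacityType] && zones[o.Zone]
}

// hasUsableOffering mirrors len(i.Offerings.Compatible(reqs).Available()) > 0.
func hasUsableOffering(offerings []Offering, capacityTypes, zones map[string]bool) bool {
	for _, o := range offerings {
		if compatible(o, capacityTypes, zones) && o.Available {
			return true
		}
	}
	return false
}

func main() {
	offerings := []Offering{
		{CapacityType: "spot", Zone: "us-west-2a", Available: false},
		{CapacityType: "spot", Zone: "us-west-2b", Available: true},
	}
	spotOnly := map[string]bool{"spot": true}
	zones := map[string]bool{"us-west-2a": true, "us-west-2b": true}
	fmt.Println(hasUsableOffering(offerings, spotOnly, zones)) // true
}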
8 changes: 4 additions & 4 deletions pkg/providers/instance/instance.go
@@ -374,11 +374,11 @@ func orderInstanceTypesByPrice(instanceTypes []*cloudprovider.InstanceType, requ
sort.Slice(instanceTypes, func(i, j int) bool {
iPrice := math.MaxFloat64
jPrice := math.MaxFloat64
- if len(instanceTypes[i].Offerings.Available().Requirements(requirements)) > 0 {
- iPrice = instanceTypes[i].Offerings.Available().Requirements(requirements).Cheapest().Price
+ if len(instanceTypes[i].Offerings.Available().Compatible(requirements)) > 0 {
+ iPrice = instanceTypes[i].Offerings.Available().Compatible(requirements).Cheapest().Price
}
- if len(instanceTypes[j].Offerings.Available().Requirements(requirements)) > 0 {
- jPrice = instanceTypes[j].Offerings.Available().Requirements(requirements).Cheapest().Price
+ if len(instanceTypes[j].Offerings.Available().Compatible(requirements)) > 0 {
+ jPrice = instanceTypes[j].Offerings.Available().Compatible(requirements).Cheapest().Price
}
if iPrice == jPrice {
return instanceTypes[i].Name < instanceTypes[j].Name
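orderInstanceTypesByPrice sorts candidates by the cheapest available offering, using a math.MaxFloat64 sentinel to push types with no launchable offering to the end and the name as a deterministic tiebreak. A self-contained sketch with stand-in types (the Compatible(requirements) filter is folded into Available for brevity):

package main

import (
	"fmt"
	"math"
	"sort"
)

// Simplified stand-ins for the karpenter types; illustration only.
type Offering struct {
	Price     float64
	Available bool
}

type InstanceType struct {
	Name      string
	Offerings []Offering
}

// cheapestAvailable returns the lowest price among available offerings,
// or math.MaxFloat64 when none are available.
func cheapestAvailable(it InstanceType) float64 {
	price := math.MaxFloat64
	for _, o := range it.Offerings {
		if o.Available && o.Price < price {
			price = o.Price
		}
	}
	return price
}

func orderByPrice(its []InstanceType) {
	sort.Slice(its, func(i, j int) bool {
		iPrice, jPrice := cheapestAvailable(its[i]), cheapestAvailable(its[j])
		if iPrice == jPrice {
			// Deterministic ordering when prices match, as in the PR.
			return its[i].Name < its[j].Name
		}
		return iPrice < jPrice
	})
}

func main() {
	its := []InstanceType{
		{Name: "m5.xlarge", Offerings: []Offering{{Price: 0.08, Available: true}}},
		{Name: "c5.large", Offerings: []Offering{{Price: 0.03, Available: true}}},
		{Name: "r5.large", Offerings: []Offering{{Available: false}}},
	}
	orderByPrice(its)
	for _, it := range its {
		fmt.Println(it.Name) // c5.large, m5.xlarge, r5.large
	}
}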
8 changes: 4 additions & 4 deletions pkg/providers/instancetype/suite_test.go
@@ -339,8 +339,8 @@ var _ = Describe("InstanceTypes", func() {
// We need some way to deterministically order them if their prices match
reqs := scheduling.NewNodeSelectorRequirements(nodePool.Spec.Template.Spec.Requirements...)
sort.Slice(its, func(i, j int) bool {
- iPrice := its[i].Offerings.Requirements(reqs).Cheapest().Price
- jPrice := its[j].Offerings.Requirements(reqs).Cheapest().Price
+ iPrice := its[i].Offerings.Compatible(reqs).Cheapest().Price
+ jPrice := its[j].Offerings.Compatible(reqs).Cheapest().Price
if iPrice == jPrice {
return its[i].Name < its[j].Name
}
@@ -397,8 +397,8 @@ var _ = Describe("InstanceTypes", func() {
// We need some way to deterministically order them if their prices match
reqs := scheduling.NewNodeSelectorRequirements(nodePool.Spec.Template.Spec.Requirements...)
sort.Slice(its, func(i, j int) bool {
- iPrice := its[i].Offerings.Requirements(reqs).Cheapest().Price
- jPrice := its[j].Offerings.Requirements(reqs).Cheapest().Price
+ iPrice := its[i].Offerings.Compatible(reqs).Cheapest().Price
+ jPrice := its[j].Offerings.Compatible(reqs).Cheapest().Price
if iPrice == jPrice {
return its[i].Name < its[j].Name
}