
feat: Get Neuron device and core count from EC2 API for trn* and `inf*` instance types
bryantbiggs committed Jul 13, 2024
1 parent c32ccc4 commit 3d1b8da
Showing 8 changed files with 79 additions and 45 deletions.
6 changes: 4 additions & 2 deletions designs/limits.md
@@ -12,14 +12,12 @@ The next large problem is the inability to define a hard ceiling on cluster cost

We need to provide similar functionality in Karpenter, where a customer can configure a hard limit.


## Current State

To address the runaway-scaling problem, the current fix is to detect whether the kubelet for a worker node has ever reported its status to the K8s control plane. If more than 15 minutes have passed without a report, Karpenter assumes a hard failure mode from which the worker node will never become healthy, and terminates it. If the condition map of the node object in the API Server reports `NodeStatusNeverUpdated`, we use that as the indicator that the node never came up.
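
For reference, a node in that state typically carries a condition like the following (an illustrative sketch: the `reason` string is what the Kubernetes node lifecycle controller sets when the kubelet never posts status, and the timestamps are placeholders):

```yaml
# Illustrative Node status condition for a worker node whose kubelet
# never reported in; the `reason` field is the signal described above.
status:
  conditions:
    - type: Ready
      status: "Unknown"
      reason: NodeStatusNeverUpdated
      message: Kubelet never posted node status.
      lastHeartbeatTime: "2024-07-13T00:00:00Z"
      lastTransitionTime: "2024-07-13T00:00:00Z"
```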

This fix ensures that in other scenarios where a worker node becomes unhealthy, such as a network partition or a power outage in an availability zone, we don't terminate those worker nodes. It's important that we don't make the static stability of a cluster worse during such an event. On the other hand, in the edge case where worker nodes come online and soon go offline, runaway scaling will recur. This edge case should be unlikely in the near term, so this document focuses on the ability to limit costs within Karpenter; that way, even if runaway scaling does occur, there's a way to bound it. A longer-term solution to the runaway problem will be discussed separately.


## Proposed Solution for Limits

There are two broad forms of limiting we could apply. The first is a limit on the number of in-flight worker nodes being provisioned at any point in time; a worker node in the `NotReady` state could be considered in-flight. The second is an absolute limit on the amount of resources Karpenter can provision.
@@ -37,6 +35,7 @@ In the above example - `20%` indicates that if at any point in time, more than 2
The benefit of this approach is that we don't constrain the total number of worker nodes Karpenter can spin up, while still ensuring that if we keep launching unhealthy worker nodes, we stop scaling and save costs.

The two main problems with this approach, though, are -

1. This limit, while meant only to constrain the number of unhealthy worker nodes in a cluster, will also inhibit the rate at which Karpenter can respond to unschedulable pods. This somewhat goes against the goal of minimizing worker launch times.
2. While this helps ensure that costs don't increase due to runaway scaling, it won't help those who want a stricter cap on the amount of resources being provisioned even when nodes are otherwise healthy.

@@ -62,11 +61,14 @@ As a cost control mechanism, this requires a little more work from our users if
[CPU limits](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units), memory limits, and GPU limits will be defined similarly to resource requests and will not be required by default. Karpenter also will not apply any default limits of its own.

The list of supported resource types is -

- `cpu`
- `memory`
- `nvidia.com/gpu`
- `amd.com/gpu`
- `aws.amazon.com/neuron`
- `aws.amazon.com/neuroncore`
- `aws.amazon.com/neurondevice`
- `habana.ai/gaudi`

Limits will be defined at the per-provisioner level. We'll rely on the `karpenter.sh/provisioner-name` node label when calculating resource usage by a specific provisioner. This is useful when multiple teams share a single cluster and use separate provisioners, since each team's resource consumption is limited separately.
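
To make that concrete, a per-provisioner limit could look roughly like the following (a hypothetical sketch: the placement under `spec.limits.resources` and the values shown are assumptions for illustration, not a finalized API):

```yaml
# Hypothetical Provisioner capping the total resources it may provision;
# field placement under spec.limits.resources is an assumption here.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: team-a
spec:
  limits:
    resources:
      cpu: "1000"                      # total vCPUs across this provisioner's nodes
      memory: 1000Gi
      nvidia.com/gpu: "8"
      aws.amazon.com/neurondevice: "4"
```
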
2 changes: 1 addition & 1 deletion examples/workloads/neuron.yaml
@@ -20,4 +20,4 @@ spec:
aws.amazon.com/neuron: "1"
requests:
cpu: "1"
memory: 256M
memory: 256M
3 changes: 3 additions & 0 deletions pkg/apis/v1/labels.go
@@ -90,6 +90,8 @@ var (
ResourceNVIDIAGPU corev1.ResourceName = "nvidia.com/gpu"
ResourceAMDGPU corev1.ResourceName = "amd.com/gpu"
ResourceAWSNeuron corev1.ResourceName = "aws.amazon.com/neuron"
ResourceAWSNeuronCore corev1.ResourceName = "aws.amazon.com/neuroncore"
ResourceAWSNeuronDevice corev1.ResourceName = "aws.amazon.com/neurondevice"
ResourceHabanaGaudi corev1.ResourceName = "habana.ai/gaudi"
ResourceAWSPodENI corev1.ResourceName = "vpc.amazonaws.com/pod-eni"
ResourcePrivateIPv4Address corev1.ResourceName = "vpc.amazonaws.com/PrivateIPv4Address"
@@ -118,6 +120,7 @@ var (
LabelInstanceAcceleratorName = apis.Group + "/instance-accelerator-name"
LabelInstanceAcceleratorManufacturer = apis.Group + "/instance-accelerator-manufacturer"
LabelInstanceAcceleratorCount = apis.Group + "/instance-accelerator-count"
LabelInstanceAcceleratorMemory = apis.Group + "/instance-accelerator-memory"
AnnotationEC2NodeClassHash = apis.Group + "/ec2nodeclass-hash"
AnnotationEC2NodeClassHashVersion = apis.Group + "/ec2nodeclass-hash-version"
AnnotationInstanceTagged = apis.Group + "/tagged"
2 changes: 2 additions & 0 deletions pkg/apis/v1beta1/labels.go
@@ -90,6 +90,8 @@ var (
ResourceNVIDIAGPU corev1.ResourceName = "nvidia.com/gpu"
ResourceAMDGPU corev1.ResourceName = "amd.com/gpu"
ResourceAWSNeuron corev1.ResourceName = "aws.amazon.com/neuron"
ResourceAWSNeuronCore corev1.ResourceName = "aws.amazon.com/neuroncore"
ResourceAWSNeuronDevice corev1.ResourceName = "aws.amazon.com/neurondevice"
ResourceHabanaGaudi corev1.ResourceName = "habana.ai/gaudi"
ResourceAWSPodENI corev1.ResourceName = "vpc.amazonaws.com/pod-eni"
ResourcePrivateIPv4Address corev1.ResourceName = "vpc.amazonaws.com/PrivateIPv4Address"
2 changes: 1 addition & 1 deletion pkg/providers/instance/instance.go
@@ -461,7 +461,7 @@ func filterExoticInstanceTypes(instanceTypes []*cloudprovider.InstanceType) []*c
if _, ok := lo.Find(it.Requirements.Get(v1.LabelInstanceSize).Values(), func(size string) bool { return strings.Contains(size, "metal") }); ok {
continue
}
if !resources.IsZero(it.Capacity[v1.ResourceAWSNeuron]) ||
if !resources.IsZero(it.Capacity[v1.ResourceAWSNeuronDevice]) ||
!resources.IsZero(it.Capacity[v1.ResourceAMDGPU]) ||
!resources.IsZero(it.Capacity[v1.ResourceNVIDIAGPU]) ||
!resources.IsZero(it.Capacity[v1.ResourceHabanaGaudi]) {
60 changes: 44 additions & 16 deletions pkg/providers/instancetype/suite_test.go
@@ -754,28 +754,28 @@ var _ = Describe("InstanceTypeProvider", func() {
}
Expect(nodeNames.Len()).To(Equal(1))
})
It("should launch instances for aws.amazon.com/neuron resource requests", func() {
It("should launch instances for aws.amazon.com/neurondevice resource requests", func() {
nodeNames := sets.NewString()
ExpectApplied(ctx, env.Client, nodePool, nodeClass)
pods := []*corev1.Pod{
coretest.UnschedulablePod(coretest.PodOptions{
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
},
}),
// Should pack onto same instance
coretest.UnschedulablePod(coretest.PodOptions{
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("2")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("2")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("2")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("2")},
},
}),
// Should pack onto a separate instance
coretest.UnschedulablePod(coretest.PodOptions{
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("4")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("4")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("4")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("4")},
},
}),
}
@@ -787,7 +787,7 @@ var _ = Describe("InstanceTypeProvider", func() {
}
Expect(nodeNames.Len()).To(Equal(2))
})
It("should launch trn1 instances for aws.amazon.com/neuron resource requests", func() {
It("should launch trn1 instances for aws.amazon.com/neurondevice resource requests", func() {
nodeNames := sets.NewString()
nodePool.Spec.Template.Spec.Requirements = []karpv1.NodeSelectorRequirementWithMinValues{
{
@@ -802,8 +802,8 @@
pods := []*corev1.Pod{
coretest.UnschedulablePod(coretest.PodOptions{
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
},
}),
}
@@ -815,6 +815,34 @@
}
Expect(nodeNames.Len()).To(Equal(1))
})
It("should launch inf1 instances for aws.amazon.com/neuroncore resource requests", func() {
nodeNames := sets.NewString()
nodePool.Spec.Template.Spec.Requirements = []karpv1.NodeSelectorRequirementWithMinValues{
{
NodeSelectorRequirement: corev1.NodeSelectorRequirement{
Key: corev1.LabelInstanceTypeStable,
Operator: corev1.NodeSelectorOpIn,
Values: []string{"inf1.xlarge"},
},
},
}
ExpectApplied(ctx, env.Client, nodePool, nodeClass)
pods := []*corev1.Pod{
coretest.UnschedulablePod(coretest.PodOptions{
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuronCore: resource.MustParse("4")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronCore: resource.MustParse("4")},
},
}),
}
ExpectProvisioned(ctx, env.Client, cluster, cloudProvider, prov, pods...)
for _, pod := range pods {
node := ExpectScheduled(ctx, env.Client, pod)
Expect(node.Labels).To(HaveKeyWithValue(corev1.LabelInstanceTypeStable, "inf1.xlarge"))
nodeNames.Insert(node.Name)
}
Expect(nodeNames.Len()).To(Equal(1))
})
It("should launch instances for vpc.amazonaws.com/efa resource requests", func() {
nodePool.Spec.Template.Spec.Requirements = []karpv1.NodeSelectorRequirementWithMinValues{
{
@@ -1905,15 +1933,15 @@ var _ = Describe("InstanceTypeProvider", func() {
coretest.UnschedulablePod(coretest.PodOptions{
NodeSelector: map[string]string{corev1.LabelTopologyZone: "test-zone-1a"},
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
},
}),
coretest.UnschedulablePod(coretest.PodOptions{
NodeSelector: map[string]string{corev1.LabelTopologyZone: "test-zone-1a"},
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("1")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("1")},
},
}),
}
@@ -1998,8 +2026,8 @@ var _ = Describe("InstanceTypeProvider", func() {
pod := coretest.UnschedulablePod(coretest.PodOptions{
NodeSelector: map[string]string{corev1.LabelInstanceTypeStable: "inf1.6xlarge"},
ResourceRequirements: corev1.ResourceRequirements{
Requests: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("2")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuron: resource.MustParse("2")},
Requests: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("2")},
Limits: corev1.ResourceList{v1.ResourceAWSNeuronDevice: resource.MustParse("2")},
},
})
ExpectProvisioned(ctx, env.Client, cluster, cloudProvider, prov, pod)
47 changes: 22 additions & 25 deletions pkg/providers/instancetype/types.go
@@ -141,25 +141,18 @@ func computeRequirements(info *ec2.InstanceTypeInfo, offerings cloudprovider.Off
requirements.Get(v1.LabelInstanceGPUCount).Insert(fmt.Sprint(aws.Int64Value(gpu.Count)))
requirements.Get(v1.LabelInstanceGPUMemory).Insert(fmt.Sprint(aws.Int64Value(gpu.MemoryInfo.SizeInMiB)))
}
// Accelerators
if info.InferenceAcceleratorInfo != nil && len(info.InferenceAcceleratorInfo.Accelerators) == 1 {
accelerator := info.InferenceAcceleratorInfo.Accelerators[0]
// Neuron
if info.NeuronInfo != nil && len(info.NeuronInfo.NeuronDevices) == 1 {
accelerator := info.NeuronInfo.NeuronDevices[0]
requirements.Get(v1.LabelInstanceAcceleratorName).Insert(lowerKabobCase(aws.StringValue(accelerator.Name)))
requirements.Get(v1.LabelInstanceAcceleratorManufacturer).Insert(lowerKabobCase(aws.StringValue(accelerator.Manufacturer)))
requirements.Get(v1.LabelInstanceAcceleratorManufacturer).Insert(lowerKabobCase("AWS"))
requirements.Get(v1.LabelInstanceAcceleratorCount).Insert(fmt.Sprint(aws.Int64Value(accelerator.Count)))
requirements.Get(v1.LabelInstanceAcceleratorMemory).Insert(fmt.Sprint(aws.Int64Value(info.NeuronInfo.TotalNeuronDeviceMemoryInMiB)))
}
// Windows Build Version Labels
if family, ok := amiFamily.(*amifamily.Windows); ok {
requirements.Get(corev1.LabelWindowsBuild).Insert(family.Build)
}
// Trn1 Accelerators
// TODO: remove function once DescribeInstanceTypes contains the accelerator data
// Values found from: https://aws.amazon.com/ec2/instance-types/trn1/
if strings.HasPrefix(*info.InstanceType, "trn1") {
requirements.Get(v1.LabelInstanceAcceleratorName).Insert(lowerKabobCase("Inferentia"))
requirements.Get(v1.LabelInstanceAcceleratorManufacturer).Insert(lowerKabobCase("AWS"))
requirements.Get(v1.LabelInstanceAcceleratorCount).Insert(fmt.Sprint(awsNeurons(info)))
}
// CPU Manufacturer, valid options: aws, intel, amd
if info.ProcessorInfo != nil {
requirements.Get(v1.LabelInstanceCPUManufacturer).Insert(lowerKabobCase(aws.StringValue(info.ProcessorInfo.Manufacturer)))
@@ -202,7 +195,9 @@ func computeCapacity(ctx context.Context, info *ec2.InstanceTypeInfo, amiFamily
v1.ResourceAWSPodENI: *awsPodENI(aws.StringValue(info.InstanceType)),
v1.ResourceNVIDIAGPU: *nvidiaGPUs(info),
v1.ResourceAMDGPU: *amdGPUs(info),
v1.ResourceAWSNeuron: *awsNeurons(info),
v1.ResourceAWSNeuron: *awsNeuronDevices(info),
v1.ResourceAWSNeuronCore: *awsNeuronCores(info),
v1.ResourceAWSNeuronDevice: *awsNeuronDevices(info),
v1.ResourceHabanaGaudi: *habanaGaudis(info),
v1.ResourceEFA: *efas(info),
}
@@ -297,19 +292,21 @@ func amdGPUs(info *ec2.InstanceTypeInfo) *resource.Quantity {
return resources.Quantity(fmt.Sprint(count))
}

// TODO: remove trn1 hardcode values once DescribeInstanceTypes contains the accelerator data
// Values found from: https://aws.amazon.com/ec2/instance-types/trn1/
func awsNeurons(info *ec2.InstanceTypeInfo) *resource.Quantity {
func awsNeuronCores(info *ec2.InstanceTypeInfo) *resource.Quantity {
count := int64(0)
if info.NeuronInfo != nil {
for _, device := range info.NeuronInfo.NeuronDevices {
count += *device.CoreInfo.Count
}
}
return resources.Quantity(fmt.Sprint(count))
}

func awsNeuronDevices(info *ec2.InstanceTypeInfo) *resource.Quantity {
count := int64(0)
if *info.InstanceType == "trn1.2xlarge" {
count = int64(1)
} else if *info.InstanceType == "trn1.32xlarge" {
count = int64(16)
} else if *info.InstanceType == "trn1n.32xlarge" {
count = int64(16)
} else if info.InferenceAcceleratorInfo != nil {
for _, accelerator := range info.InferenceAcceleratorInfo.Accelerators {
count += *accelerator.Count
if info.NeuronInfo != nil {
for _, device := range info.NeuronInfo.NeuronDevices {
count += *device.Count
}
}
return resources.Quantity(fmt.Sprint(count))
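
For intuition, on an inf1.xlarge (a single Inferentia device exposing four NeuronCores) these helpers yield node capacity along the following lines (illustrative values, consistent with the inf1 tests above):

```yaml
# Illustrative capacity advertised for inf1.xlarge after this change:
# one Neuron device carrying four NeuronCores.
status:
  capacity:
    aws.amazon.com/neuron: "1"        # kept as an alias of the device count
    aws.amazon.com/neurondevice: "1"
    aws.amazon.com/neuroncore: "4"
```
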
2 changes: 2 additions & 0 deletions website/content/en/preview/concepts/scheduling.md
@@ -70,6 +70,8 @@ Accelerator (e.g., GPU) values include
- `nvidia.com/gpu`
- `amd.com/gpu`
- `aws.amazon.com/neuron`
- `aws.amazon.com/neuroncore`
- `aws.amazon.com/neurondevice`
- `habana.ai/gaudi`

Karpenter supports accelerators, such as GPUs.
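
For example, a pod can request the new resources directly (a sketch patterned on `examples/workloads/neuron.yaml` above; the pod name and container image are placeholders):

```yaml
# Hypothetical workload requesting two NeuronCores; the resource name
# comes from this change, the container image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: neuroncore-example
spec:
  containers:
    - name: app
      image: public.ecr.aws/neuron/example:latest   # placeholder image
      resources:
        limits:
          aws.amazon.com/neuroncore: "2"
        requests:
          cpu: "1"
          memory: 256M
```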
