feat: Get Neuron device and core count from EC2 API for all `trn*` and `inf*` instance types (#6510)

Co-authored-by: Jason Deal <[email protected]>
bryantbiggs and jmdeal authored Oct 31, 2024
1 parent 939f23b commit c2f019d
Showing 16 changed files with 993 additions and 112 deletions.
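At its core, this change reads the `NeuronInfo` block that EC2's `DescribeInstanceTypes` API reports for `trn*` and `inf*` instance types, in place of the older `InferenceAcceleratorInfo` block. A minimal sketch of querying it with the AWS SDK for Go v1 — the region, instance types, and output format here are illustrative, not part of the commit:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	ec2api := ec2.New(sess)
	out, err := ec2api.DescribeInstanceTypes(&ec2.DescribeInstanceTypesInput{
		InstanceTypes: aws.StringSlice([]string{"inf2.xlarge", "trn1.2xlarge"}),
	})
	if err != nil {
		panic(err)
	}
	for _, it := range out.InstanceTypes {
		if it.NeuronInfo == nil {
			continue // not a Neuron-equipped instance type
		}
		for _, d := range it.NeuronInfo.NeuronDevices {
			// Device count and per-device core count are what this commit
			// turns into aws.amazon.com/neuron and aws.amazon.com/neuroncore.
			fmt.Printf("%s: %d x %s, %d cores each\n",
				aws.StringValue(it.InstanceType),
				aws.Int64Value(d.Count),
				aws.StringValue(d.Name),
				aws.Int64Value(d.CoreInfo.Count))
		}
	}
}
```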
5 changes: 3 additions & 2 deletions designs/limits.md
@@ -12,14 +12,12 @@ The next large problem is the inability to define a hard ceiling on cluster cost

We need to provide similar functionality in Karpenter as well, wherein a customer can configure a hard limit.


## Current State

To address the runaway-scaling problem, the current fix is to detect whether the kubelet for a worker node has ever reported its status to the K8s control plane. If it hasn't within 15 minutes, Karpenter assumes a hard failure from which the worker node will never become healthy, and terminates it. If the condition map of the node object in the API Server says `NodeStatusNeverUpdated`, we use that as the indicator that the node never came up.

This fix ensures that if a worker node becomes unhealthy for other reasons, such as a network partition or a power outage in an availability zone, we don't terminate it. It's important we don't make the static stability of a cluster worse during such an event. On the other hand, if there is an edge case where worker nodes come online and soon go offline, it will lead to runaway scaling again. This edge case should be unlikely in the near term, so this document focuses on the ability to limit costs within Karpenter; that way, even if runaway scaling does occur, there's a way to bound it. A longer-term solution to the runaway problem will be discussed separately.
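The check described above might look roughly like the following sketch, using client-go types; `nodeNeverBecameReady` is a hypothetical helper for illustration, not Karpenter's actual implementation:

```go
package health

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// nodeNeverBecameReady reports whether a node's kubelet has never posted
// status (Ready condition with reason "NodeStatusNeverUpdated") and the node
// is older than the 15-minute grace period described above.
func nodeNeverBecameReady(node *corev1.Node, now time.Time) bool {
	if now.Sub(node.CreationTimestamp.Time) < 15*time.Minute {
		return false // still within the grace period
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Reason == "NodeStatusNeverUpdated" {
			return true // kubelet never reported; treat as a hard failure
		}
	}
	return false
}
```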


## Proposed Solution for Limits

There are two broad forms of limiting we could apply. The first is that we could introduce a limit on the number of in-flight worker nodes being provisioned at a point in time. A worker node that's in the `NotReady` state could be considered in-flight. The second form is an absolute limit on the number of resources Karpenter can provision.
@@ -37,6 +35,7 @@ In the above example - `20%` indicates that if at any point in time, more than 2
The good bit about this approach is that we don't constrain how many total worker nodes can be spun up by Karpenter, while also making sure that if we keep launching worker nodes that aren't healthy, we stop the scaling and save costs.

The two main problems with this approach, though, are -

1. This limit, while meant only to constrain the number of unhealthy worker nodes in a cluster, will also inhibit the rate at which Karpenter can respond to pods that aren't schedulable. This somewhat goes against the goal of minimizing launch times of workers.
2. While this helps ensure that costs don't increase due to runaway scaling, it won't help those who want a stricter cap on the amount of resources being provisioned even when nodes are otherwise healthy.

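For illustration of the percentage-based approach above, such an in-flight limit could surface on the provisioner spec roughly as follows; every field name here is hypothetical, since the design's real example sits in the portion of the file collapsed out of this diff:

```yaml
# Hypothetical sketch of a percentage-based in-flight limit.
apiVersion: karpenter.sh/v1alpha5   # illustrative API version
kind: Provisioner
metadata:
  name: default
spec:
  limits:
    # Stop provisioning once more than 20% of this provisioner's
    # in-flight (NotReady) worker nodes appear unhealthy.
    unhealthyNodePercentage: "20%"  # hypothetical field name
```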
@@ -62,11 +61,13 @@ As a cost control mechanism, this requires a little more work from our users if
[CPU limits](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units), memory limits and GPU limits will be defined similar to resource requests and will not be required by default. Karpenter will also not default to any limits itself.

The list of supported resource types is -

- `cpu`
- `memory`
- `nvidia.com/gpu`
- `amd.com/gpu`
- `aws.amazon.com/neuron`
- `aws.amazon.com/neuroncore`
- `habana.ai/gaudi`

Limits will be defined at the per-provisioner level. We'll rely on the `karpenter.sh/provisioner-name` node label when calculating resource usage by a specific provisioner. This is useful when multiple teams share a single cluster and use separate provisioners since each team's resource consumption will be limited separately.
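A sketch of what such per-provisioner limits could look like with the resource types listed above, including the `aws.amazon.com/neuroncore` entry this commit adds; the `spec.limits.resources` shape and the numbers are assumptions for illustration:

```yaml
apiVersion: karpenter.sh/v1alpha5   # illustrative API version
kind: Provisioner
metadata:
  name: ml-team
spec:
  limits:
    resources:
      cpu: "1000"                      # total vCPUs this provisioner may stand up
      memory: 1000Gi
      aws.amazon.com/neuron: "32"      # total Neuron devices
      aws.amazon.com/neuroncore: "64"  # total Neuron cores (added in this change)
```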
2 changes: 1 addition & 1 deletion examples/workloads/neuron.yaml
@@ -26,4 +26,4 @@ spec:
cpu: "1"
memory: 256M
securityContext:
-      allowPrivilegeEscalation: false
\ No newline at end of file
+      allowPrivilegeEscalation: false
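Only the tail of `examples/workloads/neuron.yaml` appears in this hunk (the change just adds the missing trailing newline). For context, a workload consuming the commit's new core-level resource might request it like this; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: neuron-inference   # hypothetical name
spec:
  containers:
    - name: app
      image: my-neuron-app:latest   # hypothetical image
      resources:
        requests:
          cpu: "1"
          memory: 256M
        limits:
          # Scheduled via the Neuron device plugin; this is the
          # aws.amazon.com/neuroncore resource added by this commit.
          aws.amazon.com/neuroncore: "1"
```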
23 changes: 15 additions & 8 deletions hack/code/instancetype_testdata_gen/main.go
@@ -147,11 +147,11 @@ func getInstanceTypeInfo(info *ec2.InstanceTypeInfo) string {
fmt.Fprintf(src, "NvmeSupport: aws.String(\"%s\"),\n", lo.FromPtr(info.EbsInfo.NvmeSupport))
fmt.Fprintf(src, "},\n")
}
- if info.InferenceAcceleratorInfo != nil {
- fmt.Fprintf(src, "InferenceAcceleratorInfo: &ec2.InferenceAcceleratorInfo{\n")
- fmt.Fprintf(src, "Accelerators: []*ec2.InferenceDeviceInfo{\n")
- for _, elem := range info.InferenceAcceleratorInfo.Accelerators {
- fmt.Fprintf(src, getInferenceAcceleratorDeviceInfo(elem))
+ if info.NeuronInfo != nil {
+ fmt.Fprintf(src, "NeuronInfo: &ec2.NeuronInfo{\n")
+ fmt.Fprintf(src, "NeuronDevices: []*ec2.NeuronDeviceInfo{\n")
+ for _, elem := range info.NeuronInfo.NeuronDevices {
+ fmt.Fprintf(src, getNeuronDeviceInfo(elem))
}
fmt.Fprintf(src, "},\n")
fmt.Fprintf(src, "},\n")
@@ -199,12 +199,19 @@ func getNetworkCardInfo(info *ec2.NetworkCardInfo) string {
return src.String()
}

- func getInferenceAcceleratorDeviceInfo(info *ec2.InferenceDeviceInfo) string {
+ func getNeuronDeviceInfo(info *ec2.NeuronDeviceInfo) string {

src := &bytes.Buffer{}
fmt.Fprintf(src, "{\n")
fmt.Fprintf(src, "Name: aws.String(\"%s\"),\n", lo.FromPtr(info.Name))
fmt.Fprintf(src, "Manufacturer: aws.String(\"%s\"),\n", lo.FromPtr(info.Manufacturer))
fmt.Fprintf(src, "Count: aws.Int64(%d),\n", lo.FromPtr(info.Count))
fmt.Fprintf(src, "Name: aws.String(\"%s\"),\n", lo.FromPtr(info.Name))
fmt.Fprintf(src, "CoreInfo: &ec2.NeuronDeviceCoreInfo{\n")
fmt.Fprintf(src, "Count: aws.Int64(%d),\n", lo.FromPtr(info.CoreInfo.Count))
fmt.Fprintf(src, "Version: aws.Int64(%d),\n", lo.FromPtr(info.CoreInfo.Version))
fmt.Fprintf(src, "},\n")
fmt.Fprintf(src, "MemoryInfo: &ec2.NeuronDeviceMemoryInfo{\n")
fmt.Fprintf(src, "SizeInMiB: aws.Int64(%d),\n", lo.FromPtr(info.MemoryInfo.SizeInMiB))
fmt.Fprintf(src, "},\n")
fmt.Fprintf(src, "},\n")
return src.String()
}
2 changes: 1 addition & 1 deletion hack/codegen.sh
@@ -46,7 +46,7 @@ instanceTypeTestData() {
GENERATED_FILE="pkg/fake/zz_generated.describe_instance_types.go"

go run hack/code/instancetype_testdata_gen/main.go --out-file ${GENERATED_FILE} \
- --instance-types t3.large,m5.large,m5.xlarge,p3.8xlarge,g4dn.8xlarge,c6g.large,inf1.2xlarge,inf1.6xlarge,trn1.2xlarge,m5.metal,dl1.24xlarge,m6idn.32xlarge,t4g.small,t4g.xlarge,t4g.medium,g4ad.16xlarge
+ --instance-types t3.large,m5.large,m5.xlarge,p3.8xlarge,g4dn.8xlarge,c6g.large,inf2.xlarge,inf2.24xlarge,trn1.2xlarge,m5.metal,dl1.24xlarge,m6idn.32xlarge,t4g.small,t4g.xlarge,t4g.medium,g4ad.16xlarge

checkForUpdates "${GENERATED_FILE}"
}
2 changes: 1 addition & 1 deletion hack/docs/instancetypes_gen/main.go
@@ -124,7 +124,7 @@ below are the resources available with some assumptions and after the instance o
resourceNameMap := sets.New[string]()

// Iterate through regions and take the union of instance types we discover across all of them
- for _, region := range []string{"us-east-1", "us-west-2"} {
+ for _, region := range []string{"us-east-1", "us-east-2", "us-west-2"} {
sess := session.Must(session.NewSession(&aws.Config{Region: lo.ToPtr(region)}))
ec2api := ec2.New(sess)
subnetProvider := subnet.NewDefaultProvider(ec2api, cache.New(awscache.DefaultTTL, awscache.DefaultCleanupInterval), cache.New(awscache.AvailableIPAddressTTL, awscache.DefaultCleanupInterval), cache.New(awscache.AssociatePublicIPAddressTTL, awscache.DefaultCleanupInterval))
1 change: 1 addition & 0 deletions pkg/apis/v1/labels.go
@@ -90,6 +90,7 @@ var (
ResourceNVIDIAGPU corev1.ResourceName = "nvidia.com/gpu"
ResourceAMDGPU corev1.ResourceName = "amd.com/gpu"
ResourceAWSNeuron corev1.ResourceName = "aws.amazon.com/neuron"
+ ResourceAWSNeuronCore corev1.ResourceName = "aws.amazon.com/neuroncore"
ResourceHabanaGaudi corev1.ResourceName = "habana.ai/gaudi"
ResourceAWSPodENI corev1.ResourceName = "vpc.amazonaws.com/pod-eni"
ResourcePrivateIPv4Address corev1.ResourceName = "vpc.amazonaws.com/PrivateIPv4Address"
4 changes: 2 additions & 2 deletions pkg/fake/ec2api.go
@@ -631,11 +631,11 @@ func (e *EC2API) DescribeInstanceTypeOfferingsWithContext(_ context.Context, _ *
Location: aws.String("test-zone-1b"),
},
{
InstanceType: aws.String("inf1.2xlarge"),
InstanceType: aws.String("inf2.xlarge"),
Location: aws.String("test-zone-1a"),
},
{
InstanceType: aws.String("inf1.6xlarge"),
InstanceType: aws.String("inf2.24xlarge"),
Location: aws.String("test-zone-1a"),
},
{
103 changes: 65 additions & 38 deletions pkg/fake/zz_generated.describe_instance_types.go
@@ -267,107 +267,119 @@ var defaultDescribeInstanceTypesOutput = &ec2.DescribeInstanceTypesOutput{
},
},
{
InstanceType: aws.String("inf1.2xlarge"),
InstanceType: aws.String("inf2.24xlarge"),
SupportedUsageClasses: aws.StringSlice([]string{"on-demand", "spot"}),
SupportedVirtualizationTypes: aws.StringSlice([]string{"hvm"}),
BurstablePerformanceSupported: aws.Bool(false),
BareMetal: aws.Bool(false),
Hypervisor: aws.String("nitro"),
ProcessorInfo: &ec2.ProcessorInfo{
Manufacturer: aws.String("Intel"),
Manufacturer: aws.String("AMD"),
SupportedArchitectures: aws.StringSlice([]string{"x86_64"}),
},
VCpuInfo: &ec2.VCpuInfo{
- DefaultCores: aws.Int64(4),
- DefaultVCpus: aws.Int64(8),
+ DefaultCores: aws.Int64(48),
+ DefaultVCpus: aws.Int64(96),
},
MemoryInfo: &ec2.MemoryInfo{
- SizeInMiB: aws.Int64(16384),
+ SizeInMiB: aws.Int64(393216),
},
EbsInfo: &ec2.EbsInfo{
EbsOptimizedInfo: &ec2.EbsOptimizedInfo{
- BaselineBandwidthInMbps: aws.Int64(1190),
- BaselineIops: aws.Int64(6000),
- BaselineThroughputInMBps: aws.Float64(148.75),
- MaximumBandwidthInMbps: aws.Int64(4750),
- MaximumIops: aws.Int64(20000),
- MaximumThroughputInMBps: aws.Float64(593.75),
+ BaselineBandwidthInMbps: aws.Int64(30000),
+ BaselineIops: aws.Int64(120000),
+ BaselineThroughputInMBps: aws.Float64(3750.00),
+ MaximumBandwidthInMbps: aws.Int64(30000),
+ MaximumIops: aws.Int64(120000),
+ MaximumThroughputInMBps: aws.Float64(3750.00),
},
EbsOptimizedSupport: aws.String("default"),
EncryptionSupport: aws.String("supported"),
NvmeSupport: aws.String("required"),
},
- InferenceAcceleratorInfo: &ec2.InferenceAcceleratorInfo{
- Accelerators: []*ec2.InferenceDeviceInfo{
+ NeuronInfo: &ec2.NeuronInfo{
+ NeuronDevices: []*ec2.NeuronDeviceInfo{
{
Name: aws.String("Inferentia"),
Manufacturer: aws.String("AWS"),
Count: aws.Int64(1),
Count: aws.Int64(6),
Name: aws.String("Inferentia2"),
CoreInfo: &ec2.NeuronDeviceCoreInfo{
Count: aws.Int64(2),
Version: aws.Int64(2),
},
MemoryInfo: &ec2.NeuronDeviceMemoryInfo{
SizeInMiB: aws.Int64(32768),
},
},
},
},
NetworkInfo: &ec2.NetworkInfo{
- MaximumNetworkInterfaces: aws.Int64(4),
- Ipv4AddressesPerInterface: aws.Int64(10),
+ MaximumNetworkInterfaces: aws.Int64(15),
+ Ipv4AddressesPerInterface: aws.Int64(50),
EncryptionInTransitSupported: aws.Bool(true),
DefaultNetworkCardIndex: aws.Int64(0),
NetworkCards: []*ec2.NetworkCardInfo{
{
NetworkCardIndex: aws.Int64(0),
- MaximumNetworkInterfaces: aws.Int64(4),
+ MaximumNetworkInterfaces: aws.Int64(15),
},
},
},
},
{
InstanceType: aws.String("inf1.6xlarge"),
InstanceType: aws.String("inf2.xlarge"),
SupportedUsageClasses: aws.StringSlice([]string{"on-demand", "spot"}),
SupportedVirtualizationTypes: aws.StringSlice([]string{"hvm"}),
BurstablePerformanceSupported: aws.Bool(false),
BareMetal: aws.Bool(false),
Hypervisor: aws.String("nitro"),
ProcessorInfo: &ec2.ProcessorInfo{
Manufacturer: aws.String("Intel"),
Manufacturer: aws.String("AMD"),
SupportedArchitectures: aws.StringSlice([]string{"x86_64"}),
},
VCpuInfo: &ec2.VCpuInfo{
- DefaultCores: aws.Int64(12),
- DefaultVCpus: aws.Int64(24),
+ DefaultCores: aws.Int64(2),
+ DefaultVCpus: aws.Int64(4),
},
MemoryInfo: &ec2.MemoryInfo{
- SizeInMiB: aws.Int64(49152),
+ SizeInMiB: aws.Int64(16384),
},
EbsInfo: &ec2.EbsInfo{
EbsOptimizedInfo: &ec2.EbsOptimizedInfo{
- BaselineBandwidthInMbps: aws.Int64(4750),
- BaselineIops: aws.Int64(20000),
- BaselineThroughputInMBps: aws.Float64(593.75),
- MaximumBandwidthInMbps: aws.Int64(4750),
- MaximumIops: aws.Int64(20000),
- MaximumThroughputInMBps: aws.Float64(593.75),
+ BaselineBandwidthInMbps: aws.Int64(1250),
+ BaselineIops: aws.Int64(6000),
+ BaselineThroughputInMBps: aws.Float64(156.25),
+ MaximumBandwidthInMbps: aws.Int64(10000),
+ MaximumIops: aws.Int64(40000),
+ MaximumThroughputInMBps: aws.Float64(1250.00),
},
EbsOptimizedSupport: aws.String("default"),
EncryptionSupport: aws.String("supported"),
NvmeSupport: aws.String("required"),
},
- InferenceAcceleratorInfo: &ec2.InferenceAcceleratorInfo{
- Accelerators: []*ec2.InferenceDeviceInfo{
+ NeuronInfo: &ec2.NeuronInfo{
+ NeuronDevices: []*ec2.NeuronDeviceInfo{
{
Name: aws.String("Inferentia"),
Manufacturer: aws.String("AWS"),
Count: aws.Int64(4),
Count: aws.Int64(1),
Name: aws.String("Inferentia2"),
CoreInfo: &ec2.NeuronDeviceCoreInfo{
Count: aws.Int64(2),
Version: aws.Int64(2),
},
MemoryInfo: &ec2.NeuronDeviceMemoryInfo{
SizeInMiB: aws.Int64(32768),
},
},
},
},
NetworkInfo: &ec2.NetworkInfo{
- MaximumNetworkInterfaces: aws.Int64(8),
- Ipv4AddressesPerInterface: aws.Int64(30),
+ MaximumNetworkInterfaces: aws.Int64(4),
+ Ipv4AddressesPerInterface: aws.Int64(15),
EncryptionInTransitSupported: aws.Bool(true),
DefaultNetworkCardIndex: aws.Int64(0),
NetworkCards: []*ec2.NetworkCardInfo{
{
NetworkCardIndex: aws.Int64(0),
- MaximumNetworkInterfaces: aws.Int64(8),
+ MaximumNetworkInterfaces: aws.Int64(4),
},
},
},
@@ -821,6 +833,21 @@ var defaultDescribeInstanceTypesOutput = &ec2.DescribeInstanceTypesOutput{
EncryptionSupport: aws.String("supported"),
NvmeSupport: aws.String("required"),
},
+ NeuronInfo: &ec2.NeuronInfo{
+ NeuronDevices: []*ec2.NeuronDeviceInfo{
+ {
+ Count: aws.Int64(1),
+ Name: aws.String("Trainium"),
+ CoreInfo: &ec2.NeuronDeviceCoreInfo{
+ Count: aws.Int64(2),
+ Version: aws.Int64(2),
+ },
+ MemoryInfo: &ec2.NeuronDeviceMemoryInfo{
+ SizeInMiB: aws.Int64(32768),
+ },
+ },
+ },
+ },
InstanceStorageInfo: &ec2.InstanceStorageInfo{NvmeSupport: aws.String("required"),
TotalSizeInGB: aws.Int64(474),
},
1 change: 1 addition & 0 deletions pkg/providers/instance/instance.go
@@ -461,6 +461,7 @@ func filterExoticInstanceTypes(instanceTypes []*cloudprovider.InstanceType) []*c
continue
}
if !resources.IsZero(it.Capacity[v1.ResourceAWSNeuron]) ||
+ !resources.IsZero(it.Capacity[v1.ResourceAWSNeuronCore]) ||
!resources.IsZero(it.Capacity[v1.ResourceAMDGPU]) ||
!resources.IsZero(it.Capacity[v1.ResourceNVIDIAGPU]) ||
!resources.IsZero(it.Capacity[v1.ResourceHabanaGaudi]) {