docs: Manually release docs
ellistarn committed Dec 5, 2023
1 parent 1df05b4 commit c8e39fe
Showing 161 changed files with 12,127 additions and 32,488 deletions.
3 changes: 2 additions & 1 deletion website/content/en/docs/concepts/scheduling.md
@@ -154,6 +154,7 @@ Take care to ensure the label domains are correct. A well known label like `karp
| karpenter.k8s.aws/instance-cpu | 32 | [AWS Specific] Number of CPUs on the instance |
| karpenter.k8s.aws/instance-memory | 131072 | [AWS Specific] Number of mebibytes of memory on the instance |
| karpenter.k8s.aws/instance-network-bandwidth | 131072 | [AWS Specific] Number of [baseline megabits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) available on the instance |
| karpenter.k8s.aws/instance-pods | 110 | [AWS Specific] Number of pods the instance supports |
| karpenter.k8s.aws/instance-gpu-name | t4 | [AWS Specific] Name of the GPU on the instance, if available |
| karpenter.k8s.aws/instance-gpu-manufacturer | nvidia | [AWS Specific] Name of the GPU manufacturer |
| karpenter.k8s.aws/instance-gpu-count | 1 | [AWS Specific] Number of GPUs on the instance |
@@ -607,5 +608,5 @@ topologySpreadConstraints:
topologyKey: capacity-spread
whenUnsatisfiable: DoNotSchedule
labelSelector:
...
```
10 changes: 5 additions & 5 deletions website/content/en/docs/faq.md
@@ -14,7 +14,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.

### Can I write my own cloud provider for Karpenter?
Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.32.3/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.33.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.

### What operating system nodes does Karpenter deploy?
By default, Karpenter uses Amazon Linux 2 images.
@@ -26,7 +26,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).
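For example, a workload can be pinned to a specific architecture with a `nodeSelector` on the `kubernetes.io/arch` well-known label; this is a minimal sketch (the deployment name and image are placeholders, not part of the docs):

```bash
# Hypothetical sketch: pin a workload to arm64 nodes; Karpenter will
# provision an arm64-compatible instance if none is available.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arch-example        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arch-example
  template:
    metadata:
      labels:
        app: arch-example
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: app
          image: nginx      # placeholder image with an arm64 variant
EOF
```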

### What RBAC access is required?
All the required RBAC rules can be found in the helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.32.3/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.32.3/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.32.3/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.32.3/charts/karpenter/templates/role.yaml) files for details.
All the required RBAC rules can be found in the helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.33.0/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.33.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.33.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.33.0/charts/karpenter/templates/role.yaml) files for details.

### Can I run Karpenter outside of a Kubernetes cluster?
Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
@@ -92,9 +92,9 @@ Yes, Karpenter supports provisioning metal instance types when a NodePool's `nod

### How does Karpenter dynamically select instance types?

Karpenter batches pending pods and then binpacks them based on CPU, memory, and GPUs required, taking into account node overhead, VPC CNI resources required, and daemonsets that will be packed when bringing up a new node. Karpenter [recommends the use of C, M, and R >= Gen 3 instance types]({{< ref "./concepts/nodepools#spectemplatespecrequirements" >}}) for most generic workloads, but it can be constrained in the NodePool spec with the [instance-type](https://kubernetes.io/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type) well-known label in the requirements section.

After the pods are binpacked on the most efficient instance type (i.e. the smallest instance type that can fit the pod batch), Karpenter takes 59 other instance types that are larger than the most efficient packing, and passes all 60 instance type options to an API called Amazon EC2 Fleet.
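For illustration, constraining instance types looks roughly like the following NodePool requirement; this is a sketch assuming the v1beta1 API, with placeholder instance types and an existing `EC2NodeClass` named `default`:

```bash
# Hypothetical sketch: restrict a NodePool to an explicit set of instance types.
kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: constrained         # placeholder name
spec:
  template:
    spec:
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["m5.large", "m5.xlarge", "c5.xlarge"]   # placeholder types
      nodeClassRef:
        name: default       # assumes an existing EC2NodeClass
EOF
```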


The EC2 Fleet API attempts to provision the instance type based on the [Price Capacity Optimized allocation strategy](https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/). For the on-demand capacity type, this is effectively equivalent to the `lowest-price` allocation strategy. For the spot capacity type, Fleet will determine an instance type that balances the lowest price with the lowest chance of being interrupted. Note that this may not give you the instance type with the strictly lowest price for spot.
@@ -206,7 +206,7 @@ For information on upgrading Karpenter, see the [Upgrade Guide]({{< ref "./upgra

### How do I upgrade an EKS Cluster with Karpenter?

When upgrading an Amazon EKS cluster, [Karpenter's Drift feature]({{<ref "./concepts/disruption#drift" >}}) can automatically upgrade the Karpenter-provisioned nodes to stay in-sync with the EKS control plane. Karpenter Drift currently needs to be enabled using a [feature gate]({{<ref "./reference/settings#feature-gates" >}}).
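If the gate is not enabled in your deployment, it can be flipped through Helm values; this is a sketch assuming your chart version exposes the gate as `settings.featureGates.drift` (check the values for your release):

```bash
# Hypothetical sketch: turn on the Drift feature gate via Helm values.
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" \
  --reuse-values \
  --set settings.featureGates.drift=true
```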

{{% alert title="Note" color="primary" %}}
Karpenter's default [EC2NodeClass `amiFamily` configuration]({{<ref "./concepts/nodeclasses#specamifamily" >}}) uses the latest EKS Optimized AL2 AMI for the same major and minor version as the EKS cluster's control plane, meaning that an upgrade of the control plane will cause Karpenter to auto-discover the new AMIs for that version.
@@ -44,7 +44,7 @@ authenticate properly by running `aws sts get-caller-identity`.
After setting up the tools, set the Karpenter and Kubernetes version:

```bash
export KARPENTER_NAMESPACE=karpenter
export KARPENTER_NAMESPACE=kube-system
export KARPENTER_VERSION=v0.32.3
export K8S_VERSION={{< param "latest_k8s_version" >}}
```
@@ -90,6 +90,10 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/

{{% script file="./content/en/{VERSION}/getting-started/getting-started-with-karpenter/scripts/step08-apply-helm-chart.sh" language="bash"%}}

{{% alert title="Warning" color="warning" %}}
Karpenter supports using [Kubernetes Common Expression Language](https://kubernetes.io/docs/reference/using-api/cel/) for validating its Custom Resource Definitions out-of-the-box; however, this feature is not supported on versions of Kubernetes < 1.25. If you are running an earlier version of Kubernetes, you will need to use the Karpenter admission webhooks for validation instead. You can enable these webhooks with `--set webhook.enabled=true` when applying the Karpenter helm chart.
{{% /alert %}}
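For example, on a Kubernetes 1.24 cluster the flag can be appended to the usual chart installation; this is a sketch assuming the chart is consumed from its public OCI repository and that `KARPENTER_VERSION`, `KARPENTER_NAMESPACE`, and `CLUSTER_NAME` are already set:

```bash
# Hypothetical sketch: install the chart with the validation webhooks enabled
# for clusters running Kubernetes < 1.25.
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set webhook.enabled=true \
  --wait
```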

{{% alert title="Warning" color="warning" %}}
Karpenter creates a mapping between CloudProvider machines and CustomResources in the cluster for capacity tracking. To ensure this mapping is consistent, Karpenter utilizes the following tag keys:

@@ -159,7 +163,7 @@ The new stack has only one user, `admin`, and the password is stored in a secret

## Advanced Installation

The section below covers advanced installation techniques for installing Karpenter. This includes running Karpenter on a cluster without public internet access or ensuring that Karpenter avoids request throttling by other components in your cluster.
The section below covers advanced installation techniques for installing Karpenter. This includes things such as running Karpenter on a cluster without public internet access or ensuring that Karpenter avoids getting throttled by other components in your cluster.

### Private Clusters

@@ -170,7 +174,7 @@ privateCluster:
```yaml
privateCluster:
  enabled: true
```

Private clusters have no outbound access to the internet. This means that in order for Karpenter to reach out to AWS services, you need to enable specific VPC private endpoints. Below shows the endpoints that you need to enable to successfully run Karpenter in a private cluster:
Private clusters have no outbound access to the internet. This means that in order for Karpenter to reach out to the services that it needs to access, you need to enable specific VPC private endpoints. The following endpoints must be enabled to successfully run Karpenter in a private cluster:

```text
com.amazonaws.<region>.ec2
@@ -182,15 +186,28 @@ com.amazonaws.<region>.ssm - For resolving default AMIs
com.amazonaws.<region>.sqs - For accessing SQS if using interruption handling
```

If you do not currently have these endpoints surfaced in your VPC, you can add the endpoints by running the following command.
If you do not currently have these endpoints surfaced in your VPC, you can add the endpoints by running:

```bash
aws ec2 create-vpc-endpoint --vpc-id ${VPC_ID} --service-name ${SERVICE_NAME} --vpc-endpoint-type Interface --subnet-ids ${SUBNET_IDS} --security-group-ids ${SECURITY_GROUP_IDS}
```
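For convenience, the command can be wrapped in a loop over the service names listed above; this is a sketch assuming `AWS_REGION`, `VPC_ID`, `SUBNET_IDS`, and `SECURITY_GROUP_IDS` are already exported:

```bash
# Hypothetical convenience loop: create an interface endpoint for each
# service named in the list above (extend the list as needed).
for SERVICE_NAME in \
  "com.amazonaws.${AWS_REGION}.ec2" \
  "com.amazonaws.${AWS_REGION}.ssm" \
  "com.amazonaws.${AWS_REGION}.sqs"; do
  aws ec2 create-vpc-endpoint \
    --vpc-id "${VPC_ID}" \
    --service-name "${SERVICE_NAME}" \
    --vpc-endpoint-type Interface \
    --subnet-ids ${SUBNET_IDS} \
    --security-group-ids ${SECURITY_GROUP_IDS}
done
```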

{{% alert title="Note" color="primary" %}}

Karpenter (controller and webhook deployment) container images must be in or copied to Amazon ECR private or to a another private registry accessible from inside the VPC. If these are not available from within the VPC, or from networks peered with the VPC, you will get Image pull errors when Kubernetes tries to pull these images from ECR public.
Karpenter (controller and webhook deployment) container images must be in or copied to Amazon ECR private or to another private registry accessible from inside the VPC. If these are not available from within the VPC, or from networks peered with the VPC, you will get image pull errors when Kubernetes tries to pull these images from ECR Public.

{{% /alert %}}

{{% alert title="Note" color="primary" %}}

There is currently no VPC private endpoint for the [IAM API](https://docs.aws.amazon.com/IAM/latest/APIReference/welcome.html). As a result, you cannot use the default `spec.role` field in your `EC2NodeClass`. Instead, you need to provision and manage an instance profile manually and then instruct Karpenter to use it through the `spec.instanceProfile` field.

You can provision an instance profile manually and assign a Node role to it by running the following commands:

```bash
aws iam create-instance-profile --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
aws iam add-role-to-instance-profile --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" --role-name "KarpenterNodeRole-${CLUSTER_NAME}"
```

{{% /alert %}}
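Once the profile exists, it can be referenced from the node class through `spec.instanceProfile`; this is a minimal sketch assuming the v1beta1 API, with the name and discovery-tag selectors as placeholders:

```bash
# Hypothetical sketch: reference the manually created instance profile
# from an EC2NodeClass via spec.instanceProfile (instead of spec.role).
kubectl apply -f - <<EOF
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: private             # placeholder name
spec:
  amiFamily: AL2
  instanceProfile: "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"   # placeholder discovery tag
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
```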

@@ -212,8 +229,8 @@ caused by: Post "https://api.pricing.us-east-1.amazonaws.com/": dial tcp 52.94.2

Kubernetes uses [FlowSchemas](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#flowschema) and [PriorityLevelConfigurations](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#prioritylevelconfiguration) to map calls to the API server into buckets which determine each user agent's throttling limits.

By default, Karpenter is placed in the `workload-low` PriorityLevelConfiguration for all APIServer requests. This means that other components that make a high number of requests to the APIServer may affect the ability for Karpenter to make requests.
By default, Karpenter is installed into the `kube-system` namespace, which leverages the `system-leader-election` and `kube-system-service-accounts` [FlowSchemas](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#flowschema) to map calls from the `kube-system` namespace to the `leader-election` and `workload-high` PriorityLevelConfigurations respectively. By putting Karpenter in these PriorityLevelConfigurations, we ensure that Karpenter and other critical cluster components are able to run even if other components on the cluster are throttled in other PriorityLevelConfigurations.

To ensure that Karpenter is unaffected by these other, lower priority components, we can place Karpenter into a higher-priority PriorityLevelConfiguration using a custom FlowSchema.
If you install Karpenter in a different namespace than the default `kube-system` namespace, Karpenter will not be put into these higher-priority FlowSchemas by default. Instead, you will need to create custom FlowSchemas for the namespace and service account where Karpenter is installed to ensure that requests are put into this higher PriorityLevelConfiguration.
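Such a FlowSchema might look roughly like the sketch below; the name, matching precedence, and rule scope are illustrative (the script that follows applies the project's actual manifests):

```bash
# Hypothetical sketch: map Karpenter's service account to the
# workload-high PriorityLevelConfiguration.
kubectl apply -f - <<EOF
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
metadata:
  name: karpenter-workload-high   # placeholder name
spec:
  priorityLevelConfiguration:
    name: workload-high
  matchingPrecedence: 1000        # illustrative precedence
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: karpenter
            namespace: "${KARPENTER_NAMESPACE}"
      resourceRules:
        - apiGroups: ["*"]
          resources: ["*"]
          verbs: ["*"]
          clusterScope: true
          namespaces: ["*"]
EOF
```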

{{% script file="./content/en/{VERSION}/getting-started/getting-started-with-karpenter/scripts/step15-apply-flowschemas.sh" language="bash"%}}
@@ -40,7 +40,6 @@ Resources:
"Resource": [
"arn:${AWS::Partition}:ec2:${AWS::Region}::image/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}::snapshot/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:spot-instances-request/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:security-group/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:subnet/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:launch-template/*"
@@ -58,7 +57,8 @@ Resources:
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:instance/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:volume/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:network-interface/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:launch-template/*"
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:launch-template/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:spot-instances-request/*"
],
"Action": [
"ec2:RunInstances",
@@ -82,7 +82,8 @@ Resources:
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:instance/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:volume/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:network-interface/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:launch-template/*"
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:launch-template/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:spot-instances-request/*"
],
"Action": "ec2:CreateTags",
"Condition": {
@@ -22,6 +22,6 @@ dashboardProviders:
dashboards:
default:
capacity-dashboard:
url: https://karpenter.sh/v0.32/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
url: https://karpenter.sh/v0.33/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
performance-dashboard:
url: https://karpenter.sh/v0.32/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
url: https://karpenter.sh/v0.33/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
@@ -22,7 +22,7 @@ You can also perform many of these steps in the console, but we will use the com
Set a variable for your cluster name.

```bash
KARPENTER_NAMESPACE=karpenter
KARPENTER_NAMESPACE=kube-system
CLUSTER_NAME=<your cluster name>
```

@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
First set the Karpenter release you want to deploy.
```bash
export KARPENTER_VERSION=v0.32.3
export KARPENTER_VERSION=v0.33.0
```

We can now generate a full Karpenter deployment yaml from the helm chart.
@@ -133,7 +133,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t
## Create default NodePool
We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v0.32.3/examples/v1beta1) for specific needs.
We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v0.33.0/examples/v1beta1) for specific needs.
{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}}
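For reference, a minimal general-purpose default NodePool looks roughly like this sketch; the requirements, limits, and the referenced `EC2NodeClass` name are illustrative:

```bash
# Hypothetical sketch of a general-purpose default NodePool.
kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        name: default       # assumes a matching EC2NodeClass
  limits:
    cpu: 1000               # illustrative cap
  disruption:
    consolidationPolicy: WhenUnderutilized
EOF
```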
@@ -1,10 +1,4 @@
kubectl create namespace "${KARPENTER_NAMESPACE}" || true
kubectl create -f \
https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_provisioners.yaml
kubectl create -f \
https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
kubectl create -f \
https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_machines.yaml
kubectl create -f \
https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_nodepools.yaml
kubectl create -f \