diff --git a/website/content/en/docs/concepts/nodeclasses.md b/website/content/en/docs/concepts/nodeclasses.md
index fa16330a6984..a3d482908228 100644
--- a/website/content/en/docs/concepts/nodeclasses.md
+++ b/website/content/en/docs/concepts/nodeclasses.md
@@ -207,7 +207,7 @@ status:
       status: "True"
       type: Ready
 ```
-Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.7/examples/v1/).
+Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.8/examples/v1/).
 
 ## spec.kubelet
 
@@ -1041,7 +1041,7 @@ spec:
     chown -R ec2-user ~ec2-user/.ssh
 ```
 
-For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.7/examples/v1).
+For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.8/examples/v1).
 
 Karpenter will merge the userData you specify with the default userData for that AMIFamily. See the [AMIFamily]({{< ref "#specamifamily" >}}) section for more details on these defaults. View the sections below to understand the different merge strategies for each AMIFamily.
diff --git a/website/content/en/docs/concepts/nodepools.md b/website/content/en/docs/concepts/nodepools.md
index e4686a517e81..2fbc7f959a74 100644
--- a/website/content/en/docs/concepts/nodepools.md
+++ b/website/content/en/docs/concepts/nodepools.md
@@ -27,7 +27,7 @@ Here are things you should know about NodePools:
 Objects for setting Kubelet features have been moved from the NodePool spec to the EC2NodeClasses spec, to not require other Karpenter providers to support those features.
 {{% /alert %}}
 
-For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.7/examples/v1/).
+For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.8/examples/v1/).
 
 ```yaml
 apiVersion: karpenter.sh/v1
diff --git a/website/content/en/docs/faq.md b/website/content/en/docs/faq.md
index c1e09e8044a9..f857c95ff6ed 100644
--- a/website/content/en/docs/faq.md
+++ b/website/content/en/docs/faq.md
@@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
 AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.
 
 ### Can I write my own cloud provider for Karpenter?
-Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.7/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.8/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
 
 ### What operating system nodes does Karpenter deploy?
 Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}).
 
@@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
 Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).
 
 ### What RBAC access is required?
-All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/role.yaml) files for details.
+All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/role.yaml) files for details.
 
 ### Can I run Karpenter outside of a Kubernetes cluster?
 Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
diff --git a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md
index 8dad463df589..718f435b3459 100644
--- a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md
+++ b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md
@@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version:
 
 ```bash
 export KARPENTER_NAMESPACE="kube-system"
-export KARPENTER_VERSION="1.0.7"
+export KARPENTER_VERSION="1.0.8"
 export K8S_VERSION="1.31"
 ```
 
@@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/
 As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command.
 
 ```bash
-cosign verify public.ecr.aws/karpenter/karpenter:1.0.7 \
+cosign verify public.ecr.aws/karpenter/karpenter:1.0.8 \
   --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
   --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
   --certificate-github-workflow-repository=aws/karpenter-provider-aws \
   --certificate-github-workflow-name=Release \
-  --certificate-github-workflow-ref=refs/tags/v1.0.7 \
-  --annotations version=1.0.7
+  --certificate-github-workflow-ref=refs/tags/v1.0.8 \
+  --annotations version=1.0.8
 ```
 
 {{% alert title="DNS Policy Notice" color="warning" %}}
diff --git a/website/content/en/docs/getting-started/migrating-from-cas/_index.md b/website/content/en/docs/getting-started/migrating-from-cas/_index.md
index f37df96642c1..483a61b11b07 100644
--- a/website/content/en/docs/getting-started/migrating-from-cas/_index.md
+++ b/website/content/en/docs/getting-started/migrating-from-cas/_index.md
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
 First set the Karpenter release you want to deploy.
 
 ```bash
-export KARPENTER_VERSION="1.0.7"
+export KARPENTER_VERSION="1.0.8"
 ```
 
 We can now generate a full Karpenter deployment yaml from the Helm chart.
@@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t
 
 ## Create default NodePool
 
-We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.7/examples/v1) for specific needs.
+We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.8/examples/v1) for specific needs.
 
 {{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}}
diff --git a/website/content/en/docs/reference/cloudformation.md b/website/content/en/docs/reference/cloudformation.md
index d20f9d58005e..a99576eedc20 100644
--- a/website/content/en/docs/reference/cloudformation.md
+++ b/website/content/en/docs/reference/cloudformation.md
@@ -17,7 +17,7 @@ These descriptions should allow you to understand:
 To download a particular version of `cloudformation.yaml`, set the version and use `curl` to pull the file to your local system:
 
 ```bash
-export KARPENTER_VERSION="1.0.7"
+export KARPENTER_VERSION="1.0.8"
 curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml
 ```
diff --git a/website/content/en/docs/reference/threat-model.md b/website/content/en/docs/reference/threat-model.md
index 95b2441d4fa3..119535f3ed3b 100644
--- a/website/content/en/docs/reference/threat-model.md
+++ b/website/content/en/docs/reference/threat-model.md
@@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via `Deployments`, `ReplicaSe
 
 Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles.
 
-* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/aggregate-clusterrole.yaml)
-* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/clusterrole-core.yaml)
-* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/clusterrole.yaml)
-* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/rolebinding.yaml)
-* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.7/charts/karpenter/templates/role.yaml)
+* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/aggregate-clusterrole.yaml)
+* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/clusterrole-core.yaml)
+* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/clusterrole.yaml)
+* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/rolebinding.yaml)
+* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.8/charts/karpenter/templates/role.yaml)
 
 ## Assumptions
 
diff --git a/website/content/en/docs/upgrading/v1-migration.md b/website/content/en/docs/upgrading/v1-migration.md
index 6470c003b769..067d5019af88 100644
--- a/website/content/en/docs/upgrading/v1-migration.md
+++ b/website/content/en/docs/upgrading/v1-migration.md
@@ -52,11 +52,11 @@ The upgrade guide will first require upgrading to your latest patch version prio
 
    The Karpenter version you are running must be between minor version `v0.33` and `v0.37`. To be able to roll back from Karpenter v1, you must rollback to on the following patch release versions for your minor version, which will include the conversion webhooks for a smooth rollback:
 
-  * v0.37.5
-  * v0.36.7
-  * v0.35.10
-  * v0.34.11
-  * v0.33.10
+  * v0.37.6
+  * v0.36.8
+  * v0.35.11
+  * v0.34.12
+  * v0.33.11
 
 3. Review for breaking changes between v0.33 and v0.37: If you are already running Karpenter v0.37.x, you can skip this step. If you are running an earlier Karpenter version, you need to review the [Upgrade Guide]({{}}) for each minor release.
 
@@ -112,14 +112,14 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"
        --wait
    ```
 
-7. Set environment variables for first upgrading to v1.0.7
+7. Set environment variables for first upgrading to v1.0.8
 
    ```bash
-   export KARPENTER_VERSION=1.0.7
+   export KARPENTER_VERSION=1.0.8
   ```
 
-8. Update your existing policy using the following to the v1.0.7 controller policy:
+8. Update your existing policy using the following to the v1.0.8 controller policy:
 
    Notable Changes to the IAM Policy include additional tag-scoping for the `eks:eks-cluster-name` tag for instances and instance profiles.
 
   ```bash
@@ -132,7 +132,7 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"
       --parameter-overrides "ClusterName=${CLUSTER_NAME}"
   ```
 
-9. Apply the v1.0.7 Custom Resource Definitions (CRDs):
+9. Apply the v1.0.8 Custom Resource Definitions (CRDs):
 
   ```bash
   helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
@@ -291,11 +291,11 @@ Keep in mind that rollback, without replacing the Karpenter nodes, will not be s
 Once the Karpenter CRDs are upgraded to v1, conversion webhooks are needed to help convert APIs that are stored in etcd from v1 to v1beta1.
 Also changes to the CRDs will need to at least include the latest version of the CRD in this case being v1. The patch versions of the v1beta1 Karpenter controller that include the conversion wehooks include:
 
-* v0.37.5
-* v0.36.7
-* v0.35.10
-* v0.34.11
-* v0.33.10
+* v0.37.6
+* v0.36.8
+* v0.35.11
+* v0.34.12
+* v0.33.11
 
 {{% alert title="Note" color="warning" %}}
 When rolling back from v1, Karpenter will not retain data that was only valid in v1 APIs. For instance, if you were upgrading from v0.33.5 to v1, updated the `NodePool.Spec.Disruption.Budgets` field and then rolled back to v0.33.6, Karpenter would not retain the `NodePool.Spec.Disruption.Budgets` field, as that was introduced in v0.34.x. If you are configuring the kubelet field, and have removed the `compatibility.karpenter.sh/v1beta1-kubelet-conversion` annotation, rollback is not supported without replacing your nodes between EC2NodeClass and NodePool.
diff --git a/website/content/en/v0.36/faq.md b/website/content/en/v0.36/faq.md
index 8b33850011c8..c263e9add046 100644
--- a/website/content/en/v0.36/faq.md
+++ b/website/content/en/v0.36/faq.md
@@ -14,7 +14,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
 AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.
 
 ### Can I write my own cloud provider for Karpenter?
-Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.36.7/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.36.8/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
 
 ### What operating system nodes does Karpenter deploy?
 Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}).
 
@@ -26,7 +26,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
 Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).
 
 ### What RBAC access is required?
-All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.36.7/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.7/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.36.7/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.36.7/charts/karpenter/templates/role.yaml) files for details.
+All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.36.8/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.8/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.36.8/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.36.8/charts/karpenter/templates/role.yaml) files for details.
 
 ### Can I run Karpenter outside of a Kubernetes cluster?
 Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
diff --git a/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md
index 5d9b9ac3fa0e..3b7f20ec2577 100644
--- a/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md
+++ b/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md
@@ -45,7 +45,7 @@ After setting up the tools, set the Karpenter and Kubernetes version:
 
 ```bash
 export KARPENTER_NAMESPACE="kube-system"
-export KARPENTER_VERSION="0.36.7"
+export KARPENTER_VERSION="0.36.8"
 export K8S_VERSION="1.29"
 ```
 
@@ -112,13 +112,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/
 As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command.
 
 ```bash
-cosign verify public.ecr.aws/karpenter/karpenter:0.36.7 \
+cosign verify public.ecr.aws/karpenter/karpenter:0.36.8 \
   --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
   --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
   --certificate-github-workflow-repository=aws/karpenter-provider-aws \
   --certificate-github-workflow-name=Release \
-  --certificate-github-workflow-ref=refs/tags/v0.36.7 \
-  --annotations version=0.36.7
+  --certificate-github-workflow-ref=refs/tags/v0.36.8 \
+  --annotations version=0.36.8
 ```
 
 {{% alert title="DNS Policy Notice" color="warning" %}}
diff --git a/website/content/en/v0.36/upgrading/upgrade-guide.md b/website/content/en/v0.36/upgrading/upgrade-guide.md
index 3b00e1f72417..82749eeeae7f 100644
--- a/website/content/en/v0.36/upgrading/upgrade-guide.md
+++ b/website/content/en/v0.36/upgrading/upgrade-guide.md
@@ -28,9 +28,9 @@ If you get the error `invalid ownership metadata; label validation error:` while
 In general, you can reapply the CRDs in the `crds` directory of the Karpenter Helm chart:
 
 ```shell
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.7/pkg/apis/crds/karpenter.sh_nodepools.yaml
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.7/pkg/apis/crds/karpenter.sh_nodeclaims.yaml
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.7/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.8/pkg/apis/crds/karpenter.sh_nodepools.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.8/pkg/apis/crds/karpenter.sh_nodeclaims.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.8/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml
 ```
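
The hunks above amount to a mechanical patch-release bump: 1.0.7 → 1.0.8 for the current docs, 0.36.7 → 0.36.8 for the v0.36 docs, plus the refreshed v1beta1 rollback patch versions. As a minimal sketch of how such a bump could be spot-checked locally — assuming the repository layout shown in the paths above and a standard grep; the variable names are illustrative, not part of the upstream change — one could search the docs tree for stragglers of the old version string:

```bash
# Illustrative only: list any lingering references to the previous patch release
# under the docs tree touched by this diff (paths and versions assumed from the hunks above).
OLD_VERSION="1.0.7"
DOCS_DIR="website/content/en/docs"

# -R recurse, -n print line numbers, -F treat the version as a fixed string
grep -RnF "${OLD_VERSION}" "${DOCS_DIR}" \
  || echo "No remaining references to ${OLD_VERSION} under ${DOCS_DIR}"
```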