From 9ce6742eba4eb2fc18a28f471afd66f727b2130a Mon Sep 17 00:00:00 2001 From: Jason Deal Date: Wed, 25 Sep 2024 23:07:45 -0700 Subject: [PATCH] docs: update patch refs 1.0.3 (#7079) --- .../content/en/docs/concepts/nodeclasses.md | 6 ++-- website/content/en/docs/concepts/nodepools.md | 2 +- website/content/en/docs/faq.md | 4 +-- .../getting-started-with-karpenter/_index.md | 8 ++--- .../migrating-from-cas/_index.md | 4 +-- .../en/docs/reference/cloudformation.md | 2 +- .../content/en/docs/reference/threat-model.md | 10 +++---- .../en/docs/upgrading/upgrade-guide.md | 2 +- .../content/en/docs/upgrading/v1-migration.md | 30 +++++++++---------- .../en/preview/concepts/nodeclasses.md | 2 +- .../en/preview/upgrading/upgrade-guide.md | 2 +- .../content/en/v1.0/concepts/nodeclasses.md | 6 ++-- website/content/en/v1.0/concepts/nodepools.md | 2 +- website/content/en/v1.0/faq.md | 4 +-- .../getting-started-with-karpenter/_index.md | 8 ++--- .../migrating-from-cas/_index.md | 4 +-- .../en/v1.0/reference/cloudformation.md | 2 +- .../content/en/v1.0/reference/threat-model.md | 10 +++---- .../content/en/v1.0/upgrading/v1-migration.md | 30 +++++++++---------- 19 files changed, 69 insertions(+), 69 deletions(-) diff --git a/website/content/en/docs/concepts/nodeclasses.md b/website/content/en/docs/concepts/nodeclasses.md index be0aa5769ecd..0824259ae106 100644 --- a/website/content/en/docs/concepts/nodeclasses.md +++ b/website/content/en/docs/concepts/nodeclasses.md @@ -207,7 +207,7 @@ status: status: "True" type: Ready ``` -Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1/). +Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.3/examples/v1/). ## spec.kubelet @@ -400,7 +400,7 @@ AMIFamily does not impact which AMI is discovered, only the UserData generation {{% alert title="Ubuntu Support Dropped at v1" color="warning" %}} -Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.1`. +Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.0`. This means Karpenter no longer supports automatic AMI discovery and UserData generation for Ubuntu. To continue using Ubuntu AMIs, you will need to select Ubuntu AMIs using `amiSelectorTerms`. @@ -1038,7 +1038,7 @@ spec: chown -R ec2-user ~ec2-user/.ssh ``` -For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1). +For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.3/examples/v1). Karpenter will merge the userData you specify with the default userData for that AMIFamily. See the [AMIFamily]({{< ref "#specamifamily" >}}) section for more details on these defaults. View the sections below to understand the different merge strategies for each AMIFamily. 
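The nodeclasses hunks above tell Ubuntu users to keep selecting Ubuntu AMIs with `amiSelectorTerms` now that the Ubuntu AMIFamily is dropped, but the patch itself never shows what such a selector looks like. The following is a minimal sketch, not something this patch prescribes: the AMI name pattern is illustrative, `099720109477` is Canonical's AWS account, `amiFamily: AL2` is assumed as the UserData-merge fallback, and `${CLUSTER_NAME}`/`${K8S_VERSION}` are the environment variables set in the getting-started steps elsewhere in this patch. Verify all of these against the v1.0 docs and the AMIs in your region before use.

```bash
# Sketch only (not part of the patched docs): keep using Ubuntu after the
# AMIFamily was dropped by selecting Canonical's EKS Ubuntu AMIs explicitly.
# Assumes CLUSTER_NAME and K8S_VERSION are exported as in the getting-started guide.
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: ubuntu
spec:
  amiFamily: AL2                                      # assumption: AL2-style UserData merging for Ubuntu AMIs
  amiSelectorTerms:
    - name: "ubuntu-eks/k8s_${K8S_VERSION}/images/*"  # illustrative name pattern; confirm in your region
      owner: "099720109477"                           # Canonical's AWS account
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
```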
diff --git a/website/content/en/docs/concepts/nodepools.md b/website/content/en/docs/concepts/nodepools.md index d079b905930f..854eaf840b13 100644 --- a/website/content/en/docs/concepts/nodepools.md +++ b/website/content/en/docs/concepts/nodepools.md @@ -27,7 +27,7 @@ Here are things you should know about NodePools: Objects for setting Kubelet features have been moved from the NodePool spec to the EC2NodeClasses spec, to not require other Karpenter providers to support those features. {{% /alert %}} -For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1/). +For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.3/examples/v1/). ```yaml apiVersion: karpenter.sh/v1 diff --git a/website/content/en/docs/faq.md b/website/content/en/docs/faq.md index 426497746923..e563ad69d154 100644 --- a/website/content/en/docs/faq.md +++ b/website/content/en/docs/faq.md @@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well. ### Can I write my own cloud provider for Karpenter? -Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.1/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too. +Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.3/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too. ### What operating system nodes does Karpenter deploy? Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}). @@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}). ### What RBAC access is required? -All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/role.yaml) files for details. +All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/role.yaml) files for details. ### Can I run Karpenter outside of a Kubernetes cluster? 
Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API. diff --git a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md index 8d77745636e9..e0dec1a844ff 100644 --- a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md +++ b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md @@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version: ```bash export KARPENTER_NAMESPACE="kube-system" -export KARPENTER_VERSION="1.0.2" +export KARPENTER_VERSION="1.0.3" export K8S_VERSION="1.30" ``` @@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/ As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command. ```bash -cosign verify public.ecr.aws/karpenter/karpenter:1.0.2 \ +cosign verify public.ecr.aws/karpenter/karpenter:1.0.3 \ --certificate-oidc-issuer=https://token.actions.githubusercontent.com \ --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \ --certificate-github-workflow-repository=aws/karpenter-provider-aws \ --certificate-github-workflow-name=Release \ - --certificate-github-workflow-ref=refs/tags/v1.0.2 \ - --annotations version=1.0.2 + --certificate-github-workflow-ref=refs/tags/v1.0.3 \ + --annotations version=1.0.3 ``` {{% alert title="DNS Policy Notice" color="warning" %}} diff --git a/website/content/en/docs/getting-started/migrating-from-cas/_index.md b/website/content/en/docs/getting-started/migrating-from-cas/_index.md index 36baf75a2b9f..117fc553bfd9 100644 --- a/website/content/en/docs/getting-started/migrating-from-cas/_index.md +++ b/website/content/en/docs/getting-started/migrating-from-cas/_index.md @@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group. First set the Karpenter release you want to deploy. ```bash -export KARPENTER_VERSION="1.0.1" +export KARPENTER_VERSION="1.0.3" ``` We can now generate a full Karpenter deployment yaml from the Helm chart. @@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t ## Create default NodePool -We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.1/examples/v1) for specific needs. +We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.3/examples/v1) for specific needs. 
{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}} diff --git a/website/content/en/docs/reference/cloudformation.md b/website/content/en/docs/reference/cloudformation.md index c5bf9cae7043..97b157e23258 100644 --- a/website/content/en/docs/reference/cloudformation.md +++ b/website/content/en/docs/reference/cloudformation.md @@ -17,7 +17,7 @@ These descriptions should allow you to understand: To download a particular version of `cloudformation.yaml`, set the version and use `curl` to pull the file to your local system: ```bash -export KARPENTER_VERSION="1.0.1" +export KARPENTER_VERSION="1.0.3" curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml ``` diff --git a/website/content/en/docs/reference/threat-model.md b/website/content/en/docs/reference/threat-model.md index b79862bd94ea..ba26eb7ecfca 100644 --- a/website/content/en/docs/reference/threat-model.md +++ b/website/content/en/docs/reference/threat-model.md @@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via `Deployments`, `ReplicaSe Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles. -* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/aggregate-clusterrole.yaml) -* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole-core.yaml) -* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole.yaml) -* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/rolebinding.yaml) -* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/role.yaml) +* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/aggregate-clusterrole.yaml) +* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole-core.yaml) +* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole.yaml) +* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/rolebinding.yaml) +* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/role.yaml) ## Assumptions diff --git a/website/content/en/docs/upgrading/upgrade-guide.md b/website/content/en/docs/upgrading/upgrade-guide.md index b74ac48c4bac..957402f97c22 100644 --- a/website/content/en/docs/upgrading/upgrade-guide.md +++ b/website/content/en/docs/upgrading/upgrade-guide.md @@ -11,7 +11,7 @@ Use your existing upgrade mechanisms to upgrade your core add-ons in Kubernetes This guide contains information needed to upgrade to the latest release of Karpenter, along with compatibility issues you need to be aware of when upgrading from earlier Karpenter versions. {{% alert title="Warning" color="warning" %}} -With the release of Karpenter v1.0.1, the Karpenter team will be dropping support for karpenter versions v0.32 and below. 
We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. +With the release of Karpenter v1.0.0, the Karpenter team will be dropping support for karpenter versions v0.32 and below. We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. {{% /alert %}} ### CRD Upgrades diff --git a/website/content/en/docs/upgrading/v1-migration.md b/website/content/en/docs/upgrading/v1-migration.md index 15a086986920..0cba25f9805f 100644 --- a/website/content/en/docs/upgrading/v1-migration.md +++ b/website/content/en/docs/upgrading/v1-migration.md @@ -75,20 +75,20 @@ The upgrade guide will first require upgrading to your latest patch version prio --set webhook.port=8443 ``` {{% alert title="Note" color="warning" %}} -If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. +If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. {{% /alert %}} {{% alert title="Note" color="warning" %}} -As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: -```bash +```bash export SERVICE_NAME= export SERVICE_NAMESPACE= export SERVICE_PORT= # NodePools kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" -# NodeClaims +# NodeClaims kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" # EC2NodeClass kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" @@ -112,14 +112,14 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\" --wait ``` -7. Set environment variables for first upgrading to v1.0.1 +7. Set environment variables for first upgrading to v1.0.3 ```bash - export KARPENTER_VERSION=1.0.1 + export KARPENTER_VERSION=1.0.3 ``` -8. Update your existing policy using the following to the v1.0.1 controller policy: +8. Update your existing policy using the following to the v1.0.3 controller policy: Notable Changes to the IAM Policy include additional tag-scoping for the `eks:eks-cluster-name` tag for instances and instance profiles. ```bash @@ -132,7 +132,7 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\" --parameter-overrides "ClusterName=${CLUSTER_NAME}" ``` -9. Apply the v1.0.1 Custom Resource Definitions (CRDs): +9. 
Apply the v1.0.3 Custom Resource Definitions (CRDs): ```bash helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ @@ -142,20 +142,20 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\" ``` {{% alert title="Note" color="warning" %}} -If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. +If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. {{% /alert %}} {{% alert title="Note" color="warning" %}} -As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: -```bash +```bash export SERVICE_NAME= export SERVICE_NAMESPACE= export SERVICE_PORT= # NodePools kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" -# NodeClaims +# NodeClaims kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" # EC2NodeClass kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" @@ -368,15 +368,15 @@ helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-cr {{% alert title="Note" color="warning" %}} -As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: -```bash +```bash export SERVICE_NAME= export SERVICE_NAMESPACE= export SERVICE_PORT= # NodePools kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" -# NodeClaims +# NodeClaims kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" # EC2NodeClass kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" diff --git a/website/content/en/preview/concepts/nodeclasses.md b/website/content/en/preview/concepts/nodeclasses.md index 2ab22e94bd50..3448a66d49c2 100644 --- a/website/content/en/preview/concepts/nodeclasses.md +++ b/website/content/en/preview/concepts/nodeclasses.md @@ -400,7 +400,7 @@ AMIFamily does not impact which AMI is discovered, only the UserData generation {{% alert title="Ubuntu Support Dropped 
at v1" color="warning" %}} -Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.1`. +Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.0`. This means Karpenter no longer supports automatic AMI discovery and UserData generation for Ubuntu. To continue using Ubuntu AMIs, you will need to select Ubuntu AMIs using `amiSelectorTerms`. diff --git a/website/content/en/preview/upgrading/upgrade-guide.md b/website/content/en/preview/upgrading/upgrade-guide.md index 3e014be97c3c..cb0421d7ed4c 100644 --- a/website/content/en/preview/upgrading/upgrade-guide.md +++ b/website/content/en/preview/upgrading/upgrade-guide.md @@ -11,7 +11,7 @@ Use your existing upgrade mechanisms to upgrade your core add-ons in Kubernetes This guide contains information needed to upgrade to the latest release of Karpenter, along with compatibility issues you need to be aware of when upgrading from earlier Karpenter versions. {{% alert title="Warning" color="warning" %}} -With the release of Karpenter v1.0.1, the Karpenter team will be dropping support for karpenter versions v0.32 and below. We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. +With the release of Karpenter v1.0.0, the Karpenter team will be dropping support for karpenter versions v0.32 and below. We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. {{% /alert %}} ### CRD Upgrades diff --git a/website/content/en/v1.0/concepts/nodeclasses.md b/website/content/en/v1.0/concepts/nodeclasses.md index be0aa5769ecd..0824259ae106 100644 --- a/website/content/en/v1.0/concepts/nodeclasses.md +++ b/website/content/en/v1.0/concepts/nodeclasses.md @@ -207,7 +207,7 @@ status: status: "True" type: Ready ``` -Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1/). +Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.3/examples/v1/). ## spec.kubelet @@ -400,7 +400,7 @@ AMIFamily does not impact which AMI is discovered, only the UserData generation {{% alert title="Ubuntu Support Dropped at v1" color="warning" %}} -Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.1`. +Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.0`. This means Karpenter no longer supports automatic AMI discovery and UserData generation for Ubuntu. To continue using Ubuntu AMIs, you will need to select Ubuntu AMIs using `amiSelectorTerms`. @@ -1038,7 +1038,7 @@ spec: chown -R ec2-user ~ec2-user/.ssh ``` -For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1). +For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.3/examples/v1). Karpenter will merge the userData you specify with the default userData for that AMIFamily. See the [AMIFamily]({{< ref "#specamifamily" >}}) section for more details on these defaults. View the sections below to understand the different merge strategies for each AMIFamily. 
diff --git a/website/content/en/v1.0/concepts/nodepools.md b/website/content/en/v1.0/concepts/nodepools.md index d079b905930f..854eaf840b13 100644 --- a/website/content/en/v1.0/concepts/nodepools.md +++ b/website/content/en/v1.0/concepts/nodepools.md @@ -27,7 +27,7 @@ Here are things you should know about NodePools: Objects for setting Kubelet features have been moved from the NodePool spec to the EC2NodeClasses spec, to not require other Karpenter providers to support those features. {{% /alert %}} -For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1/). +For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.3/examples/v1/). ```yaml apiVersion: karpenter.sh/v1 diff --git a/website/content/en/v1.0/faq.md b/website/content/en/v1.0/faq.md index 426497746923..e563ad69d154 100644 --- a/website/content/en/v1.0/faq.md +++ b/website/content/en/v1.0/faq.md @@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well. ### Can I write my own cloud provider for Karpenter? -Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.1/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too. +Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.3/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too. ### What operating system nodes does Karpenter deploy? Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}). @@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}). ### What RBAC access is required? -All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/role.yaml) files for details. +All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/role.yaml) files for details. ### Can I run Karpenter outside of a Kubernetes cluster? 
Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API. diff --git a/website/content/en/v1.0/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/v1.0/getting-started/getting-started-with-karpenter/_index.md index 8d77745636e9..e0dec1a844ff 100644 --- a/website/content/en/v1.0/getting-started/getting-started-with-karpenter/_index.md +++ b/website/content/en/v1.0/getting-started/getting-started-with-karpenter/_index.md @@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version: ```bash export KARPENTER_NAMESPACE="kube-system" -export KARPENTER_VERSION="1.0.2" +export KARPENTER_VERSION="1.0.3" export K8S_VERSION="1.30" ``` @@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/ As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command. ```bash -cosign verify public.ecr.aws/karpenter/karpenter:1.0.2 \ +cosign verify public.ecr.aws/karpenter/karpenter:1.0.3 \ --certificate-oidc-issuer=https://token.actions.githubusercontent.com \ --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \ --certificate-github-workflow-repository=aws/karpenter-provider-aws \ --certificate-github-workflow-name=Release \ - --certificate-github-workflow-ref=refs/tags/v1.0.2 \ - --annotations version=1.0.2 + --certificate-github-workflow-ref=refs/tags/v1.0.3 \ + --annotations version=1.0.3 ``` {{% alert title="DNS Policy Notice" color="warning" %}} diff --git a/website/content/en/v1.0/getting-started/migrating-from-cas/_index.md b/website/content/en/v1.0/getting-started/migrating-from-cas/_index.md index 0915d3f81b13..117fc553bfd9 100644 --- a/website/content/en/v1.0/getting-started/migrating-from-cas/_index.md +++ b/website/content/en/v1.0/getting-started/migrating-from-cas/_index.md @@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group. First set the Karpenter release you want to deploy. ```bash -export KARPENTER_VERSION="1.0.2" +export KARPENTER_VERSION="1.0.3" ``` We can now generate a full Karpenter deployment yaml from the Helm chart. @@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t ## Create default NodePool -We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.2/examples/v1) for specific needs. +We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.3/examples/v1) for specific needs. 
{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}} diff --git a/website/content/en/v1.0/reference/cloudformation.md b/website/content/en/v1.0/reference/cloudformation.md index c5bf9cae7043..97b157e23258 100644 --- a/website/content/en/v1.0/reference/cloudformation.md +++ b/website/content/en/v1.0/reference/cloudformation.md @@ -17,7 +17,7 @@ These descriptions should allow you to understand: To download a particular version of `cloudformation.yaml`, set the version and use `curl` to pull the file to your local system: ```bash -export KARPENTER_VERSION="1.0.1" +export KARPENTER_VERSION="1.0.3" curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml ``` diff --git a/website/content/en/v1.0/reference/threat-model.md b/website/content/en/v1.0/reference/threat-model.md index b79862bd94ea..ba26eb7ecfca 100644 --- a/website/content/en/v1.0/reference/threat-model.md +++ b/website/content/en/v1.0/reference/threat-model.md @@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via `Deployments`, `ReplicaSe Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles. -* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/aggregate-clusterrole.yaml) -* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole-core.yaml) -* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole.yaml) -* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/rolebinding.yaml) -* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/role.yaml) +* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/aggregate-clusterrole.yaml) +* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole-core.yaml) +* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/clusterrole.yaml) +* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/rolebinding.yaml) +* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.3/charts/karpenter/templates/role.yaml) ## Assumptions diff --git a/website/content/en/v1.0/upgrading/v1-migration.md b/website/content/en/v1.0/upgrading/v1-migration.md index 62bb3b4c5bc7..628bdb8e98b5 100644 --- a/website/content/en/v1.0/upgrading/v1-migration.md +++ b/website/content/en/v1.0/upgrading/v1-migration.md @@ -76,20 +76,20 @@ The upgrade guide will first require upgrading to your latest patch version prio ``` {{% alert title="Note" color="warning" %}} -If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. +If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. 
{{% /alert %}} {{% alert title="Note" color="warning" %}} -As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: -```bash +```bash export SERVICE_NAME= export SERVICE_NAMESPACE= export SERVICE_PORT= # NodePools kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" -# NodeClaims +# NodeClaims kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" # EC2NodeClass kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" @@ -113,14 +113,14 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\" --wait ``` -7. Set environment variables for first upgrading to v1.0.2 +7. Set environment variables for first upgrading to v1.0.3 ```bash - export KARPENTER_VERSION=1.0.2 + export KARPENTER_VERSION=1.0.3 ``` -8. Update your existing policy using the following to the v1.0.2 controller policy: +8. Update your existing policy using the following to the v1.0.3 controller policy: Notable Changes to the IAM Policy include additional tag-scoping for the `eks:eks-cluster-name` tag for instances and instance profiles. ```bash @@ -133,7 +133,7 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\" --parameter-overrides "ClusterName=${CLUSTER_NAME}" ``` -9. Apply the v1.0.2 Custom Resource Definitions (CRDs): +9. Apply the v1.0.3 Custom Resource Definitions (CRDs): ```bash helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ @@ -143,20 +143,20 @@ kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\" ``` {{% alert title="Note" color="warning" %}} -If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. +If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. 
{{% /alert %}} {{% alert title="Note" color="warning" %}} -As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: -```bash +```bash export SERVICE_NAME= export SERVICE_NAMESPACE= export SERVICE_PORT= # NodePools kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" -# NodeClaims +# NodeClaims kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" # EC2NodeClass kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" @@ -369,15 +369,15 @@ helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-cr {{% alert title="Note" color="warning" %}} -As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: -```bash +```bash export SERVICE_NAME= export SERVICE_NAMESPACE= export SERVICE_PORT= # NodePools kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" -# NodeClaims +# NodeClaims kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" # EC2NodeClass kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}"
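The `kubectl patch` commands repeated throughout this patch rewrite the conversion webhook's service reference on the three Karpenter CRDs, but they give no feedback beyond `patched`. A small verification sketch, assuming the same CRD names used above and a kubectl context pointing at the upgraded cluster:

```bash
# Sketch: confirm each Karpenter CRD's conversion webhook now targets the
# expected service after running the patch commands above.
for crd in nodepools.karpenter.sh nodeclaims.karpenter.sh ec2nodeclasses.karpenter.k8s.aws; do
  echo "--- ${crd}"
  kubectl get crd "${crd}" \
    -o jsonpath='{.spec.conversion.webhook.clientConfig.service}{"\n"}'
done
```

Each CRD should report the name, namespace, and port you exported as `SERVICE_NAME`, `SERVICE_NAMESPACE`, and `SERVICE_PORT` before patching.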