diff --git a/website/content/en/docs/concepts/scheduling.md b/website/content/en/docs/concepts/scheduling.md
index 76c09cbc5124..49e43cf45f9d 100755
--- a/website/content/en/docs/concepts/scheduling.md
+++ b/website/content/en/docs/concepts/scheduling.md
@@ -626,19 +626,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
 
 The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
 
 ```yaml
-...
-requirements:
-- key: company.com/team
-  operator: Exists
+apiVersion: karpenter.sh/v1beta1
+kind: NodePool
+spec:
+  template:
+    spec:
+      requirements:
+      - key: company.com/team
+        operator: Exists
 ...
 ```
 
-With the requirement on the NodePool, workloads can optionally specify a custom value as a required node affinity or node selector. Karpenter will then label the nodes it launches for these pods which prevents `kube-scheduler` from scheduling conflicting pods to those nodes. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.
+With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then dynamically apply the key/value pair to the nodes it launches, based on each pod's node requirements.
+
+If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to isolate workloads more dynamically, without requiring a unique NodePool for each workload subset.
+
+For example, providing the following `nodeSelector` values would isolate the pods for each of these deployments on different nodes.
+
+#### Team A Deployment
 
 ```yaml
-nodeSelector:
-  company.com/team: team-a
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-a-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-a
+```
+
+#### Team A Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-a
 ```
+
+#### Team B Deployment
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-b-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-b
+```
+
+#### Team B Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-b
+```
+
 {{% alert title="Note" color="primary" %}}
 If a workload matches the NodePool but doesn't specify a label, Karpenter will generate a random label for the node.
 {{% /alert %}}
diff --git a/website/content/en/preview/concepts/scheduling.md b/website/content/en/preview/concepts/scheduling.md
index 76c09cbc5124..49e43cf45f9d 100755
--- a/website/content/en/preview/concepts/scheduling.md
+++ b/website/content/en/preview/concepts/scheduling.md
@@ -626,19 +626,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
 
 The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
 
 ```yaml
-...
-requirements:
-- key: company.com/team
-  operator: Exists
+apiVersion: karpenter.sh/v1beta1
+kind: NodePool
+spec:
+  template:
+    spec:
+      requirements:
+      - key: company.com/team
+        operator: Exists
 ...
 ```
 
-With the requirement on the NodePool, workloads can optionally specify a custom value as a required node affinity or node selector. Karpenter will then label the nodes it launches for these pods which prevents `kube-scheduler` from scheduling conflicting pods to those nodes. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.
+With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then dynamically apply the key/value pair to the nodes it launches, based on each pod's node requirements.
+
+If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to isolate workloads more dynamically, without requiring a unique NodePool for each workload subset.
+
+For example, providing the following `nodeSelector` values would isolate the pods for each of these deployments on different nodes.
+
+#### Team A Deployment
 
 ```yaml
-nodeSelector:
-  company.com/team: team-a
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-a-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-a
+```
+
+#### Team A Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-a
 ```
+
+#### Team B Deployment
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-b-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-b
+```
+
+#### Team B Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-b
+```
+
 {{% alert title="Note" color="primary" %}}
 If a workload matches the NodePool but doesn't specify a label, Karpenter will generate a random label for the node.
 {{% /alert %}}
diff --git a/website/content/en/v0.32/concepts/scheduling.md b/website/content/en/v0.32/concepts/scheduling.md
index 466b54910454..9617591d7868 100755
--- a/website/content/en/v0.32/concepts/scheduling.md
+++ b/website/content/en/v0.32/concepts/scheduling.md
@@ -625,19 +625,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
 
 The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
 
 ```yaml
-...
-requirements:
-- key: company.com/team
-  operator: Exists
+apiVersion: karpenter.sh/v1beta1
+kind: NodePool
+spec:
+  template:
+    spec:
+      requirements:
+      - key: company.com/team
+        operator: Exists
 ...
 ```
 
-With the requirement on the NodePool, workloads can optionally specify a custom value as a required node affinity or node selector. Karpenter will then label the nodes it launches for these pods which prevents `kube-scheduler` from scheduling conflicting pods to those nodes. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.
+With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then dynamically apply the key/value pair to the nodes it launches, based on each pod's node requirements.
+
+If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to isolate workloads more dynamically, without requiring a unique NodePool for each workload subset.
+
+For example, providing the following `nodeSelector` values would isolate the pods for each of these deployments on different nodes.
+
+#### Team A Deployment
 
 ```yaml
-nodeSelector:
-  company.com/team: team-a
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-a-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-a
+```
+
+#### Team A Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-a
 ```
+
+#### Team B Deployment
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-b-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-b
+```
+
+#### Team B Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-b
+```
+
 {{% alert title="Note" color="primary" %}}
 If a workload matches the NodePool but doesn't specify a label, Karpenter will generate a random label for the node.
 {{% /alert %}}
diff --git a/website/content/en/v0.34/concepts/scheduling.md b/website/content/en/v0.34/concepts/scheduling.md
index ef12a29cc347..4eebe154c855 100755
--- a/website/content/en/v0.34/concepts/scheduling.md
+++ b/website/content/en/v0.34/concepts/scheduling.md
@@ -625,19 +625,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
 
 The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
 
 ```yaml
-...
-requirements:
-- key: company.com/team
-  operator: Exists
+apiVersion: karpenter.sh/v1beta1
+kind: NodePool
+spec:
+  template:
+    spec:
+      requirements:
+      - key: company.com/team
+        operator: Exists
 ...
 ```
 
-With the requirement on the NodePool, workloads can optionally specify a custom value as a required node affinity or node selector. Karpenter will then label the nodes it launches for these pods which prevents `kube-scheduler` from scheduling conflicting pods to those nodes. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.
+With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then dynamically apply the key/value pair to the nodes it launches, based on each pod's node requirements.
+
+If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to isolate workloads more dynamically, without requiring a unique NodePool for each workload subset.
+
+For example, providing the following `nodeSelector` values would isolate the pods for each of these deployments on different nodes.
+
+#### Team A Deployment
 
 ```yaml
-nodeSelector:
-  company.com/team: team-a
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-a-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-a
+```
+
+#### Team A Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-a
 ```
+
+#### Team B Deployment
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-b-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-b
+```
+
+#### Team B Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-b
+```
+
 {{% alert title="Note" color="primary" %}}
 If a workload matches the NodePool but doesn't specify a label, Karpenter will generate a random label for the node.
 {{% /alert %}}
diff --git a/website/content/en/v0.35/concepts/scheduling.md b/website/content/en/v0.35/concepts/scheduling.md
index ef12a29cc347..4eebe154c855 100755
--- a/website/content/en/v0.35/concepts/scheduling.md
+++ b/website/content/en/v0.35/concepts/scheduling.md
@@ -625,19 +625,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
 
 The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
 
 ```yaml
-...
-requirements:
-- key: company.com/team
-  operator: Exists
+apiVersion: karpenter.sh/v1beta1
+kind: NodePool
+spec:
+  template:
+    spec:
+      requirements:
+      - key: company.com/team
+        operator: Exists
 ...
 ```
 
-With the requirement on the NodePool, workloads can optionally specify a custom value as a required node affinity or node selector. Karpenter will then label the nodes it launches for these pods which prevents `kube-scheduler` from scheduling conflicting pods to those nodes. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.
+With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then dynamically apply the key/value pair to the nodes it launches, based on each pod's node requirements.
+
+If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to isolate workloads more dynamically, without requiring a unique NodePool for each workload subset.
+
+For example, providing the following `nodeSelector` values would isolate the pods for each of these deployments on different nodes.
+
+#### Team A Deployment
 
 ```yaml
-nodeSelector:
-  company.com/team: team-a
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-a-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-a
+```
+
+#### Team A Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-a
 ```
+
+#### Team B Deployment
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-b-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-b
+```
+
+#### Team B Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-b
+```
+
 {{% alert title="Note" color="primary" %}}
 If a workload matches the NodePool but doesn't specify a label, Karpenter will generate a random label for the node.
 {{% /alert %}}
diff --git a/website/content/en/v0.36/concepts/scheduling.md b/website/content/en/v0.36/concepts/scheduling.md
index 76c09cbc5124..49e43cf45f9d 100755
--- a/website/content/en/v0.36/concepts/scheduling.md
+++ b/website/content/en/v0.36/concepts/scheduling.md
@@ -626,19 +626,73 @@ If using Gt/Lt operators, make sure to use values under the actual label values
 
 The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
 
 ```yaml
-...
-requirements:
-- key: company.com/team
-  operator: Exists
+apiVersion: karpenter.sh/v1beta1
+kind: NodePool
+spec:
+  template:
+    spec:
+      requirements:
+      - key: company.com/team
+        operator: Exists
 ...
 ```
 
-With the requirement on the NodePool, workloads can optionally specify a custom value as a required node affinity or node selector. Karpenter will then label the nodes it launches for these pods which prevents `kube-scheduler` from scheduling conflicting pods to those nodes. This provides a way to more dynamically isolate workloads without requiring a unique NodePool for each workload subset.
+With this requirement on the NodePool, workloads can specify the same key (e.g. `company.com/team`) with custom values (e.g. `team-a`, `team-b`, etc.) as a required `nodeAffinity` or `nodeSelector`. Karpenter will then dynamically apply the key/value pair to the nodes it launches, based on each pod's node requirements.
+
+If each set of pods that can schedule with this NodePool specifies this key in its `nodeAffinity` or `nodeSelector`, you can isolate pods onto different nodes based on their values. This provides a way to isolate workloads more dynamically, without requiring a unique NodePool for each workload subset.
+
+For example, providing the following `nodeSelector` values would isolate the pods for each of these deployments on different nodes.
+
+#### Team A Deployment
 
 ```yaml
-nodeSelector:
-  company.com/team: team-a
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-a-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-a
+```
+
+#### Team A Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-a
 ```
+
+#### Team B Deployment
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: team-b-deployment
+spec:
+  replicas: 5
+  template:
+    spec:
+      nodeSelector:
+        company.com/team: team-b
+```
+
+#### Team B Node
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  labels:
+    company.com/team: team-b
+```
+
 {{% alert title="Note" color="primary" %}}
 If a workload matches the NodePool but doesn't specify a label, Karpenter will generate a random label for the node.
 {{% /alert %}}
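
The added prose says a required `nodeAffinity` works the same way as a `nodeSelector`, but only the `nodeSelector` form appears in the examples. For reference, a minimal sketch of the equivalent pod-template stanza using standard Kubernetes node affinity; the `company.com/team` key and `team-a` value come from the examples above, and the rest is stock Kubernetes API.

```yaml
# Equivalent to `nodeSelector: {company.com/team: team-a}`, expressed as a
# required node affinity on the Deployment's pod template.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: company.com/team
          operator: In
          values:
          - team-a
```

Because the NodePool requirement only asserts that the key `Exists`, either form produces the same result: Karpenter launches a node labeled `company.com/team: team-a` to satisfy the pod's requirement.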