Add user guide to interact with mto on eks (#115)
MuneebAijaz authored Jul 31, 2024
1 parent 73a5b2f commit 2669121
Showing 14 changed files with 784 additions and 23 deletions.
6 changes: 3 additions & 3 deletions content/changelog.md
@@ -34,7 +34,7 @@
#### Enhanced

- Updated Tenant CR to v1beta3, more details in [Tenant CRD](./crds-api-reference/tenant.md)
- Added custom pricing support for Opencost, more details in [Opencost](./crds-api-reference/integration-config.md#Custom-Pricing-Model)
- Added custom pricing support for Opencost, more details in [Opencost](./crds-api-reference/integration-config.md#custom-pricing-model)

#### Fix

@@ -237,7 +237,7 @@

### v0.5.0

- feat: Add support for tenant namespaces off-boarding. For more details check out [onDelete](./tutorials/tenant/deleting-tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted)
- feat: Add support for tenant namespaces off-boarding.
- feat: Add tenant webhook for spec validation

- fix: TemplateGroupInstance now cleans up leftover Template resources from namespaces that are no longer part of TGI namespace selector
@@ -460,7 +460,7 @@
### v0.2.32

- refactor: Restructure integration config spec, more details in [relevant docs][def]
- feat: Allow users to input custom regex in certain fields inside of integration config, more details in [relevant docs](./crds-api-reference/integration-config.md#openshift)
- feat: Allow users to input custom regex in certain fields inside of integration config, more details in [relevant docs](./crds-api-reference/integration-config.md)

### v0.2.31

6 changes: 1 addition & 5 deletions content/explanation/multi-tenancy-vault.md
@@ -20,11 +20,7 @@ The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.

This requires a running `RHSSO(RedHat Single Sign On)` instance integrated with Vault over [OIDC](https://developer.hashicorp.com/vault/docs/auth/jwt) login method.

MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.

Once both integrations are set up with [IntegrationConfig CR](../crds-api-reference/integration-config.md#rhsso-red-hat-single-sign-on), MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.

After that, MTO creates specific policies in Vault for its tenant users.
MTO creates specific policies in Vault for its tenant users.

Mapping of tenant roles to Vault is shown below

@@ -29,7 +29,7 @@ resources:
```
Once the template has been created, Bill has to edit the `Tenant` to add a unique label to the namespaces in which the secret has to be deployed.
For this, he can use the support for [common](../tutorials/tenant/assigning-metadata.md#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource) and [specific](../tutorials/tenant/assigning-metadata.md#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource) labels across namespaces.
For this, he can use the support for [common](../tutorials/tenant/assigning-metadata.md#distributing-common-labels-and-annotations) and [specific](../tutorials/tenant/assigning-metadata.md#distributing-specific-labels-and-annotations) labels across namespaces.

Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

6 changes: 4 additions & 2 deletions content/how-to-guides/offboarding/uninstalling.md
@@ -2,9 +2,11 @@

You can uninstall MTO by following these steps:

* Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set `spec.onDelete.cleanNamespaces` to `false` for all those tenants whose namespaces you want to retain, and `spec.onDelete.cleanAppProject` to `false` for all those tenants whose AppProject you want to retain. For more details check out [onDelete](../../tutorials/tenant/deleting-tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted)
* Decide whether you want to retain tenant namespaces and ArgoCD AppProjects or not.
For more details, check out [onDeletePurgeNamespaces](../../tutorials/tenant/deleting-tenant.md#configuration-for-retaining-resources) and [onDeletePurgeAppProject](../../crds-api-reference/extensions.md#configuring-argocd-integration).

* In case you have enabled console, you will have to disable it first by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and set `spec.provision.console` and `spec.provision.showback` to `false`.
* In case you have enabled console and showback, you will have to disable them first by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and setting `spec.components.console` and `spec.components.showback` to `false`.

* Remove the IntegrationConfig CR from the cluster by navigating to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and selecting `Delete` from the actions dropdown.

Binary file added content/images/eks-access-config.png
Binary file added content/images/eks-access-entry.png
Binary file added content/images/eks-denied-ns-access.png
Binary file added content/images/eks-nodegroup.png
16 changes: 8 additions & 8 deletions content/index.md
@@ -9,20 +9,20 @@ head:

[//]: # ( introduction.md, features.md)

Kubernetes is designed to support a single tenant platform; OpenShift brings some improvements with its "Secure by default" concepts but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform hence making it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.
Kubernetes is designed to support a single tenant platform; Managed Kubernetes Services (such as AKS, EKS, GKE and OpenShift) bring some improvements with their "Secure by default" concepts, but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform, hence making it difficult for cluster admins to host multi-tenancy in a single Kubernetes cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of a cluster's internal resources among different tenants. Kubernetes and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of the respective tool.

This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy.
This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around Kubernetes resources (depending on the version) to provide a higher level of abstraction to users. With MTO, admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy.
MTO supports initializing new tenants using GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.

The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.

MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:
MTO enables cluster admins to host multiple tenants in a single Kubernetes Cluster, i.e.:

* Share an **OpenShift cluster** with multiple tenants
* Share a **Kubernetes cluster** with multiple tenants
* Share **managed applications** with multiple tenants
* Configure and manage tenants and their sandboxes

MTO is also [OpenShift certified](https://catalog.redhat.com/software/operators/detail/618fa05e3adfdfc43f73b126)
MTO is also [Red Hat certified](https://catalog.redhat.com/software/operators/detail/618fa05e3adfdfc43f73b126)

## Features

@@ -34,7 +34,7 @@ RBAC is one of the most complicated and error-prone parts of Kubernetes. With Mu

Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.

Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
Multi Tenant Operator is also able to leverage existing groups in Kubernetes and OpenShift, or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.

## HashiCorp Vault Multitenancy

@@ -44,7 +44,7 @@ More details on [Vault Multitenancy](./how-to-guides/enabling-multi-tenancy-vaul

## ArgoCD Multitenancy

Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
Multi Tenant Operator not only provides strong multi-tenancy for the Kubernetes internals but also extends the tenant permission model to ArgoCD, where it can provision AppProjects and Allowed Repositories for your tenants, greatly easing the overhead of managing RBAC in ArgoCD.

More details on [ArgoCD Multitenancy](./how-to-guides/enabling-multi-tenancy-argocd.md)

@@ -114,7 +114,7 @@ Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can

## Everything as Code/GitOps Ready

Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
Multi Tenant Operator is designed and built to be 100% Kubernetes-native, and to be configured and managed in the same familiar way as native Kubernetes resources, so it is perfect for modern companies that are dedicated to GitOps, as it is fully configurable using Custom Resources.

## Preventing Clusters Sprawl

267 changes: 267 additions & 0 deletions content/installation/managed-kubernetes/aws-eks.md
@@ -0,0 +1,267 @@
# Multi Tenant Operator in Amazon Elastic Kubernetes Service

This document covers how to link Multi Tenant Operator with an [Amazon EKS (Elastic Kubernetes Service)](https://aws.amazon.com/eks/) cluster.

## Prerequisites

- You need `kubectl`, with a minimum version of 1.18.3. If you need to install it, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
- To install MTO, you also need the Helm CLI. Visit [Installing Helm](https://helm.sh/docs/intro/install/) to get it.
- You need a user in the [AWS Console](https://console.aws.amazon.com/), which we will use as the administrator, with enough permissions to access the cluster and create groups and users.
- A running EKS cluster. [Creating an EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) provides a good tutorial for creating a demo cluster.

## Setting up an EKS Cluster

In this example, we have already set up a small EKS cluster with the following node group specifications.

![Node Group](../../images/eks-nodegroup.png)
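
If you are starting from scratch, a roughly comparable demo cluster can be created with `eksctl`. This is only an illustrative sketch; the cluster name, region, instance type and node counts are assumptions that should be adjusted to your environment.

```bash
# Illustrative only: create a small demo EKS cluster with a managed node group
eksctl create cluster \
  --name mto-demo \
  --region eu-north-1 \
  --nodegroup-name mto-demo-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3
```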

We have the access configuration set to both EKS API and ConfigMap, so that the admin can access the cluster using the EKS API and we can map IAM users to our EKS cluster using the `aws-auth` ConfigMap.

![EKS Access Config](../../images/eks-access-config.png)
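
If your cluster was created with a different authentication mode, it can usually be switched to both EKS API and ConfigMap with the AWS CLI. The cluster name and region below are placeholders, not values from this guide.

```bash
# Allow authentication via both EKS access entries (API) and the aws-auth ConfigMap
aws eks update-cluster-config \
  --name mto-demo \
  --region eu-north-1 \
  --access-config authenticationMode=API_AND_CONFIG_MAP
```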

We also have the `AmazonEKSClusterAdminPolicy` access policy attached to our user, which makes it a cluster admin. Note that the user is also added to the `cluster-admins` group, which we will later use while installing MTO.

![EKS Access Entry](../../images/eks-access-entry.png)
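
For reference, an access entry like the one in the screenshot can also be created from the CLI. This is a sketch under the assumption that access entries are enabled on the cluster; the cluster name and user ARN are placeholders.

```bash
# Create an access entry for the admin user and place it in the cluster-admins group
aws eks create-access-entry \
  --cluster-name mto-demo \
  --principal-arn arn:aws:iam::<account>:user/<admin-user> \
  --kubernetes-groups cluster-admins

# Grant that principal cluster-admin rights via the managed EKS access policy
aws eks associate-access-policy \
  --cluster-name mto-demo \
  --principal-arn arn:aws:iam::<account>:user/<admin-user> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```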

## Installing Cert Manager and MTO

We, as cluster admins, will start by installing cert-manager for automated handling of the operator's certificates.

```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
```

Let's wait for the cert-manager pods to be up; they can be checked with `kubectl get pods --namespace cert-manager`.

```bash
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7fb948f468-wgcbx              1/1     Running   0          7m18s
cert-manager-cainjector-75c5fc965c-wxtkp   1/1     Running   0          7m18s
cert-manager-webhook-757c9d4bb7-wd9g8      1/1     Running   0          7m18s
```

We will be using Helm to install the operator. Here we have set `bypassedGroups` to `cluster-admins`, because our admin user is part of that group, as seen in the above screenshot.

```terminal
helm install tenant-operator oci://ghcr.io/stakater/public/charts/multi-tenant-operator \
  --version 0.12.62 \
  --namespace multi-tenant-operator \
  --create-namespace \
  --set bypassedGroups=cluster-admins
```
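
Optionally, you can confirm that the release was deployed before checking the pods; a quick sanity check could look like this:

```bash
# List Helm releases in the operator namespace and confirm tenant-operator is deployed
helm list --namespace multi-tenant-operator
```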

We will wait for the operator pods to reach the running state, e.g. with `kubectl get pods --namespace multi-tenant-operator`.

```bash
NAME                                                              READY   STATUS    RESTARTS   AGE
tenant-operator-namespace-controller-768f9459c4-758kb             2/2     Running   0          5m
tenant-operator-pilot-controller-7c96f6589c-d979f                 2/2     Running   0          5m
tenant-operator-resourcesupervisor-controller-566f59d57b-xbkws    2/2     Running   0          5m
tenant-operator-template-quota-intconfig-controller-7fc99462dz6   2/2     Running   0          5m
tenant-operator-templategroupinstance-controller-75cf68c872pljv   2/2     Running   0          5m
tenant-operator-templateinstance-controller-d996b6fd-cx2dz        2/2     Running   0          5m
tenant-operator-tenant-controller-57fb885c84-7ps92                2/2     Running   0          5m
tenant-operator-webhook-5f8f675549-jv9n8                          2/2     Running   0          5m
```

## Users Interaction with the Cluster

We will use two types of users to interact with the cluster: IAM users created via the AWS Console, and SSO users.

### IAM Users

We have created a user named `test-benzema-mto` in the AWS Console, with ARN `arn:aws:iam::<account>:user/test-benzema-mto`.
This user has a policy attached so that it is able to get cluster info:

```json
{
    "Statement": [
        {
            "Action": "eks:DescribeCluster",
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}
```
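
One way to attach such an inline policy from the CLI (assuming the policy JSON above is saved as `eks-describe.json`; the user and policy names here are illustrative) is:

```bash
# Attach the eks:DescribeCluster policy as an inline user policy
aws iam put-user-policy \
  --user-name test-benzema-mto \
  --policy-name eks-describe-cluster \
  --policy-document file://eks-describe.json
```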

We have mapped this user in the `aws-auth` ConfigMap in the `kube-system` namespace:

```yaml
mapUsers:
  - groups:
      - iam-devteam
    userarn: arn:aws:iam::<account>:user/test-benzema-mto
    username: test-benzema-mto
```
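
If you prefer not to edit the ConfigMap by hand, `eksctl` can create the same mapping; this is a sketch with a placeholder cluster name, and the equivalent command with a role ARN works for role mappings as well.

```bash
# Map the IAM user into the iam-devteam Kubernetes group via the aws-auth ConfigMap
eksctl create iamidentitymapping \
  --cluster mto-demo \
  --region eu-north-1 \
  --arn arn:aws:iam::<account>:user/test-benzema-mto \
  --username test-benzema-mto \
  --group iam-devteam
```
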
Using this [AWS guide](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html), we will ask the user to update their kubeconfig and try to access the cluster.
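
The kubeconfig update described in that guide boils down to a single command; the cluster name and region below are placeholders:

```bash
# Write a kubeconfig entry for the cluster, authenticated as the calling IAM user
aws eks update-kubeconfig --name mto-demo --region eu-north-1
```
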
Since we haven't attached any RBAC to this user at the moment, trying to access anything in the cluster would throw an error:

```terminal
$ kubectl get svc

Error from server (Forbidden): services is forbidden: User "test-benzema-mto" cannot list resource "services" in API group "" in the namespace "default"
```

### SSO Users

For SSO users, we will map a role `arn:aws:iam::<account>:role/aws-reserved/sso.amazonaws.com/eu-north-1/AWSReservedSSO_PowerUserAccess_b0ad9936c75e5bcc`, which is attached by default to users on SSO login to the AWS Console and `awscli`, in the `aws-auth` ConfigMap in the `kube-system` namespace.

```yaml
mapRoles:
  - groups:
      - sso-devteam
    rolearn: arn:aws:iam::<account>:role/AWSReservedSSO_PowerUserAccess_b0ad9936c75e5bcc
    username: sso-devteam:{{SessionName}}
```

Since this user also doesn't have any RBAC attached, trying to access anything in the cluster would throw an error:

```terminal
$ kubectl get svc
Error from server (Forbidden): services is forbidden: User "sso-devteam:random-user-stakater.com" cannot list resource "services" in API group "" in the namespace "default"
```

### Setting up Tenants for Users

Now, we will set up tenants for the above-mentioned users.

We will start by creating a `Quota` CR with some resource limits:

```yaml
kubectl apply -f - <<EOF
apiVersion: tenantoperator.stakater.com/v1beta1
kind: Quota
metadata:
  name: small
spec:
  limitrange:
    limits:
      - max:
          cpu: 800m
        min:
          cpu: 200m
        type: Container
  resourcequota:
    hard:
      configmaps: "10"
      memory: "8Gi"
EOF
```

Now, we will reference this `Quota` in two `Tenant` CRs:

```yaml
kubectl apply -f - <<EOF
apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: tenant-iam
spec:
  namespaces:
    withTenantPrefix:
      - dev
      - build
  accessControl:
    owners:
      groups:
        - iam-devteam
  quota: small
EOF
```

```yaml
kubectl apply -f - <<EOF
apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: tenant-sso
spec:
  namespaces:
    withTenantPrefix:
      - dev
      - build
  accessControl:
    owners:
      groups:
        - sso-devteam
  quota: small
EOF
```

Notice that the only difference between the two tenant specs, apart from the name, is the owner group.
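
As cluster admin, you can quickly confirm that both tenants were admitted; the fully qualified resource name below is an assumption about how the Tenant CRD is registered:

```bash
# List the Tenant custom resources created above
kubectl get tenants.tenantoperator.stakater.com
```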

### Accessing Tenant Namespaces

After the creation of the `Tenant` CRs, users can now access namespaces in their respective tenants and perform create, update and delete operations.

Listing the namespaces as the cluster admin will show us the recently created tenant namespaces:

```bash
$ kubectl get namespaces
NAME                    STATUS   AGE
cert-manager            Active   8d
default                 Active   9d
kube-node-lease         Active   9d
kube-public             Active   9d
kube-system             Active   9d
multi-tenant-operator   Active   8d
random                  Active   8d
tenant-iam-build        Active   5s
tenant-iam-dev          Active   5s
tenant-sso-build        Active   5s
tenant-sso-dev          Active   5s
```

### IAM Users on Tenant Namespaces

We will now try to deploy a pod as user `test-benzema-mto` in its tenant namespace `tenant-iam-dev`:

```bash
$ kubectl run nginx --image nginx -n tenant-iam-dev
pod/nginx created
```

And if we try the same operation in the other tenant with the same user, it will fail:

```bash
$ kubectl run nginx --image nginx -n tenant-sso-dev
Error from server (Forbidden): pods is forbidden: User "test-benzema-mto" cannot create resource "pods" in API group "" in the namespace "tenant-sso-dev"
```

Note that `test-benzema-mto` cannot list namespaces:

```bash
$ kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden: User "test-benzema-mto" cannot list resource "namespaces" in API group "" at the cluster scope
```

### SSO Users on Tenant Namespaces

We will repeat the above operations for our SSO user `sso-devteam:random-user-stakater.com` as well:

```bash
$ kubectl run nginx --image nginx -n tenant-sso-dev
pod/nginx created
```

Trying to perform operations outside the scope of its own tenant will result in errors:

```bash
$ kubectl run nginx --image nginx -n tenant-iam-dev
Error from server (Forbidden): pods is forbidden: User "sso-devteam:random-user-stakater.com" cannot create resource "pods" in API group "" in the namespace "tenant-iam-dev"
```

Note that `sso-devteam:random-user-stakater.com` cannot list namespaces:

```bash
$ kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden: User "sso-devteam:random-user-stakater.com" cannot list resource "namespaces" in API group "" at the cluster scope
```
