-
Thanks for opening up the discussion! The only part that's not clear to me, and perhaps could be identified by a separate paragraph or section, is "Why not configure the Pinniped server to simply trust an upstream OIDC provider and operate as an OIDC client?". Pinniped seems to base its authz on short-lived, cluster-specific tokens via its access to the cluster's signing key pair, so my assumption was that I'd always require pinniped installed on the cluster if I'm using pinniped to access that cluster. If I want to configure OIDC for that cluster, I don't think I need pinniped? I can just ensure that cluster trusts my OIDC provider. So why not assume pinniped will be installed on each cluster that is using pinniped for auth, and simply enable the installed pinniped to trust an upstream OIDC provider (such as dex)? This would seem (in my not-so-detailed thoughts) to be a cleaner boundary and would still enable your scenario. Though I'm sure this has been considered and there are reasons I'm missing; just keen to know what they are.
-
Thanks for your thoughtful questions and comments @absoludity!
I think you're asking why not make the Pinniped component running on each workload cluster act as an OIDC client to your enterprise identity provider? The main reason is that registering a client with an OIDC provider requires that the administrator of that provider take an action. In many organizations, this would mean filing a ticket and waiting a long time for that ticket to be reviewed, approved, and resolved. Workload clusters can come and go, sometimes even in an automated/on-demand fashion, so we're seeking to avoid the need to register and unregister workload clusters directly with the upstream enterprise IDP. In the future, if we add direct support for upstream SAML IDPs, the same reasoning would apply again. By using a single public OIDC client (no client secret) to talk between the CLI and the Supervisor Pinniped server, and only registering the Supervisor Pinniped itself as a client of the upstream IDP (private client with client secret), we allow workload clusters to come and go without hassle.
This would certainly be possible after implementing the above design, since the design includes allowing Pinniped to be installed on workload clusters to validate ID tokens from any OIDC provider. However, we hope that using the Supervisor Pinniped instead will unlock additional value here. For example, the user only logs in once in their browser, but then the CLI never sends those tokens directly to the workload clusters. Instead, for each workload cluster, the CLI uses the Supervisor to exchange those tokens for a short-lived, cluster-specific token. Only those cluster-specific tokens are sent to the workload cluster, and those tokens are of lesser value to a bad actor because they will only work on that single workload cluster. Your comments were very nuanced, so let me know if I missed something!
-
How does this scenario work, where the workload cluster's API server itself hasn't been configured for OIDC, yet by installing the pinniped custom resource, OIDC is somehow configured for the cluster? The only two options I can imagine are that:
Or how is pinniped otherwise ensuring requests to the workload cluster's API server with an
-
Introduction
Hello Pinniped Community!
My name is Ryan and I'm one of the maintainers of the Pinniped project. The whole maintainers team has been working closely together this week on planning the next set of features for Pinniped, and we would like to share our thoughts with the community for discussion.
What is presented below is an early glimpse into a design that has evolved over the past weeks into a rough plan. As we start refining and executing on this plan in the coming weeks, we may find aspects of the design or the plan which need to be adjusted, but we didn't want that to stop us from sharing our thoughts early.
Credit goes to the whole team (@ankeesler @cfryanr @enj @mattmoyer and @pabloschuhmacher) for this design.
Overview
The overall goal of the features described by this design is to enable the following scenario:
- Authenticating into a central Identity Provider will likely require that the user interact with a web browser, through which they will enter their credentials and possibly perform multi-factor authentication. However, a user would prefer that, once they have signed in using their web browser, they should be able to use `kubectl` to access a whole family of related clusters without needing to interact with a web browser again for the rest of the day, or for as long as their identity session lasts.
- Authorization will be delegated to each cluster's Kubernetes RBAC system. To make this work, the user's username and group membership information from their corporate/central Identity Provider will need to be projected into each cluster.
Terminology
Let's introduce some terminology that will be used for the rest of this document. We will likely come up with better names for these concepts before we finish implementing this design, but these will work for the purpose of this document.
Federation Group: A group of Kubernetes clusters whose authentication is provided by a central Pinniped server.
Supervisor Pinniped: A Pinniped server running on some Kubernetes cluster that is trusted to provide authentication services for all other clusters in the federation group. All other clusters in the federation will trust this supervisor Pinniped, so the cluster on which the supervisor Pinniped runs ideally would not host other non-admin users or non-trusted workloads, to help protect it from being compromised.
Workload Cluster: A Kubernetes cluster that has been configured to trust the Supervisor Pinniped for authentication. It is important to note that we will not assume that all workload clusters are good actors. In other words, in the design below, a workload cluster will never be given access to credentials that would work on any other cluster. If a workload cluster becomes compromised by a bad actor, then that bad actor gains no access to other clusters in the federation, even if they can steal every credential seen by the compromised cluster, thus limiting the blast radius of the attack.
Upstream IDP: The corporate/central identity provider. The source of identity and group membership, e.g. something like ADFS or Okta.
Desired User Experience

1. The user runs `kubectl get pods` against a workload cluster, using a kubeconfig for that cluster which includes some Pinniped settings.
2. After the user finishes authenticating in their web browser, the `kubectl get pods` command continues and is successful.
3. Later, the user wants to run `kubectl get pods` on another workload cluster in the same federation group. They execute `kubectl get pods` using the kubeconfig for that other cluster, which also contains some similar Pinniped settings. Without being asked to interact with their web browser again, it “just works”. 🎉

Technical Approach
We intend for the supervisor Pinniped to initially support upstream OIDC IDPs to serve as the corporate/central IDP and the source of identity and group membership information. In the future, we may support other types of upstream IDPs, such as SAML or LDAP, but until that support is added, a user could use an identity proxy such as dex to hide a SAML or LDAP IDP behind an OIDC shim.
We also intend for the supervisor Pinniped to act as an OIDC Provider itself, to provide authentication downstream (downstream from the point of view of the supervisor Pinniped) to the `kubectl` CLI and therefore to the workload clusters.

A user will configure `kubectl` via their kubeconfig to use the Pinniped CLI as a Kubernetes client-go credential plugin.

The Pinniped CLI will be an OIDC client to the supervisor Pinniped. It will be blissfully unaware of the existence of the upstream IDP. It will be able to prompt the user with the URL that they should use to authorize. That URL will initiate an OIDC Authorization Code with PKCE flow with the supervisor Pinniped in their browser, which will end with a final browser redirect to a localhost port. The CLI will be listening on that localhost port to receive the user's tokens.
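The Authorization Code with PKCE flow mentioned above requires the client to generate a one-time code verifier and a challenge derived from it before opening the browser. As a rough illustration (not Pinniped's actual implementation), the S256 derivation standardized in RFC 7636 can be sketched with only the Python standard library:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character URL-safe verifier (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the base64url-encoded SHA-256 digest of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) on the
# authorization request, then presents `verifier` when redeeming the code,
# proving both requests came from the same client instance.
```

Because the verifier never leaves the CLI until code redemption, an attacker who intercepts the redirect to the localhost port cannot redeem the authorization code, which is what makes a public client (no client secret) workable here.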
The tokens fetched by the Pinniped CLI from the supervisor Pinniped will be cached locally. All the usual best practices for OIDC will be followed, including a short access token lifetime with a longer refresh token lifetime. The CLI will be able to refresh the access token whenever needed with no human or browser involvement required. Once the refresh token is no longer valid, the CLI will start the authorization flow again from the beginning.
Now that the user has authenticated into the supervisor Pinniped, the CLI will use the access token from the supervisor Pinniped to go back to the supervisor Pinniped to request a cluster-specific ID token for the specific workload cluster that the user would like to access. This ID token will be restricted for use only on that specific workload cluster by using the token's audience field to specify the unique ID of that cluster. This endpoint will implement the relatively new OAuth 2.0 Token Exchange spec (RFC 8693). The cluster-specific ID token will not be handed by the CLI to any other cluster aside from its intended target, to avoid leaking the credential to a compromised cluster. If a bad actor tried to use this ID token to gain access to a cluster other than the cluster for which it was intended, then it would not be accepted by the cluster due to the audience mismatch.
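The audience restriction is what makes a stolen cluster-specific ID token worthless elsewhere: a workload cluster rejects any token whose `aud` claim does not name that cluster. Here is a toy sketch of just the audience comparison, operating on already-verified claims (a real authenticator must first verify the token's signature, issuer, and expiry; the handling follows the JWT spec, where `aud` may be a single string or a list of strings):

```python
def audience_matches(claims: dict, cluster_audience: str) -> bool:
    """Return True only if the token's aud claim names this cluster."""
    aud = claims.get("aud")
    if isinstance(aud, str):
        return aud == cluster_audience
    if isinstance(aud, list):
        return cluster_audience in aud
    return False  # missing or malformed aud claim: reject
```

So a token minted for `cluster-a` that is replayed against `cluster-b` fails this check on `cluster-b` no matter how it was obtained, which is the blast-radius limit described in the Terminology section.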
When the user uses `kubectl` to access another workload cluster, the CLI will use the cached access token and skip straight to the step where it is used to fetch a cluster-specific ID token from the supervisor Pinniped for that other workload cluster.

Workload clusters will be configured to accept ID tokens issued by a trusted supervisor Pinniped either by configuring the Kubernetes API server to validate those tokens at Kubernetes installation time, or by installing another copy of Pinniped on the workload cluster. Pinniped on a workload cluster will provide custom resources to configure OIDC authentication for the cluster, which can be created, updated, and deleted at any time to change the cluster's authentication configuration.
Ideally the trust will be established unidirectionally. The workload cluster will trust whatever the tokens from the supervisor Pinniped say. The Pinniped cluster will not trust nor even know of the existence of the workload clusters directly, and the workload clusters will not need to register with the supervisor cluster. This will allow workload clusters to be configured to join arbitrary federation groups, without asking for permission in advance. For example, as a developer I can run a kind or minikube cluster on my laptop and configure it to use the same authentication method as my IT-provisioned staging and production clusters, if I choose. I could also configure my cluster to belong to multiple federation groups, for example one federation group would allow me to log into the cluster using my corporate identity while another federation group allows my friends from an adjacent team or company to log in to the cluster using their Google identities. Of course, to configure any of these relationships, I need to be the effective owner of the cluster because I need to have admin-level permissions.
This design does not assume network connectivity between the supervisor Pinniped and the workload clusters. There is also no assumption of any network connectivity between workload clusters. All token validation in the workload clusters will be done offline without requiring any requests to the supervisor Pinniped. In cases where that access exists, it may create opportunities to automatically push and pull configuration in ways that make the federation group easier for the cluster owners to configure and maintain, for example by configuring the workload cluster to use the OIDC discovery features of the supervisor Pinniped to automatically discover endpoints and public key certificates, and to automatically find out that the supervisor Pinniped has rotated its signing keys.
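Offline validation is possible because everything a workload cluster needs, namely the issuer's endpoint locations and signing keys, can be fetched ahead of time via standard OIDC discovery and then cached. As a small illustration, the discovery document location is derived from the issuer URL like this (the issuer value is a made-up example):

```python
def discovery_url(issuer: str) -> str:
    """Build the OIDC discovery document URL for an issuer
    (per OpenID Connect Discovery 1.0)."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# A validator would fetch this document once, read its `jwks_uri` field,
# download the signing keys, and then verify token signatures locally with
# no further network calls until the keys rotate.
url = discovery_url("https://supervisor.example.com/some/path")
```

When network access does exist, periodically re-fetching this document is one way a workload cluster could notice a supervisor signing-key rotation automatically, as suggested above.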
In cases where network access is unreliable between the CLI and the supervisor Pinniped, the nature of the CLI's cached token lifetimes will allow continued access to the workload clusters for some period. However, very long outages will not be tolerated by this design because the supervisor Pinniped access tokens are intended to have a relatively short lifetime, requiring the CLI to occasionally return to the supervisor Pinniped to refresh tokens.
The supervisor Pinniped will receive HTTPS requests from the user's web browser, so it must have TLS certificates configured which are trusted by the browser. Conveniently, there is no such requirement for the workload clusters.
Future Extensions
Other related features could be added iteratively on top of the baseline described above. Examples of features that might be added in the future include:
Out of Scope
This design does not contemplate how Pinniped might provide authentication for cluster users into web-based dashboard UIs running on the cluster. However, it may indirectly lay some groundwork for a future design by bringing the upstream IDP integrations and the downstream OIDC provider into the implementation of Pinniped.
Feedback
The Pinniped team would love to hear your feedback! Head on over to https://github.com/vmware-tanzu/pinniped/discussions to connect with the team.
Thanks!
The Pinniped Team