
Monokle policies as CRDs #1

Merged
merged 26 commits into main from crds-policy on Oct 4, 2023
Conversation


@f1ames f1ames commented Sep 27, 2023

This PR fixes https://github.com/kubeshop/monokle-saas/issues/1839, fixes https://github.com/kubeshop/monokle-saas/issues/1845.

Implemented MonoklePolicy and MonoklePolicyBinding according to https://github.com/kubeshop/monokle-saas/issues/1839#issuecomment-1738937312 and consistent (to some extent) with how VAP works.

Changes

  • Introduced CRDs for MonoklePolicy and MonoklePolicyBinding.
    • VAP supports 3 action types. For the first version we decided to go with Warn only (see screenshot below). I already put it in the CRD definitions (as an enum array, so it can be extended and uses the same format as VAP).
  • Introduced the PolicyManager class, which watches instances of MonoklePolicy and MonoklePolicyBinding and is responsible for storing, updating and matching them (policy-binding-resource). I think the *Manager class name is a bit of a naming anti-pattern, so I'm open to suggestions. A rough sketch of the idea is shown after this list.
    • This is based on the K8s Informer (high-level watcher) mechanism.
  • Introduced the ValidatorManager class, which reacts to policy changes and creates/updates/deletes validator instances. It also provides matching validators for resources when they need to be validated.
  • Reworked the structure of the project. Also, by default, all namespaced resources related to the admission controller deployment are deployed to the monokle-admission-controller namespace.
  • Added a mechanism that blocks plugins which will not work in the current state (resource-links). See https://github.com/kubeshop/monokle-saas/issues/1845#issuecomment-1744412694 and 586aeed.
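
For reference, a minimal sketch of what such an Informer-based watcher could look like with @kubernetes/client-node (not the PR's actual code; the monokle.io/v1alpha1 group/version and the monoklepolicies plural are assumptions and may not match the CRDs in this PR):

  import * as k8s from '@kubernetes/client-node';

  // Assumed group/version/plural; the actual CRD names may differ.
  const GROUP = 'monokle.io';
  const VERSION = 'v1alpha1';
  const PLURAL = 'monoklepolicies';

  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const customApi = kc.makeApiClient(k8s.CustomObjectsApi);

  // List function the informer uses for its initial sync and re-lists.
  const listPolicies = () =>
    customApi.listClusterCustomObject(GROUP, VERSION, PLURAL) as Promise<{
      response: any;
      body: k8s.KubernetesListObject<k8s.KubernetesObject>;
    }>;

  const informer = k8s.makeInformer(kc, `/apis/${GROUP}/${VERSION}/${PLURAL}`, listPolicies);

  // In-memory store of policies, keyed by name, so the webhook can match
  // incoming resources against them synchronously.
  const policies = new Map<string, k8s.KubernetesObject>();

  informer.on('add', (obj) => policies.set(obj.metadata!.name!, obj));
  informer.on('update', (obj) => policies.set(obj.metadata!.name!, obj));
  informer.on('delete', (obj) => policies.delete(obj.metadata!.name!));

  informer.start().catch((err) => console.error('Policy informer failed to start', err));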

Sample output (we can adjust wording there):

[screenshot: sample admission warning output]

Fixes

  • Mostly major changes as described above.

To consider

  • Namespace matching - see this comment.
  • Revisit logs - I used pino for logging and tried to make it reasonable with the different levels (so it's not too much at info and higher levels, while debug/trace log a lot of data on each update, like resource contents, policy contents, etc.). Still, it might require some improvements to better help with debugging (a minimal pino setup is sketched after this list).
  • The deployment part (via Helm) is not there yet since we have a separate task for it - https://github.com/kubeshop/monokle-saas/issues/1842.
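
A minimal sketch of the leveled pino setup meant above (the LOG_LEVEL variable and messages are illustrative, not necessarily what the PR uses):

  import pino from 'pino';

  // Default to 'info' so routine operation stays quiet; switch to 'debug' or
  // 'trace' to also log full resource/policy payloads on each update.
  const logger = pino({ level: process.env.LOG_LEVEL ?? 'info' });

  logger.info('Admission controller started');
  logger.debug({ policyName: 'example' }, 'Policy updated'); // verbose details only at debug/trace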

I'm open to discussion/suggestions on any of the above.

Testing

See README.md for step-by-step instructions.

Checklist

  • tested locally
  • added new dependencies
  • updated the docs
  • added a test

@murarustefaan murarustefaan left a comment

I really think we should reconsider our approach to namespaced admission controllers. Or maybe I did not understand the code and I might be wrong, but the approach where the controller can have only one "active policy" at a time does not look reliable to me.

A better approach, on a quick thought, would be creating a Map, 1-1 between a namespace and a policy (at first; later we can map resources to policies in a more fine-grained way, should we want that and if it is possible). Then the controller would have knowledge of all the policies and would be able to validate namespaced manifests.

Review threads (outdated, resolved): monokle.policy.crd.yaml, kac-api/src/index.ts, kac-api/src/utils/validation-server.ts
@olensmar (Member) commented:

I really think we should reconsider our approach to namespaced admission controllers. Or maybe I did not understand the code and I might be wrong, but the approach where the controller can have only one "active policy" at a time does not look reliable to me.

A better approach, on a quick thought, would be creating a Map, 1-1 between a namespace and a policy (at first; later we can map resources to policies in a more fine-grained way, should we want that and if it is possible). Then the controller would have knowledge of all the policies and would be able to validate namespaced manifests.

totally agree - this would be configured in cloud, i.e. which policies apply to which resources - the selection could be based on

  • namespace
  • labels/annotations
  • resource kinds
  • ?


f1ames commented Sep 27, 2023

Ok, I discussed some technicalities with @murarustefaan to make sure I understand how the admission controller should behave exactly; to summarize:

  • The admission controller should be global (not namespaced) and should react to resource creation/updates across namespaces (which namespaces, etc. will be configurable later).
    • Still, the webhook deployment/pod will belong to a namespace (which should be configurable during deployment).
  • Based on the resource data, it will use the policy from the same namespace to validate it (see the sketch after this list).
    • Should we fall back to a default if there is no policy in a given namespace?
  • When integrating with Monokle Cloud, it should allow configurability, to define which policies should be applied to which resources (see Monokle policies as CRDs #1 (comment)).
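
A minimal sketch of that per-namespace policy lookup, assuming the PolicyManager keeps bindings and policies in memory (names and shapes below are illustrative, not the PR's actual types):

  // Illustrative shapes; the real CRD/TypeScript types in the PR may differ.
  type Binding = {
    policyName: string;
    namespace?: string; // when set, the bound policy applies only to this namespace
  };

  function policiesForResource<TPolicy>(
    resourceNamespace: string | undefined,
    bindings: Binding[],
    policies: Map<string, TPolicy>
  ): TPolicy[] {
    return bindings
      // A binding without a namespace applies cluster-wide; otherwise the
      // resource's namespace must match the binding's namespace.
      .filter((b) => !b.namespace || b.namespace === resourceNamespace)
      .map((b) => policies.get(b.policyName))
      .filter((p): p is TPolicy => p !== undefined);
  }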

Comment on lines +8 to +11
  matchResources:
    namespaceSelector:
      matchLabels:
        namespace: default

@f1ames f1ames commented Oct 3, 2023

I'm not entirely sure about this part. It follows what VAP has, but it's a label matcher. Initially I thought we would have a property to define the namespace name for which the policy is applied (through this binding). This one matches the namespace label in the resource metadata, which is not required and can have any value, AFAIU.

I went through the specs but haven't found anything which would just match the resource's namespace 🤔 Not sure if there is something from the specs we can easily use or whether we should have something custom 🤔

Member:

Also need to be mindful that not all resources are namespaced - is it possible to match on resource kind/version?

Contributor Author:

This is an optional part of MonoklePolicyBinding, so skipping it will not restrict the policy to a single namespace (it will be applied cluster-wide).

As for more precise matching, we can follow the matchResources spec, which allows filtering by resource, apiVersion, apiGroups, etc. with resourceRules. By "following" I mean we would need to implement the matching mechanism according to the spec (or part of it) in the admission controller.
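
For illustration, a rough sketch of what evaluating such resourceRules could look like (field names follow the VAP matchResources spec, but the wildcard handling and omitted fields are simplifying assumptions):

  type ResourceRule = {
    apiGroups?: string[];
    apiVersions?: string[];
    resources?: string[]; // plural resource names, e.g. 'deployments'
  };

  function ruleMatches(rule: ResourceRule, group: string, version: string, resource: string): boolean {
    // An empty/missing list or '*' matches anything; otherwise the value must be listed.
    const inList = (list: string[] | undefined, value: string) =>
      !list || list.length === 0 || list.includes('*') || list.includes(value);
    return inList(rule.apiGroups, group) && inList(rule.apiVersions, version) && inList(rule.resources, resource);
  }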

Reply:

This is such a common use case that Kubernetes automatically adds a label to all namespaces called kubernetes.io/metadata.name=$namespaceName.


  for (const key of Object.keys(namespaceMatchLabels)) {
    if (resource.metadata.labels?.[key] !== namespaceMatchLabels[key]) {
      if (!(key === 'namespace' && resource.metadata.namespace === namespaceMatchLabels[key])) {

Contributor Author:

See this comment. This is a bit of a "stretched" interpretation of namespace matching (when the resource doesn't have metadata.labels.namespace). But I have some doubts about it.

Reply:

The link is broken. Not sure I understand the problem correctly. For normal Kubernetes objects you would look at metadata.namespace, because metadata.labels.namespace will rarely exist.

Contributor Author:

The link leads to my previous comment, which you already answered with:

This is such a common use case that Kubernetes automatically adds a label to all namespaces called kubernetes.io/metadata.name=$namespaceName.

So my understanding is that the matching below, which we want to support:

  matchResources:
    namespaceSelector:
      matchLabels:
        namespace: default

is a metadata.labels matcher according to the spec. Meaning, to be compliant with the spec, should it only try to match resource.metadata.labels.namespace?

Or maybe not 🤔 It doesn't really say how matching is done for namespaced resources:

NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy.

Not sure now TBH 🤔

@WitoDelnat WitoDelnat commented Oct 3, 2023

It's a 2-way lookup:

  • Give me the namespaces that match these labels.
  • Match me all the resources that match this namespace selector.

You can see it in action in the excerpt below from this blogpost.

const namespaces = matchExpressions(allnamespaces, predicates); // Returns the namespaces that are not these system namespaces
const objects = matchResources(namespaces); // Returns all objects whose `metadata.namespace` is among those namespaces

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: max-replicas-deployments
spec:
  paramRef:
    name: max-replicas-deployments
    namespace: policies-configs
    parameterNotFoundAction: Deny
  policyName: max-replicas-deployments
  validationActions:
    - Deny
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn
        values:
        - kube-node-lease
        - kube-public
        - kube-system
        - allow-listed

I'd say let's not dive too deep into this yet. It will likely need to be a library which both the ValidatingAdmissionPolicy validator and the admission controller use. In this library we can cover everything thoroughly with unit tests, as the expressions can get quite elaborate. If we simply cover this exact example, and maybe one with the In operator so you can specify "in namespace demo", we are good to go. This can even be a follow-up, or we can bootstrap that library then, since for now we can already apply the policy globally and these system namespaces can be ignored with config, IIRC.
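
As a rough illustration of the selector evaluation described above (a simplified sketch, not the eventual library's API; only In/NotIn are handled):

  type MatchExpression = {
    key: string;
    operator: 'In' | 'NotIn';
    values: string[];
  };

  function namespaceMatches(namespaceLabels: Record<string, string>, expressions: MatchExpression[]): boolean {
    return expressions.every((expr) => {
      const value = namespaceLabels[expr.key];
      if (expr.operator === 'In') {
        return value !== undefined && expr.values.includes(value);
      }
      // NotIn matches when the label is absent or its value is not in the list.
      return value === undefined || !expr.values.includes(value);
    });
  }

  // The binding above would match e.g. a 'demo' namespace but skip kube-system:
  const rule: MatchExpression = {
    key: 'kubernetes.io/metadata.name',
    operator: 'NotIn',
    values: ['kube-node-lease', 'kube-public', 'kube-system', 'allow-listed'],
  };
  namespaceMatches({ 'kubernetes.io/metadata.name': 'demo' }, [rule]);        // true
  namespaceMatches({ 'kubernetes.io/metadata.name': 'kube-system' }, [rule]); // false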

Contributor Author:

@WitoDelnat thanks for the explanation, now I understand 👍 Well, now it works fine with matching resources in a given namespace (thanks to checking metadata.namespace as you described) and selector matching. So it seems to be consistent with what you described.

For more general handling of matchResources with matchExpressions, resourceRules and others, I will add an issue in monokle-core as you suggested 👍


@f1ames f1ames marked this pull request as ready for review October 3, 2023 07:42
@f1ames f1ames requested a review from WitoDelnat October 3, 2023 07:42
@f1ames f1ames requested a review from murarustefaan October 3, 2023 07:42

@WitoDelnat WitoDelnat left a comment

Splendid work 💪

@WitoDelnat WitoDelnat self-requested a review October 3, 2023 15:25
@f1ames f1ames merged commit 462e8d7 into main Oct 4, 2023
@f1ames f1ames deleted the crds-policy branch October 4, 2023 07:38