
Filter which namespaces NAC controller watches #103

Open
mateusoliveira43 opened this issue Oct 23, 2024 · 13 comments

Comments

@mateusoliveira43
Contributor

We need to configure which namespaces the NAC controller will watch (by default, it watches all namespaces in the cluster).

We need to ensure that two or more NAC controllers are not watching the same namespace.

We need to ensure that if the configuration changes, no other related parts are touched (like the Velero Pod restarting).

@kaovilai
Member

Can we leave this to RBAC? Watch any and all namespaces it has access to?

@mateusoliveira43
Contributor Author

I do not know if that is the easiest approach. We would have to create this RBAC at NAC controller install time, with the right values in it, and the NAC controller could still try to access those namespaces (it would just fail with a permission error).

@mateusoliveira43
Contributor Author

I have 2 concerns with the solution we will implement:

  1. how to filter
  2. when to validate

how to filter

The first concern: if we filter using labels, for example, non admin users may have the power to edit the labels of namespaces they own, which would "override" the admin user's configuration.

To avoid scenarios like this, we could only allow filtering by namespace names in the NAC configuration.

when to validate

To validate, we will need to get all DPAs in the cluster and check that there is no overlap between the NAC configurations in them. Depending on how many OADP installs there are in the cluster, this may not be immediate, so a one-time check, like on NAC controller start up, may be better. Can this approach have corner cases? For example, if 2 NACs are installed at the same time, would they be validated properly?

Another pro of this one-time validation is that it could be OADP's responsibility. If we put the validation on NAC, we may need to change NAC RBACs (I believe the current OADP RBACs already allow getting all DPAs in the cluster, for example; I will test).

@kaovilai
Member

the NAC controller could still try to access those namespaces (it would just fail with a permission error).

Or the NAC controller can ask via a SelfSubjectAccessReview object instead of trying and erroring.
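
For illustration, a minimal sketch (not the actual NAC code) of asking up front with a SelfSubjectAccessReview; the verb, API group, and resource checked here are assumptions:

package main

import (
    "context"
    "fmt"

    authorizationv1 "k8s.io/api/authorization/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

// canWatchNamespace asks the API server whether the controller's own service
// account may list the (assumed) NonAdminBackup resource in the namespace.
func canWatchNamespace(ctx context.Context, clientset kubernetes.Interface, namespace string) (bool, error) {
    review := &authorizationv1.SelfSubjectAccessReview{
        Spec: authorizationv1.SelfSubjectAccessReviewSpec{
            ResourceAttributes: &authorizationv1.ResourceAttributes{
                Namespace: namespace,
                Verb:      "list",
                Group:     "oadp.openshift.io", // assumed API group
                Resource:  "nonadminbackups",   // assumed resource name
            },
        },
    }
    result, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
    if err != nil {
        return false, err
    }
    return result.Status.Allowed, nil
}

func main() {
    clientset := kubernetes.NewForConfigOrDie(config.GetConfigOrDie())
    allowed, err := canWatchNamespace(context.Background(), clientset, "team-a")
    if err != nil {
        panic(err)
    }
    fmt.Println("allowed:", allowed)
}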

@mateusoliveira43
Contributor Author

@kaovilai that I like

The only thing missing would be a way to know when there is RBAC overlap between 2 NACs. Do you have a suggestion for that?

@kaovilai
Member

Annotate each namespace NAC has access to. The second NAC instance will know about overlap because there's a pre-existing annotation.
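
A rough sketch of how such a claim could look; the annotation key and instance name below are made up for illustration, not an agreed design:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

// hypothetical annotation key used to mark a namespace as watched by a NAC instance
const claimAnnotation = "oadp.openshift.io/nac-instance"

// claimNamespace marks a namespace as watched by this NAC instance, or errors
// if a different instance already claimed it (the overlap case).
func claimNamespace(ctx context.Context, c client.Client, name, instance string) error {
    ns := &corev1.Namespace{}
    if err := c.Get(ctx, client.ObjectKey{Name: name}, ns); err != nil {
        return err
    }
    if owner, ok := ns.Annotations[claimAnnotation]; ok && owner != instance {
        return fmt.Errorf("namespace %s is already watched by NAC instance %s", name, owner)
    }
    if ns.Annotations == nil {
        ns.Annotations = map[string]string{}
    }
    ns.Annotations[claimAnnotation] = instance
    return c.Update(ctx, ns)
}

func main() {
    c, err := client.New(config.GetConfigOrDie(), client.Options{})
    if err != nil {
        panic(err)
    }
    if err := claimNamespace(context.Background(), c, "team-a", "nac-1"); err != nil {
        fmt.Println("overlap or error:", err)
    }
}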

@kaovilai
Member

kaovilai commented Oct 24, 2024

If we are not annotating the namespace, then perhaps there will be a custom resource/empty ConfigMap in each namespace for the non admin user to "save their settings" across backups, with NAC ownerReferences and/or annotations that can be checked the same way.

@mateusoliveira43
Contributor Author

I am trying to avoid the label/annotation approach, because it is override-able (but maybe that is OK; we just document that if non admin users change the labels/annotations in their namespaces, this will probably break what the admin user configured).

I think I did not understand your second idea. Can you explain it again?

@kaovilai
Member

The second idea is, instead of labeling, to have a single source of info: in a specific namespace, a ConfigMap that looks like

data:
     nac1: namespace1
     nac2: namespace2
     nac3:
       - namespace2
       - namespace3
       - namespace4

where every NAC would read this and check whether its namespace scope overlaps with the others'.
The later ones will error out. The earlier entries in the ConfigMap have priority, and each NAC exits the loop on finding its own key, i.e. nac2 would not parse nac3's namespaces.
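
A hedged sketch of that check, assuming the ConfigMap lives at a known location and stores each NAC's namespaces as a comma-separated string (ConfigMap data values are plain strings); since Go map iteration is unordered, the sketch sorts the keys so "earlier" is deterministic:

package main

import (
    "context"
    "fmt"
    "sort"
    "strings"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

// checkOverlap returns an error if any NAC key sorted before ownKey already
// claims one of ownKey's namespaces; it stops at ownKey, so later entries
// are never parsed (earlier entries win).
func checkOverlap(data map[string]string, ownKey string) error {
    own := map[string]bool{}
    for _, ns := range strings.Split(data[ownKey], ",") {
        own[strings.TrimSpace(ns)] = true
    }
    keys := make([]string, 0, len(data))
    for k := range data {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    for _, k := range keys {
        if k == ownKey {
            return nil // reached our own entry without conflicts
        }
        for _, ns := range strings.Split(data[k], ",") {
            if own[strings.TrimSpace(ns)] {
                return fmt.Errorf("namespace %q is already claimed by %q", strings.TrimSpace(ns), k)
            }
        }
    }
    return nil
}

func main() {
    c, err := client.New(config.GetConfigOrDie(), client.Options{})
    if err != nil {
        panic(err)
    }
    cm := &corev1.ConfigMap{}
    // "openshift-adp/nac-namespaces" is a hypothetical location for the ConfigMap.
    if err := c.Get(context.Background(), client.ObjectKey{Namespace: "openshift-adp", Name: "nac-namespaces"}, cm); err != nil {
        panic(err)
    }
    if err := checkOverlap(cm.Data, "nac2"); err != nil {
        fmt.Println("overlap detected:", err)
    }
}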

@shubham-pampattiwar
Member

@kaovilai we discussed the configmap approach and also something like:
From slack archives:
Thinking more about this, how about this workflow for NAC config:

  • DPA only has a NAC enablement spec boolean; the rest of the NAC configuration resides in a separate CRD called NACConfig
  • To use NAC you need to enable it via the DPA and also create a NACConfig CR (this has the stuff we discussed in today's sync: BackupSpecDefaults, NSToBeCateredTo, etc.)
  • The DPA reconciler only deploys the NAC controller deployment based on the DPA spec flag and the NACConfig CRD

@mateusoliveira43
Contributor Author

As discussed with the team yesterday, we will not allow multiple NAC instances for now. So the first implementation will be to validate exactly that: no more than one NAC is deployed (a possible solution is to do a get call to gather all DPAs in the cluster and confirm that only one of them deploys NAC).
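
A minimal sketch of that validation, listing DPAs as unstructured objects so no oadp-operator types are needed; the spec.nonAdmin.enable field path is an assumption about the DPA layout:

package main

import (
    "context"
    "fmt"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

// countNACEnabledDPAs lists every DataProtectionApplication in the cluster
// and counts how many of them enable NAC.
func countNACEnabledDPAs(ctx context.Context, c client.Client) (int, error) {
    dpas := &unstructured.UnstructuredList{}
    dpas.SetGroupVersionKind(schema.GroupVersionKind{
        Group:   "oadp.openshift.io",
        Version: "v1alpha1",
        Kind:    "DataProtectionApplicationList",
    })
    if err := c.List(ctx, dpas); err != nil {
        return 0, err
    }
    count := 0
    for _, dpa := range dpas.Items {
        // assumed field path; adjust to the real DPA spec
        enabled, found, err := unstructured.NestedBool(dpa.Object, "spec", "nonAdmin", "enable")
        if err == nil && found && enabled {
            count++
        }
    }
    return count, nil
}

func main() {
    c, err := client.New(config.GetConfigOrDie(), client.Options{})
    if err != nil {
        panic(err)
    }
    count, err := countNACEnabledDPAs(context.Background(), c)
    if err != nil {
        panic(err)
    }
    if count > 1 {
        fmt.Printf("%d DPAs enable NAC; only one is allowed\n", count)
    }
}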

Allowing more than one NAC instance in the cluster could be valuable for improving performance. But since there is work planned to allow parallel backups in the future, this may not be needed.

Last thing to note: if we decide to implement this later, we would only allow filtering by namespace names, because the way to tell the controller which namespaces to watch is through this config

// requires:
//   ctrl "sigs.k8s.io/controller-runtime"
//   "sigs.k8s.io/controller-runtime/pkg/cache"
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
...
   Cache: cache.Options{
      // cache (and therefore watch) only the listed namespaces
      DefaultNamespaces: map[string]cache.Config{"operator-namespace": {}},
   },
})

which only accepts namespace names and is not dynamic. Every time a new namespace should be added to it, the controller would need to restart.

@kaovilai
Member

The manager can watch all namespaces here, and we add a predicate to filter out namespaces it's not meant to watch, if we want to prevent restarts.
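
For illustration, a sketch of such a predicate with controller-runtime; the allowed-namespace set and how it gets populated are placeholders:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
)

// namespaceFilter drops events for objects outside the allowed set, so the
// manager can cache all namespaces while each controller only reconciles the
// ones an admin configured, without restarting when the set changes.
func namespaceFilter(allowed map[string]bool) predicate.Predicate {
    return predicate.NewPredicateFuncs(func(obj client.Object) bool {
        return allowed[obj.GetNamespace()]
    })
}

func main() {
    p := namespaceFilter(map[string]bool{"team-a": true})
    pod := &corev1.Pod{}
    pod.SetNamespace("team-b")
    // Wiring would look like (names are placeholders):
    //   ctrl.NewControllerManagedBy(mgr).For(&nacv1alpha1.NonAdminBackup{}).
    //       WithEventFilter(namespaceFilter(allowed)).Complete(r)
    fmt.Println(p.Create(event.CreateEvent{Object: pod})) // false, filtered out
}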

@mateusoliveira43
Contributor Author

PR enforcing only one NAC instance per cluster openshift/oadp-operator#1577
