[KONFLUX-179]: ADR for provisioning test resources #168
Conversation
> The controller will also watch the status of `ClusterClaims` as they are allocated. Upon success, the secret for the `ClusterDeployment` (containing a kubeconfig) will be copied to the tenant namespace using the same name as the `ClusterClaim`. Using this pattern allows `Pipeline` authors to …
Does this mean that only a single pipeline can be running at a time? Otherwise they would both get the same cluster.
The multi-arch controller uses a similar pattern but uses the task/pipeline name in the secret, so every secret name is unique.
No, there can be many claims for a given PipelineRun. It's mentioned earlier that the claims would include the PipelineRun name (`<PipelineRunName>-<ConfigMapKey>`).
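To make the naming concrete, here is a sketch of what a generated claim and the copied secret might look like for a PipelineRun named `my-pipelinerun` and a ConfigMap key `cluster1`. All names, the namespace layout, and the `lifetime` value are illustrative, assuming the claims are Hive `ClusterClaims` created in the pool's namespace:

```yaml
# Hypothetical claim generated for PipelineRun "my-pipelinerun" and ConfigMap key "cluster1"
apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: my-pipelinerun-cluster1        # <PipelineRunName>-<ConfigMapKey>
  namespace: cluster-pools             # Hive claims are created in the ClusterPool's namespace
spec:
  clusterPoolName: openshift-latest-ga-x86
  lifetime: 4h                         # illustrative; the ADR discusses restricting this
---
# Once the claim is fulfilled, the ClusterDeployment's kubeconfig secret is copied
# to the tenant namespace under the same name as the claim
apiVersion: v1
kind: Secret
metadata:
  name: my-pipelinerun-cluster1
  namespace: my-tenant                 # illustrative tenant namespace
type: Opaque
data:
  kubeconfig: ""                       # base64-encoded kubeconfig from the ClusterDeployment secret
```

Because each claim name embeds the PipelineRun name, concurrent PipelineRuns produce distinct claim and secret names, which addresses the collision concern raised above.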
> A `provision.konflux.dev/cluster-claims` annotation on integration `Pipelines`/`PipelineRuns` will be used to signal the intent to create claims for ephemeral OpenShift clusters. The value of this annotation is a reference to a `ConfigMap` in the same namespace. The `ConfigMap` keys will be used …
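As a sketch of the proposed indirection (resource names are illustrative, and the value-is-a-pool-name semantics are inferred from the discussion below rather than stated in the excerpt):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: my-pipelinerun
  annotations:
    provision.konflux.dev/cluster-claims: my-cluster-claims  # references the ConfigMap below
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cluster-claims    # must live in the same namespace as the PipelineRun
data:
  cluster1: openshift-latest-ga-x86  # key: token used to reference the claim; value: assumed pool name
  cluster2: openshift-4-13-x86
```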
What do we gain by introducing indirection via a ConfigMap? Could we make the annotation contain the name of a cluster pool directly instead?
The main motivation behind the `ConfigMap` was to allow the user to customize some aspects of the `ClusterClaim`. That being said, the only field in the claim spec which might be of interest is `lifetime`, and we may want to restrict what can be provided there anyway.

Looking to a future where DRA gets adopted, `ResourceClaims` have a ref to a separate `ResourceClaimParams` resource, so a `ConfigMap` wouldn't be necessary in this case either (the user would have to be allowed to create `ResourceClaimParams`).

In order to drop the `ConfigMap`, we need the annotation to reference multiple `ClusterPools` while still defining some tokens (e.g. `cluster1`) that the author can use to reference the generated claims. We could do that with a bit of JSON/YAML or some key=value syntax. For example:
```yaml
provision.konflux.dev/cluster-claims: |
  {
    "cluster1": "openshift-latest-ga-x86",
    "cluster2": "openshift-4-13-x86"
  }
```

```yaml
provision.konflux.dev/cluster-claims: |
  cluster1: openshift-latest-ga-x86
  cluster2: openshift-4-13-x86
```

```yaml
provision.konflux.dev/cluster-claims: cluster1=openshift-latest-ga-x86,cluster2=openshift-4-13-x86
```
How often do we expect users to actually use multiple clusters? I'd rather see us use the Pareto principle to make the common use case dead simple while accepting some extra complexity for the less common ones.

For example, we could make a very simple annotation naming a cluster pool for the single-cluster case and a more complex one referring to a `ConfigMap` for advanced use cases:

```yaml
provision.konflux.dev/cluster-claim: openshift-latest-ga-x86
```

vs.

```yaml
provision.konflux.dev/cluster-claim-config: name-of-my-config-map
```
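Applied to a PipelineRun, the simple form might look like the following; only the annotation itself comes from the suggestion above, the surrounding manifest is illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: operator-e2e            # hypothetical integration test run
  annotations:
    provision.konflux.dev/cluster-claim: openshift-latest-ga-x86  # pool name, no ConfigMap needed
```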
Using a `ConfigMap` does prevent the user from fully defining their pipeline via PAC (Pipelines as Code). So we may consider one of the nested syntaxes you have suggested.
Based on how operator testing is done currently in CVP, users almost always utilize multiple clusters in order to test compatibility with all OpenShift versions that their operator is supposed to support.
@dirgim OK, but does that have to be in the same pipeline? Could we possibly have a different pipeline per version?
Yes, that would work.
Signed-off-by: Alex Misstear <[email protected]>

Force-pushed from 8a487ea to ed9b946.
> It also recommends using a dedicated management cluster separate from application workloads, which creates additional infrastructure complexity.

> Hive `ClusterPools` and `ClusterImageSets` will be maintained by Konflux admins (possibly with the …
IMHO, what I like about the openshift-ci platform approach is that it lets me do a "bring your own cluster pool config". Some teams might have different cluster configurations than others and may need different flavor sizes.

What is not clear to me is whether that is still possible and I just need to hand these configs over to a Konflux admin to configure (which is fine), or whether Konflux is predefining the pools and there aren't many options.
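For reference, the admin-maintained objects in question are standard Hive resources. A minimal sketch of one pool definition follows; the names, base domain, region, and sizing are all placeholders:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-4.13
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64  # placeholder release
---
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: openshift-4-13-x86
  namespace: cluster-pools         # illustrative admin-owned namespace
spec:
  baseDomain: tests.example.com    # placeholder
  imageSetRef:
    name: openshift-4.13
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds            # cloud credentials used to provision pool clusters
      region: us-east-1
  pullSecretRef:
    name: pull-secret
  size: 2                          # number of warm, unclaimed clusters kept ready
```

A bring-your-own-config model would presumably mean teams author manifests like these with their own flavors and sizes, whether applied by admins or self-service.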
> The problem of provisioning test resources/environments can be broken down into a few questions:
>
> 1. How can resources (OpenShift clusters, to start) be provisioned, efficiently, without exposing …
What is also not clear to me is whether this will be a "bring your own cloud credentials" model. I think it should be. That is the model DPTP is moving to, and I believe our internal Resource Hub offering is as well.
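In Hive terms, bring-your-own-credentials would mean each pool references a tenant-supplied cloud credentials secret rather than a central one. A sketch, with hypothetical names and placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: team-a-aws-creds           # supplied by the team, not by Konflux admins
  namespace: cluster-pools
type: Opaque
stringData:
  aws_access_key_id: "<access-key-id>"          # placeholder
  aws_secret_access_key: "<secret-access-key>"  # placeholder
---
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: team-a-pool
  namespace: cluster-pools
spec:
  baseDomain: tests.example.com    # placeholder
  imageSetRef:
    name: openshift-4.13
  platform:
    aws:
      credentialsSecretRef:
        name: team-a-aws-creds     # the team's own cloud account is used (and billed)
      region: us-east-1
  size: 1
```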
Closing this ADR in favor of a different approach outlined in #172.
Here's an ADR for how we can provision clusters for OLM operator testing.