Allow configuring cluster connection details in provider block #8
As an alternative, you may set the …
Thanks for the prompt reply. I found #3 after posting, and I think I understand the tradeoff.
One option would be to manually parse the HCL file to read the provider details.
These two are very similar and have the best potential.
Not sure what you mean by this, but one way to minimize the chance of error is to run it like this: …
Thanks.
In retrospect, the local cluster idea is more likely to result in problems. When Terraform launches the plugin, you can retrieve the path of the root module and look for a schema file with a pre-defined name like k8s_static_schema.json.
The call to the discovery client to fetch the schema happens from this spot. Since the function expects an OpenAPIv2 doc, another doc could potentially be supplied instead. I have found another reference to DiscoveryClient here. Not sure if this is a showstopper, but it seems to make some additional requests. Re 4:
The way to make this work would be to create a shell script like do_tf.sh that prepares the environment and then calls terraform. For my specific use case this causes some authentication problems due to the user/permission structure. With the current setup, the current user's credentials are exchanged for an access token for their regional TF service account, and that token is passed to the Kubernetes provider. This avoids storing long-lived service account credentials on individual machines. Root module:
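Something along these lines (a minimal sketch only; the provider arguments and variable names are illustrative, not the author's actual configuration):

```hcl
# Sketch: the root module receives a short-lived token (exchanged for the
# current user's credentials out of band) and configures the provider with
# it, so no long-lived credentials live on individual machines.
provider "kubernetes" {
  host  = var.cluster_endpoint
  token = var.service_account_token # short-lived access token
}

module "workloads" {
  source = "./modules/workloads"
}
```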
Inside a sub module:
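Again a hypothetical sketch, assuming the sub module declares its own provider block fed by input variables rather than relying on a kubeconfig file on disk:

```hcl
# Sketch: connection details arrive as module inputs.
variable "cluster_endpoint" {
  type = string
}

variable "service_account_token" {
  type = string
}

provider "kubernetes" {
  host  = var.cluster_endpoint
  token = var.service_account_token
}
```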
@silvpol What works well for me is storing my kubeconfig files as secrets in Jenkins.
This behavior is native to Kubernetes. I'll close this issue for now but will continue to think about your suggestions. Thanks for raising this important issue.
Reopening to track issue hashicorp/terraform-plugin-sdk#281 on the Terraform SDK repo.
@mingfang Would it be possible to "work around" the limitation by having some mechanism to "pre-initialise"/"cache" the required schema information rather than having to fetch it on provider start? Effectively: …
I'm sure there are probably good reasons this won't work... but I didn't see it ruled out by what I read here and in #3, so I wanted to suggest it anyway in case the idea had been missed.
@techdragon For me, I feel using the …
Hi
I had a look at the provider code and it looks like it relies on the default kubeconfig mechanism for configuring the cluster connection. Is there a way to override it, just like with the kubernetes provider available in Terraform? Below is an example of the setup I currently use:
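Something like the following, assuming the standard arguments of the official kubernetes provider (the variable names are illustrative):

```hcl
# Explicit connection details in the provider block, as the official
# kubernetes provider supports, instead of reading a kubeconfig file.
provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
  client_certificate     = base64decode(var.client_cert)
  client_key             = base64decode(var.client_key)
}
```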