Allow configuring cluster connection details in provider block #8

Open
silvpol opened this issue Dec 25, 2019 · 9 comments
Labels
enhancement (New feature or request), waiting for dependencies (This is blocked)

Comments

@silvpol

silvpol commented Dec 25, 2019

Hi

I had a look at the provider code and it looks like it relies on the default kubeconfig mechanism for configuring the cluster connection. Is there a way to override it, like the kubernetes provider available in Terraform allows? Below is an example of the setup I currently use:

data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = var.cluster_host
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = var.cluster_ca_certificate
}

@mingfang
Owner

@silvpol
AFAIK, unfortunately, that is not possible.
Please find an explanation in #3.

@mingfang
Owner

As an alternative, you may set the KUBECONFIG environment variable as described here https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

@silvpol
Author

silvpol commented Dec 25, 2019

Thanks for the prompt reply. I found #3 after posting and I think I understand the tradeoff.
My main concern is that I could end up running changes against the wrong cluster. Being able to specify which cluster to run operations against is essential to avoid major mistakes. I'm not familiar with either Go or Terraform plugin writing and may ask some odd questions, so please bear with me.

  1. Is any tf configuration available to be read at the stage of initialising the plugin, i.e. can anything be read even if not yet interpolated, such as literal values or remote Terraform state?
  2. Could init be run against a local cluster (e.g. minikube) to fetch the schema, but actual state and operations run against the target cluster from config? I know this could allow for a schema mismatch, but I think this may be an option for my use case.
  3. Could the schema be pre-fetched and then loaded from disk? For example, download the OpenAPI JSON file and then point the provider at it.
  4. KUBECONFIG is prone to the same mistake since it's an env variable; the client supports specifying a single file, could that be exposed?

@mingfang
Owner

  1. Is any tf configuration available to be read at the stage of initialising the plugin, i.e. can anything be read even if not yet interpolated, such as literal values or remote Terraform state?

One option would be to manually parse the HCL file to read the provider details.
I suspect this would require a lot of work and may be error-prone.
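
For what it's worth, a rough sketch of what that manual parse might look like, assuming the hashicorp/hcl/v2 library (the file name main.tf is only an example); only literal values would be readable this way, since nothing has been interpolated yet:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclparse"
)

func main() {
	parser := hclparse.NewParser()
	// The plugin would not actually know the file name; "main.tf" is illustrative.
	file, diags := parser.ParseHCLFile("main.tf")
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	// Extract only the provider blocks and ignore everything else.
	content, _, diags := file.Body.PartialContent(&hcl.BodySchema{
		Blocks: []hcl.BlockHeaderSchema{{Type: "provider", LabelNames: []string{"name"}}},
	})
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	for _, block := range content.Blocks {
		// Attributes referencing variables or data sources cannot be resolved here.
		fmt.Println("found provider block:", block.Labels[0])
	}
}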

  2. Could init be run against a local cluster (e.g. minikube) to fetch the schema, but actual state and operations run against the target cluster from config? I know this could allow for a schema mismatch, but I think this may be an option for my use case.
  3. Could the schema be pre-fetched and then loaded from disk? For example, download the OpenAPI JSON file and then point the provider at it.

These two are very similar and have the best potential.
The biggest problem is that this can only read one version of the schema.
As a result, this will cause many compatibility problems.
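
For reference, a rough sketch of what the pre-fetch step might look like, assuming client-go's discovery client and the gnostic OpenAPI v2 types; the output file name is illustrative and not something the provider currently looks for:

package main

import (
	"io/ioutil"
	"log"

	"github.com/golang/protobuf/proto"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the cluster's OpenAPI v2 document.
	doc, err := dc.OpenAPISchema()
	if err != nil {
		log.Fatal(err)
	}
	// Serialize it so a provider could later load it without a live cluster.
	data, err := proto.Marshal(doc)
	if err != nil {
		log.Fatal(err)
	}
	// "k8s_static_schema.pb" is only an example name.
	if err := ioutil.WriteFile("k8s_static_schema.pb", data, 0644); err != nil {
		log.Fatal(err)
	}
}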

  4. KUBECONFIG is prone to the same mistake since it's an env variable; the client supports specifying a single file, could that be exposed?

Not sure what you mean by this, but one way to minimize the chance of error is to run it like this:

KUBECONFIG=prod_kubeconfig.yaml terraform apply

vs

KUBECONFIG=qa_kubeconfig.yaml terraform apply

@silvpol
Author

silvpol commented Dec 26, 2019

Thanks.
Re 1: I agree, not worth it.
Re 2/3:

These two are very similar and have the best potential.
The biggest problem is that this can only read one version of the schema.
As a result, this will cause many compatibility problems.

In retrospect, the local cluster idea is more likely to result in problems.
If the schema were stored in the root module, it could be updated as necessary and checked into git along with the tf files. I have done some more digging in the code and came up with the below.

When Terraform launches the plugin, you can retrieve the path of the root module and look for a schema file with a pre-defined name like k8s_static_schema.json.

	rootDir, _ := os.Getwd()
	schemaPath := filepath.Join(rootDir, "k8s_static_schema.json")
	if _, err := os.Stat(schemaPath); err == nil {
		// static schema file exists: use it
	} else {
		// fall back to fetching the live schema from the cluster
	}

The call to the discovery client to fetch the schema happens from this spot. Since the function expects an OpenAPI v2 doc, another doc could potentially be supplied instead. I have found another reference to DiscoveryClient here. Not sure if this is a show stopper, but it seems to make some additional requests.
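
To illustrate, a minimal sketch of how a document saved that way could be read from disk instead of fetched from the cluster; the gnostic import path below depends on the client-go version, and the package and function names are just examples:

package staticschema

import (
	"io/ioutil"

	"github.com/golang/protobuf/proto"
	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2"
)

// loadStaticSchema reads a proto-serialized OpenAPI v2 document from disk,
// as an alternative to calling DiscoveryClient.OpenAPISchema().
func loadStaticSchema(path string) (*openapi_v2.Document, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	doc := &openapi_v2.Document{}
	// OpenAPISchema() itself unmarshals this same proto type, so the provider
	// could accept either source interchangeably.
	if err := proto.Unmarshal(data, doc); err != nil {
		return nil, err
	}
	return doc, nil
}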

Re 4

    KUBECONFIG is prone to the same mistake since it's an env variable; the client supports specifying a single file, could that be exposed?

Not sure what you mean by this, but one way to minimize the chance of error is to run it like this:

KUBECONFIG=prod_kubeconfig.yaml terraform apply

vs

KUBECONFIG=qa_kubeconfig.yaml terraform apply

The way to make this work would be to create a shell script like do_tf.sh that prepares the environment and calls terraform. For my specific use case this causes some authentication problems due to the user/permission structure. With the current setup, the current user's credentials are exchanged for an access token for their regional TF service account, and that token is passed to the Kubernetes provider. This avoids storing long-lived service account credentials on individual machines.

root module:

data "google_service_account_access_token" "main" {
  provider               = google
  target_service_account = var.regional_tf_service_account
  scopes                 = ["userinfo-email", "cloud-platform"]
  lifetime               = "1800s"
}
provider "google" {
  alias        = "token"
  access_token = data.google_service_account_access_token.main.access_token
}

inside a sub module:

data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = var.cluster_auth["host"]
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = var.cluster_auth["ca_certificate"]
}

@mingfang
Owner

@silvpol
While I agree that what you proposed is technically possible, I think it will require a lot of work and documentation for people to get it right. I feel the solution may cause more problems than it solves.

What works well for me is to store my kubeconfig files as secrets in Jenkins.
My terraform job then gets the file injected at run time.
The job then runs terraform as I described before:

KUBECONFIG=kubeconfig.yaml terraform apply --auto-approve

This behavior is native to Kubernetes.
Basically, if it works with kubectl, then it will work with terraform.

I'll close this issue for now but will continue to think about your suggestions.
Perhaps terraform will support dynamic plugins like this in the future.

Thanks for raising this important issue.

@mingfang
Owner

mingfang commented Jan 7, 2020

Reopening to track issue hashicorp/terraform-plugin-sdk#281 on the Terraform SDK repo.
This issue can be worked on once that gets implemented.

@mingfang mingfang reopened this Jan 7, 2020
@mingfang mingfang added the enhancement (New feature or request) and waiting for dependencies (This is blocked) labels Jan 13, 2020
@techdragon

@mingfang Would it be possible to "work around" the limitation by having some mechanism to "pre-initialise"/"cache" the required schema information rather than having to fetch it on provider start?

Effectively:

  1. manually run foobar-config-builder -o "${PROJECT_ROOT}/k8s-provider-config.data" or something like that.
  2. Configure the provider with an option that lets me specify k8s-provider-config.data as the source of the K8S API schema, and if this option is passed, also allow passing the other Kubernetes connection parameters since they won't be required at init?

I'm sure there are probably good reasons this won't work... but I didn't see it ruled out by what I read here and in #3, so I wanted to suggest it anyway in case the idea had been missed.

@mingfang
Owner

mingfang commented May 9, 2020

@techdragon
Technically it is possible, and I've thought about doing it before, but I think it causes more problems than it solves.
The problems are:
1. I will have to build, and the user will have to configure, a separate foobar-config-builder binary.
2. The user will have to manually run the pre-initialise step.
3. The user will have to add a provider configuration block to replace the kubeconfig info and to point to the cache dir.
4. Whenever there is an upgrade to foobar-config-builder, this plugin, and/or Kubernetes, the user must somehow know to repeat step 2. Since there's no easy way to know, it will cause anxiety: users will think they have to repeat step 2 whenever something unrelated goes wrong.

For me, using the kubeconfig is much more convenient, especially in a CI/CD tool such as Jenkins, where the kubeconfig can be supplied as a Jenkins credential file or dynamically generated by public cloud tools at runtime.
