
[Enhancement]: use kubeconfig in bootstrap_git resource #717

Open
BobyMCbobs opened this issue Sep 10, 2024 · 3 comments

Comments


BobyMCbobs commented Sep 10, 2024

Description

As a platform builder managing multiple clusters,
I need to create, manage, and destroy multiple dynamic clusters without instantiating multiple Flux providers, using a kubeconfig supplied by a data source or resource.

Given the complexities of using Terraform providers inside modules, accepting a kubeconfig at bootstrap time would make the resource much easier to use.

TL;DR:

Provide kube_config and kube_config_path fields on bootstrap_git, used when they are not given in the provider config.

Affected Resource(s) and/or Data Source(s)

bootstrap_git

Potential Terraform Configuration

resource "flux_bootstrap_git" "this" {
  path             = "clusters/${var.cluster}"
  components_extra = ["image-reflector-controller", "image-automation-controller"]
  kube_config      = some_provider.kubernetes.kubeconfig # OR
  kube_config_path = "./some/path/here"
}

References

No response

Would you like to implement a fix?

None


JordanP commented Sep 23, 2024

What's wrong with multiple Flux providers? Is it because of the lack of for_each on a list of providers?

BobyMCbobs (Author) commented

> What's wrong with multiple Flux providers? Is it because of the lack of for_each on a list of providers?

@JordanP, provider configurations are only available outside of modules. Having a provider per cluster, where the kubeconfig (or its component values) is fed through module outputs into a top-level flux provider for that cluster, is clunky.

Like this (example):

module "cluster-somek8s" {
  source = "./modules/a-cluster-config"
}

provider "flux" {
  alias = "somek8s"
  kubernetes = {
    host                   = module.cluster-somek8s.host
    client_certificate     = module.cluster-somek8s.cert
    client_key             = module.cluster-somek8s.key
    cluster_ca_certificate = module.cluster-somek8s.ca
  }
}

module "flux-somek8s" {
  source = "./modules/a-flux-deploy"
  providers = {
    flux = flux.somek8s
  }

  depends_on = [module.cluster-somek8s] # NOTE: AFAIK it is hard to make this module depend on the cluster being up
}

I'd like to be able to have a module for a cluster where defining the cluster also includes Flux, without needing to add top-level config. This limits the number of steps required to get components up.

Please correct me if you think there's a better way to use the tooling.

If this were possible, it would allow something like this (example):

provider "flux" {}

variable "github-token" {}

module "cluster" {
  for_each = toset(["sfo", "syd", "fra"])
  source = "./modules/a-cluster-config-with-flux"

  region = each.key
  github-token = var.github-token

  providers = {
    flux = flux
  }
}
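
If bootstrap_git accepted a kubeconfig directly, the per-cluster module could be fully self-contained. A minimal sketch of what modules/a-cluster-config-with-flux might contain, assuming the proposed kube_config argument existed and assuming a hypothetical inner cluster module that exposes a kubeconfig output:

```hcl
# modules/a-cluster-config-with-flux/main.tf (hypothetical sketch)

variable "region" {}
variable "github-token" {}

# Hypothetical cluster module exposing a raw kubeconfig output.
module "cluster" {
  source = "../a-cluster-config"
  region = var.region
}

resource "flux_bootstrap_git" "this" {
  path             = "clusters/${var.region}"
  components_extra = ["image-reflector-controller", "image-automation-controller"]

  # The proposed field: the kubeconfig comes from the cluster module's
  # output rather than from the top-level flux provider configuration.
  kube_config = module.cluster.kubeconfig

  depends_on = [module.cluster]
}
```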

Let me know your thoughts.

swade1987 (Member) commented

@BobyMCbobs I've previously solved this issue using the following approach:

  1. Create a Terraform module called k8s-bootstrapped.
  2. This module does two things:
    a. Constructs the Kubernetes cluster (using its own k8s module).
    b. Uses the output from the k8s module to feed into the Flux bootstrap process.

This approach is similar to the examples in this repository.

To implement this solution, you would use the k8s-bootstrapped module as the main calling module in your Terraform configuration.
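
The wrapper approach above can be sketched roughly as follows. Module names and outputs are illustrative, not the actual examples from this repository; because k8s-bootstrapped is the main calling (root) module, the flux provider can legally be configured there from the inner module's outputs:

```hcl
# k8s-bootstrapped/main.tf (illustrative sketch)

# Step 2a: construct the Kubernetes cluster via its own module.
module "k8s" {
  source = "./modules/k8s" # hypothetical cluster module
}

# Step 2b: feed the k8s module's outputs into the Flux bootstrap.
# This is valid because k8s-bootstrapped is used as the root module.
provider "flux" {
  kubernetes = {
    host                   = module.k8s.host
    client_certificate     = module.k8s.client_certificate
    client_key             = module.k8s.client_key
    cluster_ca_certificate = module.k8s.cluster_ca_certificate
  }
  git = {
    url = var.repository_url # hypothetical variable
  }
}

resource "flux_bootstrap_git" "this" {
  path       = "clusters/production"
  depends_on = [module.k8s]
}
```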
