[BUG] Passing Harvester cloud-provider-config as a resource reference produces inconsistent plan #1460

Open
avekrivoy opened this issue Jan 13, 2025 · 0 comments
Rancher Server Setup

  • Rancher version: 2.10.1
  • Installation option (Docker install/Helm Chart): k3s using the AWS quickstart guide

Information about the Cluster

  • Kubernetes version: v1.31.3+rke2r1
  • Cluster Type (Local/Downstream): downstream RKE2 Harvester cluster

User Information

  • What is the role of the user logged in? Admin

Provider Information

  • What is the version of the Rancher v2 Terraform Provider in use? 6.0.0
  • What is the version of Terraform in use? 1.10.4

Describe the bug

As a workaround for #1459, I'm trying to automate the Harvester cloud provider integration described in the documentation: a terracurl_request resource creates the service account and fetches the kubeconfig, which is then passed into the machine_selector_config block of the rancher2_cluster_v2 resource.

resource "terracurl_request" "create-sa-development" {
  name = "create-sa-development"

  url             = "${var.api_url}/k8s/clusters/${data.rancher2_cluster_v2.harv.cluster_v1_id}/v1/harvester/kubeconfig"
  skip_tls_verify = true
  method          = "POST"
  request_body    = <<EOF
{
  "clusterRoleName": "harvesterhci.io:cloudprovider", 
  "namespace": "${var.development_vm_namespace}", 
  "serviceAccountName": "${var.development_cluster_name}"
}
EOF

  headers = {
    Content-Type  = "application/json"
    Authorization = "Basic ${local.rancherBase64AuthString}"
  }

  response_codes = [200, 204]
}

resource "rancher2_cluster_v2" "rke2-development" {
  count = var.development_cluster_deploy ? 1 : 0
  name  = var.development_cluster_name

  ...

  machine_selector_config {
    config = jsonencode({
      cloud-provider-name   = "harvester"
      cloud-provider-config = jsondecode(terracurl_request.create-sa-development.response)
    })
  }
}

Actual Result

The first apply fails with the following output:

│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for rancher2_cluster_v2.rke2-development[0] to include new values learned so far during apply, provider "registry.terraform.io/rancher/rancher2" produced an
│ invalid new value for .rke_config[0].machine_selector_config[0].config: final value cty.StringVal("cloud-provider-config: |\n  apiVersion: v1\n  clusters:\n  - cluster:\n
│ certificate-authority-data:
│ LS0tLS1CRUdJTiBDRVJUSUZRUUREQmx5...\n
│ server: https://10.10.10.10:6443\n    name: default\n  contexts:\n  - context:\n      cluster: default\n      namespace: development\n      user: default\n    name: default\n
│ current-context: default\n  kind: Config\n  preferences: {}\n  users:\n  - name: default\n    user:\n      token:
│ eyJhbGciOiJSUzI1NiIsImtpZCI6InZuYTNKVTRjYW1CU4oIg...\ncloud-provider-name:
│ harvester\n") does not conform to planning placeholder cty.UnknownVal(cty.String).Refine().NotNull().StringPrefixFull("{").NewValue().
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

On the second apply, Terraform produces a correct plan and successfully creates the resources:

          + machine_selector_config {
              + config = <<-EOT
                    cloud-provider-config: |
                      apiVersion: v1
                      clusters:
                      - cluster:
                          certificate-authority-data:  LS0tLS1CRUdJTiBDRVJUSUZRUUREQmx5...
                          server: https://10.10.10.10:6443
                        name: default
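A possible workaround, offered only as an untested sketch: since the second apply succeeds once the terracurl_request response is already known in state, the kubeconfig request can be applied first with -target and the cluster afterwards:

terraform apply -target=terracurl_request.create-sa-development
terraform apply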