[ISSUE] Issue with databricks_cluster resource - gcp_attributes.local_ssd_count = 0 not working #4089

arloc opened this issue Oct 9, 2024 · 0 comments

Configuration

terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "1.53.0"
    }
  }
}

provider "databricks" {
  host      = "https://xxxxxx.gcp.databricks.com/"
  token     = "xxxxxxxxx"
  auth_type = "pat"
}


resource "databricks_cluster" "clusters" {
  cluster_name            = "troubleshoot-terraform"
  node_type_id            = "n2-highmem-2"
  spark_version           = "15.4.x-scala2.12"
  autotermination_minutes = 15
  no_wait                 = true
  num_workers             = 1

  gcp_attributes {
    availability    = "PREEMPTIBLE_GCP"
    local_ssd_count = 0
    zone_id         = "auto"
  }
}

Expected Behavior

The cluster should be created with no local SSDs.

Actual Behavior

The cluster is created with the default number of local SSDs: the configured local_ssd_count = 0 is ignored (note that it is already missing from the gcp_attributes block in the plan below).

Plan:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # databricks_cluster.clusters will be created
  + resource "databricks_cluster" "clusters" {
      + autotermination_minutes      = 15
      + cluster_id                   = (known after apply)
      + cluster_name                 = "troubleshoot-terraform"
      + default_tags                 = (known after apply)
      + driver_instance_pool_id      = (known after apply)
      + driver_node_type_id          = (known after apply)
      + enable_elastic_disk          = (known after apply)
      + enable_local_disk_encryption = (known after apply)
      + id                           = (known after apply)
      + no_wait                      = true
      + node_type_id                 = "n2-highmem-2"
      + num_workers                  = 1
      + spark_version                = "15.4.x-scala2.12"
      + state                        = (known after apply)
      + url                          = (known after apply)

      + gcp_attributes {
          + availability = "PREEMPTIBLE_GCP"
          + zone_id      = "auto"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Retrieving the cluster config using the API

databricks -p Staging clusters get <cluster-id> | jq .gcp_attributes
{
  "availability": "PREEMPTIBLE_GCP",
  "use_preemptible_executors": false,
  "zone_id": "auto"
}

Steps to Reproduce

Terraform and provider versions

<=1.53.0

Is it a regression?

No

Debug Output

tf-debug.log

This section of the log may point to the cause:

2024-10-09T17:17:40.983-0300 [DEBUG] provider.terraform-provider-databricks_v1.53.0: [DEBUG] Suppressing diff for gcp_attributes.0.local_ssd_count: platform="" config="0"
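The suppression message suggests the provider's diff-suppress logic treats a configured value of 0 the same as an unset value whenever the remote (platform) side is empty. A minimal Go sketch of that pattern is below; the function name and logic are hypothetical illustrations of the common Terraform SDK DiffSuppressFunc idiom, not the provider's actual code:

```go
package main

import "fmt"

// suppressIfZeroDefault mimics a diff-suppress pattern sometimes used in
// Terraform providers: if the remote (platform) value is empty and the
// configured value equals the type's zero value, the difference is
// suppressed. With int fields this cannot distinguish "unset" from an
// explicit 0, so a deliberate local_ssd_count = 0 would be dropped from
// the plan. Hypothetical sketch, not the databricks provider's code.
func suppressIfZeroDefault(platform, config string) bool {
	return platform == "" && (config == "" || config == "0")
}

func main() {
	// Mirrors the debug line: platform="" config="0" -> diff suppressed,
	// so the field never reaches the create request.
	fmt.Println(suppressIfZeroDefault("", "0"))
	// A non-zero configured value would survive:
	fmt.Println(suppressIfZeroDefault("", "2"))
}
```

If this is indeed the mechanism, the fix would likely require the schema to distinguish an explicitly configured 0 from an absent value rather than suppressing on the zero value alone.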

Important Factoids

No
