[ISSUE] Issue with `databricks_notebook`, `databricks_secret_scope`, `databricks_cluster`, etc. resources #4113
Comments
You can't update an existing workspace without recreating it...
I'm not sure how the first item in the troubleshooting guide relates to this; we're not using a data resource for the workspace. We have it defined in the same module, and we also have `depends_on` added for the notebook. Recreating the workspace would be absolutely fine, but the plan step already fails. Please elaborate if I'm misunderstanding the point you're trying to make.
It's more about the behavior of Terraform itself. If you're recreating the workspace anyway, why not do a full `terraform destroy` first?
I wouldn't have expected to need an additional `terraform destroy`. I would expect that, because of the VNet integration change, everything is recreated during the apply, just like for other Terraform resources, and that the plan does not fail. But if I understand correctly, this is not so easy in this case because of the azurerm/databricks boundary between the workspace and the notebooks, etc.
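The pattern discussed above, an explicit Databricks provider configuration derived from the workspace resource plus `depends_on` on workspace-level resources, can be sketched roughly as follows. Resource names and the module layout are hypothetical, not taken from the reporter's configuration:

```hcl
# Hypothetical sketch: workspace and workspace-level resources in one module.
# The databricks provider is configured from the azurerm workspace attributes,
# and depends_on forces ordering relative to the workspace.
provider "databricks" {
  alias                       = "workspace"
  host                        = azurerm_databricks_workspace.this.workspace_url
  azure_workspace_resource_id = azurerm_databricks_workspace.this.id
}

resource "databricks_notebook" "example" {
  provider = databricks.workspace
  path     = "/Shared/example"
  language = "PYTHON"
  source   = "${path.module}/notebooks/example.py"

  depends_on = [azurerm_databricks_workspace.this]
}
```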
Deployed configuration
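The original issue included the deployed configuration here; it did not survive this copy. A minimal hypothetical sketch of a workspace without VNet injection (all names are placeholders):

```hcl
# Hypothetical placeholder: an Azure Databricks workspace deployed
# without VNet injection (managed VNet).
resource "azurerm_databricks_workspace" "this" {
  name                = "example-workspace"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  sku                 = "premium"
  # no custom_parameters block: the workspace uses the managed VNet
}
```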
Updated configuration - to be deployed
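The updated configuration from the original issue was also lost in this copy. A hypothetical sketch of the same workspace with VNet injection enabled via `custom_parameters`, a change that forces replacement of the workspace (names are placeholders):

```hcl
# Hypothetical placeholder: the same workspace with VNet injection
# enabled via custom_parameters, which forces workspace replacement.
resource "azurerm_databricks_workspace" "this" {
  name                = "example-workspace"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  sku                 = "premium"

  custom_parameters {
    virtual_network_id                                   = azurerm_virtual_network.this.id
    public_subnet_name                                   = azurerm_subnet.public.name
    private_subnet_name                                  = azurerm_subnet.private.name
    public_subnet_network_security_group_association_id  = azurerm_subnet_network_security_group_association.public.id
    private_subnet_network_security_group_association_id = azurerm_subnet_network_security_group_association.private.id
  }
}
```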
Expected Behavior
Terraform understands that the workspace needs to be recreated because VNet integration is being turned on.
No issues during plan generation.
Actual Behavior
Terraform plan fails with authentication error when trying to refresh the state of the notebooks, clusters and secret scopes.
It seems as if the provider tries to access these resources differently once it realizes that the Databricks workspace is going to become VNet-integrated.
```
Error: cannot read notebook: failed during request visitor: default auth: cannot configure default credentials, please check https://docs.databricks.com/en/dev-tools/auth.html#databricks-client-unified-authentication to configure credentials for your preferred authentication method. Config: azure_client_secret=, azure_client_id=, azure_tenant_id=omitted now. Env: ARM_CLIENT_SECRET, ARM_CLIENT_ID, ARM_TENANT_ID
```
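For context, the `ARM_*` environment variables named at the end of the error are one way the provider picks up Azure service principal credentials. A hedged sketch with placeholder values (the provider also accepts these as explicit provider-block arguments: `azure_client_id`, `azure_client_secret`, `azure_tenant_id`):

```shell
# Placeholder values; substitute your service principal's credentials.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<client-secret>"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
terraform plan
```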
Steps to Reproduce
Run `terraform plan` against the updated configuration.
Terraform and provider versions
Terraform: v1.5.5 (windows_amd64)
Databricks provider: v1.53.0
Is it a regression?
We first tried on v1.49.0 and had the same experience.
Workaround
If all the problematic resources that are part of the configuration (notebooks, clusters, etc.) are removed from the state before running `terraform plan`, everything works fine.
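The workaround above can be sketched with `terraform state rm`, which drops resources from state without destroying them. The resource addresses below are hypothetical; adjust them to the actual configuration:

```shell
# Hypothetical addresses: remove the workspace-level resources from state
# so plan no longer tries to refresh them through the workspace that is
# about to be replaced.
terraform state rm databricks_notebook.example
terraform state rm databricks_secret_scope.example
terraform state rm databricks_cluster.example

terraform plan
```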