Is your feature request related to a problem? Please describe.
We configure our ES stack through Kibana or the Dev console and are used to JSON definitions of our configuration elements (ILM policies, index templates, ingest pipelines, users & groups, etc.)
While the TF ElasticStack provider does an excellent job of managing said items, it requires a translation step from JSON to TF resources.
This has proved to be problematic, as our team is not sufficiently adept at TF and most of our users just use JSON and the Dev console.
Describe the solution you'd like
It would be awesome if each resource got an optional "from_json" field that accepts a string value (the JSON content) and translates it into a TF resource definition to perform plan and apply.
It could work so that if "from_json" is defined, the JSON definition is read first, and values subsequently set in HCL can override it.
The same request was discussed in #124
We would like to see this implemented for all resources.
Describe alternatives you've considered
We have a working CI/CD pipeline that uses the ElasticStack provider and reads the definitions from JSON.
In the case of ILM policies it proved to be extremely cumbersome to extract the values from JSON while defining an "elasticstack_elasticsearch_index_lifecycle" resource.
We eventually opted to use a simple HTTP provider to accomplish what we wanted, which defeats the purpose of using the elasticstack provider.
For other resources (agent policies, package policies, Fleet outputs and Fleet servers) we had to make use of the local-exec provisioner to deploy via cURL. This is because creation needs POST and updates need PUT, and our shell scripts handle that logic.
The elasticstack provider already determines if POST or PUT is needed.
Our TF job is becoming a monster this way.
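The create-vs-update branching our shell scripts have to reimplement by hand boils down to something like the following (a hypothetical helper, not our actual pipeline code):

```shell
#!/bin/sh
# Sketch of the POST-vs-PUT logic our cURL deploy scripts carry around.
# In practice, existence is probed first, e.g. with
#   curl -s -o /dev/null -w '%{http_code}' "$API_URL/<resource>/<name>"
choose_method() {
  # $1: "yes" if the object already exists on the cluster, anything else otherwise.
  if [ "$1" = "yes" ]; then
    echo PUT    # update the existing object
  else
    echo POST   # create a new object
  fi
}

choose_method yes   # -> PUT
choose_method no    # -> POST
```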
example:
locals {
  # Create a set of files to process
  ilm-policies = setunion(
    fileset("/", "${path.cwd}/ilm-policies/*.json"),
    fileset("/", "${path.cwd}/../../base/ilm-policies/*.json")
  )
  # Read the JSON files' content, setting the file name as the key in a map
  ilm-policy-json-data = {
    for f in local.ilm-policies : basename(replace(f, ".json", "")) => jsondecode(file("/${f}"))
  }
}
# I opted to do simple HTTP PUTs for ILM with JSON as the source.
# If this is not desired I would recommend just defining all ILM policies as resources in this file with no read from JSON.
data "http" "ilm_policies" {
  for_each = local.ilm-policy-json-data
  provider = http-full

  url                  = "${var.API_URL_ES}/_ilm/policy/${each.key}"
  method               = "PUT"
  insecure_skip_verify = true

  request_headers = {
    content-type  = "application/json"
    authorization = "Basic ${base64encode("elastic:${var.ELASTIC_USER_PASSWORD}")}"
  }
  request_body = jsonencode(each.value)
}
# Using the elasticstack Terraform provider for ILM policies proved to be very complex if we want to use source JSON as input.
# The many cases and optional fields lead to something like the commented configuration below (does not work).
# resource "elasticstack_elasticsearch_index_lifecycle" "ilm-policies" {
#   for_each = local.ilm-policy-json-data
#   name     = each.key
#
#   # Only settings that are defined in the currently used ILM policies are implemented.
#   # Extend this block in the future when the need arises to include more settings.
#   hot {
#     min_age = tostring(each.value.policy.phases.hot.min_age)
#     dynamic "forcemerge" {
#       for_each = { for key, value in each.value.policy.phases.hot.actions.forcemerge : key => key }
#       content {
#         max_num_segments = forcemerge.value.max_num_segments
#       }
#     }
#     dynamic "readonly" {
#       for_each = { for key, value in each.value.policy.phases.hot.actions.readonly : key => key }
#       content {
#         enabled = true
#       }
#     }
#     rollover {
#       max_age = tostring(lookup(each.value.policy.phases.hot.actions.rollover, "max_age", ""))
#     }
#     dynamic "shrink" {
#       for_each = { for key, value in each.value.policy.phases.hot.actions.shrink : key => key }
#       content {
#         number_of_shards = shrink.value.number_of_shards
#       }
#     }
#   }
#
#   dynamic "delete" {
#     for_each = { for key, value in each.value.policy.phases.delete : key => "${value}" }
#     content {
#       min_age = tostring(lookup(delete.value, "min_age", ""))
#       dynamic "delete" {
#         for_each = {
#           for key, value in each.value.policy.phases.delete.actions : key => "${value}" if value.class == "pub"
#         }
#         content {
#           delete_searchable_snapshot = jsonencode(lookup(delete.value, "delete_searchable_snapshot", false))
#         }
#       }
#     }
#   }
# }
We would really prefer to use the elasticstack provider like the following mock example:
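A minimal sketch of what such usage could look like, assuming a hypothetical "from_json" attribute (this attribute and the override behavior are the proposal, not existing provider API):

```
resource "elasticstack_elasticsearch_index_lifecycle" "ilm-policies" {
  for_each = local.ilm-policy-json-data
  name     = each.key

  # Proposed: take the whole policy definition from the raw JSON...
  from_json = jsonencode(each.value)

  # ...while any explicitly set HCL values would override the JSON ones, e.g.:
  # delete {
  #   min_age = "30d"
  # }
}
```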
Additional context
I am willing to work on this feature but could use some guidance from the maintainers to find the most elegant solution.