Provider produced inconsistent result after apply (es and kibana config yamls) #698
Comments
@tobio could this issue cause the following error after a second Terraform apply when the EC deployment is already created?
@Zawadidone almost certainly not. Are you able to open a new issue with your full [...]
@tobio I could not reproduce the issue, but I will create an issue if it occurs again.
It occurs when a Terraform apply fails for whatever reason while the deployment has already been created; after that, the resource can no longer be used by Terraform, which results in the "failed to parse Elasticsearch version" error.
The following error occurs when the Elastic Cloud deployment is upgraded while the Terraform state still contains an older version. I don't know if it is related to this issue, but the error indicates that a value is empty.
data "ec_stack" "default" {
version_regex = "8.?.?"
region = "gcp-europe-west1
} # Elastic Cloud shows version 8.12.1
terraform state show ec_deployment.default
{
[...]
version = "8.12.0"
}
TF_LOG=debug terraform plan
[...]
[ERROR] provider.terraform-provider-ec_v0.9.0: Response contains error diagnostic: diagnostic_summary="Failed to determine whether to use node_roles" tf_proto_version=6.3 tf_provider_addr=registry.terraform.io/elastic/ec diagnostic_detail="failed to parse Elasticsearch version: Version string empty" tf_rpc=PlanResourceChange @caller=github.com/hashicorp/[email protected]/tfprotov6/internal/diag/diagnostics.go:58 diagnostic_severity=ERROR tf_resource_type=ec_deploymen
[ERROR] vertex "ec_deployment.default" error: Failed to determine whether to use node_roles
[...]
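If the only drift is the version field, a refresh-only cycle (plain Terraform CLI, nothing provider-specific) may be enough to pull the upgraded version back into state before the next plan; this is a suggested workaround, not a confirmed fix:
# Preview what a refresh would change without touching the deployment
terraform plan -refresh-only
# Write the refreshed values (including the new version) into the state
terraform apply -refresh-only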
Inconsistent results after apply when the es or kibana config user_settings_yaml is set to "".
Readiness Checklist
Expected Behavior
Values will be consistent after apply
Current Behavior
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to module.main.module.ec_deployment_setup.ec_deployment.elastic, provider
│ "provider["registry.terraform.io/elastic/ec"]" produced an unexpected new value:
│ .elasticsearch.config.user_settings_yaml: was cty.StringVal(""), but now null.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to module.main.module.ec_deployment_setup.ec_deployment.elastic, provider
│ "provider["registry.terraform.io/elastic/ec"]" produced an unexpected new value:
│ .kibana.config: was cty.ObjectVal(map[string]cty.Value{"docker_image":cty.NullVal(cty.String),
│ "user_settings_json":cty.NullVal(cty.String),
│ "user_settings_override_json":cty.NullVal(cty.String),
│ "user_settings_override_yaml":cty.NullVal(cty.String), "user_settings_yaml":cty.StringVal("")}),
│ but now null.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
Terraform definition
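The original definition was not attached; a minimal sketch that exercises the reported behavior might look like the following. The resource name, deployment template ID, and the exact 0.9.x attribute layout are assumptions, not taken from the reporter's configuration:
data "ec_stack" "default" {
  version_regex = "latest"
  region        = "gcp-europe-west1"
}

resource "ec_deployment" "elastic" {
  name                   = "example"
  region                 = "gcp-europe-west1"
  version                = data.ec_stack.default.version
  deployment_template_id = "gcp-storage-optimized"

  elasticsearch = {
    hot = {
      autoscaling = {}
    }
    # An explicit empty string here is what the provider later returns as
    # null, triggering "Provider produced inconsistent result after apply".
    config = {
      user_settings_yaml = ""
    }
  }

  kibana = {
    config = {
      user_settings_yaml = ""
    }
  }
}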
Steps to Reproduce
Context
Possible Solution
Your Environment