
Provider produced inconsistent result after apply (es and kibana config yamls) #698

Open
4 tasks done
roy-tancredi opened this issue Aug 28, 2023 · 5 comments · May be fixed by #700
Labels
bug Something isn't working

Comments

@roy-tancredi

Inconsistent results after apply when the es or kibana config YAML is set to "".

Readiness Checklist

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I am reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Values will be consistent after apply

Current Behavior

│ Error: Provider produced inconsistent result after apply

│ When applying changes to module.main.module.ec_deployment_setup.ec_deployment.elastic, provider
│ "provider["registry.terraform.io/elastic/ec"]" produced an unexpected new value:
│ .elasticsearch.config.user_settings_yaml: was cty.StringVal(""), but now null.

│ This is a bug in the provider, which should be reported in the provider's own issue tracker.


│ Error: Provider produced inconsistent result after apply

│ When applying changes to module.main.module.ec_deployment_setup.ec_deployment.elastic, provider
│ "provider["registry.terraform.io/elastic/ec"]" produced an unexpected new value:
│ .kibana.config: was cty.ObjectVal(map[string]cty.Value{"docker_image":cty.NullVal(cty.String),
│ "user_settings_json":cty.NullVal(cty.String),
│ "user_settings_override_json":cty.NullVal(cty.String),
│ "user_settings_override_yaml":cty.NullVal(cty.String), "user_settings_yaml":cty.StringVal("")}),
│ but now null.

│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
Terraform definition
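The reporter left this section empty. A minimal sketch of a configuration matching the error messages above (resource names, region, template, and topology are illustrative assumptions; the attribute layout follows the 0.9.x provider schema):

```hcl
# Hypothetical reproduction (not the reporter's actual config):
# an empty user_settings_yaml string is returned as null by the
# provider after apply, producing the inconsistency errors above.
resource "ec_deployment" "elastic" {
  name                   = "example"            # illustrative
  region                 = "gcp-europe-west1"   # illustrative
  version                = "8.9.0"              # illustrative
  deployment_template_id = "gcp-io-optimized-v2" # illustrative

  elasticsearch = {
    hot = {
      autoscaling = {}
    }
    config = {
      user_settings_yaml = "" # empty string comes back as null
    }
  }

  kibana = {
    config = {
      user_settings_yaml = "" # same issue for .kibana.config
    }
  }
}
```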

Steps to Reproduce

Context

Possible Solution

Your Environment

  • Version used:
  • Running against Elastic Cloud SaaS or Elastic Cloud Enterprise and version:
  • Environment name and version (e.g. Go 1.9):
  • Server type and version:
  • Operating System and version:
  • Link to your project:
@roy-tancredi roy-tancredi added the bug Something isn't working label Aug 28, 2023
@tobio tobio linked a pull request Aug 30, 2023 that will close this issue
10 tasks
@Zawadidone

@tobio could this issue cause the following error after a second Terraform apply when the EC deployment is already created?

│ Error: Failed to determine whether to use node_roles
│ 
│   with [...].ec_deployment.default,
│   on [...]/main.tf line 458, in resource "ec_deployment" "default":
│  458: resource "ec_deployment" "default" {
│ 
│ failed to parse Elasticsearch version: Version string empty

@tobio
Member

tobio commented Sep 7, 2023

@Zawadidone almost certainly not. Are you able to open a new issue with your full ec_deployment resource definition? We can take a look.

@Zawadidone

@tobio I could not reproduce the issue, but I will create an issue if it occurs again.

@Zawadidone

It occurs when a Terraform apply fails for whatever reason while the deployment is already created; afterwards the resource cannot be used by Terraform, which results in the failed to parse Elasticsearch version error.

@Zawadidone

Zawadidone commented Feb 10, 2024

The following error occurs when the Elastic Cloud deployment is upgraded while the Terraform state still contains an older version. I don't know if it is related to this issue, but the error indicates that a value is empty.

  1. Elastic Cloud deployment is created with version 8.12.0.
  2. Elastic Cloud deployment is upgraded to version 8.12.1, but the Terraform apply command fails due to another resource.
  3. The Elastic Cloud deployment is upgraded to version 8.12.1 in the background.
  4. Terraform apply is executed again but fails with the following error:
│ Error: Failed to determine whether to use node_roles
│ 
│   with [...].ec_deployment.default,
│   on [...]/main.tf line 458, in resource "ec_deployment" "default":
│  458: resource "ec_deployment" "default" {
│ 
│ failed to parse Elasticsearch version: Version string empty
data "ec_stack" "default" {
  version_regex = "8.?.?"
  region        = "gcp-europe-west1"
}
# Elastic Cloud shows version 8.12.1

terraform state show ec_deployment.default
{
[...]
    version                = "8.12.0"
}

TF_LOG=debug terraform plan
[...]
[ERROR] provider.terraform-provider-ec_v0.9.0: Response contains error diagnostic: diagnostic_summary="Failed to determine whether to use node_roles" tf_proto_version=6.3 tf_provider_addr=registry.terraform.io/elastic/ec diagnostic_detail="failed to parse Elasticsearch version: Version string empty" tf_rpc=PlanResourceChange @caller=github.com/hashicorp/[email protected]/tfprotov6/internal/diag/diagnostics.go:58 diagnostic_severity=ERROR tf_resource_type=ec_deploymen
[ERROR] vertex "ec_deployment.default" error: Failed to determine whether to use node_roles
[...]
