Describe the bug
In some scenarios, the kubeconfig entry for the cluster is missing by the time operators run the intra-cluster step. This causes the installation to fail in Terraform's local-exec provisioner.
To Reproduce
Steps to reproduce the behavior:
Clone the solution guidance repository
Run the steps to create the clusters
Run the intra-cluster creation step
See error
null_resource.generate_agones_certs: Provisioning with 'local-exec'...
null_resource.generate_agones_certs (local-exec): Executing: ["/bin/sh" "-c" "nohup /Users/pouemes/Projects/guidance-for-game-server-hosting-using-agones-and-open-match-on-amazon-eks/scripts/generate-agones-certs.sh agones-gameservers-1 /Users/pouemes/Projects/guidance-for-game-server-hosting-using-agones-and-open-match-on-amazon-eks &"]
null_resource.generate_agones_certs (local-exec): + echo '#####'
null_resource.generate_agones_certs (local-exec): #####
null_resource.generate_agones_certs (local-exec): + CLUSTER_NAME=agones-gameservers-1
null_resource.generate_agones_certs (local-exec): + ROOT_PATH=/Users/pouemes/Projects/guidance-for-game-server-hosting-using-agones-and-open-match-on-amazon-eks
null_resource.generate_agones_certs (local-exec): ++ kubectl config get-contexts -o=name
null_resource.generate_agones_certs (local-exec): ++ grep agones-gameservers-1
null_resource.generate_agones_certs (local-exec): + kubectl config use-context
null_resource.generate_agones_certs (local-exec): Set the current-context in a kubeconfig file.
null_resource.generate_agones_certs (local-exec): Aliases:
null_resource.generate_agones_certs (local-exec): use-context, use
null_resource.generate_agones_certs (local-exec): Examples:
null_resource.generate_agones_certs (local-exec): # Use the context for the minikube cluster
null_resource.generate_agones_certs (local-exec): kubectl config use-context minikube
null_resource.generate_agones_certs (local-exec): Usage:
null_resource.generate_agones_certs (local-exec): kubectl config use-context CONTEXT_NAME [options]
null_resource.generate_agones_certs (local-exec): Use "kubectl options" for a list of global command-line options (applies to all commands).
null_resource.generate_agones_certs (local-exec): error: Unexpected args: []
null_resource.generate_agones_certs (local-exec): + echo '- Verify that the cert-manager pods are running -'
null_resource.generate_agones_certs (local-exec): - Verify that the cert-manager pods are running -
null_resource.generate_agones_certs (local-exec): + kubectl get pods -n cert-manager -o wide
null_resource.generate_agones_certs (local-exec): E0115 17:18:20.235382 92207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
null_resource.generate_agones_certs (local-exec): E0115 17:18:20.238508 92207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
null_resource.generate_agones_certs (local-exec): E0115 17:18:20.240012 92207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
null_resource.generate_agones_certs (local-exec): E0115 17:18:20.241511 92207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
null_resource.generate_agones_certs (local-exec): E0115 17:18:20.243808 92207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
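The log shows the root cause: `kubectl config get-contexts -o=name | grep agones-gameservers-1` matched nothing, so `kubectl config use-context` ran with no argument ("error: Unexpected args: []") and every later kubectl call fell back to localhost:8080. A possible fix is to guard the context lookup and regenerate the kubeconfig entry when it is missing. The sketch below illustrates the idea only; the variable names, argument handling, and the `aws eks update-kubeconfig` fallback are assumptions, not the repository's exact code:

```shell
#!/bin/sh
# Sketch of a defensive context lookup for generate-agones-certs.sh.
# find_context reads "kubectl config get-contexts -o=name" output on stdin
# and prints the first context matching the cluster name, or nothing.
# (head's exit status keeps the pipeline at 0 even when grep matches nothing.)
find_context() {
  grep "$1" | head -n 1
}

# Only run the kubectl/aws part when explicitly requested, so the helper
# above can be exercised on its own. CLUSTER_NAME/REGION are assumptions.
if [ "${RUN_MAIN:-0}" = "1" ]; then
  set -eu
  CLUSTER_NAME="${1:?usage: $0 CLUSTER_NAME [REGION]}"
  REGION="${2:-us-east-1}"

  CONTEXT="$(kubectl config get-contexts -o=name | find_context "$CLUSTER_NAME")"
  if [ -z "$CONTEXT" ]; then
    # Recreate the missing kubeconfig entry instead of letting an empty
    # argument reach use-context ("error: Unexpected args: []").
    aws eks update-kubeconfig --name "$CLUSTER_NAME" --region "$REGION"
    CONTEXT="$(kubectl config get-contexts -o=name | find_context "$CLUSTER_NAME")"
  fi
  kubectl config use-context "$CONTEXT"
fi
```

With a guard like this, the script would fail fast (or self-heal) on a missing context instead of cascading into the connection-refused errors above.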
Expected behavior
The intra-cluster creation should be successful.
Screenshots
Not required.
Additional context
None.