
Upstream Prematurely Closed Connection While Reading Response Header from Upstream - 502 Gateway error #12286

Closed
anjanaprasd opened this issue Nov 3, 2024 · 7 comments
Labels
needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
needs-priority
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@anjanaprasd

anjanaprasd commented Nov 3, 2024

What happened:
I'm running an OpenSearch cluster and trying to expose it through an NGINX Ingress resource. When I attempt to access the OpenSearch cluster via the NGINX Ingress, I get a 502 Bad Gateway error. However, if I access the service through port-forwarding, it works without any issues: although there is a bit of latency, the login page loads and functions correctly.

I've tried various solutions, but none have worked so far. My NGINX Ingress setup seems fine, as I deployed a simple web server and was able to access it without any problems.

I also checked the Ingress logs and found an error message, but I'm not sure how to resolve this. Any help would be greatly appreciated!

What you expected to happen:
In the browser, only the 502 error is shown.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

kubectl describe svc new-dashboards
Name:              new-dashboards
Namespace:         default
Labels:            opensearch.cluster.dashboards=new
Annotations:       banzaicloud.com/last-applied:
                     UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAb3JpZ2luYWyUUU2v2jAQ/C9zdigpoZAcSy89gUrVS8VhY29KhLEtewOqUP575YQ+oXd7N3t2vPPhByj0vzim3js0uJVQuPTOoM...
Selector:          opensearch.cluster.dashboards=new
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.102.216.181
IPs:               10.102.216.181
Port:              http  5601/TCP
TargetPort:        5601/TCP
Endpoints:         10.244.3.84:5601
Session Affinity:  None
Events:            <none>
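The Ingress manifest itself is not included above. For context, here is a minimal sketch of an Ingress that would route the host seen in the logs to this Service; the ingressClassName, path, and TLS details are assumptions, not the actual manifest from the cluster:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opensearch              # name and namespace taken from the controller events below
  namespace: default
spec:
  ingressClassName: nginx       # assumed; depends on the installed controller
  rules:
  - host: opensearch-dashboard.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: new-dashboards
            port:
              number: 5601
EOF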

ingress pod logs

2024/11/03 10:02:03 [error] 299#299: *174 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:10:02:03 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 10:02:03 [error] 299#299: *174 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:10:02:03 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://opensearch-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
I20241103 10:04:48.216897       1 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"opensearch", UID:"b0c0dbc4-5d83-474f-900a-246035884c9d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"877675", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/opensearch was added or updated
2024/11/03 10:04:52 [error] 309#309: *184 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:10:04:52 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 10:04:52 [error] 309#309: *184 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:10:04:52 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://opensearch-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
I20241103 10:05:17.829226       1 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"opensearch", UID:"b0c0dbc4-5d83-474f-900a-246035884c9d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"877795", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/opensearch was added or updated
2024/11/03 10:05:20 [error] 319#319: *189 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:10:05:20 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 10:05:20 [error] 319#319: *189 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:10:05:20 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://opensearch-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
192.168.56.140- - [03/Nov/2024:11:22:46 +0000] "GET / HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
192.168.56.140- - [03/Nov/2024:11:22:46 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://opensearch-dashboard.example.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 11:23:22 [error] 324#324: *197 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:11:23:22 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 11:23:22 [error] 324#324: *197 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:11:23:22 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://opensearch-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 11:23:30 [error] 324#324: *197 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:11:23:30 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 11:23:30 [error] 324#324: *197 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:11:23:30 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://opensearch-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 11:23:31 [error] 324#324: *197 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:11:23:31 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 11:23:31 [error] 324#324: *197 upstream prematurely closed connection while reading response header from upstream, client: 192.168.56.140, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://opensearch-dashboard.example.com"
192.168.56.140- - [03/Nov/2024:11:23:31 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://opensearch-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"

Any other info

  • When I use port-forwarding for the service, it works, and the service is also accessible with the LoadBalancer service type (see the command sketch after this list).
  • I don’t see anything else in the logs; the same log message appears for each request.
  • The backend pod isn’t receiving any requests at the moment. It’s a fresh cluster, and no traffic is coming to it.
  • The OpenSearch dashboard supports NGINX ingress.
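A minimal sketch of the port-forward check mentioned in the first bullet above, using the Service name and port from the describe output (the local port choice is arbitrary; use https instead of http if the dashboard itself terminates TLS):

# Forward local port 5601 to the new-dashboards Service in the default namespace
kubectl port-forward -n default svc/new-dashboards 5601:5601

# In another terminal, request the dashboard directly, bypassing the Ingress
curl -iv http://localhost:5601/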
@k8s-ci-robot added the needs-triage label on Nov 3, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan
Contributor

The controller does routing, so a 502 is a server-side response. Hence, remove all of those annotations you have first.

Then send a request.

Check logs.

Then add the annotations back on an as-needed basis, based on the logs.
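A sketch of the "send a request, check logs" step; the ingress IP, controller namespace, and pod name are placeholders to be filled in for the actual cluster:

# Send a request through the ingress, forcing resolution of the host to the ingress IP
# (-k only if the certificate is self-signed)
curl -ivk --resolve opensearch-dashboard.example.com:443:<INGRESS_IP> \
  https://opensearch-dashboard.example.com/

# Follow the ingress controller logs while the request goes through
kubectl logs -n <controller-namespace> <controller-pod-name> -f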

@anjanaprasd
Author

anjanaprasd commented Nov 3, 2024

Hi @longwuyuan
Thank you very much for your valuable response. As you suggested, I removed all the annotations, redeployed the Ingress resource, added the annotations back one by one, and checked the logs. However, I still see the same error.

2024/11/03 06:09:29 [error] 184#184: *94 upstream prematurely closed connection while reading response header from upstream, client: 192.168.58.11, server: opensearch-dashboard.example.com, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.example.com"
192.168.58.11 - - [03/Nov/2024:06:09:29 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 06:09:30 [error] 184#184: *94 upstream prematurely closed connection while reading response header from upstream, client: 192.168.58.11, server: opensearch-dashboard.example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.example.com", referrer: "https://demo-dashboard.example.com/"
192.168.58.11 - - [03/Nov/2024:06:09:30 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://demo-dashboard.example.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"

@longwuyuan
Contributor

longwuyuan commented Nov 3, 2024

OK. Now, please click "new bug report" only to look at the template of a new bug report. Then come back here and edit the description of this issue, answering the questions asked in the new-bug-report template. Ensure that all the content is in Markdown format.

That way there will be some info to analyze; right now, other than a few lines of log, there is nothing to analyze. You may want to add the commands and their outputs for other information in addition to what is asked in the template, such as:

  • logs of the backend pod
  • k describe of the backend service and pod
  • k get events -A
  • any other info

Of particular interest will be the curl command and its output with -iv. If your backend pod is slow to respond, it is likely either starved for CPU/memory/inodes/conntrack or not designed to work behind another reverse proxy. The mitigation can of course be timeouts like the ones you had set before, but needing them is a sign that you do not know the response time required over a reverse proxy. Generally the default timeouts are enough. If a backend needs extra-long timeouts, you may make it "functional" for a test like this, but when real use with lots of traffic comes in, those timeouts will no longer be enough, because the backend's workload will increase many-fold.
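A sketch of the commands behind the list above; the pod names and namespaces are placeholders and should be adjusted to match the cluster:

# Logs of the backend pod (OpenSearch Dashboards)
kubectl logs -n default <dashboards-pod-name>

# Describe the backend Service and pod
kubectl describe svc -n default new-dashboards
kubectl describe pod -n default <dashboards-pod-name>

# Events across all namespaces
kubectl get events -A

# Verbose request through the ingress, as asked for above
curl -iv https://opensearch-dashboard.example.com/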

@anjanaprasd
Author

@longwuyuan
I updated the ticket body.
The NGINX ingress pod logs repeatedly show the same message without any additional details.

@longwuyuan
Contributor

You are using the controller released by F5, the company that owns NGINX. It's evident from the image name nginx/nginx-ingress:3.7.0.

This project is the Kubernetes community ingress controller, so I am closing this issue. You can reach out in the NGINX Inc. forums.
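For anyone hitting the same confusion, a quick way to confirm which controller is running is to check the controller pod's image (namespaces vary by install, so the grep is deliberately broad):

# nginx/nginx-ingress:* is the F5/NGINX Inc. controller;
# registry.k8s.io/ingress-nginx/controller:* is the Kubernetes community one
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep -i nginx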

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

You are using the controller released by F5, the company that owns NGINX. It's evident from the image name nginx/nginx-ingress:3.7.0.

This project is the Kubernetes community ingress controller, so I am closing this issue. You can reach out in the NGINX Inc. forums.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
