Glances Web Intermittently Unresponsive #2946
Replies: 8 comments 3 replies
-
I will see if I can get a "glances --issue" when it's in a hung state.
-
Hey, I have the same issue. Have you made any progress on this?
-
I don't see any errors in the logs.
-
I know this is almost a year old, but it still shows as open and it is the issue I'm having. I'm running Glances directly on a Raspberry Pi. The web UI and API are unresponsive, but running Glances directly in the terminal shows activity. Running Glances again with -w shows "address already in use". Running Glances with --issue provides the following:
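As a side note on the "address already in use" message above: it only means the hung instance is still bound to the web port. Assuming the default port 61208, the process holding it can be identified with:

```sh
# Show which process is listening on the Glances web port (61208 by default)
sudo ss -ltnp | grep 61208
# or: sudo lsof -i :61208
```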
-
Hi, I think I have a solution for this: run Glances via systemd. Here is my writeup about it:
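A minimal sketch of a unit along those lines, assuming Glances is installed system-wide (the path and restart settings are assumptions, not necessarily the writeup's exact unit):

```ini
# /etc/systemd/system/glances.service -- illustrative sketch
[Unit]
Description=Glances in web server mode
After=network.target

[Service]
# Path is an assumption; check yours with `which glances`.
ExecStart=/usr/bin/glances -w
# Restart whenever the process exits abnormally. Note that Restart=
# only fires on exit, so recovering from a silent hang additionally
# needs an external probe (e.g. a timer that curls the API and runs
# `systemctl restart glances` on failure).
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```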
-
This issue is stale because it has been open for 3 months with no activity. |
-
I use Glances through the Homepage project. Because I want each disk to appear with its own label, I set each one up separately, which ends up making several simultaneous requests to Glances. I don't know whether this is expected behavior, or whether they should be handled differently by the API, but I'm here to offer a possible solution. To begin with, I'll list some tests and results here:
Now, how can this be solved? I don't know whether something will be changed in Glances, whether there is a better alternative, or whether this really is the best option. But whatever is done, I suggest documenting this issue to avoid future discussions/issues.

docker-compose.yml

```yaml
name: prd-glances
services:
  app:
    image: nicolargo/glances:4.1.2.1
    restart: unless-stopped
    privileged: true
    pid: host
    network_mode: host
    environment:
      - GLANCES_OPT=-w
    volumes:
      - /etc/os-release:/etc/os-release:ro
      - ./app/glances.conf:/etc/glances/glances.conf
      - /:/host:ro
    healthcheck:
      test: curl -fSs http://localhost:61208/api/4/status || exit 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 3s
  proxy:
    image: nginxinc/nginx-unprivileged:1.27.1-alpine
    restart: unless-stopped
    volumes:
      - ./proxy/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - network
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - app
    healthcheck:
      test: curl -fSs http://localhost:61208/proxy-health || exit 1
      interval: 10s
      retries: 3
      start_period: 30s
      timeout: 3s
networks:
  network:
    driver: bridge
```

/etc/nginx/conf.d/default.conf

```nginx
# nginx-proxy version : 1.6.1
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}

map $http_x_forwarded_host $proxy_x_forwarded_host {
    default $http_x_forwarded_host;
    '' $host;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}

# Include the port in the Host header sent to the container if it is non-standard
map $server_port $host_port {
    default :$server_port;
    80 '';
    443 '';
}

# If the request from the downstream client has an "Upgrade:" header (set to any
# non-empty value), pass "Connection: upgrade" to the upstream (backend) server.
# Otherwise, the value for the "Connection" header depends on whether the user
# has enabled keepalive to the upstream server.
map $http_upgrade $proxy_connection {
    default upgrade;
    '' $proxy_connection_noupgrade;
}

map $upstream_keepalive $proxy_connection_noupgrade {
    # Preserve nginx's default behavior (send "Connection: close").
    default close;
    # Use an empty string to cancel nginx's default behavior.
    true '';
}

# Abuse the map directive (see <https://stackoverflow.com/q/14433309>) to ensure
# that $upstream_keepalive is always defined. This is necessary because:
# - The $proxy_connection variable is indirectly derived from
#   $upstream_keepalive, so $upstream_keepalive must be defined whenever
#   $proxy_connection is resolved.
# - The $proxy_connection variable is used in a proxy_set_header directive in
#   the http block, so it is always fully resolved for every request -- even
#   those where proxy_pass is not used (e.g., unknown virtual host).
map "" $upstream_keepalive {
    # The value here should not matter because it should always be overridden in
    # a location block (see the "location" template) for all requests where the
    # value actually matters.
    default false;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
    default off;
    https on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost escape=default '$host $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$upstream_addr"';

access_log off;
error_log /dev/stderr;

resolver 127.0.0.11;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_set_header Host $host$host_port;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $proxy_x_forwarded_host;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=glances:1m max_size=10m inactive=2s use_temp_path=off;

server {
    server_name localhost;
    access_log /var/log/nginx/access.log vhost;
    http2 on;
    listen 61208 default_server;

    location / {
        proxy_pass http://host.docker.internal:61208;
        set $upstream_keepalive false;
        proxy_cache glances;
        proxy_buffering on;
        proxy_ignore_headers "Cache-Control" "Expires";
        proxy_cache_revalidate on;
        proxy_cache_valid any 1s;
        proxy_cache_use_stale updating;
        proxy_cache_background_update on;
        proxy_cache_lock on;
    }

    location = /proxy-health {
        access_log off;
        add_header 'Content-Type' 'application/json';
        return 200 '{"status":"UP"}';
    }
}
```
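For context on why this caching setup helps with simultaneous requests: `proxy_cache_valid any 1s` micro-caches every response for one second, `proxy_cache_lock on` allows only one request per URI to populate the cache while concurrent requests wait for it, and `proxy_cache_use_stale updating` together with `proxy_cache_background_update` serves the stale entry while the refresh happens in the background. The net effect is that a burst of parallel Homepage requests collapses into roughly one upstream request per endpoint per second.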
-
Another thing that slowed me down was containers that were accumulating zombie processes. In Linux, processes (parents) can spawn subprocesses (children) to do some activity. When a child process finishes, it remains as a zombie until its parent collects (reaps) its exit status. If the parent never does this, for whatever reason, that responsibility falls to the process with PID 1. Accumulating zombies can basically cause two things: slowing down other processes and, eventually, the inability to create new ones. If this is the case, you could try the following:
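The original suggestion isn't quoted above, but one common fix along these lines is to run a minimal init as PID 1 inside the container so it reaps orphaned children (a sketch, reusing the image from the Compose file earlier):

```yaml
services:
  app:
    image: nicolargo/glances:4.1.2.1
    # Run Docker's built-in init (tini) as PID 1 so it reaps zombie
    # processes whose parents never call wait() on them.
    init: true
```

The same can be done with `docker run --init` for containers started outside Compose.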
-
Glances Web UI periodically takes more than 48 seconds to load.
Steps to reproduce the behavior:
Pull and run the nicolargo/glances:3.4.0.3 image in Docker.
The Glances web UI takes over 48 seconds to load. Trying to load it from different computers, it just hangs; then, after multiple attempts, it suddenly starts working.
If I restart the container, it works quickly again.
I monitor websites with Uptime Kuma (https://github.com/louislam/uptime-kuma); a quick curl check like the one below reproduces the measurement.
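Assuming the default port 61208 and the v3 API that ships with Glances 3.4:

```sh
# Print total response time in seconds; run repeatedly to catch a slow window.
curl -o /dev/null -s -w '%{time_total}\n' http://localhost:61208/api/3/all
```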
Environment (please complete the following information)
Alpine Linux 3.18.0 64bit on a Linux (From container: nicolargo/glances:3.4.0.3)
Docker version 20.10.23, build 8659133e59
Linux 5.15.111-flatcar (https://www.flatcar.org/)
ESXi-7.0U3m-21686933-standard
HP Z440 Workstation (8 CPUs x Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz, 128 GB Ram)
docker-compose.yml
Docker Logs
glances --issue
Note: My Docker setup has been super stable for several years, with several containers that host websites; Glances is the only container that triggers an uptime alert.