"Pod fits on node" that has lower utilization than current node #1461
Comments
Experiencing the same issue (k8s v1.29.3, descheduler version: 0.30.1).
Using HighNodeUtilization with the MostAllocated scoring strategy, the pod is scheduled on nodeB. When the descheduler runs, it decides to evict the pod anyway, which ends in an endless descheduling loop. Policy below (a sketch of the matching scheduler-side config follows it):
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
maxNoOfPodsToEvictPerNamespace: 1
profiles:
  - name: default
    pluginConfig:
      - args:
          nodeFit: true
          evictFailedBarePods: true
          evictLocalStoragePods: true
        name: DefaultEvictor
      - args:
          evictableNamespaces:
            exclude:
              - kube-system
          thresholds:
            cpu: 70
            memory: 90
        name: HighNodeUtilization
    plugins:
      balance:
        enabled:
          - HighNodeUtilization
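For reference, a kube-scheduler configured for MostAllocated bin-packing (as described above) would typically look something like this sketch; the profile name and resource weights are assumptions, not taken from the reporter's cluster:

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          # illustrative weights; adjust to your cluster
          scoringStrategy:
            type: MostAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1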
What version of descheduler are you using?
descheduler version: 0.30.1
Does this issue reproduce with the latest release?
Yes
Which descheduler CLI options are you using?
--v=7
--dry-run
Please provide a copy of your descheduler policy config file
What k8s version are you using (kubectl version)?
kubectl version output
What did you do?
Configured the scheduler of the cluster to use MostAllocated.
Deployed descheduler with the above HighNodeUtilization policy.
What did you expect to see?
With the current policies, I expect pods on nodes < 70% memory / CPU usage to get descheduled if there is room on another node with higher usage.
What did you see instead?
Some pods are sometimes descheduled when the only other node they can fit on has lower resource usage, resulting in an endless loop of descheduling the pods (and having them rescheduled onto the same node, since its usage is higher).
Here are the truncated logs:
We can see that the pod is currently scheduled on node ip-x-x-x-47.us-east-1.compute.internal with a utilization of {"cpu":64.29,"memory":52.14,"pods":20.69}, and the descheduler considers that it can fit on ip-x-x-x-63.us-east-1.compute.internal with a (lower) utilization of {"cpu":5.61,"memory":2.37,"pods":10.34}.
The pod is then descheduled, and because the scheduler is configured with the MostAllocated option, the pod gets scheduled on ip-x-x-x-47.us-east-1.compute.internal again.
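For context on why this loops: with equal CPU and memory weights, MostAllocated scores a node roughly by its average requested/allocatable ratio (the evicted pod's own requests are added on top, omitted here for simplicity). Using the utilizations from the log above, ip-x-x-x-47.us-east-1.compute.internal scores roughly (64.29 + 52.14) / 2 ≈ 58 while ip-x-x-x-63.us-east-1.compute.internal scores roughly (5.61 + 2.37) / 2 ≈ 4, so the scheduler keeps preferring the node the pod was just evicted from, and the descheduler evicts it again. (Equal weights are an assumption; the general point is that nodeFit only checks that the pod fits on another node, not that that node would actually win the scheduler's scoring.)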