
scx_rustland: improve scheduler cpu selection #57

Merged
merged 2 commits into sched-ext:main on Dec 30, 2023

Conversation

Contributor

@arighi commented Dec 30, 2023

Implement a more reasonable idle CPU selection logic in the user-space scheduler.

Andrea Righi added 2 commits December 30, 2023 10:34
The current CPU selection logic in the scheduler presents some
inefficiencies.

When a task is drained from the BPF queue, the scheduler immediately
checks whether the CPU previously assigned to the task is still idle,
assigning it if it is. Otherwise, it iterates through available CPUs,
always starting from CPU #0, and selects the first idle one without
updating its state. This approach is consistently applied to the entire
batch of tasks drained from the BPF queue, resulting in all of them
being assigned to the same idle CPU (and with a higher likelihood of
allocation to lower CPU ids than to higher ones).

While dispatching a batch of tasks to the same idle CPU is not
necessarily problematic, a fairer distribution among the list of idle
CPUs would be preferable.

Therefore, change the CPU selection logic to distribute tasks evenly
among the idle CPUs, while still maintaining the preference for the
previously used one. Additionally, apply the CPU selection logic just
before tasks are dispatched, rather than assigning a CPU when tasks are
drained from the BPF queue. This adjustment is important because tasks
may linger in the scheduler's internal structures for a while, and the
idle state of the CPUs may change during that period.

Signed-off-by: Andrea Righi <[email protected]>
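
A minimal sketch of the selection policy described above, assuming a simple task representation and an ordered set of idle CPU ids (the names and types are illustrative, not the actual scx_rustland ones):

    use std::collections::BTreeSet;

    struct Task {
        cpu: i32, // CPU previously used by the task
    }

    // Pick a CPU for `task` right before dispatch: keep the previously used
    // CPU if it is still idle (to limit migrations), otherwise consume the
    // next idle CPU from the set so subsequent tasks get different CPUs.
    fn select_cpu(task: &mut Task, idle_cpus: &mut BTreeSet<i32>) {
        if idle_cpus.remove(&task.cpu) {
            // prev CPU is still idle: keep it and mark it as busy
            return;
        }
        if let Some(cpu) = idle_cpus.iter().next().copied() {
            idle_cpus.remove(&cpu);
            task.cpu = cpu;
        }
        // No idle CPU left: keep task.cpu unchanged and let the BPF side
        // handle the dispatch.
    }

The second commit below adds a safety check on the BPF side for the case where the chosen CPU is not allowed by the task's cpumask:
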
When the scheduler decides to assign a different CPU to the task, always
make sure the assignment is valid according to the task's cpumask. If
it's not valid, simply dispatch the task to the global DSQ.

This prevents the scheduler from exiting with errors like this:

  09:11:02 [WARN] EXIT: SCX_DSQ_LOCAL[_ON] verdict target cpu 7 not allowed for gcc[440718]

In the future we may want to move this check directly into the user-space
scheduler, but for now let's keep it in the BPF dispatcher as a
quick fix.

Signed-off-by: Andrea Righi <[email protected]>
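
For now the check lives in the BPF dispatcher, but as noted above it could eventually move into the user-space scheduler; a rough sketch of what such a check might look like there, with hypothetical names (the real cpumask lookup is not shown):

    // Hypothetical helper: if the selected CPU is not in the task's allowed
    // CPU list, return a "no CPU" value so the task is dispatched to the
    // global DSQ instead of a per-CPU queue.
    const NO_CPU: i32 = -1;

    fn validate_cpu(selected_cpu: i32, allowed_cpus: &[i32]) -> i32 {
        if allowed_cpus.contains(&selected_cpu) {
            selected_cpu
        } else {
            NO_CPU // fall back to the global DSQ
        }
    }

The review comments below refer to the following hunk of the user-space CPU selection code:
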
// mitigate migration overhead), otherwise pick the next idle CPU available.
if !idle_cpus.contains(&task.cpu) {
    task.cpu = *cpu;
}
Contributor

@htejun commented:

This still can lead to multiple tasks being assigned to the same cpu, right? Something to improve in the future, I guess.

Contributor Author

@arighi commented:

@htejun exactly, we can still assign the same idle cpu to multiple tasks if their prev_cpu is still idle. That is not super bad: being too aggressive with migrations also comes with a cost. We could also speculate on the fact that tasks with a close vruntime should be spread across multiple CPUs, so that they can run pretty much at the same time in parallel, rather than piling them up on the same idle CPU (that is also something I'd like to experiment with).

But there is definitely room for improvement in the cpu selection logic and in the userspace scheduler in general.

And I'm happy to see that we are reaching a stage where the BPF component is becoming a more "stable" backend (requiring minimal changes), so that we can start channeling all the optimizations, improvements, tests and experiments into the user-space part. If that's the case, it would be a nice success story for this little project IMHO. :)
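
Purely as an illustration of the vruntime idea mentioned above (not something implemented in this PR), one could sort runnable tasks by vruntime and hand consecutive tasks to distinct idle CPUs, so tasks that would otherwise run back to back can run in parallel:

    // Illustrative only: map tasks with close vruntime to distinct idle CPUs.
    // Returns (pid, cpu) pairs; empty if there are no idle CPUs.
    fn spread_by_vruntime(mut tasks: Vec<(u64, i32)>, // (vruntime, pid)
                          idle_cpus: &[i32]) -> Vec<(i32, i32)> {
        tasks.sort_by_key(|&(vruntime, _)| vruntime);
        tasks.iter()
            .zip(idle_cpus.iter().cycle())
            .map(|(&(_, pid), &cpu)| (pid, cpu))
            .collect()
    }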

@htejun merged commit 641f9b7 into sched-ext:main on Dec 30, 2023
1 check passed