OS scheduling and designated work #1160

Open · wks opened this issue Jun 28, 2024 · 1 comment

wks (Collaborator) commented Jun 28, 2024

Some designated work starts late

The following timeline was collected while running the Liquid benchmark with MMTk-Ruby, focusing on the prepare stage. The Prepare work packet spawns PrepareCollector work packets, which are so-called "designated" work packets that each GC worker needs to execute on its own.

[image: timeline of the prepare stage]

From the timeline, we see that some GC workers wake up very early, executing the PrepareCollector work packet and other available work packets. Others did not wake up until all work packets were drained.

I added a notification statement after adding the designated work:

 impl<C: GCWorkContext> GCWork<C::VM> for Prepare<C> {
     fn do_work(&mut self, worker: &mut GCWorker<C::VM>, mmtk: &'static MMTK<C::VM>) {
         trace!("Prepare Global");
         // We assume this is the only running work packet that accesses plan at the point of execution
         let plan_mut: &mut C::PlanType = unsafe { &mut *(self.plan as *const _ as *mut _) };
         plan_mut.prepare(worker.tls); 
 
         if plan_mut.constraints().needs_prepare_mutator {
             for mutator in <C::VM as VMBinding>::VMActivePlan::mutators() {
                 mmtk.scheduler.work_buckets[WorkBucketStage::Prepare]
                     .add(PrepareMutator::<C::VM>::new(mutator));
             }
         }
         for w in &mmtk.scheduler.worker_group.workers_shared {
             let result = w.designated_work.push(Box::new(PrepareCollector));
             debug_assert!(result.is_ok());
         }
+        // Notify workers about the designated packet.
+        mmtk.scheduler.worker_monitor.notify_work_available(true);
     }
 }

I also added USDT probes for the events of (1) notifying workers, (2) a worker parking, and (3) a worker unparking. Here is the timeline:

[image: timeline with the notify/park/unpark probes]

From this timeline, we see that the threads that run PrepareCollector late are not busy doing something else; they simply wake up later. The notification statement I added has no effect. In fact, StopMutators already notifies all workers when executing GCWorkScheduler::notify_mutators_paused(), which opens the Prepare bucket.

I think we need to accept the fact that the OS scheduler isn't that timely. We can't expect a thread to wake up immediately when we notify it. The CPUs may be busy running other processes (such as my desktop window manager and the sound server), so the OS can't find a spare CPU to run my GC worker thread.

Abolishing "designated work"

The obvious way to solve the problem is to stop forcing specific GC workers to do specific jobs.

Currently, we have two "designated work" packets. One is "PrepareCollector" and the other is "ReleaseCollector". They both work with worker-local copy contexts.

  • During PrepareCollector, semispace-based plans (SemiSpace and GenCopy) rebind the copy context to the to-space. For other plans, it is a no-op.
  • During ReleaseCollector, GCWorkerCopyContext::release is called, which in turn calls release on all allocators. Immix-based allocators reset their bump pointers and other state. For CopySpace, it is a no-op.

We should allow GCWorkerCopyContext to be prepared and released by other GC workers.

We may expose GCWorkerCopyContext in GCWorkerShared so that other GC workers can reach it and prepare/release it.
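
A minimal sketch of that idea, with hypothetical type shapes (the real GCWorkerShared and GCWorkerCopyContext have more fields and different signatures):

    use std::cell::UnsafeCell;

    // Hypothetical stand-ins for the real types.
    struct GCWorkerCopyContext { /* per-worker copy allocators */ }

    impl GCWorkerCopyContext {
        fn prepare(&mut self) { /* e.g. rebind to the to-space */ }
        fn release(&mut self) { /* e.g. reset bump pointers */ }
    }

    struct GCWorkerShared {
        // UnsafeCell because exclusive access is guaranteed by the
        // work packet schedule, not by the type system.
        copy_context: UnsafeCell<GCWorkerCopyContext>,
    }

    // SAFETY: the scheduler ensures no two threads touch the same
    // copy context at the same time.
    unsafe impl Sync for GCWorkerShared {}

    // Any worker executing a (non-designated) Prepare packet could then do:
    fn prepare_all_copy_contexts(workers_shared: &[GCWorkerShared]) {
        for w in workers_shared {
            // SAFETY: the Prepare bucket is only open while no worker is
            // using its own copy context.
            let ctx = unsafe { &mut *w.copy_context.get() };
            ctx.prepare();
        }
    }

The UnsafeCell plus the unsafe Sync impl make it explicit that exclusivity is a scheduling invariant rather than something the compiler can check.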

Alternatively, we may implement GCWorkerCopyContext similarly to blockpageresource::BlockPool, which holds a "worker-local" vector in which each worker only accesses its own element.
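
A hedged sketch of that alternative, again with illustrative names only (BlockPool itself has a different API; only the "one slot per worker ordinal" shape is borrowed here):

    use std::cell::UnsafeCell;

    // Minimal stub standing in for the real GCWorkerCopyContext.
    struct GCWorkerCopyContext;
    impl GCWorkerCopyContext {
        fn prepare(&mut self) { /* rebind to the to-space, etc. */ }
        fn release(&mut self) { /* reset bump pointers, etc. */ }
    }

    // One slot per worker, indexed by the worker's ordinal.
    struct CopyContextPool {
        contexts: Vec<UnsafeCell<GCWorkerCopyContext>>,
    }

    // SAFETY: the work packet scheduler guarantees that a slot is never
    // accessed by two threads at the same time.
    unsafe impl Sync for CopyContextPool {}

    impl CopyContextPool {
        /// Used by a worker on its own slot during Closure/VMRefClosure.
        /// SAFETY: the caller must be the worker that owns `ordinal`.
        unsafe fn get_mut(&self, ordinal: usize) -> &mut GCWorkerCopyContext {
            unsafe { &mut *self.contexts[ordinal].get() }
        }

        /// Used by whichever worker happens to run Prepare or Release.
        /// SAFETY: the caller must ensure no worker is using any slot.
        unsafe fn release_all(&self) {
            for c in &self.contexts {
                unsafe { (*c.get()).release() };
            }
        }
    }

Here get_mut and release_all are unsafe precisely because the safety argument lives in the schedule, not in the types.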

Either way, we need to persuade Rust that (1) no two threads access the same GCWorkerCopyContext instance at the same time, and (2) during the Closure and VMRefClosure stages, each GCWorkerCopyContext is only accessed by its owner. That may involve some unsafe operations, but the work packet scheduler is responsible for the correctness of the schedule.

wks (Collaborator, Author) commented Jun 28, 2024

In the following timeline, two workers both find work packets to execute. When one of them parks, it finds that some other worker still has "designated work", so it notifies all workers and parks again. Then another worker wakes up, also finds that some other worker has "designated work", notifies all workers, and parks again. This repeats, fruitlessly, until the third worker, the one that actually has "designated work", wakes up (much later), runs its "designated work", and then opens the next bucket because no worker has designated work any more.

[image: timeline showing the repeated notify-and-park cycle]

This is a manifestation of #793. It became much more benign after we removed the coordinator thread, but it is still a problem.
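
A simplified sketch of the cycle described above, with illustrative names (this is not the actual GCWorkScheduler code, whose parking logic also tracks buckets and goals):

    use std::sync::{Condvar, Mutex};

    struct WorkerState {
        has_designated_work: bool,
    }

    struct Scheduler {
        workers: Vec<Mutex<WorkerState>>,
        monitor: (Mutex<()>, Condvar),
    }

    impl Scheduler {
        // What a worker does when it runs out of ordinary work packets.
        fn park(&self, my_ordinal: usize) {
            loop {
                // Does any *other* worker still have designated work queued?
                let others_have_designated_work = self
                    .workers
                    .iter()
                    .enumerate()
                    .any(|(i, w)| i != my_ordinal && w.lock().unwrap().has_designated_work);

                if others_have_designated_work {
                    // Wake everyone, hoping the right worker is among them ...
                    self.monitor.1.notify_all();
                    // ... then go back to sleep. If the OS wakes another
                    // worker without designated work first, that worker does
                    // exactly the same thing, and the cycle repeats.
                    let guard = self.monitor.0.lock().unwrap();
                    let _unused = self.monitor.1.wait(guard).unwrap();
                } else {
                    // No designated work left anywhere: the last parked
                    // worker can open the next bucket.
                    break;
                }
            }
        }
    }

The fruitless part is that notify_all cannot target the specific worker whose designated queue is non-empty, so which worker actually wakes up next is entirely up to the OS scheduler.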
