This repository has been archived by the owner on Jun 28, 2022. It is now read-only.
This eventually needs to be done, in both user and kernel space. In kernel space, the implementation might actually be simpler because we can depend on existing facilities like stop_machine and synchronize_sched. In user space, we will need to pause all threads, e.g. with signals, or perhaps with ptrace, and then hot-patch the code. An alternative is to implement safepoints in user space code. This could be an interesting line to look into.
One thing worth investigating is whether the patching can/should be distributed across the paused threads/cores, or whether there should be a single master patch thread.
One definite thing is that patching should follow two stages: first, a patching plan should be constructed that decides exactly what values to write where. Then, once a plan is decided on, the patching should be performed.
This is partially resolved with the new command-line option --unsafe_patch_edges. It's pretty buggy because it doesn't deal with any of the hairiness of the architecture's actual requirements regarding hot patching, but I expect these problems to become less of an issue when I move to CPU-private allocators for code cache pages.
For user space, I think this could be nicely done with sys_clone and ptrace, where we clone a new process that shares the same virtual memory (CLONE_VM). That way we don't need to use PTRACE_POKETEXT to patch code, but can instead patch directly with ordinary memory writes.
Another alternative to this would be to implement the equivalent of an IPI in user space. The trick is to spawn and pin one thread per core, and when we want to synchronize all code, we'd wake up all those threads, then have them wait on a barrier.
A similar trick could be used to implement a user space cross-cpu memory barrier.