
logs not getting written when multiprocessing_context is spawn or forkserver #131

Open
krehm opened this issue Dec 26, 2023 · 4 comments

krehm (Contributor) commented Dec 26, 2023

I just opened PR #130 to fix dlio.log so that it gets reopened in spawn and forkserver child processes and log messages from the children are not lost.

The same problem exists with dlp.log, but some of the code that needs to change lives in the dlio-profiler repository. Once that is updated and its release number is bumped, dlio_benchmark can be changed to use the newer dlio-profiler version.
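The fix described above can be sketched in isolation: under the spawn and forkserver start methods a child process does not inherit the parent's logging configuration, so the log file must be reopened inside each worker. The function and file names below are illustrative, not dlio_benchmark's actual code.

```python
import logging

LOG_FILE = "dlio_demo.log"  # hypothetical path; the benchmark's real file is dlio.log


def setup_logging(log_file=LOG_FILE):
    """(Re)open the log file and attach a fresh FileHandler.

    Under 'spawn' or 'forkserver' the child does not inherit the parent's
    handlers, so this must be called again inside each worker process.
    """
    root = logging.getLogger()
    # Drop any stale handlers first; this also makes the call idempotent
    # when the start method happens to be 'fork'.
    for h in list(root.handlers):
        root.removeHandler(h)
        h.close()
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    root.addHandler(handler)
    root.setLevel(logging.INFO)


def worker_init(worker_id):
    # Hypothetical hook mirroring TorchDataset.worker_init(): reopen the
    # log inside the child so its messages are not lost.
    setup_logging()
    logging.info("worker %d logging initialized", worker_id)
```

Calling setup_logging() from the dataloader's worker-init hook is what makes child log lines land in the file instead of being dropped.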

krehm (Contributor, Author) commented Jan 4, 2024

Thanks for the merge of PR #130. The dlp.log has the same problem, but it is more complicated because of the ENTER and EXIT code that is used to implement the:
with Profile(name=f"{self.init.qualname}", cat=MODULE_DLIO_BENCHMARK):
construct. That code makes sense in main.py, where it wraps the body of the benchmark, but inside a forked/spawned child process in TorchDataset.worker_init() I don't see how to re-initialize dlp.log without using ENTER and EXIT. Any advice would be appreciated.

zhenghh04 (Member) commented Jan 12, 2024

@krehm This has been fixed by @hariharan-devarajan, and the corresponding PR is merged. Could you test again to see whether it is working?

krehm (Contributor, Author) commented Jan 12, 2024

I will be out of the office until Tuesday, but will give it a try then.

krehm (Contributor, Author) commented Jan 23, 2024

I have not had time yet to do full testing; I am chasing another problem at the moment. But what I do notice is that an MLPerf run (using the 'main' branch) sets workflow.profiling=False, yet when I run unet3d with the 'spawn' multiprocessing_context I see messages implying that profiling is running. So child processes, at least in spawn mode, may be ignoring args.do_profiling.
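A plausible explanation for the behavior reported above: under 'spawn' a child re-imports modules fresh, so a flag mutated in the parent after import reverts to its import-time default in the child. The usual fix is to pass the flag to the worker explicitly rather than reading module-level state. The names below (do_profiling, make_worker) are illustrative, not dlio_benchmark's API.

```python
# Import-time default, which is all a freshly spawned child would see even
# if the parent later set the flag to False at runtime.
do_profiling = True


def child_sees_module_flag():
    # Failure mode: a spawned child reading module state observes the
    # import-time default, not the parent's runtime value.
    return do_profiling


def make_worker(profiling_enabled):
    # Fix: capture the parent's decision in an explicit argument, which is
    # pickled and shipped to the child along with the worker callable.
    def worker():
        return profiling_enabled
    return worker
```

With the explicit-argument form, the child honors whatever the parent decided, regardless of the multiprocessing start method.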
