
Remove memory spike in Sampling #1235

Draft · marchdf wants to merge 25 commits into main from sampling-spike

Conversation

marchdf (Contributor) commented Sep 5, 2024

Summary

This removes the memory spike on the ioproc when doing sampling by initializing the sampling particles in parallel.

Pull request type

Please check the type of change introduced:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation content changes
  • Other (please describe):

Checklist

The following is included:

  • new unit-test(s)
  • new regression test(s)
  • documentation for new capability

This PR was tested by running:

  • the unit tests
    • on GPU
    • on CPU
  • the regression tests
    • on GPU
    • on CPU

Additional background

Issue Number: #1186

marchdf (Contributor, Author) commented Sep 5, 2024

This is great, except that now all ranks need to know about the probe locations. So it ends up needing more memory in total, even though there is no spike. The next step I want to take before this PR is merged is to have each rank only know about the probe locations that are relevant to it. Basically, I want to pass a box to sampling_locations and only get back the locations that are inside that box (see the sketch below). But I need to switch to something else for a bit.
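A minimal sketch of the kind of filtering this describes, assuming the locations are available as an `amrex::Vector<amrex::Array<amrex::Real, AMREX_SPACEDIM>>` and that the owned region is described by an `amrex::RealBox`; the helper name `select_locations_in_box` and the container types are placeholders for illustration, not the actual AMR-Wind API:

```cpp
#include <AMReX_Array.H>
#include <AMReX_RealBox.H>
#include <AMReX_Vector.H>

// Hypothetical helper: keep only the sampling locations that fall inside
// the region a given rank owns, so each rank initializes particles only
// for its own locations instead of every probe location.
amrex::Vector<amrex::Array<amrex::Real, AMREX_SPACEDIM>>
select_locations_in_box(
    const amrex::Vector<amrex::Array<amrex::Real, AMREX_SPACEDIM>>& all_locs,
    const amrex::RealBox& owned_box)
{
    amrex::Vector<amrex::Array<amrex::Real, AMREX_SPACEDIM>> local_locs;
    for (const auto& loc : all_locs) {
        // RealBox::contains takes a pointer to the coordinates
        if (owned_box.contains(loc.data())) {
            local_locs.push_back(loc);
        }
    }
    return local_locs;
}
```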

marchdf force-pushed the sampling-spike branch 5 times, most recently from ac8f6e4 to fc43780 on September 11, 2024 at 20:41
marchdf (Contributor, Author) commented Sep 11, 2024

Status update before I leave this aside for a bit:

  • There is currently a hard check that the number of particles added matches what was expected to be added. This is a good check, but if a sampling location is not in the domain, the check will fail. Do I silently filter out sampling locations outside the domain, or abort? This shows up in the dam_break_godunov test case. (See the sketch after this list.)
  • Now that I can dynamically add sampling locations, I am going to have each rank only handle the sampling locations that are in the boxes it owns. That should further lower the memory consumption.
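A minimal sketch of how the hard check could stay global once each rank only adds its local locations: sum the per-rank counts across ranks and compare against the expected total. The function and variable names here are placeholders, but `amrex::ParallelDescriptor::ReduceLongSum` and `AMREX_ALWAYS_ASSERT_WITH_MESSAGE` are the standard AMReX facilities for this kind of check:

```cpp
#include <AMReX.H>
#include <AMReX_ParallelDescriptor.H>

// Hypothetical check: each rank counts only the particles it created for
// the sampling locations inside boxes it owns; the counts are then summed
// across ranks and compared to the total number of requested locations.
void check_particle_count(long num_added_on_this_rank, long num_expected_total)
{
    long total_added = num_added_on_this_rank;
    amrex::ParallelDescriptor::ReduceLongSum(total_added);

    // Abort (or, alternatively, silently filter) when requested locations
    // fall outside the domain and were never turned into particles.
    AMREX_ALWAYS_ASSERT_WITH_MESSAGE(
        total_added == num_expected_total,
        "Number of sampling particles created does not match the number of "
        "requested sampling locations");
}
```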
