
Add ability to drive ASCOT5 through successive timesteps #3

Closed · 11 tasks done · Fixed by #19

bielsnohr opened this issue May 24, 2021 · 8 comments

Labels: enhancement (New feature or request), must-have

Comments

@bielsnohr (Collaborator) commented May 24, 2021

Option 1 Tasks

  • Test the current code with multiple MPI processes
  • Set the time step in the ASCOT5 input to allow for iterative runs.
  • Implement ability to "restart" ASCOT5 between successive time steps (see the sketch after this list)
    • Generalise HDF5 read routines to accommodate the many fields that need to be read
    • Read required endstate data members, store in AscotProblem, and verify correct read
    • Create generic write routines
    • Write endstate to marker group
    • Figure out unit test to verify write to marker group in HDF5 file
    • Verify restart capability (likely with regression test)
  • Figure out whether it is the time step or just the cumulative time that should be written to the ASCOT5 input files
  • Figure out how to test these steps.
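
A minimal sketch of the restart loop these tasks build towards. All of the helper names here are illustrative placeholders, not the actual Phaethon/AscotProblem interface:

```cpp
#include <string>

// Illustrative placeholders only; these stand in for the routines the task
// list describes and are not the actual Phaethon API.
struct Endstate {}; // endstate data members (positions, velocities, weights, ...)
void setSimulationTime(const std::string & hdf5_file, double t_start, double dt);
void runAscot5(const std::string & hdf5_file);
Endstate readEndstate(const std::string & hdf5_file);
void writeMarkerGroup(const std::string & hdf5_file, const Endstate & end);

// Drive ASCOT5 through successive time steps via its HDF5 input file.
void driveAscot5(const std::string & hdf5_file, double dt, unsigned int n_steps)
{
  for (unsigned int step = 0; step < n_steps; ++step)
  {
    // Whether this should be the step length or the cumulative time is
    // exactly the open question in the task list above.
    setSimulationTime(hdf5_file, step * dt, dt);
    runAscot5(hdf5_file);                   // run ASCOT5 over one time step
    Endstate end = readEndstate(hdf5_file); // read the resulting endstate
    writeMarkerGroup(hdf5_file, end);       // endstate becomes the next step's markers
  }
}
```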
@bielsnohr added the enhancement label May 24, 2021
@bielsnohr self-assigned this May 24, 2021
@bielsnohr (Collaborator, Author)

Some important discussion around design decisions for this phase of development has taken place in the team meeting and with @makeclean; this is a brief summary.

With my file-based interface to ASCOT5, it is going to be very difficult to make this readily parallel on the MOOSE side. There are a few options going forward:

  1. Enforce the MOOSE side to be "effectively serial". In other words, the main process would create the ASCOT5 input file, run ASCOT5 as an MPI-compatible executable that uses the other processes allocated at MOOSE app runtime (hopefully possible?), read the resulting output on the main process, and sync those results with the other processes.
    • It was agreed that this is probably the best chance for quick progress.
    • @helen-brooks also made the good point that ASCOT5 could be used in shared memory (i.e. thread) mode on each process. However, the downside of this is that the same ASCOT5 run is being done on each process, which is quite inefficient from a memory and computational perspective.
  2. Revisit whether the close coupling interface I am creating between ASCOT5 and MOOSE is actually required. Ultimately, this comes down to the coupling between the various problems being solved. From a physics perspective, it is a loose coupling between the heat conduction problem of the first wall and the fast ion heat flux calculation. That is, the temperature of the wall will have very little impact on the heat flux deposited by fast ions. However, there will be a stronger coupling between the meshes of the two problems, since heat deposited on the first wall will cause deformation and possibly melting. Therefore, changes in the mesh from the heat conduction problem in MOOSE would need to be communicated back to ASCOT5. This is easier in a close coupling implementation but also possible in loose coupling (e.g. just rewrite the mesh to the HDF5 file).
  3. If a loose coupling is possible, is there a different way that MOOSE can interface with ASCOT5? E.g. through a "service" model.
    • Based on conversation, it does not appear that there is another way for MOOSE to drive external programs in a loosely coupled fashion.
  4. If a tight coupling is indeed desired, then perhaps go back to the drawing board and revisit the option of creating an API for ASCOT5.
    • I do have the source available, but it has been designed to exclusively interface with the HDF5 file as input. Finding the seam where data has already been read in and creating an interface there will take quite a bit of investigative work, and this is what I was hoping to avoid from the start when trying to get a working prototype.

Conclusion: proceed with option 1 as it seems to have the best chance of success for a more immediate prototype.

@bielsnohr (Collaborator, Author)

Investigation into whether my app was indeed running correctly over multiple MPI processes proved tricky (unsurprisingly). MOOSE tends to discard output from all processes except process 0. The correct flag for getting output from all processes is `--keep-cout`, so in my case running Phaethon in parallel with MPI looked like:

```sh
mpirun -n 4 phaethon-devel -i read_ascot5_heat_fluxes.i --keep-cout --redirect-stdout
```

`--redirect-stdout` writes the output from each process to a separate file on disk, which is helpful for keeping a record of the run output.

Now that I am confident the app runs successfully in parallel, I am on firm ground to start spawning ASCOT5 processes.

In the process of figuring this out, I also came across the useful class member `_communicator`, which contains all of the relevant MPI functionality. This will be handy for controlling when, and from which process, ASCOT5 should be launched.
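
As a rough sketch of how option 1 could look using that member (the libMesh communicator that MOOSE exposes as `_communicator`), with the ASCOT5 launch and read steps as placeholder functions rather than Phaethon's actual interface:

```cpp
#include "libmesh/libmesh_common.h" // libMesh::Real
#include "libmesh/parallel.h"       // libMesh::Parallel::Communicator
#include <vector>

// Placeholders for the file-based ASCOT5 interface; not the actual API.
void writeAscot5Input();
void launchAscot5();
std::vector<libMesh::Real> readAscot5Output();

// "Effectively serial" pattern: only the main process touches ASCOT5 and its
// files; the results are then synced with every other process.
void runAscot5Step(const libMesh::Parallel::Communicator & comm)
{
  std::vector<libMesh::Real> heat_fluxes;

  if (comm.rank() == 0)
  {
    writeAscot5Input();               // create the ASCOT5 HDF5 input
    launchAscot5();                   // run ASCOT5
    heat_fluxes = readAscot5Output(); // read the output on the main process
  }

  // Broadcast the rank-0 results to all processes (root defaults to 0).
  comm.broadcast(heat_fluxes);
}
```

Inside a MOOSE object this would simply use `_communicator` in place of the `comm` argument.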

@helen-brooks
Ah, glad you figured out the right flag to use! Sorry that I didn't remember this offhand. Great that we now have a record of this in writing via this thread.

@bielsnohr (Collaborator, Author)

Made some progress calling ASCOT5 from within Phaethon. See the discussion on the MOOSE forum: idaholab/moose#19413

I went down the route of calling the ASCOT5 main function like a normal library routine. This caused some additional complexity to get ASCOT5 callable as a library (see above), but there are a variety of advantages from the user perspective over passing an ASCOT5 executable via runtime input. With the current setup, a user will only need to run `make` from the top-level directory and the MOOSE app will have access to ASCOT5 baked in. Passing the ASCOT5 executable via the MOOSE input file offers too much flexibility: users could easily compile their own incompatible ASCOT5 executable and pass it to the program.
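
For reference, the call itself can be as simple as declaring the C entry point and handing it an argv. This is a hedged sketch: the symbol name `ascot5_main` and the `--in=` option are assumptions about how the library build exposes things, not confirmed details of the patched source.

```cpp
#include <string>

// Assumed C entry point exposed by the ASCOT5 library build; the real name
// and signature depend on how the ASCOT5 source was made callable.
extern "C" int ascot5_main(int argc, char ** argv);

int runAscot5(const std::string & input_file)
{
  // Build an argv as if ASCOT5 had been invoked from the command line.
  std::string prog = "ascot5_main";
  std::string in_opt = "--in=" + input_file; // assumed input-file option
  char * argv[] = {prog.data(), in_opt.data(), nullptr};
  return ascot5_main(2, argv);
}
```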

@bielsnohr (Collaborator, Author) commented Dec 6, 2021

In the course of writing tests for driving ASCOT5 runs, I have encountered some strange variability in the particle velocity values produced by ASCOT5. My understanding is that the particle motions should be deterministic between runs, but that doesn't appear to be the case.

  • clarify with ASCOT developers where the variability is coming from

@helen-brooks
One thing you could check is how the random seed for the runs is generated. I've encountered MC codes that use the machine timestamp to generate a random seed, so there will be time-variability. Only if the code uses a fixed seed will it be the same between runs.
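
To illustrate the point in generic terms (this is not ASCOT5's actual RNG setup):

```cpp
#include <chrono>
#include <cstdint>
#include <random>

// A time-based seed differs on every run, so Monte Carlo results vary;
// a fixed seed reproduces the same sequence, and hence the same results.
std::mt19937_64 makeRng(bool reproducible)
{
  if (reproducible)
    return std::mt19937_64(12345u); // fixed seed: deterministic between runs

  auto t = std::chrono::steady_clock::now().time_since_epoch().count();
  return std::mt19937_64(static_cast<std::uint64_t>(t)); // varies per run
}
```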

@bielsnohr added this to the First Prototype milestone Feb 11, 2022
@bielsnohr (Collaborator, Author)

Commit f63eb8e completes the implementation of the generic read routines and the ability to read the endstate.
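
For the record, a generic read routine in this spirit can be quite small with the HDF5 C API. A minimal sketch, not the routines actually committed (the double-only type and whole-dataset read are simplifications):

```cpp
#include <hdf5.h>
#include <cstddef>
#include <string>
#include <vector>

// Read a whole 1-D (or flattened) double dataset at `path` from an open file.
std::vector<double> readDataset(hid_t file_id, const std::string & path)
{
  hid_t dset = H5Dopen(file_id, path.c_str(), H5P_DEFAULT);
  hid_t space = H5Dget_space(dset);
  hssize_t n = H5Sget_simple_extent_npoints(space);

  std::vector<double> data(static_cast<std::size_t>(n));
  H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data.data());

  H5Sclose(space);
  H5Dclose(dset);
  return data;
}
```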

@bielsnohr (Collaborator, Author)

Commit 553d601 completes the initial implementation of copying the endstate to the marker group in the HDF5 file.
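
The write direction is the mirror image; a sketch under the same caveats (the dataset path is a placeholder, not the marker-group layout actually written):

```cpp
#include <hdf5.h>
#include <string>
#include <vector>

// Write a 1-D double dataset at `path` (e.g. one endstate quantity copied
// into a hypothetical marker group) in an open HDF5 file.
void writeDataset(hid_t file_id, const std::string & path,
                  const std::vector<double> & data)
{
  hsize_t dims[1] = {data.size()};
  hid_t space = H5Screate_simple(1, dims, nullptr);
  hid_t dset = H5Dcreate(file_id, path.c_str(), H5T_NATIVE_DOUBLE, space,
                         H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

  H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data.data());

  H5Dclose(dset);
  H5Sclose(space);
}
```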
