MPI parallelisation #14

Open · aosprey opened this issue Sep 29, 2021 · 3 comments

aosprey commented Sep 29, 2021

Plan for parallelising the code with MPI:

Control code:

  • Create a module to hold parallelisation routines: mckpp_mpi_control (see the sketch after this list)
  • Sort out where MPI and XIOS initialization and finalization are done
  • Create a log file per MPI rank (optionally, only the root process writes logs)
  • Make sure mckpp_abort stops all ranks
  • Report timer statistics across ranks
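
A minimal sketch of what mckpp_mpi_control could look like, covering MPI initialization/finalization, a per-rank log file, and an abort that stops all ranks. The routine names, log-file naming scheme, and error code are assumptions, not the actual implementation:

```fortran
module mckpp_mpi_control

  use mpi

  implicit none

  integer :: mckpp_rank = 0, mckpp_nproc = 1
  integer :: log_unit = 6

contains

  subroutine mckpp_initialize_mpi()
    integer :: ierr
    character(len=32) :: log_file
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, mckpp_rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, mckpp_nproc, ierr)
    ! One log file per rank (assumed naming scheme)
    write(log_file, '(a,i4.4,a)') 'kpp_log_', mckpp_rank, '.txt'
    open(newunit=log_unit, file=trim(log_file), status='replace')
  end subroutine mckpp_initialize_mpi

  subroutine mckpp_finalize_mpi()
    integer :: ierr
    close(log_unit)
    call MPI_Finalize(ierr)
  end subroutine mckpp_finalize_mpi

  ! mckpp_abort must bring down every rank, not just the caller
  subroutine mckpp_abort(message)
    character(len=*), intent(in) :: message
    integer :: ierr
    write(log_unit, '(a)') 'ABORT: ' // trim(message)
    call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
  end subroutine mckpp_abort

end module mckpp_mpi_control
```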

Domain decomposition:

  • 1D domain decomposition over npts; define local array size npts_local (a sketch follows this list).
  • Identify whether some fields need to be held globally.
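
A sketch of the 1D decomposition over npts, assuming a simple block distribution with the remainder spread over the first ranks; the variable names npts_local and npts_offset are illustrative:

```fortran
subroutine mckpp_decompose_domain(npts, rank, nproc, npts_local, npts_offset)
  implicit none
  integer, intent(in)  :: npts, rank, nproc
  integer, intent(out) :: npts_local, npts_offset

  ! Block size, with the first mod(npts, nproc) ranks taking one extra point
  npts_local = npts / nproc
  npts_offset = rank * npts_local + min(rank, mod(npts, nproc))
  if (rank < mod(npts, nproc)) npts_local = npts_local + 1
end subroutine mckpp_decompose_domain
```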

Parallel routines:

  • Broadcast field
  • Scatter fields: 1D, 2D, 3D, etc. arrays (a 1D scatter sketch follows this list)
  • I don't think we need a gather
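
A sketch of a 1D scatter along npts using MPI_Scatterv, assuming the counts and displacements are derived from the decomposition above (gathered here with MPI_Allgather). A broadcast is simpler: MPI_Bcast over the whole field. Scattering fields dimensioned (npts, nz) is more involved; see the comment on memory layout further down.

```fortran
subroutine mckpp_scatter_field_1d(global_field, local_field, npts_local, root)
  use mpi
  implicit none
  real, intent(in)  :: global_field(:)   ! only significant on root
  real, intent(out) :: local_field(:)
  integer, intent(in) :: npts_local, root

  integer :: ierr, nproc, i
  integer, allocatable :: counts(:), displs(:)

  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
  allocate(counts(nproc), displs(nproc))

  ! Gather each rank's local size so every rank can build counts/displacements
  call MPI_Allgather(npts_local, 1, MPI_INTEGER, counts, 1, MPI_INTEGER, &
                     MPI_COMM_WORLD, ierr)
  displs(1) = 0
  do i = 2, nproc
    displs(i) = displs(i-1) + counts(i-1)
  end do

  call MPI_Scatterv(global_field, counts, displs, MPI_REAL, &
                    local_field, npts_local, MPI_REAL, root, &
                    MPI_COMM_WORLD, ierr)
end subroutine mckpp_scatter_field_1d
```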

XIOS:

  • Define decomposition
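
A very rough sketch of declaring the decomposition to XIOS, assuming the collapsed npts dimension is described as a distributed 1D axis. The axis id ("npts"), the use of xios_set_axis_attr, and the attribute names here are assumptions; the real call sequence depends on how the XIOS context is configured.

```fortran
subroutine mckpp_define_xios_decomposition(npts, npts_local, npts_offset)
  use xios
  implicit none
  integer, intent(in) :: npts, npts_local, npts_offset

  ! Each rank declares the global axis size plus its own slice
  call xios_set_axis_attr("npts", n_glo=npts, begin=npts_offset, n=npts_local)
end subroutine mckpp_define_xios_decomposition
```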

Parallel netcdf reads:

  • Have a root rank read the data and collapse the x,y dimensions into npts.
  • Scatter fields along npts (see the sketch below).
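
A sketch of the read-and-scatter pattern, assuming the root rank reads a 2D (nx, ny) variable with the netCDF Fortran library, collapses it to npts = nx*ny, and scatters slices along npts. The routine name and the mckpp_scatter_field_1d helper are from the sketches above, i.e. illustrative; netCDF status checks are omitted for brevity.

```fortran
subroutine mckpp_read_and_scatter(filename, varname, nx, ny, local_field, &
                                  npts_local, root)
  use netcdf
  use mpi
  implicit none
  character(len=*), intent(in) :: filename, varname
  integer, intent(in) :: nx, ny, npts_local, root
  real, intent(out) :: local_field(:)

  integer :: ierr, rank, ncid, varid, status
  real, allocatable :: global_2d(:,:), global_1d(:)

  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  if (rank == root) then
    allocate(global_2d(nx, ny), global_1d(nx*ny))
    status = nf90_open(filename, nf90_nowrite, ncid)
    status = nf90_inq_varid(ncid, varname, varid)
    status = nf90_get_var(ncid, varid, global_2d)
    status = nf90_close(ncid)
    ! Collapse the (x, y) dimensions into a single npts dimension
    global_1d = reshape(global_2d, [nx*ny])
  else
    allocate(global_1d(0))   ! contents only significant on root
  end if

  ! Scatter contiguous slices of npts to each rank
  call mckpp_scatter_field_1d(global_1d, local_field, npts_local, root)
end subroutine mckpp_read_and_scatter
```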

aosprey commented Sep 29, 2021

What makes the parallelisation tricky is that we want to parallelise over npts, but this dimension is contiguous in memory, since the arrays are dimensioned array(npts,nz). Scattering any 2D (or 3D) field therefore requires extracting non-contiguous chunks of data and packing them into a send buffer. If the dimensions were reversed and nz was contiguous, the scatter could be done with an MPI scatter routine directly (I think).

It would be better to fix this first #15
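
For illustration, a sketch of the packing that the current (npts, nz) layout forces on a 2D scatter: the root rank copies each destination rank's non-contiguous chunks (one per vertical level) into a contiguous send buffer before calling MPI_Scatterv. The counts/displs arguments follow the 1D scatter sketch above and are assumptions.

```fortran
subroutine mckpp_scatter_field_2d(global_field, local_field, counts, displs, &
                                  nz, root)
  use mpi
  implicit none
  real, intent(in)  :: global_field(:,:)   ! (npts, nz), only significant on root
  real, intent(out) :: local_field(:,:)    ! (npts_local, nz)
  integer, intent(in) :: counts(:), displs(:), nz, root

  integer :: ierr, rank, nproc, r, k, pos
  integer, allocatable :: send_counts(:), send_displs(:)
  real, allocatable :: sendbuf(:)

  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

  allocate(send_counts(nproc), send_displs(nproc))
  send_counts = counts * nz
  send_displs(1) = 0
  do r = 2, nproc
    send_displs(r) = send_displs(r-1) + send_counts(r-1)
  end do

  if (rank == root) then
    allocate(sendbuf(sum(send_counts)))
    ! Pack: for each destination rank, gather its npts slice at every level
    pos = 0
    do r = 1, nproc
      do k = 1, nz
        sendbuf(pos+1:pos+counts(r)) = &
          global_field(displs(r)+1:displs(r)+counts(r), k)
        pos = pos + counts(r)
      end do
    end do
  else
    allocate(sendbuf(0))   ! send buffer only significant on root
  end if

  call MPI_Scatterv(sendbuf, send_counts, send_displs, MPI_REAL, &
                    local_field, size(local_field), MPI_REAL, root, &
                    MPI_COMM_WORLD, ierr)
end subroutine mckpp_scatter_field_2d
```

With the dimensions reversed, i.e. array(nz, npts), each rank's block would already be contiguous and the packing loop would not be needed.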


aosprey commented Sep 29, 2021

Some progress on parallelisation has been made in the mpi branch.


aosprey commented Sep 29, 2021

We may want to have a data structure to hold global fields (as in CAM routines), e.g. kpp_global_fields.
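
A rough sketch of what such a container might look like, loosely following the CAM-style kpp_global_fields idea above; the field names inside the type are purely illustrative:

```fortran
module mckpp_global_fields
  implicit none

  type kpp_global_fields_type
    ! Full-domain arrays, typically allocated on the root rank only
    real, allocatable :: sst(:)             ! (npts)
    real, allocatable :: temperature(:,:)   ! (npts, nz)
    real, allocatable :: salinity(:,:)      ! (npts, nz)
  end type kpp_global_fields_type

  type(kpp_global_fields_type) :: kpp_global_fields

end module mckpp_global_fields
```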
