
Question about Very large scale mesh import #4

Open
altya opened this issue May 7, 2020 · 1 comment

altya commented May 7, 2020

Dear espreso developer,

Our team has run some large-scale efficiency tests with espreso, but so far our tests have been limited to meshes generated with the internal generators.

I noticed that the external mesh import approach uses a single full mesh file instead of a list of partitioned mesh files.
When dealing with a very large mesh (in our case we need to solve a mesh at the 10-billion scale), can the memory of a single machine hold such a large mesh file?
And does distributing the mesh to the different nodes take a lot of time?

best regards,
Benjamin

mec059 (Collaborator) commented May 7, 2020

Dear Benjamin,

Espreso contains a parallel loader based on the algorithm described here: https://ieeexplore.ieee.org/document/8820782. The paper describes the parallel loading of Ansys CDB database files. Currently, Espreso also supports VTK Legacy, XDMF, and EnSight files.

In our approach we read a sequential mesh database and compute the decomposition with ParMetis on the fly, instead of generating separate files for each MPI process before the computation. Hence, you can use the same file produced by a mesh generator for an arbitrary number of MPI processes. Since each MPI process loads only a part of the file and the mesh is never gathered on a single process, it is possible to load meshes as large as you need very quickly (e.g. a mesh with 800 million nodes and 500 million elements in 20 s using 3000 MPI processes).
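To make the idea concrete, here is a minimal, hypothetical sketch (not Espreso's actual loader) of the general pattern: each rank reads only its byte range of a single mesh file with MPI-IO and then computes a decomposition on the fly with ParMETIS_V3_PartMeshKway. The file name `mesh.cdb` and the `parse_chunk` helper are assumptions; alignment of chunks to record boundaries, the format-specific parsing, and the element migration step are omitted.

```cpp
#include <mpi.h>
#include <parmetis.h>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // 1) Every rank opens the same sequential mesh file and reads only its byte range.
    //    "mesh.cdb" is a placeholder file name.
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "mesh.cdb", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_Offset fsize = 0;
    MPI_File_get_size(fh, &fsize);
    MPI_Offset begin = fsize * rank / size;
    MPI_Offset end   = fsize * (rank + 1) / size;
    std::vector<char> chunk(static_cast<size_t>(end - begin));
    MPI_File_read_at_all(fh, begin, chunk.data(), static_cast<int>(chunk.size()),
                         MPI_CHAR, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    // 2) Parse the local chunk into distributed element connectivity
    //    (eptr/eind in the usual CSR-like layout, elmdist = element distribution
    //    over ranks). The parser is format specific and omitted here.
    std::vector<idx_t> elmdist, eptr, eind;
    // parse_chunk(chunk, elmdist, eptr, eind);   // hypothetical helper

    // 3) Compute the decomposition on the fly with ParMETIS; part[i] then gives
    //    the target rank of local element i, so the mesh is never gathered.
    idx_t wgtflag = 0, numflag = 0, ncon = 1, nparts = size;
    idx_t ncommonnodes = 3;                       // 3 shared nodes per face, e.g. tetrahedra
    std::vector<real_t> tpwgts(static_cast<size_t>(ncon) * nparts, real_t(1.0 / nparts));
    std::vector<real_t> ubvec(ncon, real_t(1.05));
    idx_t options[3] = {0, 0, 0}, edgecut = 0;
    std::vector<idx_t> part(eptr.empty() ? 0 : eptr.size() - 1);
    MPI_Comm comm = MPI_COMM_WORLD;
    if (!part.empty()) {   // guard only so the skeleton runs before parse_chunk exists
        ParMETIS_V3_PartMeshKway(elmdist.data(), eptr.data(), eind.data(), nullptr,
                                 &wgtflag, &numflag, &ncon, &ncommonnodes, &nparts,
                                 tpwgts.data(), ubvec.data(), options,
                                 &edgecut, part.data(), &comm);
    }

    MPI_Finalize();
    return 0;
}
```

The key property is that no rank ever holds the whole file or the whole mesh, which is what keeps the per-node memory footprint bounded even for the 10-billion-scale case you describe.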

Best regards,
Ondrej
