help for weak scalability test #2

Open
yun20190121 opened this issue Mar 20, 2020 · 3 comments

@yun20190121

Hello! The HTFETI method is efficient for large-scale problems. I want to use espreso to test weak scalability, so I ran espreso with the following commands:

./waf configure -m release --intwidth=64
./waf -j16
source /env/threading.default 1
mpirun -n 216 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 6 6 6 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 512 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 8 8 8 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 1000 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 10 10 10 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 1728 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 12 12 12 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 2744 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 14 14 14 6 6 6 4 4 4 FETI HYBRID_FETI

SOLVER TIME
 216 MPI ranks:   8.206 s
 512 MPI ranks:   8.66 s
1000 MPI ranks:   9.563 s
1728 MPI ranks:  10.59 s
2744 MPI ranks:  17.021 s

As the number of processes increases, the solver time grows too quickly. Did I choose a wrong example? Are there 3D structural mechanics examples suitable for testing weak scalability?
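For reference, the runs above differ only in the first three HEXA8 arguments, and the MPI rank count is always their product (6·6·6 = 216 up to 14·14·14 = 2744). Below is a minimal sweep sketch for such a weak-scaling series, under the assumption that the first triple is the cluster grid per axis while the remaining six arguments (domains per cluster and elements per domain) stay fixed; paths and arguments are taken from the commands above.

#!/bin/bash
# Weak-scaling sweep (sketch, assuming the argument layout described above):
# the per-rank problem size stays fixed while the cluster grid, and hence
# the number of MPI ranks, grows as N^3.
ECF=benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf
for N in 6 8 10 12 14; do
    RANKS=$((N * N * N))
    mpirun -n ${RANKS} ./build/espreso -c ${ECF} -vvv \
        HEXA8 ${N} ${N} ${N} 6 6 6 4 4 4 FETI HYBRID_FETI
done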

mec059 (Collaborator) commented Mar 21, 2020

Hello,

In general, we use the benchmarks only to check solver correctness. Nevertheless, you can also use them for scalability testing.

In your case, the increasing solver runtime is probably caused by domains that are too small (only 4 × 4 × 4 elements per domain). Hence, try to increase the domain sizes. We evaluated this benchmark on our cluster (with the DIRICHLET preconditioner instead of LUMPED) and got the following times:

mpirun -n 216 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 6 6 6 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 1728 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 12 12 12 3 3 3 8 8 8 FETI HYBRID_FETI
mpirun -n 2744 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 14 14 14 3 3 3 8 8 8 FETI HYBRID_FETI

 216 MPI ranks:  5.2 s
1728 MPI ranks:  5.1 s
2744 MPI ranks:  5.57 s

On which machine are you evaluating this benchmark?

Best Regards,
O. Meca
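
For context, a worked check of the suggestion above, assuming the middle HEXA8 triple is domains per cluster and the last triple is elements per domain: both configurations keep the per-rank load identical, while the recommended one uses fewer, larger domains.

original:   6 x 6 x 6 domains x (4 x 4 x 4 elements) = 216 x 64  = 13824 elements per rank
suggested:  3 x 3 x 3 domains x (8 x 8 x 8 elements) =  27 x 512 = 13824 elements per rank

Fewer but larger subdomains per rank generally help FETI-type solvers: each local factorization does more work, and the coarse problem, whose size grows with the number of domains, stays smaller.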

@yun20190121 (Author)

Thank you! Another question: neither espreso-master nor espreso-readex supports the GPU, and the GPU code is commented out in itersolverGPU.cpp. Is there an espreso version that supports the GPU?

mec059 (Collaborator) commented Mar 23, 2020

The GPU version is currently available only for our internal use. The public version should be available in August.
