Parallelise eigen{vec,val} calculations #199
Conversation
Codecov Report
Base: 86.82% // Head: 86.88% // Increases project coverage by +0.05%
Additional details and impacted files:
```
@@            Coverage Diff             @@
##             main     #199      +/-   ##
==========================================
+ Coverage   86.82%   86.88%   +0.05%
==========================================
  Files          22       22
  Lines        1829     1837       +8
==========================================
+ Hits         1588     1596       +8
  Misses        241      241
```
☔ View full report at Codecov.
This is encouraging to see! I think it would be nice to stick the matrix inversion logic into its own function (maybe not even as a method on Ion) to help keep the level_populations method clean, since that method is already becoming quite complex. This would also make future experiments with performance much easier.
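A minimal sketch of what such a standalone helper might look like (the name and signature here are hypothetical, not fiasco's actual API):

```python
import numpy as np

def solve_level_populations(rate_matrix):
    """Solve the level-balance equations for a single rate matrix.

    Hypothetical standalone helper (name and signature are assumptions,
    not fiasco's actual API). It replaces one balance equation with the
    normalisation constraint sum(n_i) = 1 and solves the linear system,
    the standard trick for the otherwise-singular rate matrix.
    """
    a = np.array(rate_matrix, dtype=float)
    b = np.zeros(a.shape[0])
    a[-1, :] = 1.0  # normalisation: level populations sum to one
    b[-1] = 1.0
    return np.linalg.solve(a, b)
```

Keeping the numerical kernel out of Ion would also make it straightforward to benchmark or swap out solvers without touching level_populations itself.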
Good shout - I can do that in another PR. Worth noting that in the current single (Python) thread case, numpy seems to be doing some parallelisation of its own.
Ah ok, I was just about to ask whether the case in your plot labelled "single-threaded" was actually single threaded (i.e. OMP_NUM_THREADS=1 was actually being enforced) or whether numpy was doing some parallelization. Interesting that it seems to be slower. Thinking more about your plot, I am surprised that the execution time scales so strongly with the number of temperature points and that multiprocessing does so much better. I had (maybe naively) assumed that it would be hard to beat the vectorization over temperature provided by numpy.
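As an aside, one way to make sure a baseline really is single threaded is to pin the underlying BLAS/OpenMP thread pools, for example with the threadpoolctl package (setting OMP_NUM_THREADS=1 in the environment before numpy is imported also works):

```python
import numpy as np
from threadpoolctl import threadpool_limits

a = np.random.rand(200, 200)

# Pin all BLAS/OpenMP thread pools to a single thread for this block, so
# any remaining speedup must come from explicit parallelisation rather
# than numpy's own threading.
with threadpool_limits(limits=1):
    w, v = np.linalg.eig(a)
```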
My guess is that numpy isn't doing anything especially clever with the vectorisation over temperature, so the eigendecompositions still happen one at a time under the hood. If anyone else wants to do some testing, here's the code I used:

```python
from datetime import datetime

import astropy.units as u
import numpy as np

from fiasco import Ion

if __name__ == '__main__':
    # Temperature grids of increasing size: 2, 4, ..., 64 points
    ns = 2**np.arange(1, 7)
    times = {}
    for n in ns:
        print(n)
        Te = np.geomspace(0.1, 100, n) * u.MK
        ne = 1e8 * u.cm**-3
        ion = Ion('Fe XII', Te)
        # Time only the contribution function calculation
        t = datetime.now()
        contribution_func = ion.contribution_function(ne)
        times[n] = (datetime.now() - t).total_seconds()
    print(times)
```
xref #26. I realised that it should be possible to parallelise the computation over many different matrices. I'm not sure this is the right approach, as in my experience a naïve multiprocessing implementation is rarely the best way of doing parallel stuff, but I'm opening this for discussion. With the following code, parallelising the eigen{vector, value} calculation speeds it up from ~26 s to ~14 s in total for me.
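For reference, a minimal sketch of this kind of approach: the eigendecompositions for different temperatures are independent, so they can be farmed out to a process pool. The helper names here are illustrative, not the PR's actual diff:

```python
import multiprocessing

import numpy as np

def _eig_one(matrix):
    # Top-level function so it can be pickled by multiprocessing
    return np.linalg.eig(matrix)

def eig_parallel(matrices, processes=None):
    """Eigendecompose a stack of independent matrices in parallel.

    Each temperature point has its own matrix, so the decompositions
    are embarrassingly parallel across a process pool.
    """
    with multiprocessing.Pool(processes=processes) as pool:
        results = pool.map(_eig_one, list(matrices))
    eigvals = np.array([r[0] for r in results])
    eigvecs = np.array([r[1] for r in results])
    return eigvals, eigvecs

if __name__ == '__main__':
    # 64 random 100x100 matrices standing in for per-temperature rate matrices
    rng = np.random.default_rng(0)
    stack = rng.random((64, 100, 100))
    vals, vecs = eig_parallel(stack)
    print(vals.shape, vecs.shape)  # (64, 100), (64, 100, 100)
```

For comparison, np.linalg.eig also accepts a stacked (n, M, M) array directly, so the vectorised single-process call is the natural baseline; numpy still performs the stacked decompositions one at a time internally, which may explain why a process pool wins here.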