About the Python benchmark: it would be great to also check the results using NumPy and/or Numba (and PyPy), since nowadays hardly anyone uses pure Python when performance matters and an alternative is available.
Thanks!
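To make the suggestion concrete, here is a minimal sketch (not from the benchmark repository itself; the workload and sizes are made up for illustration) of how the same task can be timed in pure Python versus NumPy:

```python
# Hypothetical micro-benchmark: sum of squares, pure Python vs. NumPy.
import timeit
import numpy as np

N = 100_000

def sum_of_squares_pure(n):
    # Plain interpreter loop -- the "pure Python" baseline.
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_numpy(n):
    # Vectorized version: the loop runs in compiled NumPy code.
    a = np.arange(n, dtype=np.int64)
    return int(np.sum(a * a))

# Both implementations must agree before comparing their timings.
assert sum_of_squares_pure(N) == sum_of_squares_numpy(N)

t_pure = timeit.timeit(lambda: sum_of_squares_pure(N), number=10)
t_np = timeit.timeit(lambda: sum_of_squares_numpy(N), number=10)
print(f"pure Python: {t_pure:.4f}s  NumPy: {t_np:.4f}s")
```

A Numba variant would only add an `@numba.njit` decorator to the pure-Python function; PyPy would run the pure-Python version unchanged.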
Generally, it would be interesting to allow a fairer comparison between compiled and interpreted languages by extending the data-collection script to also measure the Make script (compilation) step and add it to the results for the first run; this should give a relevant comparison even for languages that use a virtual machine or interpreter. The energy consumption, time, and memory for compilation should be shown in separate columns of the result tables.
When interpreting and discussing the results, one could then consider different trade-offs for different use cases. For a program that handles huge datasets but seldom needs to be updated, the cost of compilation hardly matters. But for interactive use on small datasets, where the program is tweaked until it works or until the diagrams are annotated as desired, the trade-off can be completely different. The second situation is the kind of use in which many students and scientists work with Python (typically with NumPy for the actual array operations).
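A sketch of what such an extension to the collection script might look like, assuming the benchmarks expose Makefile targets (the target names below are placeholders, and a no-op command stands in for `make` so the snippet runs on its own):

```python
# Hypothetical sketch: time the compile step separately from the first
# run, then report both so they can go in separate result columns.
import subprocess
import sys
import time

def timed(cmd):
    """Run a command to completion and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# In the real script these would be the benchmark's Makefile targets,
# e.g. timed(["make", "compile"]) and timed(["make", "run"]);
# a no-op Python invocation stands in here so the sketch is runnable.
compile_s = timed([sys.executable, "-c", "pass"])    # stand-in for the compile target
first_run_s = timed([sys.executable, "-c", "pass"])  # stand-in for the first run
print(f"compile: {compile_s:.3f}s  first run: {first_run_s:.3f}s  "
      f"total: {compile_s + first_run_s:.3f}s")
```

Energy and memory for the compile step could be captured the same way, by wrapping the compile command with the same RAPL/`time`-style measurement already used for the runs.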
Very interesting initiative!