Memory issues
#1378
I do not really know what could cause this, but there are hints that there could be memory leaks in Xarray. Following the advice there, can you try the steps described there before creating the ROMS reader(s)?
You could also experiment with opening the ROMS files/collection with Xarray explicitly, with the possibility to tweak options, and then pass that dataset to the reader instead of filenames.
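A minimal sketch of what this could look like. The file pattern, chunking and concatenation options are assumptions to be tuned for your dataset; passing an already-open xarray Dataset to `reader_ROMS_native.Reader` via the `filename=` argument works in recent OpenDrift versions, but verify against your (modified) reader:

```python
# Hedged sketch: open the ROMS collection explicitly with xarray so that
# backend and chunking options can be tuned, then hand the open dataset
# to the reader instead of a list of filenames.
import xarray as xr
from opendrift.readers import reader_ROMS_native

ds = xr.open_mfdataset(
    'roms_output_*.nc',        # hypothetical file pattern
    engine='h5netcdf',         # alternative backend, as suggested in #1241
    chunks={'ocean_time': 1},  # one time step per chunk to limit memory
    data_vars='minimal',       # do not concatenate static variables per file
    coords='minimal',
    compat='override',
)
reader = reader_ROMS_native.Reader(filename=ds)
```

This also lets you inspect `ds` (e.g. its chunk sizes and total in-memory footprint) before the simulation starts.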
-
Hello,
I have been running into memory issues with OpenDrift, and I struggle to understand what is causing them.
I am using a slightly modified version of the reader_ROMS_native.py script, but the input files are opened with the same `open_mfdataset` method. The simulation runs for 70 days and I have hourly ROMS output files, so there is quite a large number of files to read.
When I advect my particles with the Euler scheme, the simulation runs smoothly. However, when I try to use the "runge-kutta" or "runge-kutta4" schemes, the simulation is killed by Linux (I use a Linux laptop). Reducing the number of particles seems to help, but because of the stochastic component of particle motion, the results are not statistically meaningful unless I use a high enough number of particles.
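For reference, this is how the advection scheme is switched in OpenDrift (a config fragment, assuming the standard `OceanDrift` model; the higher-order schemes evaluate velocities at intermediate positions and times, which plausibly forces the reader to buffer a larger data block than plain Euler):

```python
# Hedged sketch: select the advection scheme via OpenDrift's config system.
from opendrift.models.oceandrift import OceanDrift

o = OceanDrift()
# 'euler' runs fine for me; 'runge-kutta' / 'runge-kutta4' get OOM-killed.
o.set_config('drift:advection_scheme', 'runge-kutta4')
```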
I am looking at interannual and seasonal variability of larval dispersion, so I run multiple scenarios using forcing from different seasons and years. The memory issue does not happen for every scenario, but I have noticed that the scenarios which produce the following warning are the ones that get killed by Linux because of memory usage (oom-kill in the log files): "Data block from NEATL not large enough to cover element positions within timestep. Buffer size (7) must be increased. See `Variables.set_buffer_size`."
I found a previous post with a similar issue, #1241, and tried the solution suggested there (use the engine="h5netcdf" argument in the `open_mfdataset` call), but in my case it does not seem to make any difference.
I am using version 1.11.12.
Any help is appreciated, thanks!
Soizic