-
Sorry for the late reply; I've been busy at work. Yes, that would be great. How big is your modification? If it's a lot of code, can it be incorporated in a modular way into the existing …
-
Hi all,
I'm working with a model that includes a large number of agents (10,000). I need to run several iterations of the model to validate some hypotheses and gather statistical data.
The main issue I'm encountering is with the batch_run function, which accumulates all results in a single in-memory list. Handling this list and converting it into a DataFrame is challenging, even with chunking or other strategies.
Currently, I have written custom parallel code that runs each model with its own parameters and then saves the results from the two data collectors (agents' and model's). However, it might be beneficial to enhance the batch_run function so that users can save intermediate results directly. This would make managing large sets of results more efficient.
Additionally, if anyone has any other optimal strategies or suggestions for managing large sets of results efficiently, I would greatly appreciate your input.
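One pattern that has worked for me is the one described above: run each parameter set on its own, then append that run's results to a file on disk immediately, so no run's data has to stay in memory once it is written. Below is a minimal sketch of that idea. The function names (`run_model`, `batch_run_to_disk`) and the toy result frame are hypothetical stand-ins, not part of Mesa's API; in a real Mesa model you would return `model.datacollector.get_model_vars_dataframe()` (or the agents' frame) from the run function instead.

```python
import os
import tempfile

import pandas as pd

def run_model(run_id: int, n_agents: int, seed: int) -> pd.DataFrame:
    # Hypothetical stand-in for one model run. A real Mesa run would
    # build and step the model, then return the collected DataFrame.
    values = pd.Series(range(n_agents)) * seed
    return pd.DataFrame({
        "run_id": run_id,          # scalar broadcasts across rows
        "step": range(n_agents),
        "value": values,
    })

def batch_run_to_disk(param_sets, out_path: str) -> None:
    """Run each parameter set and append its results to one CSV.

    Only one run's DataFrame is ever held in memory; the header is
    written just once, when the file does not yet exist.
    """
    for run_id, params in enumerate(param_sets):
        df = run_model(run_id, **params)
        df.to_csv(
            out_path,
            mode="a",
            header=not os.path.exists(out_path),
            index=False,
        )
        del df  # release the run's results immediately

param_sets = [{"n_agents": 5, "seed": s} for s in (1, 2, 3)]
out_path = os.path.join(tempfile.mkdtemp(), "results.csv")
batch_run_to_disk(param_sets, out_path)

results = pd.read_csv(out_path)
print(len(results))  # 3 runs x 5 rows each = 15
```

The same loop body also works inside a multiprocessing pool, as long as each worker writes to its own file (or results are appended by the parent process) to avoid concurrent writes to one CSV. Writing per-run Parquet files instead of one CSV is another option if the agent-level data is large.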
Thanks!