Custom Tests - Model Scope #711
In the module where you define the custom tests to be run, you could add a pytest fixture that computes a solution once. Then you use that solution in your test functions and pull out the flux values that you need. Something like:

```python
import pytest


@pytest.fixture(scope="module")
def glucose_minimal_solution(model):
    # Set medium, objective, and constraints here.
    return model.optimize()  # or use pFBA


def test_acetate_secretion(glucose_minimal_solution):
    # experimental_value: the measured flux from your dataset
    assert glucose_minimal_solution["EX_ac_e"] == experimental_value


def test_lactate_secretion(glucose_minimal_solution):
    assert glucose_minimal_solution["EX_la_e"] == experimental_value
```

You could even parametrize those tests, but that's probably going too far.
Hi @Midnighter, thanks for the feedback. This was actually the first thing I tried, but I got an error because you are then passing the model fixture (scoped at the function level) into a fixture that is scoped at the module level. I realized that pytest doesn't allow you to pass lower-scoped fixtures as arguments to a higher-scoped fixture (which makes sense), so the test errors out. I think I can only do what you suggest if memote allows users to access a shared (e.g. session-scoped) version of the model object...
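The error described above can be reproduced in isolation. The sketch below writes a minimal test module in which a module-scoped fixture depends on a function-scoped one and runs pytest on it; the fixture bodies are toy stand-ins, not memote's actual fixtures.

```python
import pathlib
import tempfile
import textwrap

import pytest

# A module-scoped fixture that depends on a function-scoped fixture triggers
# pytest's ScopeMismatch check at test setup time.
TEST_CODE = textwrap.dedent(
    """
    import pytest

    @pytest.fixture(scope="function")
    def model():
        return {"name": "toy model"}  # stand-in for memote's model fixture

    @pytest.fixture(scope="module")
    def glucose_minimal_solution(model):
        return {"EX_ac_e": 8.5}  # stand-in for model.optimize()

    def test_acetate_secretion(glucose_minimal_solution):
        assert glucose_minimal_solution["EX_ac_e"] == 8.5
    """
)

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "test_scope_demo.py"
    path.write_text(TEST_CODE)
    exit_code = pytest.main([str(path), "-q", "-p", "no:cacheprovider"])

print(int(exit_code))  # non-zero: pytest reports a ScopeMismatch error
```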
I added a session-scoped model fixture to my local memote codebase (in collect.py) to see if that gets rid of the error, and it does. I am then able to do what you suggest (by using "session_scoped_model" rather than "model" as the input argument). I'm not sure this is something that should be added to memote, though.
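The workaround works because a module-scoped fixture is allowed to depend on a session-scoped one. The sketch below verifies that scope combination with toy fixture bodies; the name "session_scoped_model" follows the comment above, but the loading code inside it is an assumption, not memote's actual implementation.

```python
import pathlib
import tempfile
import textwrap

import pytest

# A module-scoped fixture depending on a session-scoped fixture is valid, so
# the ScopeMismatch from the previous comment goes away.
TEST_CODE = textwrap.dedent(
    """
    import pytest

    @pytest.fixture(scope="session")
    def session_scoped_model():
        return {"name": "toy model"}  # stand-in for the loaded model

    @pytest.fixture(scope="module")
    def glucose_minimal_solution(session_scoped_model):
        return {"EX_ac_e": 8.5}  # stand-in for model.optimize()

    def test_acetate_secretion(glucose_minimal_solution):
        assert glucose_minimal_solution["EX_ac_e"] == 8.5
    """
)

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "test_session_demo.py"
    path.write_text(TEST_CODE)
    exit_code = pytest.main([str(path), "-q", "-p", "no:cacheprovider"])

print(int(exit_code))  # 0 when the test passes
```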
I changed that one to function scope because otherwise the model context, which is supposed to revert changes, does not work properly. Thus modifications to the model from all the tests would accumulate and return nonsense. You would indeed need the previous read_only_model for that.
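The accumulation problem can be shown with a plain-Python toy (not cobrapy; the class and reaction bound are invented for the sketch). cobrapy's `with model:` context reverts changes on exit, but that only protects tests that remember to use it; a shared session-scoped model otherwise carries every modification forward.

```python
# Toy illustration: why one mutable model shared across tests, without
# reverting changes, makes later tests see stale state.

class ToyModel:
    def __init__(self):
        # Glucose exchange open for uptake (lower bound -10).
        self.bounds = {"EX_glc__D_e": (-10.0, 1000.0)}


shared_model = ToyModel()


def test_block_glucose():
    # This test tightens a bound but never reverts it.
    shared_model.bounds["EX_glc__D_e"] = (0.0, 0.0)


def current_glucose_lower_bound():
    # A later test would see glucose uptake blocked, which is nonsense
    # for a test that assumes a fresh model.
    lower, _ = shared_model.bounds["EX_glc__D_e"]
    return lower


test_block_glucose()
print(current_glucose_lower_bound())  # 0.0 instead of the original -10.0
```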
Question
I have written a set of custom tests that compare model predictions against steady-state data. Say I have a model that produces acetate and lactate while growing on glucose. I constrain glucose uptake based on the experimental data, optimize using a custom objective, and then in Test 1 compare the predicted acetate exchange flux against the experimental value, and in Test 2 do the same for lactate. I want to keep these tests separate so that they appear as separate metrics in a memote report.
However, because the model fixture is scoped at the function level in memote/suite/collect.py, I have to optimize the model for each test, even though it generates exactly the same predictions (the feed constraints are identical). I'm wondering whether there is a way to optimize the model only once and have a set of tests receive that same predicted flux distribution.
I noticed that there used to be a "read_only_model" fixture (see memote v0.5.0) scoped at the session level, but it looks like it has been removed. Are there any plans to bring something like that back?