
Custom Tests - Model Scope #711

Open
3 tasks done
jonstrutz11 opened this issue Feb 23, 2021 · 4 comments


@jonstrutz11

jonstrutz11 commented Feb 23, 2021


Question

I have written a set of custom tests that compare model predictions against steady-state data. Say I have a model growing on glucose and producing acetate and lactate. I constrain glucose uptake based on the experimental data, optimize using a custom objective, and then, in Test 1, compare the predicted acetate exchange flux against the experimental value, and in Test 2 do the same for lactate. I want to keep these tests separate so that they show up as separate metrics in the Memote report.

However, because the model fixture is scoped at the function level in memote/suite/collect.py, I have to optimize the model in each test, even though every test produces exactly the same predictions (the feed constraints are identical). I'm wondering if there is a way to optimize the model only once and have a set of tests share that predicted flux distribution.

I noticed that there used to be a "read_only_model" (see Memote v0.5.0) scoped at the 'session' level, but it looks like it has been removed. Are there any plans to bring something like that back?

@Midnighter
Member

Midnighter commented Feb 23, 2021

In your module where you define the custom tests to be run, you could add a pytest fixture that computes a solution. Then you use that solution in your test functions and pull out the flux values that you need. Something like:

import pytest


@pytest.fixture(scope="module")
def glucose_minimal_solution(model):
    # Set the medium, objective, and constraints here, then solve once
    # for all tests in this module.
    return model.optimize()  # or use pFBA


def test_acetate_secretion(glucose_minimal_solution):
    # Compare the predicted acetate exchange flux to the measured value;
    # pytest.approx avoids exact floating-point comparison.
    assert glucose_minimal_solution.fluxes["EX_ac_e"] == pytest.approx(experimental_value)


def test_lactate_secretion(glucose_minimal_solution):
    assert glucose_minimal_solution.fluxes["EX_la_e"] == pytest.approx(experimental_value)

You could even parametrize that test but that's probably going too far.

@jonstrutz11
Author

jonstrutz11 commented Feb 23, 2021

Hi @Midnighter, thanks for the feedback.

This was actually the first thing I tried, but I got an error because the module-scoped fixture then requests the model fixture, which is scoped at the function level. pytest doesn't allow a fixture to depend on another fixture with a narrower scope (which makes sense).

[screenshot of the error traceback]

This results in the test returning an error.

I think I can only do what you suggest if Memote allows users to access a shared (e.g. session-scoped) version of the model object...
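The rule pytest enforces here can be summarized as a scope ordering: a fixture may only request fixtures of equal or wider scope. A toy sketch of the rule (not pytest's actual implementation):

```python
# pytest's fixture scopes, from narrowest to widest.
SCOPE_ORDER = {"function": 0, "class": 1, "module": 2, "package": 3, "session": 4}


def may_request(requesting_scope: str, requested_scope: str) -> bool:
    """A fixture may only depend on fixtures of equal or wider scope."""
    return SCOPE_ORDER[requested_scope] >= SCOPE_ORDER[requesting_scope]


# A module-scoped fixture requesting the function-scoped `model`
# is exactly the case pytest rejects with a ScopeMismatch error:
print(may_request("module", "function"))  # False
print(may_request("module", "session"))   # True
```

Wider-scoped fixtures outlive narrower ones, so the dependency has to point from short-lived to long-lived, never the other way around.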

@jonstrutz11
Author

jonstrutz11 commented Feb 23, 2021

I added a session-scoped model fixture to my local Memote codebase (in collect.py) to see whether that gets rid of the error, and it does. I can then do what you suggested, by using "session_scoped_model" instead of "model" as the input argument. I'm not sure this is something that should be added to Memote, though.

[screenshot]

@Midnighter
Member

I changed that fixture to function scope because otherwise the model context, which is supposed to revert changes, does not work properly: modifications to the model from all the tests accumulate and the results become nonsense.

You would indeed need the previous session-scoped read_only_model fixture back, and then use it in your module-level fixture.
