Add comparison to pre-computed loglike to tests #32

Open
itrharrison opened this issue Oct 13, 2021 · 4 comments

@itrharrison
Collaborator

To check the stability of the likelihood calculations after any refactoring is done, we should add a test which compares the loglike at one or more specified points in parameter space to pre-computed values stored in the test.
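
A minimal sketch of what such a test could look like (the import path, likelihood class, fiducial point, and reference value below are all placeholders, not actual values from the repository):

```python
import numpy as np

from solike.mylike import MyLike  # hypothetical import path and class

# Placeholder fiducial point and pre-computed reference value.
FIDUCIAL_PARAMS = {"omch2": 0.1200, "H0": 67.36, "logA": 3.044, "ns": 0.9649}
EXPECTED_LOGLIKE = -1234.5678


def test_loglike_against_reference():
    like = MyLike()
    loglike = like.loglike(**FIDUCIAL_PARAMS)
    # Tight tolerance so that any refactoring which changes the numerics is flagged.
    assert np.isclose(loglike, EXPECTED_LOGLIKE, rtol=1.0e-6)
```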

@itrharrison itrharrison added the tests Improvements to tests label Oct 13, 2021
@itrharrison itrharrison self-assigned this Oct 13, 2021
@timothydmorton
Collaborator

I think it would actually be a good idea to have a standard way to store this information for each likelihood, similar to the standard convention we have for mylike.yaml, which makes it easy to extend the existing tests to new likelihoods. One simple way to do this would be to store a data file of some format (e.g., something like mylike_fiducial.yaml) containing the point(s) in parameter space and the associated chisq values, and to structure each test to read in those inputs rather than hard-coding the fiducial points/chisq values in test_mylike.py.
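
As a sketch of this convention (the file layout, keys, and values below are assumptions, not an existing format):

```python
import numpy as np
import yaml

from solike.mylike import MyLike  # hypothetical import path, as above

# Assumed layout for mylike_fiducial.yaml:
# points:
#   - params: {omch2: 0.1200, H0: 67.36}
#     chi2: 2469.1357
#   - params: {omch2: 0.1250, H0: 66.00}
#     chi2: 2475.8642


def load_fiducial(path="mylike_fiducial.yaml"):
    with open(path) as f:
        return yaml.safe_load(f)


def test_loglike_against_fiducial_file():
    fiducial = load_fiducial()
    like = MyLike()
    for point in fiducial["points"]:
        # Compare chi2 = -2 * loglike at each stored point to its stored value.
        chi2 = -2.0 * like.loglike(**point["params"])
        assert np.isclose(chi2, point["chi2"], rtol=1.0e-6)
```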

@itrharrison
Collaborator Author

itrharrison commented Oct 26, 2021

This sounds good. Would you be happy for evaluations of the likelihood at some number of points in parameter space to replace the current MCMC test runs? Say, the number of points necessary for computing a Fisher matrix?

I suggest replacing the MCMC test runs because the tests currently take ~20 minutes to run, which feels a bit long to me.
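
For reference, a finite-difference Fisher matrix is built from the likelihood evaluated at the fiducial point plus small steps in each parameter; a sketch of generating such a set of points (the parameter names, fiducial values, and step sizes are placeholders):

```python
# Placeholder fiducial point and step sizes; the real values would come from
# whatever fiducial convention is adopted above.
FIDUCIAL = {"omch2": 0.1200, "H0": 67.36}
STEPS = {"omch2": 0.0012, "H0": 0.5}


def evaluation_points(fiducial, steps):
    """Fiducial point plus +/- one step in each parameter."""
    points = [dict(fiducial)]
    for name, step in steps.items():
        for sign in (+1.0, -1.0):
            shifted = dict(fiducial)
            shifted[name] += sign * step
            points.append(shifted)
    return points

# The loglike at each of these points could then be compared to a stored
# reference value, exactly as in the single-point test sketched earlier.
```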

@timothydmorton
Collaborator

I like the idea of testing that MCMC runs start, just to make sure a fit can be initialized properly, etc., but I wouldn't say it's 100% necessary. Is it really the MCMC tests that are taking so long? The lensing likelihood as currently implemented takes a good bit of time to initialize, so maybe having to do that multiple times accounts for most of the runtime.

Feel free to experiment: if dropping the MCMC testing really cuts down the time, it might be worth it to improve development cycle times. There's also the option of marking tests and setting up separate GitHub Actions workflows for smaller unit tests and larger, longer-running tests; e.g., the former could run on every commit and the latter (which could include the MCMC runs and other long-running things) only on PRs.
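
One way to make this split is with pytest markers (the "slow" marker name and test bodies below are illustrative only; the marker would also need to be registered in the pytest configuration):

```python
import pytest


@pytest.mark.slow
def test_mcmc_run_starts():
    # Long-running check that a short MCMC run can initialize and take steps.
    ...


def test_loglike_against_reference():
    # Quick unit-level check against a stored loglike value.
    ...

# A per-commit workflow could then run `pytest -m "not slow"`, while a
# PR-triggered workflow runs the full suite (including the MCMC tests).
```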

@itrharrison itrharrison added this to the v0.1 milestone Dec 2, 2022
@mgerbino
Collaborator

Comparison to pre-computed chi2 is done in test_mflike, test_lensing, and test_clusters. I think everything is also in place to add such a comparison in test_xcorr. Is there anything more specific we want to add here, e.g., using the same set of theory params (cosmo, fg, accuracy) to be consistent among likelihoods?
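
A sketch of one way to share those theory settings (the module name, parameter names, and values are placeholders, not the actual fiducial choices):

```python
# e.g. soliket/tests/fiducial_params.py (hypothetical module)

FIDUCIAL_COSMO = {
    "omch2": 0.1200, "ombh2": 0.02237, "H0": 67.36,
    "logA": 3.044, "ns": 0.9649, "tau": 0.0544,
}
FIDUCIAL_FG = {"a_tSZ": 3.3, "a_kSZ": 1.6}              # placeholder foregrounds
ACCURACY = {"lmax": 5000, "lens_potential_accuracy": 1}  # placeholder accuracy


def fiducial_info():
    """Combined theory settings for test_mflike, test_lensing, test_clusters,
    and test_xcorr to import, so all likelihoods are evaluated with the same
    inputs."""
    return {**FIDUCIAL_COSMO, **FIDUCIAL_FG, **ACCURACY}
```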
