Add frf noise cov #54
base: master
Conversation
Here are some test results with the new correlation-matrix-based correction factors for the noise variances. To tease this out cleanly, we used noise-only simulations and only the central LSTs for the non-incoherently-averaged PS (top row in the plots below), since the edge LSTs have quite different statistics compared to the central ones. With the original correction factors, there is a ~10% error in the standard deviation in the top panel; with the new correction, the data in the top panel now look standardized. The bottom panel doesn't visually change depending on which correction factor we use, but the MAD statistic improves (gets closer to 1) with the new correction factor.
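For context, here is the kind of correction being discussed, as an illustrative sketch rather than the notebook's actual `FRF_cov_calc` implementation (the function name and weighting scheme below are assumptions): the variance of a weighted average of correlated noise samples is w·C·w, not the naive uncorrelated sum, so the noise standard deviation needs a correlation-matrix-based correction factor.

```python
import numpy as np

def corrected_noise_var(var, corr, wgts=None):
    """Variance of a weighted average of correlated noise samples.

    var  : per-sample noise variances, shape (N,)
    corr : noise correlation matrix, shape (N, N)
    wgts : averaging weights (uniform if None)
    """
    var = np.asarray(var, dtype=float)
    n = var.size
    w = np.full(n, 1.0 / n) if wgts is None else np.asarray(wgts, float) / np.sum(wgts)
    cov = corr * np.sqrt(np.outer(var, var))  # covariance from correlation + variances
    return float(w @ cov @ w)

n = 8
# Uncorrelated samples: reduces to the naive sum(w**2 * var) = 1/8
naive = corrected_noise_var(np.ones(n), np.eye(n))
# Fully correlated samples: averaging does not beat down the noise at all
full = corrected_noise_var(np.ones(n), np.ones((n, n)))
```

With an identity correlation matrix this recovers the usual 1/N noise reduction; a fully correlated matrix gives no reduction, which is the regime where an uncorrected noise estimate would be badly off.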
@jsdillon I don't know to what extent we want to finish all the items in the checklist above before merging (also, there is a cell with a known bug in it related to converting to pI after coherent averaging). In principle one could go as far as propagating these matrices to the delay spectra themselves, which could range from easy to hard depending on how much is going on under the hood in pspec. My inclination is to leave this here (perhaps with an earmark, since I've only looked at 1 baseline), because the noise-dominated delay bins look Gaussian anyway after incoherent averaging and the correction factors appear to be doing the right thing, as noted in the previous comment.
@@ -83,7 +83,9 @@
"import hera_pspec as hp\n",
Line #28. from HERA_FRF_cov import FRF_cov_calc
I don't think this is a HERA team repo, and we would like to avoid dependencies outside HERA team (except big public repos, of course). Do we need it? And if so, can we get this code in a HERA repo (or in a new one owned by hera_team) with all the normal unit testing and coverage, etc.?
Line #1. def main_beam_FR_filter(data, wgts, tslice=tslice, cache={}, bls=None, suppression_factors=[FR_EIGENVAL_CUTOFF]):
Do we need this change?
Line #18. avg_data_copy = copy.deepcopy(avg_data)
This feels kinda hacky. I think we can probably architect the main analysis cell (which does the FRF, coherent time average, and pseudo-Stokes conversion) better so that these keys aren't needed here.
Or change the pseudo-Stokes step to create a brand-new DataContainer rather than adding keys to an existing one.
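A minimal sketch of the "brand-new container" alternative, with a plain dict standing in for hera_cal's `DataContainer` and assuming the pI convention pI = (EE + NN) / 2 (the function name and key layout here are illustrative, not the notebook's code):

```python
import numpy as np

def form_pI(avg_data):
    """Form pseudo-Stokes I in a new container rather than mutating
    avg_data in place. Keys are assumed to be (ant1, ant2, pol) tuples."""
    pI_data = {}
    for (a1, a2, pol) in avg_data:
        if pol != "ee":
            continue
        nn_key = (a1, a2, "nn")
        if nn_key not in avg_data:
            continue
        # pI = (EE + NN) / 2 under the convention assumed here
        pI_data[(a1, a2, "pI")] = 0.5 * (avg_data[(a1, a2, "ee")] + avg_data[nn_key])
    return pI_data

avg_data = {(0, 1, "ee"): np.ones(4), (0, 1, "nn"): 3 * np.ones(4)}
pI = form_pI(avg_data)  # avg_data itself is left untouched
```

Returning a fresh container keeps the downstream code from having to special-case mixed linear-pol and pseudo-Stokes keys in one object.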
Line #4. USE_CORR_MATRIX = True
This should be an environment variable and should also control whether your whole big calculation gets run... assuming we want it to be an option.
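One way to wire that up (a sketch; the variable name follows the notebook's flag, while the parsing convention and default are assumptions):

```python
import os

def use_corr_matrix(env=os.environ):
    """Read the USE_CORR_MATRIX flag from the environment.
    Defaults to True to preserve the notebook's current behavior."""
    return str(env.get("USE_CORR_MATRIX", "TRUE")).upper() in ("TRUE", "1", "YES")

USE_CORR_MATRIX = use_corr_matrix()
if USE_CORR_MATRIX:
    # gate the whole expensive correlation-matrix calculation on the flag
    pass
```

Reading the flag from the environment lets the pipeline (or papermill) toggle the big calculation without editing the notebook.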
@@ -83,7 +83,9 @@ | |||
"import hera_pspec as hp\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Line #9. #print((uvp.get_data(key) / uvp.get_stats('P_N', key).real)[:, high_dlys].shape)
extraneous print
This PR will add an (analytic) noise covariance calculation for the effects of the main-lobe filter + coherent averaging, ignoring the effects of the notch filter. Things left to do include:
- [x] Make the conversion to pI happen after coherent averaging
- [x] Recheck that the FRF + coherent average operation is similar b/w the noise covariance filter implementation and the pipeline filter implementation
- [ ] Ensure the analytically propagated variances agree with the noise-dominated delay spectra
- [x] See if Josh's rule of thumb can be extracted from the covariance matrices
- [ ] Try with a few different baselines to make sure nothing wild is happening
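For readers new to the thread, the propagation at stake can be sketched abstractly (this is an illustration of the linear-algebra pattern, not the PR's `FRF_cov_calc`): both the fringe-rate filter and the coherent time average are linear operators on the time axis, so a time-time noise covariance C propagates as C → A F C Fᴴ Aᴴ.

```python
import numpy as np

def propagate_noise_cov(C, F, A):
    """Propagate a time-time noise covariance C through a linear
    fringe-rate filter F and a coherent-averaging operator A:
    C_out = A F C F^H A^H."""
    return A @ F @ C @ F.conj().T @ A.conj().T

n_times, n_avg = 12, 3
C = np.eye(n_times)  # white noise in time
F = np.eye(n_times)  # identity "filter" for this toy check
# coherently average every n_avg consecutive integrations (4 output times)
A = np.kron(np.eye(n_times // n_avg), np.full((1, n_avg), 1.0 / n_avg))
C_out = propagate_noise_cov(C, F, A)
# with an identity filter, averaging white noise gives variance 1/n_avg
```

A non-trivial F makes the averaged samples correlated, which is exactly why the per-bin noise variances in the notebook need correlation-aware correction factors.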