Describe the bug
We enforce the same processing (filters, resampling, FFT shape, etc.), which ensures that templates and continuous data are processed identically. However, when resampling (here by a factor of two), if the data the template was cut from and the continuous data used for self-detection start one sample apart, the correlations are suppressed, because the resampling does not retain the same samples of the original data in each case.
This is obvious when considering self-detections: in a recent example, self-detections (which should have an average cross-correlation of 1.0) fell to an average as low as 0.8. I suspect this issue is less pronounced for lower-frequency data (I was filtering 2--20 Hz), but it is annoying.
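The sample-parity effect can be sketched with plain NumPy (a toy white-noise example, not EQcorrscan's correlation routines; with white noise the suppression is much more severe than the ~0.8 seen with band-limited data, but the mechanism is the same):

```python
import numpy as np

rng = np.random.default_rng(42)
# White noise standing in for continuous data (hypothetical, not seismic).
data = rng.standard_normal(1000)
template = data[100:200]  # "template" cut starting at an even sample

# Decimate by two.  Starting the continuous trace one sample later means
# the odd-indexed samples are kept instead of the even-indexed ones.
tpl_ds = template[::2]
data_even = data[0:][::2]
data_odd = data[1:][::2]

def max_norm_cc(tpl, trace):
    """Maximum normalised cross-correlation of tpl against trace."""
    n = len(tpl)
    t = (tpl - tpl.mean()) / (tpl.std() * n)
    return max(
        float(np.sum(t * (w - w.mean()) / w.std()))
        for w in (trace[i:i + n] for i in range(len(trace) - n + 1))
    )

cc_same = max_norm_cc(tpl_ds, data_even)  # same parity: exact self-match
cc_off = max_norm_cc(tpl_ds, data_odd)    # one sample off: suppressed
print(cc_same, cc_off)
```

With matching parity the decimated template is a literal slice of the decimated continuous data, so the self-detection correlates at 1.0; one sample of offset means no window of the decimated trace contains the template's samples at all.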
I'm not entirely sure what to do about this. We should provide self-detections with unity correlations, which requires that the sample locations are the same...
To Reproduce
Using the tribe linked here:
The returned party contains two detections, with the zeroth a self-detection:
In contrast, if we start the detection one sample further on in time:
We get a party of three detections, with the zeroth being the same self-detection, but with a suppressed average correlation value:
If we move one more sample on, we get the same result as the first go.
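This period-two behaviour follows directly from decimating by a factor of two: shifting the start time by a full decimation factor retains the same samples again, while shifting by one sample retains the complementary set. A minimal NumPy sketch (hypothetical arrays, not the tribe from the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(500)

# Template cut at an even sample index, then decimated by two.
tpl_ds = data[40:140][::2]            # original indices 40, 42, ..., 138

# Continuous data decimated from three different start samples.
even = data[0::2]                     # keeps indices 0, 2, 4, ...
odd = data[1::2]                      # keeps indices 1, 3, 5, ...
even_shift2 = data[2::2]              # keeps indices 2, 4, 6, ...

# Starting 0 or 2 samples in retains the template's samples exactly;
# starting 1 sample in retains the complementary (odd) set.
print(np.array_equal(tpl_ds, even[20:70]))         # True
print(np.array_equal(tpl_ds, even_shift2[19:69]))  # True
print(np.array_equal(tpl_ds, odd[20:70]))          # False
```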
Suggestions welcome
Note that the correlations themselves are correct, as are the resampling and processing; it is just that if different samples are retained we do not get the same correlation values. The times of the resulting self-detections are also accurate (that is, the suppressed detection is not at the same time as the correct self-detection, because the underlying samples differ).