Motion Correction and Data Quality Question #864
Replies: 1 comment
Something related to this can be found in an issue involving a Bruker scope here. Some of the artifacts can be reduced or nearly eliminated by modifying a preamplifier offset. However, it's been determined that adjusting this offset to suppress the artifacts causes suboptimal behavior through …
EDIT:
The way I subtracted the artifacts was definitely incorrect: it seems like every ~120 frames or so there are frames simply missing, so I need to fix that before any of this is more relevant...
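For anyone hitting the same thing, one rough way to spot the missing/garbled frames is to look for sudden dips in frame-to-frame correlation. This is a hypothetical helper (not part of suite2p), and the threshold is a guess you'd want to tune:

```python
import numpy as np

def find_drops(movie, z_thresh=3.0):
    """Flag frames whose correlation with the previous frame drops far
    below the running norm -- a crude dropped/garbled-frame detector."""
    flat = movie.reshape(movie.shape[0], -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)          # zero-mean each frame
    norms = np.linalg.norm(flat, axis=1)
    corr = (flat[1:] * flat[:-1]).sum(axis=1) / (norms[1:] * norms[:-1])
    z = (corr - corr.mean()) / (corr.std() + 1e-12)   # z-score the dips
    return np.where(z < -z_thresh)[0] + 1             # index of suspect frame

# toy movie: a stable scene with one frame replaced by pure noise
rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16))
movie = np.stack([base + 0.05 * rng.normal(size=(16, 16)) for _ in range(50)])
movie[30] = rng.normal(size=(16, 16))  # simulate a dropped/garbled frame
drops = find_drops(movie)
```

Note this flags the frames on both sides of a bad transition, so a real missing-frame check would also want to compare against acquisition timestamps.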
EDIT 2:
Dr. Stringer suggested in #838 that I try the 1p registration settings, since they apply high-pass filtering to the data. Maybe that will help on that recording. It will be a little while before I can try it, but I'll post here when I do!
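For reference, the 1p-registration behavior is controlled through suite2p's ops dict. A minimal sketch of the relevant overrides (parameter names are from suite2p's registration settings; the values here are guesses I'd still need to tune):

```python
# Sketch of suite2p's 1p-registration overrides (values are guesses to tune).
ops_1p = {
    "1Preg": True,          # enable 1p-style filtering before registration
    "spatial_hp_reg": 42,   # window (pixels) for spatial high-pass filtering
    "pre_smooth": 2,        # Gaussian smoothing applied before the high-pass
    "spatial_taper": 40,    # pixels tapered at the edges of the FOV
}
# merged into a full ops dict, e.g.: ops = {**default_ops(), **ops_1p}
```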
Hello suite2p devs! I've been wondering whether subtracting artifacts from our 2p data would help with things like motion correction. Unfortunately, it didn't help as much as I'd hoped, at least on a dataset with pretty good SNR. I still need to test things on a low-SNR recording.
A couple of people I talked with online told me our data looks quite noisy (and it does!). I figured I would post a video here of what it looks like and see if anyone could point out ways we can improve our data collection/analysis.
On the left is where artifact subtraction as described here has been done. On the right is where no artifact subtraction has been done.
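For concreteness, this is the general shape of a static-artifact subtraction, not the exact method linked above: estimate a per-pixel baseline from a low temporal percentile and subtract it. Everything here (function name, percentile value) is a hypothetical sketch:

```python
import numpy as np

def subtract_static_artifact(movie, pct=10):
    """Remove a stationary additive artifact by subtracting a low temporal
    percentile of each pixel (a rough static-background estimate)."""
    template = np.percentile(movie, pct, axis=0)   # (H, W) static estimate
    out = movie - template
    return np.clip(out, 0, None)                   # keep intensities non-negative

# toy data: Poisson "signal" plus a static stripe artifact
rng = np.random.default_rng(1)
signal = rng.poisson(5, size=(40, 8, 8)).astype(float)
artifact = np.outer(np.ones(8), np.linspace(0, 20, 8))  # static gradient
cleaned = subtract_static_artifact(signal + artifact)
```

This only handles artifacts that are stationary across frames; anything frame-locked or periodic needs a template built per phase instead.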
First, here's what the max projections look like:
Next, what the correlation maps look like:
Now, what the mean enhanced image looks like:
Here's a side by side of the registered binary movie:
registration_comparison_Trim.mp4
As you can see, there's still a bit of motion in the video for both of them. It looks like artifact subtraction didn't do much.
Here's a side-by-side of the registered binary (no artifact subtraction) against the raw binary:
registered_vs_raw.mp4
It looks like the motion registration didn't do that much in this case!
These were the parameters used for the run:
The data given to Suite2p was acquired through a GRIN lens using resonant scanning at a framerate close to 30 Hz, in the mPFC of a head-fixed (poorly, apparently) mouse. You'll notice the data looks pretty noisy.
There was no rolling average applied to the images before running Suite2p on them. Some people I've asked say you definitely shouldn't average your data before putting it through; others say you definitely should. Others still say it's good practice to average for finding ROIs/motion correction and then apply those ROIs to the raw data. Another person suggested we slow down our framerate at acquisition time.
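For what it's worth, if we do experiment with pre-averaging, a plain boxcar along the time axis is easy to sketch. This is a hypothetical helper, not a suite2p function:

```python
import numpy as np

def rolling_average(movie, win=4):
    """Boxcar-average frames along time; output has T - win + 1 frames."""
    csum = np.cumsum(movie.astype(np.float64), axis=0)
    # first full window, then each subsequent window via cumsum differences
    windows = np.concatenate([csum[win - 1:win], csum[win:] - csum[:-win]])
    return windows / win

# toy check: frame t is a constant image of value t
movie = np.arange(6, dtype=float)[:, None, None] * np.ones((1, 4, 4))
avg = rolling_average(movie, win=2)  # windows: (0+1)/2, (1+2)/2, ...
```

The worry people raised about averaging before Suite2p is that it smears motion across frames, which is presumably why some suggest using the averaged movie only for ROI detection and pulling traces from the raw binary.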
So what do my fellow peas in the 2-photon pod think? Any advice about how we can get better imaging from our brains?