Creating TemporalDataset object from subjects with different channels #406
-
Hi! I'm trying to use this lovely toolbox for intracranial EEG analysis. However, my subjects have different numbers of channels, and different channel locations and names. I've already loaded my data into a nested dictionary structured as subject -> condition -> Epochs object. However, when I try to concatenate across Epochs objects, I get a ValueError because different subjects have different numbers of channels. This may be a silly question, but is there any way to have all of my subjects in the same TemporalDataset object? I would like to have them in the same dataset so that I can group electrodes in the same ROI across subjects. Thank you in advance for any advice!
-
Hi! Great to hear that you're considering using this on intracranial data. I think one solution would be to add NaNs, i.e. make a master array that has all channels before creating the Dataset. For RDMs there is rsatoolbox.rdm.combine.from_partials, which will do this for you, but I don't think we have something for Datasets yet.
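For illustration, here is a minimal numpy-only sketch of the NaN-padding idea (the subject names, channel labels, and array sizes below are all made up): build the union of every subject's channels as a master channel axis, then fill an all-NaN array per subject.

```python
import numpy as np

# Hypothetical example: two subjects with different channel sets.
# Each subject's data is (n_trials, n_channels, n_timepoints).
subj_channels = {
    "sub01": ["A1", "A2", "B1"],
    "sub02": ["A1", "B1", "B2", "C1"],
}
subj_data = {
    "sub01": np.random.randn(10, 3, 50),
    "sub02": np.random.randn(12, 4, 50),
}

# Master channel list: the union of all subjects' channels, in a fixed order.
master = sorted(set().union(*subj_channels.values()))

padded = {}
for sub, data in subj_data.items():
    n_trials, _, n_time = data.shape
    # Start from an all-NaN array over the master channel axis ...
    full = np.full((n_trials, len(master), n_time), np.nan)
    # ... and fill in only the channels this subject actually has.
    idx = [master.index(ch) for ch in subj_channels[sub]]
    full[:, idx, :] = data
    padded[sub] = full

# All subjects now share the same channel axis and can be concatenated.
all_data = np.concatenate(list(padded.values()), axis=0)
print(all_data.shape)  # (22, 5, 50)
```

The concatenated array (or each subject's padded array) can then be passed as the measurements when constructing the Dataset.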
-
Thanks! Yeah, it seems like adding NaNs is the best solution. I also want to add ROI and subject information for each channel. I suppose I can just add those as lists, similar to how the channel names are added as a list in demo_temporal.ipynb, and then have new keys 'ROI' and 'subject' in the channel_descriptors dictionary? So it would be something like:
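As a hedged sketch of that idea (the channel names, ROI labels, and subject IDs here are all invented for illustration), each descriptor is just a list aligned with the master channel axis:

```python
# Hypothetical master channel axis after NaN-padding across subjects.
channel_names = ["sub01_A1", "sub01_A2", "sub02_B1", "sub02_B2"]

# One list per descriptor, each aligned with the channel axis, in the same
# way the channel names are passed as a list in demo_temporal.ipynb.
channel_descriptors = {
    "names": channel_names,
    "ROI": ["hippocampus", "hippocampus", "amygdala", "amygdala"],
    "subject": ["sub01", "sub01", "sub02", "sub02"],
}

# Grouping electrodes by ROI across subjects then becomes a simple lookup:
hippo_idx = [i for i, roi in enumerate(channel_descriptors["ROI"])
             if roi == "hippocampus"]
print(hippo_idx)  # [0, 1]
```

This dictionary would be passed as the channel_descriptors argument when constructing the TemporalDataset.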
Or would a better approach be to have a separate TemporalDataset for each ROI? In the master channel array, if I fill each channel's missing trials with NaNs, I'll end up with a huge array. I'm just not sure whether having separate TemporalDatasets would make doing the RSA harder.
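One way to sidestep the huge padded array is to slice the channel axis per ROI and drop the trials that are entirely NaN for that ROI. A numpy-only sketch (shapes and ROI labels are hypothetical):

```python
import numpy as np

# Hypothetical NaN-padded data: (trials, channels, timepoints), with the
# ROI of each channel recorded along the channel axis.
data = np.random.randn(22, 4, 50)
rois = ["hippocampus", "hippocampus", "amygdala", "amygdala"]

# One smaller array per ROI instead of one huge padded array: slice the
# channel axis, then keep only trials with any real data for that ROI.
per_roi = {}
for roi in set(rois):
    idx = [i for i, r in enumerate(rois) if r == roi]
    sliced = data[:, idx, :]
    keep = ~np.isnan(sliced).all(axis=(1, 2))
    per_roi[roi] = sliced[keep]

for roi, arr in sorted(per_roi.items()):
    print(roi, arr.shape)
```

Each per-ROI array could then back its own TemporalDataset, so no single dataset has to carry every subject's missing channels.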
-
Thank you for all of your help so far! I managed to make a TemporalDataset for each ROI. However, my issue now is that when I try to use the calc_rdm_movie function, the dissimilarities come out as all NaNs. Here's exactly what I did, in case more context is needed. First I prepare my data so I can convert it to a TemporalDataset object:
Next, I make the temporal datasets:
And when I look at my data:
But this doesn't play nice with the calc_rdm_movie function:
Which outputs all NaNs for the dissimilarities:
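The all-NaN output is what you'd expect if any NaN channel survives into the distance computation: a single NaN makes the whole pairwise dissimilarity NaN. A minimal demonstration in plain numpy (this mimics a squared-euclidean dissimilarity, not rsatoolbox internals):

```python
import numpy as np

# Two condition patterns over four channels; one channel is missing (NaN).
a = np.array([1.0, 2.0, np.nan, 4.0])
b = np.array([1.5, 1.0, np.nan, 3.0])

# A standard squared-euclidean dissimilarity propagates the NaN ...
d = np.sum((a - b) ** 2)
print(d)  # nan

# ... so NaNs have to be removed before computing RDMs, e.g. by keeping
# only the channels observed in both patterns:
mask = ~np.isnan(a) & ~np.isnan(b)
d_masked = np.sum((a[mask] - b[mask]) ** 2)
print(d_masked)  # 2.25
```

So if the per-ROI datasets still contain NaN-padded trials or channels, dropping those before calling calc_rdm_movie should make the dissimilarities finite again.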