
MUSE_EEG

arXiv: 2406.16910
The official implementation of Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning. We provide a new series of EEG encoders and a similarity-keeping contrastive learning framework that reach state-of-the-art performance on the EEG-image zero-shot classification task.

(Figure: paper_img_eeg_music_com_c)

Multimodal Similarity-Keeping ContrastivE (MUSE) Learning

(Figure: paper_img_eeg_clip_corr) Details of MUSE: (a) the contrastive loss is computed between the EEG encoding and the image encoding; (b)(c) the similarity-keeping loss is derived from the within-batch self-similarity of each input modality's data.
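To make the two terms concrete, below is a minimal PyTorch sketch of such a combined objective, assuming batch-aligned EEG/image pairs. The function name, temperature, and the equal weighting of the two terms are illustrative assumptions, not the exact implementation in this repository.

```python
import torch
import torch.nn.functional as F

def muse_loss_sketch(eeg_emb, img_emb, temperature=0.07):
    # eeg_emb, img_emb: (batch, dim) outputs of the EEG and image encoders,
    # where row i of both tensors comes from the same stimulus.
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)

    # (a) Cross-modal contrastive loss (InfoNCE over the batch).
    logits = eeg @ img.t() / temperature
    targets = torch.arange(eeg.size(0), device=eeg.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets)
                         + F.cross_entropy(logits.t(), targets))

    # (b)(c) Similarity-keeping loss: push the within-batch self-similarity
    # matrices of the two modalities toward each other.
    sim_eeg = eeg @ eeg.t()
    sim_img = img @ img.t()
    similarity_keeping = F.mse_loss(sim_eeg, sim_img)

    return contrastive + similarity_keeping  # ASSUMPTION: equal weighting
```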

New EEG encoder series

(Figure: paper_img_model_c, the new EEG encoder series)

Performance

(Figure: performance results)

Datasets

Many thanks to the authors for sharing these datasets!

  1. Things-EEG2

We use the "Raw EEG data" provided there.

EEG pre-processing

Script path

  • ./preprocessing/

Data path

  • raw data: ./Data/Things-EEG2/Raw_data/
  • preprocessed EEG data: ./Data/Things-EEG2/Preprocessed_data_250Hz/

Steps

  1. Pre-process the EEG data of each subject

    • Modify preprocessing_utils.py as needed:
      • choose channels
      • epoching
      • baseline correction
      • resample to 250 Hz
      • sort by condition
      • Multivariate Noise Normalization (z-score is also OK)
    • Run python preprocessing.py once per subject; note that you need to modify the default in parser.add_argument('--sub', default=<subject_to_preprocess>, type=int), or pass --sub on the command line (see the sketch after this list).
    • The output files will be saved in ./Data/Things-EEG2/Preprocessed_data_250Hz/.
  2. Get the center images of each test condition (used at test time for comparison with EEG features)

    • Get the images from the original Things dataset, but discard the images used in the EEG test sessions.
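A minimal sketch of step 1's per-subject loop, which calls preprocessing.py once per subject via its --sub argument instead of editing the default by hand. The subject range is an assumption; adjust it to the split you downloaded.

```python
# Run preprocessing.py once per subject via its --sub argument.
# ASSUMPTION: subject IDs run from 1 to 10; adjust to your download.
import subprocess

for sub in range(1, 11):
    subprocess.run(["python", "preprocessing.py", "--sub", str(sub)], check=True)
```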

Image features from pre-trained models

Script path

  • ./clipvit_feature_extraction/

Data path (following the original dataset settings)

  • raw images: ./Data/Things-EEG2/Image_set/image_set/
  • preprocessed EEG data: ./Data/Things-EEG2/Preprocessed_data_250Hz/
  • features of each image: ./Data/Things-EEG2/DNN_feature_maps/full_feature_maps/model/pretrained-True/
  • packaged features: ./Data/Things-EEG2/DNN_feature_maps/pca_feature_maps/model/pretrained-True/
  • features of condition centers: ./Data/Things-EEG2/Image_set/

Steps

  1. Obtain feature maps from each pre-trained model with obtain_feature_maps_xxx.py (clip, vit, resnet, ...)
  2. Package all the feature maps into one .npy file with feature_maps_xxx.py
  3. Obtain feature maps of the center images with center_fea_xxx.py
    • save the feature maps of each center image into center_all_image_xxx.npy
    • save the feature maps of each condition into center_xxx.npy (used in training)
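As an illustration of step 1, here is a minimal sketch of CLIP image-feature extraction in the spirit of obtain_feature_maps_clip.py, using the OpenAI clip package; the directory walk and the output file name are assumptions, not the repository's exact script.

```python
# Minimal CLIP feature extraction sketch (not the repository's exact script).
import os
import numpy as np
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image_dir = "./Data/Things-EEG2/Image_set/image_set/"  # raw image path above
features = []
for root, _, files in os.walk(image_dir):  # ASSUMPTION: images in subfolders
    for fname in sorted(files):
        image = preprocess(Image.open(os.path.join(root, fname))).unsqueeze(0)
        with torch.no_grad():
            features.append(model.encode_image(image.to(device)).cpu().numpy())

np.save("clip_feature_maps.npy", np.concatenate(features))  # illustrative name
```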

Training and testing

Script path

  • ./model/main_train.py
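Once the preprocessed EEG data and packaged image features are in the default paths above, training can be launched from the repository root; any subject or hyperparameter flags are defined in main_train.py's own argument parser and are not reproduced here.

```python
# Launch training from the repository root after the data paths above exist.
import subprocess

subprocess.run(["python", "./model/main_train.py"], check=True)
```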

Star History

(Figure: star history chart)

Reference

The code is modified from NICE_EEG.

Citation

We hope this code is helpful. We would appreciate it if you cite our paper and the GitHub repository in your work.

@article{chen2024mind,
  title={Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning},
  author={Chen, Chi-Sheng and Wei, Chun-Shu},
  journal={arXiv preprint arXiv:2406.16910},
  year={2024}
}
