Copyright (c) 2019 ETH Zurich, Michael Hersche
Compressing Subject-specific Brain--Computer Interface Models into One Model by Superposition in Hyperdimensional Space
In this repository, we share the code for compressing subject-specific BCI models.
For details, please refer to the paper below.
If this code proves useful for your research, please cite
Michael Hersche, Philipp Rupp, Luca Benini, Abbas Rahimi, "Compressing Subject-specific Brain--Computer Interface Models into One Model by Superposition in Hyperdimensional Space", in ACM/IEEE Design, Automation, and Test in Europe Conference (DATE), 2020.
You will need a machine with a CUDA-enabled GPU and the Nvidia SDK installed to compile the CUDA kernels.
Furthermore, we used conda as the Python package manager and exported the environment specification to conda-env-bci-superpos.yml.
You can recreate our environment by running
conda env create -f conda-env-bci-superpos.yml -n myBCIsupposEnv
Make sure to activate the environment before running any code.
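For example, with the environment name used above:
conda activate myBCIsupposEnv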
EEGNet:
Download the .mat files of the 4-class MI dataset with 9 subjects (001-2014) from here, unpack the archive, and put the files into the folder dataset/EEGNet.
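A minimal sketch of the expected layout (the download path and .mat filenames are placeholders, not taken from this README):
mkdir -p dataset/EEGNet
mv /path/to/unpacked/*.mat dataset/EEGNet/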
Shallow ConvNet:
Download the .gdf files from here by requesting access under "Download of data sets"; you will receive an account and can then download the files. Put them into the folder dataset/shallowconvnet. The labels need to be downloaded separately, also here, under "True Labels of Competition's Evaluation Sets".
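A minimal sketch under the assumption that the separately downloaded label files are placed next to the .gdf recordings (all download paths are placeholders):
mkdir -p dataset/shallowconvnet
mv /path/to/gdf-download/*.gdf dataset/shallowconvnet/
mv /path/to/true-labels/* dataset/shallowconvnet/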
There are two networks for testing the compression: EEGNet and Shallow ConvNet. You can test them by running main.py either in code/EEGnet/ or code/ShallowConvNet/. The original and compressed models are stored in the corresponding models/ folder, and accuracy results are available in results/.
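For example, to run the EEGNet experiments from the repository root (no command-line arguments beyond the script itself are assumed):
cd code/EEGnet
python main.py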
Please refer to the LICENSE file for the licensing of our code.