TD-C Learning: Time Dependent Contrastive Learning

Kyun Kyu Kim#1, Min Kim#2, Sungho Jo*2, Seung Hwan Ko*3, Zhenan Bao*1
1 Stanford University, Stanford, CA, USA; 2 Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea; 3 Seoul National University, Seoul, Korea. # denotes equal contribution.
Published in Nature Electronics.

Dependencies

This repo is written in Python 3.9; any Python version newer than 3.7 should be compatible with our code.

This repo is tested on Windows with CUDA 11. For the same environment, you can install PyTorch with the command line below; otherwise, please install PyTorch by following the instructions on the official PyTorch website: https://pytorch.org/get-started/locally/

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Quick setup

Python 3 dependencies:

  • PyTorch 1.12
  • attrs
  • numpy
  • PyQt5
  • scikit-learn

We provide a conda environment setup file with all the dependencies required to run our code. You can create a conda environment named tdc by running the command line below:

conda env create -f environment.yml

Running the code

Our training procedure is divided into two separate parts: 1. TD-C Learning and 2. Rapid Adaptation. We provide code and experiment environments for adopting our learning method, including data parsing, training code, and a basic UI for collecting few-shot demonstrations and making real-time inferences.

TD-C Learning

TD-C learning is an unsupervised learning method that uses jittering-signal augmentation and time-dependent contrastive learning to learn sensor representations from unlabeled random-motion data. Here we show the data format our code expects and how to run it on sample unlabeled data.
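
For intuition, the sketch below shows one common way to combine jittering augmentation with an InfoNCE-style contrastive objective. The function names, noise scale, and loss formulation are illustrative assumptions, and the time-dependent component (e.g. treating temporally adjacent windows as additional positives) is omitted for brevity; see tdc_train.py for the actual implementation.

    import torch
    import torch.nn.functional as F

    def jitter(x, sigma=0.03):
        # Jittering augmentation: add small Gaussian noise to a batch of
        # sensor windows to create positive views (sigma is an assumption).
        return x + sigma * torch.randn_like(x)

    def contrastive_loss(encoder, windows, temperature=0.1):
        # InfoNCE-style loss: the i-th window and its jittered view form a
        # positive pair; all other windows in the batch act as negatives.
        z1 = F.normalize(encoder(windows), dim=1)          # anchor embeddings
        z2 = F.normalize(encoder(jitter(windows)), dim=1)  # positive embeddings
        logits = z1 @ z2.t() / temperature                 # pairwise similarities
        labels = torch.arange(len(windows), device=logits.device)  # i matches i
        return F.cross_entropy(logits, labels)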

1. Prepare unlabeled dataset

To run the code, first prepare byte-encoded pickle files containing sensor signals in a dictionary with the key 'sensor' and the sequential sensor signals as its value: {'sensor': array(s1, s2, ...)}. Our code reads and parses all pickle files in ./data/train_data that follow this dictionary format.
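
For example, a compatible file could be generated as follows (the file name and signal values are illustrative, and the array shape depends on your sensor setup):

    import os
    import pickle
    import numpy as np

    # Sequential sensor readings s1, s2, ... (random values used for illustration).
    signals = np.random.randn(10000).astype(np.float32)

    os.makedirs('./data/train_data', exist_ok=True)
    with open('./data/train_data/sample.pkl', 'wb') as f:
        # Byte-encoded pickle in the dictionary format the parser expects.
        pickle.dump({'sensor': signals}, f)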

2. Change hyperparameters

We found that the best-performing window size and data-embedding size depend on the total amount of data, the data-collection frequency, and other factors. You can try different hyperparameter settings by simply modifying the values in the params.py file.
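
As a purely illustrative sketch of this kind of tuning (variable names other than transfer_epoch, and all values, are assumptions; consult params.py for the real settings):

    # params.py (illustrative values only; consult the real file)
    window_size = 64       # samples per window; tune to your collection frequency
    embedding_size = 128   # dimensionality of the learned representation
    batch_size = 256
    learning_rate = 1e-3
    transfer_epoch = 10    # few-shot transfer-learning epochs (see below)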

3. Run TD-C training

Run

python tdc_train.py 

Rapid few-shot adaptation and real-time inference

To allow the pre-trained model to be adapted to different tasks, we applied few-shot transfer learning and a metric-based inference mechanism for real-time inference. Here we provide a basic UI system, implemented with PyQt5, that allows users to collect few-shot demos and make real-time inferences.

1. Basic UI

We provide the basic UI code in the ui directory.

The UI contains two buttons (1. Collect Start, 2. Start Prediction) and two widgets (1. a status widget showing the current prediction, 2. a sensor widget showing the current sensor values).

The system starts recording few-shot labeled data from a demonstration when the user presses the "Collect Start" button. After providing all required demonstrations, press the "Start Prediction" button so that the system starts transfer-learning the model.
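
A minimal PyQt5 stand-in for this two-button layout is sketched below; the class name, callbacks, and displayed text are assumptions, not the repository's actual UI code (see the ui directory for that):

    import sys
    from PyQt5.QtWidgets import (QApplication, QLabel, QPushButton,
                                 QVBoxLayout, QWidget)

    class DemoUI(QWidget):
        # Minimal stand-in: two buttons plus a status widget.
        def __init__(self):
            super().__init__()
            self.status = QLabel('Idle')  # status widget: shows the current prediction
            collect_btn = QPushButton('Collect Start')
            predict_btn = QPushButton('Start Prediction')
            # In the real UI these callbacks would start demo recording and
            # transfer learning; here they only update the status text.
            collect_btn.clicked.connect(lambda: self.status.setText('Collecting demos...'))
            predict_btn.clicked.connect(lambda: self.status.setText('Adapting model...'))
            layout = QVBoxLayout(self)
            for w in (self.status, collect_btn, predict_btn):
                layout.addWidget(w)

    if __name__ == '__main__':
        app = QApplication(sys.argv)
        ui = DemoUI()
        ui.show()
        sys.exit(app.exec_())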

2. Few-shot rapid adaptation, data embedding, and metric-based inference

In the transfer_learning_base.py file, we provide the transfer-learning, data-embedding, and metric-based inference functions.

The system performs transfer learning with the provided few-shot demonstrations. The number of transfer epochs can be modified by changing the transfer_epoch variable in params.py.
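
A hypothetical few-shot adaptation loop might look like this (the optimizer, loss, and function signature are assumptions; the actual logic lives in transfer_learning_base.py):

    import torch

    def transfer(model, demo_windows, demo_labels, transfer_epoch=10, lr=1e-3):
        # Fine-tune the pre-trained model on the few-shot demonstrations
        # for transfer_epoch epochs (all names here are illustrative).
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(transfer_epoch):
            opt.zero_grad()
            loss = loss_fn(model(demo_windows), demo_labels)
            loss.backward()
            opt.step()
        return model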

After running a few transfer epochs, the system encodes the few-shot labeled data with the transferred model to generate demo_embeddings. These embeddings are then used as references for Maximum Inner Product Search (MIPS). Given a window of sensor values, the model generates its embedding and a phase variable. If the phase variable exceeds a predefined threshold, the system performs MIPS and the corresponding prediction appears on the status widget; otherwise, the system regards the current state as a resting state.
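
The sketch below illustrates this inference step with plain NumPy, assuming demo_embeddings is an (N, D) array with one label per row; the function name, threshold value, and 'rest' label are made up for illustration:

    import numpy as np

    def predict(window_embedding, phase, demo_embeddings, demo_labels,
                phase_threshold=0.5):
        # Metric-based inference via Maximum Inner Product Search (MIPS).
        # Below the phase threshold, the current state is treated as rest.
        if phase < phase_threshold:
            return 'rest'
        scores = demo_embeddings @ window_embedding   # inner product with each demo
        return demo_labels[int(np.argmax(scores))]    # label of best-matching demo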
