A TensorFlow implementation of TokenLearner: What Can 8 Learned Tokens Do for Images and Videos? [1]. In this paper, an earlier version of which was presented at NeurIPS 2021 [2], the authors propose an adaptive token learning algorithm that makes ViT much more computationally efficient (in terms of FLOPs) and also improves downstream accuracy (here, classification accuracy). Experimenting on CIFAR-10, we reduce the number of patches from 64 to 4 (the number of adaptively learned tokens) while also observing a boost in accuracy. We experiment with different hyperparameters and report results that align with the literature.
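At its core, the TokenLearner module predicts a small number of spatial attention maps from a feature map and pools the features under each map into a single learned token. Below is a minimal TensorFlow sketch of that idea; the layer counts and kernel sizes are illustrative rather than the exact configuration we trained with (see the notebooks for that).

```python
import tensorflow as tf
from tensorflow.keras import layers

def token_learner(inputs, num_tokens=8):
    """Vanilla TokenLearner sketch: inputs (B, H, W, C) -> (B, num_tokens, C)."""
    # A small conv stack predicts one spatial attention map per output token.
    x = layers.LayerNormalization()(inputs)
    for _ in range(3):
        x = layers.Conv2D(num_tokens, 3, padding="same",
                          activation=tf.nn.gelu, use_bias=False)(x)
    attn = layers.Conv2D(num_tokens, 3, padding="same",
                         activation="sigmoid", use_bias=False)(x)  # (B, H, W, S)

    # Flatten spatial dims: maps -> (B, S, H*W), features -> (B, H*W, C).
    attn = layers.Reshape((-1, num_tokens))(attn)   # (B, H*W, S)
    attn = layers.Permute((2, 1))(attn)             # (B, S, H*W)
    feats = layers.Reshape((-1, inputs.shape[-1]))(inputs)

    # Each learned token is the spatial average of the features,
    # weighted by that token's attention map.
    return tf.reduce_mean(attn[..., None] * feats[:, None, ...], axis=2)


# E.g. 64 patches laid out as an 8x8 grid, 128 channels -> 4 tokens.
tokens = token_learner(tf.random.normal((2, 8, 8, 128)), num_tokens=4)
print(tokens.shape)  # (2, 4, 128)
```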
We report results from training our mini ViT with and without the vanilla TokenLearner module below. You can find the vanilla TokenLearner module in the TokenLearner.ipynb notebook.
TokenLearner | # tokens in TokenLearner | Top-1 Acc (Averaged across 5 runs) | GFLOPs | TensorBoard
---|---|---|---|---
N | - | 56.112% | 0.0184 | Link
Y | 8 | 56.55% | 0.0153 | Link
N | - | 56.37% | 0.0184 | Link
Y | 4 | 56.4980% | 0.0147 | Link
N | - (# Transformer layers: 8) | 55.36% | 0.0359 | Link
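For context on where the FLOP savings in the table come from: the module is inserted partway through the mini ViT's stack of Transformer blocks, so every subsequent block operates on the few learned tokens rather than all 64 patches. A hypothetical sketch reusing `token_learner` from above (the block internals and the `insert_at` position are illustrative assumptions, not the notebooks' exact configuration):

```python
def transformer_block(x, num_heads=2, mlp_dim=128):
    # Minimal pre-norm MHSA + MLP block, just enough to make the sketch run.
    h = layers.LayerNormalization()(x)
    h = layers.MultiHeadAttention(num_heads=num_heads,
                                  key_dim=x.shape[-1] // num_heads)(h, h)
    x = x + h
    h = layers.LayerNormalization()(x)
    h = layers.Dense(mlp_dim, activation=tf.nn.gelu)(h)
    return x + layers.Dense(x.shape[-1])(h)


def mini_vit(x, num_layers=6, insert_at=3, num_tokens=4):
    # x: patch embeddings, (B, 64, C) for an 8x8 patch grid.
    for i in range(num_layers):
        if i == insert_at:  # all later blocks see only num_tokens tokens
            side = int(x.shape[1] ** 0.5)  # assumes a square patch grid
            fmap = layers.Reshape((side, side, x.shape[-1]))(x)
            x = token_learner(fmap, num_tokens=num_tokens)
        x = transformer_block(x)
    return x
```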
We have also implemented the TokenLearner v1.1 module, which aligns with the official implementation. It can be found in the TokenLearner-V1.1.ipynb notebook. The results of training with this module are as follows (a sketch of the module follows the table):
# Groups | # Tokens | Top-1 Acc | GFLOPs | TensorBoard
---|---|---|---|---
4 | 4 | 54.638% | 0.0149 | Link
8 | 8 | 54.898% | 0.0146 | Link
4 | 8 | 55.196% | 0.0149 | Link
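Per the official implementation, v1.1 replaces the conv stack with an MLP-style attention generator built from grouped pointwise convolutions (hence the `# Groups` column) and uses a spatial softmax instead of a sigmoid, so each token's weights sum to 1 over the patches. A minimal sketch, reusing the imports above; the `bottleneck_dim` default and exact grouping here are assumptions:

```python
def token_learner_v11(inputs, num_tokens=8, num_groups=4, bottleneck_dim=64):
    """TokenLearner v1.1 sketch: inputs (B, num_patches, C) -> (B, num_tokens, C).

    C and bottleneck_dim must be divisible by num_groups.
    """
    x = layers.LayerNormalization()(inputs)
    # Grouped pointwise conv + pointwise conv act as a small MLP that
    # produces one attention logit per (patch, token) pair.
    x = layers.Conv1D(bottleneck_dim, 1, groups=num_groups,
                      activation=tf.nn.gelu, use_bias=False)(x)
    attn = layers.Conv1D(num_tokens, 1, use_bias=False)(x)        # (B, P, S)

    # Softmax over the patch axis: each token's weights sum to 1.
    attn = tf.nn.softmax(tf.transpose(attn, [0, 2, 1]), axis=-1)  # (B, S, P)

    # Learned tokens are attention-weighted sums of the input features.
    return tf.einsum("bsp,bpc->bsc", attn, inputs)


tokens = token_learner_v11(tf.random.normal((2, 64, 128)),
                           num_tokens=8, num_groups=4)
print(tokens.shape)  # (2, 8, 128)
```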
We acknowledge that the results with this newer TokenLearner module are slightly lower than expected; hyperparameter tuning might close the gap.
Note: To compute the FLOPs of our models, we use this utility from this repository.
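That utility is not reproduced here, but a common recipe for counting the FLOPs of a Keras model is to freeze it into a graph and run the TF1-style profiler over it. A sketch of that general approach (not necessarily identical to the utility we used):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2_as_graph,
)

def get_flops(model, input_shape=(1, 32, 32, 3)):
    # Trace the model into a concrete function, freeze the variables,
    # and let the (TF1-style) profiler count the floating-point ops.
    concrete = tf.function(model).get_concrete_function(
        tf.TensorSpec(input_shape, tf.float32)
    )
    _, graph_def = convert_variables_to_constants_v2_as_graph(concrete)
    with tf.Graph().as_default() as graph:
        tf.graph_util.import_graph_def(graph_def, name="")
        opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
        flops = tf.compat.v1.profiler.profile(graph=graph, options=opts)
    return flops.total_float_ops / 1e9  # GFLOPs for one forward pass
```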
- Michael S. Ryoo: The first author of the paper.
- Google Developers Experts Program and JarvisLabs.ai for providing credits to perform extensive experimentation on A100 GPUs.
[1] TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?; Ryoo et al.; arXiv 2021; https://arxiv.org/abs/2106.11297
[2] TokenLearner: Adaptive Space-Time Tokenization for Videos; Ryoo et al.; NeurIPS 2021; https://openreview.net/forum?id=z-l1kpDXs88