Re-implementation of Competitive Gradient Descent
-
Implement GDA (easy)
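GDA here is just simultaneous gradient descent-ascent. A minimal PyTorch sketch, assuming the bilinear toy objective f(x, y) = xy as a test problem (our choice, not prescribed by the paper):

```python
import torch

def gda_step(f, x, y, eta=0.1):
    """Simultaneous gradient descent-ascent on min_x max_y f(x, y)."""
    loss = f(x, y)
    gx, gy = torch.autograd.grad(loss, (x, y))
    with torch.no_grad():
        x -= eta * gx  # x descends on f
        y += eta * gy  # y ascends on f
    return loss.item()

# On the bilinear saddle f(x, y) = x * y, plain GDA is known to
# spiral away from the equilibrium (0, 0) instead of converging.
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
for step in range(100):
    gda_step(lambda x, y: x * y, x, y)
print(x.item(), y.item())  # grows in magnitude
```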
-
Implement CGD
- Should we try PyTorch or JAX?
- Investigate how to do (1) forward-mode differentiation and (2) second-order derivatives in PyTorch (Irene) (see the autograd sketch after this list)
- Try out JAX a bit (is it complicated to use/learn?) (Julien)
- Inspect and understand their Julia implementation (Julien and Irene)
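On (1) and (2): torch.autograd.grad(..., create_graph=True) lets us differentiate through a gradient, which already gives the mixed second derivatives that CGD needs, and torch.autograd.functional.jvp is one option for forward-mode directional derivatives (internally it uses the double-backward trick). A minimal sketch of one zero-sum CGD step follows; the update rule is our transcription from the paper, the dense mixed Hessian is only viable on toy problems (the paper solves the linear system matrix-free with conjugate gradients), and eta is arbitrary:

```python
import torch

def cgd_step(f, x, y, eta=0.2):
    """One CGD step for min_x max_y f(x, y), dense small-scale version.
    A real implementation should replace the explicit mixed Hessian
    with Hessian-vector products + conjugate gradient."""
    gx = torch.autograd.grad(f(x, y), x, create_graph=True)[0]
    gy = torch.autograd.grad(f(x, y), y, create_graph=True)[0]
    # Mixed Hessian D_xy f: row i is d(grad_x f)_i / dy, via double backward.
    Dxy = torch.stack([
        torch.autograd.grad(gx[i], y, retain_graph=True)[0]
        for i in range(x.numel())
    ])
    Dyx = Dxy.t()  # equality of mixed partials for smooth f
    Ix, Iy = torch.eye(x.numel()), torch.eye(y.numel())
    dx = -eta * torch.linalg.solve(Ix + eta ** 2 * Dxy @ Dyx,
                                   gx + eta * Dxy @ gy)
    dy = eta * torch.linalg.solve(Iy + eta ** 2 * Dyx @ Dxy,
                                  gy - eta * Dyx @ gx)
    return dx.detach(), dy.detach()

# On f(x, y) = x . y, CGD converges to (0, 0) where GDA spirals out.
x = torch.ones(2, requires_grad=True)
y = torch.ones(2, requires_grad=True)
for _ in range(50):
    dx, dy = cgd_step(lambda x, y: x @ y, x, y)
    with torch.no_grad():
        x += dx
        y += dy
print(x.norm().item(), y.norm().item())  # both shrink toward 0
```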
-
Implement other baselines (which ones? the paper itself compares against LCGD, SGA, ConOpt, and OGDA)
-
Implement a GAN and training pipeline on MNIST
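A strawman for this task: MLP generator/discriminator with the standard BCE losses, torchvision for the data. All sizes and hyperparameters below are placeholders, and Adam is only a stand-in until GDA/CGD are wired in:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

G = nn.Sequential(  # noise (64) -> flattened 28x28 image
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)
D = nn.Sequential(  # flattened image -> real/fake logit
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

data = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.5,), (0.5,))])),
    batch_size=128, shuffle=True)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for real, _ in data:
    real = real.view(real.size(0), -1)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    fake = G(torch.randn(real.size(0), 64))
    # Discriminator step: real -> 1, fake -> 0.
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step (non-saturating): make D call fakes real.
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```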
-
Reproduce the experiment of Figure 2
-
Reproduce the experiment of Figure 3
-
Train a GAN on MNIST (an easy dataset) and compare performance and robustness across optimizers (sanity-check harness sketched below)
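Before wiring optimizers into the MNIST pipeline, a cheap sanity check is to run each update rule on the toy saddle and compare the final distance to the equilibrium; this reuses gda_step and cgd_step from the sketches above, so it is not standalone:

```python
import torch

def run(stepper, steps=200, eta=0.2):
    """Run one update rule on f(x, y) = x . y and report how far
    (x, y) ends up from the equilibrium at the origin."""
    x = torch.ones(2, requires_grad=True)
    y = torch.ones(2, requires_grad=True)
    f = lambda x, y: x @ y
    for _ in range(steps):
        stepper(f, x, y, eta)
    return (x.detach().norm() ** 2 + y.detach().norm() ** 2).item()

def cgd(f, x, y, eta):
    # Adapt cgd_step (which returns updates) to the in-place interface.
    dx, dy = cgd_step(f, x, y, eta)
    with torch.no_grad():
        x += dx
        y += dy

for name, stepper in [("gda", gda_step), ("cgd", cgd)]:
    print(name, run(stepper))  # expect gda to diverge, cgd to converge
```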
-
Non-zero-sum games (similar to Figure 2) (maybe just a discussion in the report?)
-
Overview of the paper * * *
-
Experiment results * * *
-
Background
- Taylor approximations (local bilinear game sketched after this list)
- Single-player optimisation
- Game optimisation
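For the Taylor-approximations bullet: the central object, as we read the paper, is a prox-regularized bilinear approximation of the game at the current iterate, to which both players simultaneously best-respond:

```latex
% Local game at (x_k, y_k): bilinear Taylor approximation of each
% player's objective, plus a proximal penalty on the step size.
\min_{\Delta x}\; \Delta x^{\top} \nabla_x f
  + \Delta x^{\top} D^2_{xy} f \, \Delta y
  + \tfrac{1}{2\eta} \lVert \Delta x \rVert^2,
\qquad
\min_{\Delta y}\; \Delta y^{\top} \nabla_y g
  + \Delta y^{\top} D^2_{yx} g \, \Delta x
  + \tfrac{1}{2\eta} \lVert \Delta y \rVert^2 .
```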
-
Derivation of the algorithm (Guillaume) * * *
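Useful to keep the target formulas next to the derivation. In the zero-sum case g = -f, the Nash equilibrium of the local game above gives (our transcription; signs worth double-checking against the paper):

```latex
\Delta x = -\eta \bigl(\mathrm{Id} + \eta^2 D^2_{xy} f \, D^2_{yx} f\bigr)^{-1}
            \bigl(\nabla_x f + \eta \, D^2_{xy} f \, \nabla_y f\bigr),
\qquad
\Delta y = +\eta \bigl(\mathrm{Id} + \eta^2 D^2_{yx} f \, D^2_{xy} f\bigr)^{-1}
            \bigl(\nabla_y f - \eta \, D^2_{yx} f \, \nabla_x f\bigr).
```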
-
Theoretical Analysis
- How is their approach different?
- Why not take only the cross-derivative terms of the Hessian? (see the Neumann-series note after this list)
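A possible framing for the second question: keeping only the cross-derivative correction (what the paper calls LCGD, if we read it right) truncates the Neumann series of CGD's equilibrium term after its first-order term (the expansion is valid for small enough eta):

```latex
\bigl(\mathrm{Id} + \eta^2 D^2_{xy} f \, D^2_{yx} f\bigr)^{-1}
  = \sum_{k=0}^{\infty} \bigl(-\eta^2 D^2_{xy} f \, D^2_{yx} f\bigr)^{k},
\qquad
\Delta x_{\mathrm{LCGD}} = -\eta \bigl(\nabla_x f + \eta \, D^2_{xy} f \, \nabla_y f\bigr).
```

So the full inverse amounts to each player anticipating the opponent's counter-moves to all orders rather than only the first, which is how the paper motivates the matrix inverse.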
-
Experiments
- (see the experiment tasks in the list above)
-
Discussion *
-
Conclusion *