# competitive_gradient_descent

Re-implementation of Competitive Gradient Descent (Schäfer & Anandkumar, NeurIPS 2019).

## Implementation

- Implement GDA (easy; a minimal sketch follows this list)
- Implement CGD (a dense sketch of the update also follows below)
  - Should we implement it in PyTorch or JAX?
    - Investigate (1) forward-mode differentiation in PyTorch and (2) second-order derivatives in PyTorch (Irene)
    - Try out JAX (is it complicated to use/learn?) (Julien)
    - Inspect and understand the authors' Julia implementation (Julien and Irene)
- Implement other baselines (which ones?)
- Implement a GAN and a training pipeline on MNIST
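
A minimal sketch of the GDA baseline, assuming the zero-sum convention min_x max_y f(x, y); `gda_step` and the bilinear toy game are illustrative choices of ours, not fixed by the paper:

```python
import torch

def gda_step(f, x, y, lr=0.2):
    """One step of simultaneous GDA: x descends f, y ascends f."""
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    with torch.no_grad():
        x -= lr * gx
        y += lr * gy
    return x, y

# Bilinear toy game f(x, y) = x * y, whose unique equilibrium is (0, 0).
# Plain GDA is known to spiral away from it; that failure mode is what
# CGD is designed to fix.
x = torch.tensor([1.0], requires_grad=True)
y = torch.tensor([1.0], requires_grad=True)
for _ in range(100):
    x, y = gda_step(lambda x, y: (x * y).sum(), x, y)
```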
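For CGD itself, a dense sketch of the zero-sum update as we derived it from the paper's local-game formulation. The mixed Hessian blocks are materialised explicitly, so this only scales to toy problems; the authors solve the linear systems with Hessian-vector products and conjugate gradient instead. It also answers part of the PyTorch question above: second-order derivatives come from `create_graph=True`, and forward-mode AD lives in `torch.autograd.forward_ad`. Signs should be double-checked against the paper:

```python
import torch

def cgd_step(f, x, y, lr=0.2):
    """One CGD step for min_x max_y f(x, y): solve the regularised
    local bilinear game in closed form (toy dimensions only)."""
    gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
    # Mixed second-derivative blocks D_xy f and D_yx f, built row by row.
    Dxy = torch.stack([torch.autograd.grad(g, y, retain_graph=True)[0] for g in gx])
    Dyx = torch.stack([torch.autograd.grad(g, x, retain_graph=True)[0] for g in gy])
    with torch.no_grad():
        # Simultaneous solution of both players' first-order conditions.
        dx = -lr * torch.linalg.solve(
            torch.eye(x.numel()) + lr**2 * Dxy @ Dyx, gx + lr * Dxy @ gy)
        dy = lr * torch.linalg.solve(
            torch.eye(y.numel()) + lr**2 * Dyx @ Dxy, gy - lr * Dyx @ gx)
        x += dx
        y += dy
    return x, y
```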

## Experiments

- Experiment from Figure 2 (a toy version is sketched after this list)
- Experiment from Figure 3
- Train a GAN on MNIST (an easy dataset) and compare performance and robustness across optimizers
- Non-zero-sum games (similar to Figure 2; maybe just a discussion in the report?)
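
A toy version of the Figure 2 comparison, assuming (from our reading of the paper) that it contrasts optimizer trajectories on the bilinear zero-sum game. It reuses the `gda_step`/`cgd_step` sketches from the Implementation section:

```python
import torch

f = lambda x, y: (x * y).sum()  # bilinear game, equilibrium at (0, 0)

for step, name in [(gda_step, "GDA"), (cgd_step, "CGD")]:
    x = torch.tensor([0.5], requires_grad=True)
    y = torch.tensor([0.5], requires_grad=True)
    for _ in range(50):
        x, y = step(f, x, y, lr=0.2)
    # Expected: GDA drifts away from (0, 0) while CGD contracts towards it.
    print(f"{name}: |(x, y)| = {float((x**2 + y**2).sqrt()):.4f}")
```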

## Poster

- Overview of the paper * * *
- Experiment results * * *

## Report

- Background
  - Taylor approximations
  - Single-player optimisation
  - Optimisation in games
- Derivation of the algorithm (Guillaume) * * * (a sketch of the local game is given after this list)
- Theoretical analysis
  - How is their approach different?
  - Why not take only the cross-derivative terms of the Hessian?
- Experiments
  - (see the Experiments section above)
- Discussion *
- Conclusion *
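
For the derivation section, a sketch of the central computation as we understand it from the paper (notation and signs should be verified against the original): each player minimises a regularised bilinear approximation of their own objective, and solving the two first-order conditions simultaneously gives the CGD update.

```latex
% Local game at (x, y): player 1 picks \Delta x, player 2 picks \Delta y.
\begin{align}
  \min_{\Delta x}\; & \Delta x^\top \nabla_x f
      + \Delta x^\top D^2_{xy} f \,\Delta y
      + \tfrac{1}{2\eta}\,\lVert \Delta x \rVert^2, \\
  \min_{\Delta y}\; & \Delta y^\top \nabla_y g
      + \Delta y^\top D^2_{yx} g \,\Delta x
      + \tfrac{1}{2\eta}\,\lVert \Delta y \rVert^2.
\end{align}
% The simultaneous solution of both first-order conditions yields
\begin{align}
  \Delta x &= -\eta \left( \mathrm{Id} - \eta^2 D^2_{xy} f \, D^2_{yx} g \right)^{-1}
               \left( \nabla_x f - \eta\, D^2_{xy} f \,\nabla_y g \right), \\
  \Delta y &= -\eta \left( \mathrm{Id} - \eta^2 D^2_{yx} g \, D^2_{xy} f \right)^{-1}
               \left( \nabla_y g - \eta\, D^2_{yx} g \,\nabla_x f \right).
\end{align}
```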