competitive_gradient_descent

Re-implementation of Competitive Gradient Descent (Schäfer & Anandkumar, NeurIPS 2019)

Implementation

  • Implement GDA (gradient descent-ascent; easy; see the first sketch after this list)

  • Implement CGD (see the second sketch after this list)

    • Should we use PyTorch or JAX?
      • Investigate how to do (1) forward-mode differentiation in PyTorch and (2) second-order derivatives in PyTorch (Irene)
      • Try out JAX a bit (is it complicated to learn/use?) (Julien)
      • Inspect and understand the authors' Julia implementation (Julien and Irene)
  • Implement other baselines (which ones?)

  • Implement a GAN and training pipeline on MNIST
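
A minimal sketch of what the GDA step could look like in PyTorch (the `gda_step` interface, step size, and the bilinear test game below are placeholders we chose, not a fixed design):

```python
import torch

def gda_step(f, x, y, eta=0.1):
    """One step of simultaneous gradient descent-ascent on a zero-sum game:
    x descends f while y ascends f."""
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    with torch.no_grad():
        x -= eta * gx  # minimizing player
        y += eta * gy  # maximizing player
    return x, y

# Bilinear game f(x, y) = x . y: simultaneous GDA is known to spiral
# away from the equilibrium at the origin.
x = torch.tensor([1.0], requires_grad=True)
y = torch.tensor([1.0], requires_grad=True)
for _ in range(100):
    x, y = gda_step(lambda x, y: (x * y).sum(), x, y)
print(x.item(), y.item())  # distance to origin grows by sqrt(1 + eta^2) per step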
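
And a rough sketch of the zero-sum CGD step itself, using double backward for the mixed Hessian-vector products and a few conjugate-gradient iterations for the linear solve, so no Hessian is ever materialized. This is our transcription of the paper's update (signs worth re-checking), with 1-D parameter tensors assumed and the helper names (`cg`, `cgd_step`) invented here; it answers sub-task (2) above only as a starting point:

```python
import torch

def cg(A, b, iters=10, tol=1e-10):
    """Conjugate gradient for A(v) = b, with A given as a matrix-vector product."""
    sol, r = torch.zeros_like(b), b.clone()
    p, rs = r.clone(), r @ r
    for _ in range(iters):
        if rs < tol:
            break
        Ap = A(p)
        alpha = rs / (p @ Ap)
        sol, r = sol + alpha * p, r - alpha * Ap
        rs_new = r @ r
        p, rs = r + (rs_new / rs) * p, rs_new
    return sol

def cgd_step(f, x, y, eta=0.1, cg_iters=10):
    """One zero-sum CGD step (x minimizes f, y maximizes f), x and y 1-D:
    x <- x - eta (I + eta^2 Dxy Dyx)^-1 (grad_x f + eta Dxy grad_y f)
    y <- y + eta (I + eta^2 Dyx Dxy)^-1 (grad_y f - eta Dyx grad_x f)."""
    gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
    # Mixed Hessian-vector products via double backward (no explicit Hessian).
    Dxy = lambda v: torch.autograd.grad(gy @ v, x, retain_graph=True)[0]
    Dyx = lambda v: torch.autograd.grad(gx @ v, y, retain_graph=True)[0]
    # Right-hand sides; detach so CG does not differentiate through them.
    bx = (gx + eta * Dxy(gy.detach())).detach()
    by = (gy - eta * Dyx(gx.detach())).detach()
    dx = cg(lambda v: v + eta**2 * Dxy(Dyx(v)), bx, cg_iters)
    dy = cg(lambda v: v + eta**2 * Dyx(Dxy(v)), by, cg_iters)
    with torch.no_grad():
        x -= eta * dx
        y += eta * dy
    return x, y
```

Solving with CG and Hessian-vector products keeps the cost at a few extra gradient evaluations per step, which we believe is the matrix-inverse-free approach the paper recommends for large models; for 1-D players CG converges in one iteration and matches the closed-form update in the experiment sketch further down.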

Experiments

  • Experiment of Figure 2 (a closed-form bilinear-game comparison follows this list)

  • Experiment of Figure 3

  • Train a GAN on MNIST (an easy dataset) and compare performance and robustness across optimizers

  • Non-zero-sum games (similar to Figure 2; maybe just a discussion in the report?)
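
A self-contained starting point for the Figure 2-style comparison, assuming (as we read the paper's illustrative figures) a simple low-dimensional game; on f(x, y) = x·y both updates have closed forms, so no autograd is needed:

```python
# GDA vs CGD on f(x, y) = x * y (x minimizes, y maximizes; equilibrium at 0).
eta = 0.2
x_gda, y_gda = 1.0, 1.0
x_cgd, y_cgd = 1.0, 1.0
for step in range(51):
    if step % 10 == 0:
        print(f"step {step:2d}"
              f"  GDA dist {(x_gda**2 + y_gda**2)**0.5:8.3f}"
              f"  CGD dist {(x_cgd**2 + y_cgd**2)**0.5:8.3f}")
    # GDA: simultaneous updates; rotates and *expands* around the origin.
    x_gda, y_gda = x_gda - eta * y_gda, y_gda + eta * x_gda
    # CGD specialized to f = xy; rotates and *contracts* toward the origin.
    x_cgd, y_cgd = (x_cgd - eta * (y_cgd + eta * x_cgd) / (1 + eta**2),
                    y_cgd + eta * (x_cgd - eta * y_cgd) / (1 + eta**2))
```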

Poster

  • Overview of the paper * * *
  • Experiment results * * *

Report

  • Background

    • Taylor approximations
    • Single-player optimisation
    • Optimisation in games
  • Derivation of the algorithm (Guillaume; the target update equations are recapped after this list) * * *

  • Theoretical Analysis

    • How is their approach different?
    • Why not take only the cross-derivative terms of the Hessian?
  • Experiments

    • (see the Experiments section above)
  • Discussion *

  • Conclusion *
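
For the derivation section, the zero-sum update we need to arrive at. Each player solves a regularized local bilinear model of the game, and solving the two first-order conditions simultaneously gives the CGD step; this is our transcription, so the signs are worth re-checking against the paper:

```latex
% Local bilinear model each player solves at (x_k, y_k), step size \eta:
\min_{\Delta x} \; \Delta x^\top \nabla_x f
    + \Delta x^\top D^2_{xy} f \, \Delta y
    + \tfrac{1}{2\eta} \lVert \Delta x \rVert^2,
\qquad
\max_{\Delta y} \; \Delta y^\top \nabla_y f
    + \Delta y^\top D^2_{yx} f \, \Delta x
    - \tfrac{1}{2\eta} \lVert \Delta y \rVert^2.
% Solving the two first-order conditions simultaneously yields:
x_{k+1} = x_k - \eta \bigl(\mathrm{Id} + \eta^2 D^2_{xy} f \, D^2_{yx} f\bigr)^{-1}
                \bigl(\nabla_x f + \eta \, D^2_{xy} f \, \nabla_y f\bigr),
\qquad
y_{k+1} = y_k + \eta \bigl(\mathrm{Id} + \eta^2 D^2_{yx} f \, D^2_{xy} f\bigr)^{-1}
                \bigl(\nabla_y f - \eta \, D^2_{yx} f \, \nabla_x f\bigr).
```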
