Merge pull request #233 from kozistr/feature/lots-of-stuffs
[Feature] Lots of stuffs
kozistr authored May 5, 2024
2 parents 48030b5 + 8c8b821 commit fc473bb
Showing 19 changed files with 953 additions and 305 deletions.
142 changes: 72 additions & 70 deletions README.md

Large diffs are not rendered by default.

9 changes: 8 additions & 1 deletion docs/changelogs/v3.0.0.md
@@ -13,6 +13,12 @@ Major version is updated! (`v2.12.0` -> `v3.0.0`) (#164)
* Implement `GaLore` optimizer. (#224, #228)
* [Memory-Efficient LLM Training by Gradient Low-Rank Projection](https://arxiv.org/abs/2403.03507)
* Implement `Adalite` optimizer. (#225, #229)
* Implement `bSAM` optimizer. (#212, #233)
* [SAM as an Optimal Relaxation of Bayes](https://arxiv.org/abs/2210.01620)
* Implement `Schedule-Free` optimizer. (#230, #233)
* [Schedule-Free optimizers](https://github.com/facebookresearch/schedule_free)
* Implement `EMCMC`. (#231, #233)
* [Entropy-MCMC: Sampling from flat basins with ease](https://www.semanticscholar.org/paper/Entropy-MCMC%3A-Sampling-from-Flat-Basins-with-Ease-Li-Zhang/fd95de3f24fc4f955a6fe5719d38d1d06136e0cd)

### Fix

@@ -35,4 +41,5 @@ thanks to @sdbds, @i404788

## Diff

[2.12.0...3.0.0](https://github.com/kozistr/pytorch_optimizer/compare/v2.12.0...v3.0.0)
* from the previous major version : [2.0.0...3.0.0](https://github.com/kozistr/pytorch_optimizer/compare/v2.0.0...v3.0.0)
* from the previous version: [2.12.0...3.0.0](https://github.com/kozistr/pytorch_optimizer/compare/v2.12.0...v3.0.0)
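
As a quick orientation for the new entries above, here is a minimal sketch of constructing two of them with `pytorch_optimizer` v3.0.0. It is not taken from this commit; constructor arguments beyond `params` and `lr` (for example `num_data` for `bSAM`) are assumptions.

```python
# Minimal sketch (not from this commit) of the optimizers introduced above.
# Arguments beyond `params` and `lr` are assumptions about the API.
from torch import nn

from pytorch_optimizer import BSAM, ScheduleFreeAdamW

model = nn.Linear(10, 2)

# Schedule-Free AdamW is designed to run without an external LR scheduler.
optimizer = ScheduleFreeAdamW(model.parameters(), lr=1e-3)

# bSAM ("SAM as an Optimal Relaxation of Bayes") scales its update by the dataset
# size, so a `num_data`-style argument is assumed here; the real name may differ.
# optimizer = BSAM(model.parameters(), lr=1e-3, num_data=50_000)
```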
151 changes: 80 additions & 71 deletions docs/index.md

Large diffs are not rendered by default.

12 changes: 12 additions & 0 deletions docs/optimizer.md
@@ -96,6 +96,10 @@
:docstring:
:members:

::: pytorch_optimizer.BSAM
:docstring:
:members:

::: pytorch_optimizer.CAME
:docstring:
:members:
@@ -236,6 +240,14 @@
:docstring:
:members:

::: pytorch_optimizer.ScheduleFreeSGD
:docstring:
:members:

::: pytorch_optimizer.ScheduleFreeAdamW
:docstring:
:members:

::: pytorch_optimizer.AccSGD
:docstring:
:members:
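
The `ScheduleFreeSGD` and `ScheduleFreeAdamW` pages added above document optimizers that keep separate training and evaluation parameter states. A hedged sketch of a training loop, assuming this port keeps the `train()`/`eval()` switching convention of the upstream facebookresearch/schedule_free implementation:

```python
# Hedged sketch: Schedule-Free optimizers maintain two parameter sequences and are
# typically switched between training and evaluation states. The `train()`/`eval()`
# calls below mirror the upstream reference API and are an assumption about this port.
import torch
from torch import nn

from pytorch_optimizer import ScheduleFreeSGD

model = nn.Linear(4, 1)
criterion = nn.MSELoss()
optimizer = ScheduleFreeSGD(model.parameters(), lr=1e-2)

x, y = torch.randn(8, 4), torch.randn(8, 1)

optimizer.train()  # assumed: switch to the training parameter state
for _ in range(10):
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

optimizer.eval()   # assumed: swap in the averaged parameters before evaluation
with torch.no_grad():
    val_loss = criterion(model(x), y)
```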
4 changes: 4 additions & 0 deletions docs/util.md
@@ -84,6 +84,10 @@
:docstring:
:members:

::: pytorch_optimizer.optimizer.utils.reg_noise
:docstring:
:members:

## Newton methods

::: pytorch_optimizer.optimizer.shampoo_utils.power_iteration
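
`reg_noise`, documented above, backs the `EMCMC` changelog entry (Entropy-MCMC). Purely to illustrate the idea, the hypothetical helper below couples a model to an auxiliary copy and injects Gaussian noise; it is not the library's implementation, and all parameter names and scalings are assumptions.

```python
# Hypothetical illustration of the Entropy-MCMC coupling-plus-noise idea behind
# `reg_noise`; names and scalings are assumptions, not the library's API.
import math

import torch
from torch import nn


def emcmc_reg_noise(model: nn.Module, aux_model: nn.Module, num_data: int,
                    lr: float, eta: float, temperature: float) -> torch.Tensor:
    """Coupling loss between a model and its auxiliary copy, plus exploration noise."""
    reg = torch.zeros(())
    noise_scale = math.sqrt(2.0 * lr * temperature / num_data)
    for p, p_aux in zip(model.parameters(), aux_model.parameters()):
        # quadratic coupling pulls the two parameter sets toward each other
        reg = reg + 0.5 * eta * (p - p_aux).pow(2).sum() / num_data
        # Gaussian noise on the auxiliary copy keeps the sampler exploring flat basins
        with torch.no_grad():
            p_aux.add_(torch.randn_like(p_aux), alpha=noise_scale)
    return reg
```

In a training loop, the returned term would be added to the task loss before calling `backward()`.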
301 changes: 174 additions & 127 deletions poetry.lock

Large diffs are not rendered by default.

14 changes: 7 additions & 7 deletions pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "pytorch_optimizer"
version = "2.12.0"
version = "3.0.0"
description = "optimizer & lr scheduler & objective function collections in PyTorch"
license = "Apache-2.0"
authors = ["kozistr <[email protected]>"]
@@ -12,13 +12,13 @@ documentation = "https://pytorch-optimizers.readthedocs.io/en/latest"
keywords = [
"pytorch", "deep-learning", "optimizer", "lr scheduler", "A2Grad", "ASGD", "AccSGD", "AdaBelief", "AdaBound",
"AdaDelta", "AdaFactor", "AdaMax", "AdaMod", "AdaNorm", "AdaPNM", "AdaSmooth", "AdaHessian", "Adai", "Adalite",
"AdamP", "AdamS", "Adan", "AggMo", "Aida", "AliG", "Amos", "Apollo", "AvaGrad", "CAME", "DAdaptAdaGrad",
"AdamP", "AdamS", "Adan", "AggMo", "Aida", "AliG", "Amos", "Apollo", "AvaGrad", "bSAM", "CAME", "DAdaptAdaGrad",
"DAdaptAdam", "DAdaptAdan", "DAdaptSGD", "DAdaptLion", "DiffGrad", "Fromage", "GaLore", "Gravity", "GSAM", "LARS",
"Lamb", "Lion", "LOMO", "Lookahead", "MADGRAD", "MSVAG", "Nero", "NovoGrad", "PAdam", "PCGrad", "PID", "PNM",
"Prodigy", "QHAdam", "QHM", "RAdam", "Ranger", "Ranger21", "RotoGrad", "SAM", "SGDP", "Shampoo", "ScalableShampoo",
"SGDW", "SignSGD", "SM3", "SopihaH", "SRMM", "SWATS", "Tiger", "WSAM", "Yogi", "BCE", "BCEFocal", "Focal",
"FocalCosine", "SoftF1", "Dice", "LDAM", "Jaccard", "Bi-Tempered", "Tversky", "FocalTversky", "LovaszHinge",
"bitsandbytes",
"Prodigy", "QHAdam", "QHM", "RAdam", "Ranger", "Ranger21", "RotoGrad", "SAM", "ScheduleFreeSGD",
"ScheduleFreeAdamW", "SGDP", "Shampoo", "ScalableShampoo", "SGDW", "SignSGD", "SM3", "SopihaH", "SRMM", "SWATS",
"Tiger", "WSAM", "Yogi", "BCE", "BCEFocal", "Focal", "FocalCosine", "SoftF1", "Dice", "LDAM", "Jaccard",
"Bi-Tempered", "Tversky", "FocalTversky", "LovaszHinge", "bitsandbytes",
]
classifiers = [
"License :: OSI Approved :: Apache Software License",
@@ -50,7 +50,7 @@ bitsandbytes = { version = "^0.43", optional = true }

[tool.poetry.dev-dependencies]
isort = { version = "^5", python = ">=3.8" }
black = { version = "^24", python = ">=3.8"}
black = { version = "^24", python = ">=3.8" }
ruff = "*"
pytest = "*"
pytest-cov = "*"
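
The keyword additions above only mirror the public optimizer names; one way to check that an installed build actually exposes the new entries is the library's existing `get_supported_optimizers` helper. A small sketch; whether it returns classes or name strings is an assumption handled generically here.

```python
# Sketch: confirm the new optimizers are exposed by the installed package.
# `get_supported_optimizers` is an existing helper; its exact return type
# (classes vs. names) is an assumption handled generically below.
from pytorch_optimizer import get_supported_optimizers

names = {getattr(o, '__name__', str(o)).lower() for o in get_supported_optimizers()}
print({'bsam', 'schedulefreesgd', 'schedulefreeadamw'} <= names)  # expected: True
```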
6 changes: 5 additions & 1 deletion pytorch_optimizer/__init__.py
@@ -79,7 +79,8 @@
from pytorch_optimizer.optimizer.ranger import Ranger
from pytorch_optimizer.optimizer.ranger21 import Ranger21
from pytorch_optimizer.optimizer.rotograd import RotoGrad
from pytorch_optimizer.optimizer.sam import GSAM, SAM, WSAM
from pytorch_optimizer.optimizer.sam import BSAM, GSAM, SAM, WSAM
from pytorch_optimizer.optimizer.schedulefree import ScheduleFreeAdamW, ScheduleFreeSGD
from pytorch_optimizer.optimizer.sgd import ASGD, SGDW, AccSGD, SignSGD
from pytorch_optimizer.optimizer.sgdp import SGDP
from pytorch_optimizer.optimizer.shampoo import ScalableShampoo, Shampoo
@@ -186,6 +187,9 @@
Aida,
GaLore,
Adalite,
BSAM,
ScheduleFreeSGD,
ScheduleFreeAdamW,
]
OPTIMIZERS: Dict[str, OPTIMIZER] = {str(optimizer.__name__).lower(): optimizer for optimizer in OPTIMIZER_LIST}

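
The `OPTIMIZERS` registry shown above keys every class by its lowercased name, which is what the library's `load_optimizer` helper looks up. A short sketch, assuming that lookup path is unchanged for the new entries:

```python
# Sketch of name-based lookup against the OPTIMIZERS registry built in __init__.py.
from torch import nn

from pytorch_optimizer import load_optimizer

model = nn.Linear(8, 2)

# Registry keys are lowercased class names, e.g. 'bsam', 'schedulefreesgd'.
opt_class = load_optimizer('schedulefreeadamw')
optimizer = opt_class(model.parameters(), lr=1e-3)
```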