From 6364d19d50ea3ff983efe49676dda3fe5567abac Mon Sep 17 00:00:00 2001
From: Howuhh
Date: Wed, 19 Jul 2023 17:01:09 +0300
Subject: [PATCH] fix some typos

---
 CONTRIBUTING.md | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 86cc944a..48293538 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -20,7 +20,7 @@ pip install -r requirements/requirements_dev.txt
 
 1. Fork this repo
 2. Make a change and commit your code
-3. Submit a pull request. It will be reviewed by maintainers and they'll give feedback or make requests as applicable
+3. Submit a pull request. It will be reviewed by maintainers, and they'll give feedback or make requests as applicable
 
 ### Code style
 
@@ -29,13 +29,17 @@ These checks can also be run locally without waiting for the CI by following the
 1. [install `pre-commit`](https://pre-commit.com/#install),
 2. install the Git hooks by running `pre-commit install`.
 
-Once those two steps are done, the Git hooks will be run automatically at every new commit. The Git hooks can also be run manually with `pre-commit run --all-files`, and if needed they can be skipped (not recommended) with `git commit --no-verify`. **Note:** you may have to run `pre-commit run --all-files` manually a couple of times to make it pass when you commit, as each formatting tool will first format the code and fail the first time but should pass the second time.
+Once those two steps are done, the Git hooks will be run automatically at every new commit.
+The Git hooks can also be run manually with `pre-commit run --all-files`, and
+if needed they can be skipped (not recommended) with `git commit --no-verify`.
 
-We use [Ruff](https://github.com/astral-sh/ruff) as our main linter. If you want to see possible problems before pre-commit, you can run `ruff check --diff .` to see exact linter suggestions and future fixes.
+We use [Ruff](https://github.com/astral-sh/ruff) as our main linter.
+If you want to see possible
+problems before pre-commit, you can run `ruff check --diff .` to see exact linter suggestions and future fixes.
 
 ## Adding new algorithms
 
-All new algorithms should go to the `algorithms/contrib`.
+All new algorithms should go to the `algorithms/contrib/offline` for just
+offline algorithms and to the `algorithms/contrib/finetune` for the offline-to-online algorithms.
 
 We as a team try to keep the core as reliable and reproducible as possible,
 but we may not have the resources to support all future algorithms.
 Therefore, this separation is necessary, as we cannot guarantee that all
@@ -50,21 +54,21 @@ While we welcome any algorithms, it is better to open an issue with the proposal
 so we can discuss the details. Unfortunately, not all algorithms are equally
 easy to understand and reproduce. We may be able to give you some advice,
 or, on the contrary, warn you that this particular algorithm will require too many
-computational resources to fully reproduce the results and it is better to do something else.
+computational resources to fully reproduce the results, and it is better to do something else.
 
 ### Running benchmarks
 
 Although you will have to do a hyperparameter search while reproducing the algorithm,
-in the end we expect to see final configs in `configs/contrib/<algo_name>/<dataset_name>.yaml` with the best hyperparameters for all calculated
-datasets. The configs should be in yaml format, containing all parameters sorted
+in the end we expect to see final configs in `configs/contrib/<algo_type>/<algo_name>/<dataset_name>.yaml` with the best hyperparameters for all
+datasets considered. The configs should be in `yaml` format, containing all hyperparameters sorted
 in alphabetical order (see existing configs for inspiration).
 
-Use this conventions to name your runs in the configs:
+Use these conventions to name your runs in the configs:
 1. `name: <algo_name>`
-2. `group: <algo_name>-<dataset_name>-multiseed-v0`. Increment version if needed
+2. `group: <algo_name>-<dataset_name>-multiseed-v0`, increment version if needed
 3.
    use our [\_\_post_init\_\_](https://github.com/tinkoff-ai/CORL/blob/962688b405f579a1ce6ec1b57e6369aaf76f9e69/algorithms/offline/awac.py#L48) implementation in your config dataclass
 
-Since we are releasing wandb logs for all algorithms, you will need to submit multiseed (4 seeds)
+Since we are releasing wandb logs for all algorithms, you will need to submit multiseed (~4 seeds)
 training runs to the `CORL` project in the wandb [corl-team](https://wandb.ai/corl-team) organization.
 We'll invite you there when the time comes. We usually use wandb sweeps for this. You can use this example config
 (it will work with pyrallis as it expects a `config_path` cli argument):
@@ -76,15 +80,15 @@ program: algorithms/contrib/<algo_name>.py
 method: grid
 parameters:
   config_path:
+    # algo_type is offline or finetune (see sections above)
     values: [
-        "configs/contrib/<algo_name>/<dataset_name_1>.yaml",
-        "configs/contrib/<algo_name>/<dataset_name_2>.yaml",
-        "configs/contrib/<algo_name>/<dataset_name_3>.yaml",
+        "configs/contrib/<algo_type>/<algo_name>/<dataset_name_1>.yaml",
+        "configs/contrib/<algo_type>/<algo_name>/<dataset_name_2>.yaml",
+        "configs/contrib/<algo_type>/<algo_name>/<dataset_name_3>.yaml",
     ]
   train_seed:
     values: [0, 1, 2, 3]
 ```
-
 Then proceed as usual. Create wandb sweep with `wandb sweep sweep_config.yaml`, then run agents with `wandb agent <sweep_id>`.
 
 ### Checklist
@@ -93,5 +97,5 @@ Then proceed as usual. Create wandb sweep with `wandb sweep sweep_config.yaml`,
 - [ ] Single-file implementation is added to the `algorithms/contrib`
 - [ ] PR has passed all the tests
 - [ ] Evidence that implementation reproduces original results is provided
-- [ ] Configs with best hyperparameters for all datasets are added to the `configs/contrib`
-- [ ] Logs for best hyperparameters are submitted to the our wandb organization
+- [ ] Configs with the best hyperparameters for all datasets are added to the `configs/contrib`
+- [ ] Logs for best hyperparameters are submitted to our wandb organization
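For illustration, the run-naming conventions and the `\_\_post_init\_\_` step referenced in the patch can be sketched as a plain dataclass. This is only a sketch: pyrallis would normally populate the fields from the yaml config, and the field names (`algo_name`, `dataset_name`) and the uuid suffix are illustrative assumptions, not CORL's exact API.

```python
import uuid
from dataclasses import dataclass


@dataclass
class TrainConfig:
    # hypothetical field names; with pyrallis these would be
    # filled from the yaml file passed via --config_path
    algo_name: str = "my_algo"
    dataset_name: str = "halfcheetah-medium-v2"
    name: str = ""
    group: str = ""

    def __post_init__(self):
        # group follows the "<algo_name>-<dataset_name>-multiseed-v0" convention
        self.group = f"{self.algo_name}-{self.dataset_name}-multiseed-v0"
        # a short unique suffix keeps repeated runs from colliding in wandb
        self.name = f"{self.algo_name}-{str(uuid.uuid4())[:8]}"


config = TrainConfig()
print(config.group)  # my_algo-halfcheetah-medium-v2-multiseed-v0
```

Deriving `group` in `__post_init__` rather than in the yaml keeps every config file free of redundant, easy-to-mistype run names.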