From aecadc9063788045a5c45a2aeed4b94d2a392d1b Mon Sep 17 00:00:00 2001
From: Max Marion
Date: Thu, 12 Oct 2023 16:43:25 -0700
Subject: [PATCH] small typos in eval readme (#671)

---
 scripts/eval/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/eval/README.md b/scripts/eval/README.md
index 201e61959c..ca97cc4bfb 100644
--- a/scripts/eval/README.md
+++ b/scripts/eval/README.md
@@ -31,7 +31,7 @@ You can also modify the specific benchmarks executed and their formatting by mod
 
 ### Evaluation during training
 
-To run evaluatio during training, download this repo, follow the instructions in `scripts/train/README.md` to perform single node pre-training and run the following commands
+To run evaluation during training, download this repo, follow the instructions in `scripts/train/README.md` to perform single node pre-training and run the following commands
 
 
 ```bash
@@ -45,7 +45,7 @@ You can also modify the specific benchmarks executed and their formatting by mod
 
 ICL evaluation can be done offline via the `scripts/eval/eval.py` or during training via `scripts/train/train.py`.
 
-In order to do ICL evaluation you must specify a set of benchmarks you'd like to run via the `icl_tasks` key in your eval/training config. `icl_tasks` can either consist of config, or it can be a file path pointing to a locally accessible YAML config (see `scripts/eval/yamls/icl_tasks.yaml` for an example).
+In order to do ICL evaluation you must specify a set of benchmarks you'd like to run via the `icl_tasks` key in your eval/training config. `icl_tasks` can either consist of config, or it can be a file path pointing to a locally accessible YAML config (see `scripts/eval/yamls/tasks.yaml` for an example).
 
 #### ICL task YAML format
 