From b5b92c8c4bd8ad113fe34b58c4a6b41f6b6f519b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Quentin=20Gallou=C3=A9dec?= <45557362+qgallouedec@users.noreply.github.com>
Date: Mon, 22 Apr 2024 14:56:06 +0200
Subject: [PATCH] Update README.md (#161)

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 0e4ad28b..b901f2bb 100644
--- a/README.md
+++ b/README.md
@@ -100,13 +100,13 @@ Here are some examples of how you might use JAT in both evaluation and fine-tuni
 For further details regarding usage, consult the documentation included with individual script files.
 
 ## Dataset
-You can find the training dataset used to train the JAT model at this [Hugging Face dataset repo](https://huggingface.co/datasets/jat-project/jat-dataset). Thhe dataset contains a large selection of Reinforcement Learning, textual and multimodal tasks:
+You can find the training dataset used to train the JAT model at this [Hugging Face dataset repo](https://huggingface.co/datasets/jat-project/jat-dataset). The dataset contains a large selection of Reinforcement Learning, textual and multimodal tasks:
 
 **Reinforment Learning tasks**
 - Atari 57
 - BabyAI
-- Metaworld
-- Mujoco
+- Meta-World
+- MuJoCo
 
 **Textual tasks**
 - Wikipedia