From 615bd2c7ce525ee0330589272a3608d46211567e Mon Sep 17 00:00:00 2001
From: Li Siyao <81355712+lisiyao21@users.noreply.github.com>
Date: Mon, 13 Jun 2022 18:54:12 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 13504bbf..a7590eec 100644
--- a/README.md
+++ b/README.md
@@ -94,7 +94,7 @@ Then, run the reinforcement learning code as
 
     sh srun_actor_critic.sh configs/actor_critic_demo.yaml train [your node name] 1
 
-Scan ./experiments/actor_critic_for_demo/vis/videos to pick out a relative good results. Since reinforcement learning is not stable, there is no guratee that the synthesized dance is always satisfying. But empirically, fintuning can produce not-too-bad results after fineuning <= 30 epochs. All of our demos in the wild are made
+Scan each stored epoch folder in ./experiments/actor_critic_for_demo/vis/videos to pick a relatively good one. Since reinforcement learning is not stable, there is no guarantee that the synthesized dance is always satisfying. But empirically, finetuning for <= 30 epochs can produce not-too-bad results. All of our demos in the wild are made
 in such way. I wish you could enjoy it.
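
A minimal sketch (not part of the patch) of how one might browse the per-epoch output the updated README describes: it lists the subfolders under ./experiments/actor_critic_for_demo/vis/videos and counts the rendered clips in each, assuming one subfolder per saved epoch and .mp4 output files; the exact layout and file extension in an actual run may differ.

    # Rough helper: list per-epoch video folders so a relatively good result
    # can be picked out by eye. Assumes one subfolder per saved epoch under
    # vis/videos and .mp4 clips inside; adjust the path and extension to match
    # your experiment directory.
    import os

    videos_root = "./experiments/actor_critic_for_demo/vis/videos"

    for name in sorted(os.listdir(videos_root)):
        epoch_dir = os.path.join(videos_root, name)
        if not os.path.isdir(epoch_dir):
            continue
        clips = [f for f in os.listdir(epoch_dir) if f.endswith(".mp4")]
        print(f"{name}: {len(clips)} rendered clip(s)")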