
[Feature] Add scheduler for alpha/beta parameters of PrioritizedSampler #2452

Merged
5 commits merged into pytorch:main on Sep 30, 2024

Conversation

LTluttmann
Contributor

Description

Add scheduler for alpha/beta parameters of PrioritizedSampler.

Motivation and Context

Closes #1575

Following the suggestions made by @vmoens in issue #1575, this PR adds scheduler classes through which the user can adjust the alpha and beta parameters of the PrioritizedSampler during training when using the PrioritizedReplayBuffer. This is explicitly suggested in the paper "Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2015. Prioritized experience replay".

The main reason to use separate scheduler classes instead of a simple built-in linear annealing (also suggested by @vmoens in issue #1575) is the greater flexibility they give users. Depending on where the user places the scheduler.step() call, the annealing can happen, for example, after every sample drawn from the replay buffer or after a full training epoch. In addition, the LinearScheduler, StepScheduler and LambdaScheduler classes cover different annealing schemes, and new ones can easily be created.
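For illustration, here is a minimal usage sketch. The keyword names param_name, final_value, num_steps, gamma and n_steps are inferred from the code excerpts discussed in the review below and may not match the merged signatures exactly; the buffer setup is an arbitrary example.

    from torchrl.data import ListStorage, PrioritizedReplayBuffer
    from torchrl.data.replay_buffers.scheduler import LinearScheduler, StepScheduler

    # A prioritized buffer with arbitrary starting values for alpha and beta
    rb = PrioritizedReplayBuffer(alpha=0.7, beta=0.5, storage=ListStorage(1000))
    rb.extend(list(range(100)))

    # Anneal beta linearly from its current value to 1.0 over 100 scheduler steps
    beta_scheduler = LinearScheduler(rb.sampler, param_name="beta", final_value=1.0, num_steps=100)
    # Multiply alpha by 0.5 every 10 scheduler steps
    alpha_scheduler = StepScheduler(rb.sampler, param_name="alpha", gamma=0.5, n_steps=10)

    for _ in range(100):
        batch = rb.sample(16)
        # ... compute TD errors, update priorities, run an optimizer step ...
        beta_scheduler.step()   # stepping here anneals once per sample;
        alpha_scheduler.step()  # stepping once per epoch would anneal per epoch instead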

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.


pytorch-bot bot commented Sep 24, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2452

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 8 New Failures, 2 Unrelated Failures

As of commit 4b2897a with merge base 33e86c5:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Sep 24, 2024
Contributor

@vmoens vmoens left a comment


Amazing! Great and long awaited feature!
Thanks a mil

torchrl/data/replay_buffers/scheduler.py — 6 resolved review threads (4 on outdated code)
Comment on lines +221 to +224
if self._step_cnt % self.n_steps == 0:
    return self.operator(current_val, self.gamma)
else:
    return current_val

ditto

test/test_rb.py Outdated
Comment on lines 3036 to 3041
INIT_ALPHA = 0.7
INIT_BETA = 0.6
GAMMA = 0.1
EVERY_N_STEPS = 10
LINEAR_STEPS = 100
TOTAL_STEPS = 200

let's maybe make these args to the func?
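One way to do that, as a rough sketch (the test function name and parameter spelling here are hypothetical), would be to expose the constants as pytest parameters:

    import pytest

    @pytest.mark.parametrize(
        "init_alpha, init_beta, gamma, every_n_steps, linear_steps, total_steps",
        [(0.7, 0.6, 0.1, 10, 100, 200)],
    )
    def test_prioritized_param_scheduler(
        init_alpha, init_beta, gamma, every_n_steps, linear_steps, total_steps
    ):
        ...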

test/test_rb.py Outdated
Comment on lines 3059 to 3062
expected_alpha_vals = np.linspace(INIT_ALPHA, 0.0, num=LINEAR_STEPS + 1)
expected_alpha_vals = np.pad(
    expected_alpha_vals, (0, TOTAL_STEPS - LINEAR_STEPS), constant_values=0.0
)

let's use torch here
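For instance, a torch equivalent of the two numpy calls above could look like this (a sketch, not the committed code; assumes torch is imported in the test module):

    expected_alpha_vals = torch.linspace(INIT_ALPHA, 0.0, LINEAR_STEPS + 1)
    expected_alpha_vals = torch.nn.functional.pad(
        expected_alpha_vals, (0, TOTAL_STEPS - LINEAR_STEPS), value=0.0
    )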

test/test_rb.py Outdated
Comment on lines 3070 to 3075
assert np.isclose(
    rb.sampler.alpha, expected_alpha_vals[i]
), f"expected {expected_alpha_vals[i]}, got {rb.sampler.alpha}"
assert np.isclose(
    rb.sampler.beta, expected_beta_vals[i]
), f"expected {expected_beta_vals[i]}, got {rb.sampler.beta}"

let's use torch.testing.assert_close
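i.e. roughly the following (a sketch, assuming the expected value arrays have been switched to torch tensors as suggested above; torch.as_tensor only wraps the scalar sampler attributes):

    torch.testing.assert_close(torch.as_tensor(rb.sampler.alpha), expected_alpha_vals[i])
    torch.testing.assert_close(torch.as_tensor(rb.sampler.beta), expected_beta_vals[i])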

@vmoens added the enhancement label on Sep 25, 2024
Contributor

@vmoens vmoens left a comment


LGTM thanks

        self.initial_val = getattr(self.sampler, self.param_name)
        self._step_cnt = 0

    def state_dict(self):

Oh wow! Ok then...

Comment on lines 149 to 153
def _step(self):
    if self._step_cnt < self.num_steps:
        return self.initial_val + (self._delta * self._step_cnt)
    else:
        return self.final_val

yeah that's fine, maybe let's add a comment to let someone know in the future that this should be fixed

torchrl/data/replay_buffers/scheduler.py — 2 resolved review threads
Contributor

@vmoens vmoens left a comment


LGTM thanks

@vmoens vmoens merged commit 5851652 into pytorch:main Sep 30, 2024
69 of 79 checks passed
Labels
  • CLA Signed — This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
  • enhancement — New feature or request
Development

Successfully merging this pull request may close these issues.

[Feature Request] Support for alpha and beta parameters' schedule in torchrl.data.PrioritizedReplayBuffer
3 participants