
pd: fix learning rate setting when resume #4480

Merged

merged 2 commits into deepmodeling:devel on Dec 20, 2024

Conversation

@HydrogenSulfate (Contributor) commented on Dec 20, 2024

"When resuming training, there is no need to add self.start_step to the step count because Paddle uses lr_sche.last_epoch as the input for step, which already records the start_step steps."

The learning rate is correct after the fix:

(screenshot: learning-rate curve after the fix)
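
For context, the sketch below illustrates the double-counting this PR removes, assuming a Paddle `LambdaDecay`-style scheduler. The names and values (`warmup_steps`, `start_step`, the decay constants) are illustrative, not the exact identifiers in `deepmd/pd/train/training.py`:

```python
import paddle

warmup_steps = 1000   # illustrative value, not the actual config
start_step = 5000     # step count recorded in the checkpoint being resumed

def lr_lambda(step):
    # Illustrative warmup-then-exponential schedule; the real one lives in training.py.
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    return 0.95 ** ((step - warmup_steps) / 1000)

scheduler = paddle.optimizer.lr.LambdaDecay(
    learning_rate=1e-3,
    lr_lambda=lr_lambda,
    last_epoch=start_step,  # last_epoch already records the resumed steps
)

# Each scheduler.step() advances last_epoch by one, so lr_lambda already sees a
# step index offset by start_step; adding self.start_step on top of that (the
# pre-fix behavior) would count the resumed steps twice and overshoot the schedule.
scheduler.step()
```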

Summary by CodeRabbit

  • New Features

    • Enhanced training process with improved optimizer configuration and learning rate adjustments.
    • Refined logging of training and validation results for clarity.
    • Improved model saving logic to preserve the latest state during interruptions.
    • Enhanced tensorboard logging for detailed tracking of training metrics.
  • Bug Fixes

    • Corrected lambda function for learning rate scheduler to reference warmup steps accurately.
  • Chores

    • Streamlined data loading and handling for efficient training across different tasks.

coderabbitai bot commented on Dec 20, 2024

📝 Walkthrough

The pull request introduces modifications to the Trainer class in the deepmd/pd/train/training.py file, focusing on improving the training process and optimizer configuration. The changes primarily enhance the robustness of training, particularly for multi-task scenarios and checkpoint resumption. Key updates include refining the learning rate scheduler, improving logging and model saving mechanisms, and streamlining data handling during training.

Changes

File: deepmd/pd/train/training.py

  • Updated the __init__ method to include an optimizer state dictionary parameter
  • Modified the learning rate scheduler lambda function
  • Refined logging and model saving logic
  • Improved TensorBoard logging
  • Enhanced checkpoint resumption handling

Sequence Diagram

sequenceDiagram
    participant Trainer
    participant Optimizer
    participant LRScheduler
    participant ModelSaver
    participant Logger

    Trainer->>Optimizer: Initialize with optional state dict
    Trainer->>LRScheduler: Configure warmup steps
    loop Training Iteration
        Trainer->>Trainer: Prepare batch data
        Trainer->>Optimizer: Compute gradients
        Optimizer->>LRScheduler: Adjust learning rate
        Trainer->>ModelSaver: Save checkpoint periodically
        Trainer->>Logger: Log training metrics
    end
    Trainer->>ModelSaver: Save final model state



📜 Recent review details


📥 Commits

Reviewing files that changed between c0914e1 and 5fb1509.

📒 Files selected for processing (1)
  • deepmd/pd/train/training.py (1 hunks)
🔇 Additional comments (2)
deepmd/pd/train/training.py (2)

591-591: Ensure correct alignment of warmup scheduling logic.

The new lambda function for the learning rate warmup appears logically consistent. However, please confirm that the “warmup to exponential” transition happens at the desired boundary and that no off-by-one issues occur if you resume from mid-training steps.
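
For reference, one way to sanity-check that boundary is to evaluate the warmup lambda around `warmup_steps` directly (a sketch with placeholder values, not the actual schedule from this PR):

```python
warmup_steps, decay_rate, decay_steps = 100, 0.95, 500  # placeholder values

def lr_lambda(step):
    # Illustrative warmup-then-exponential schedule standing in for the real one.
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    return decay_rate ** ((step - warmup_steps) / decay_steps)

# The multiplier just before and at the boundary should meet without a jump;
# a visible discontinuity here would point to an off-by-one in the transition.
for s in (warmup_steps - 2, warmup_steps - 1, warmup_steps, warmup_steps + 1):
    print(s, lr_lambda(s))
```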


598-598: Double-check the off-by-one effect when resuming.

Subtracting 1 from self.scheduler.last_epoch might shift the learning rate schedule by one step. If your intent is to align the scheduler precisely with the current training step, consider setting last_epoch to self.start_step - 1 or verifying that the shift doesn’t cause discrepancies.
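
If it helps, the alignment can be probed empirically with a throwaway scheduler (a sketch with made-up values; the actual setup in `training.py` may differ):

```python
import paddle

base_lr, warmup, start_step = 1e-3, 100, 50   # made-up values

def lam(step):
    # Stand-in schedule: linear warmup only, so the step index is easy to read off.
    return min(1.0, (step + 1) / warmup)

def lr_trace(last_epoch, n):
    sched = paddle.optimizer.lr.LambdaDecay(base_lr, lam, last_epoch=last_epoch)
    trace = []
    for _ in range(n):
        trace.append(sched.get_lr())
        sched.step()
    return trace

fresh = lr_trace(last_epoch=-1, n=start_step + 5)     # uninterrupted run
resumed = lr_trace(last_epoch=start_step - 1, n=5)    # resume after start_step steps

# If the tail of the fresh run matches the resumed run, last_epoch = start_step - 1
# is the right alignment; a one-step mismatch is exactly the shift described above.
print(fresh[start_step:])
print(resumed)
```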



codecov bot commented on Dec 20, 2024

Codecov Report

Attention: Patch coverage is 0% with 1 line in your changes missing coverage. Please review.

Project coverage is 84.41%. Comparing base (c0914e1) to head (5fb1509).
Report is 1 commit behind head on devel.

Files with missing lines | Patch % | Lines
deepmd/pd/train/training.py | 0.00% | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4480      +/-   ##
==========================================
- Coverage   84.41%   84.41%   -0.01%     
==========================================
  Files         670      670              
  Lines       62147    62149       +2     
  Branches     3487     3486       -1     
==========================================
+ Hits        52464    52465       +1     
- Misses       8556     8558       +2     
+ Partials     1127     1126       -1     

☔ View full report in Codecov by Sentry.

@njzjz added this pull request to the merge queue on Dec 20, 2024
Merged via the queue into deepmodeling:devel with commit c24498b on Dec 20, 2024
60 checks passed
iProzd added a commit to iProzd/deepmd-kit that referenced this pull request on Dec 24, 2024
* change property.npy to any name

* Init branch

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change | to Union

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change sub_var_name default to []

* Solve pre-commit

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* solve scanning github

* fix UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* delete useless file

* Solve some UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Solve precommit

* solve pre-commit

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Solve dptest UT, dpatomicmodel UT, code scanning

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* delete param  and

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Solve UT fail caused by task_dim and property_name

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix UT

* Fix UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix permutation error

* Add property bias UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* recover rcond doc

* recover blank

* Change code according to coderabbitai

* solve pre-commit

* Fix UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change apply_bias doc

* update the version compatibility

* feat (tf/pt): add atomic weights to tensor loss (deepmodeling#4466)

Interfaces are of particular interest in many studies. However, the
configurations in the training set to represent the interface normally
also include large parts of the bulk material. As a result, the final
model would prefer the bulk information while the interfacial
information is less learnt. It is difficult to simply improve the
proportion of interfaces in the configurations since the electronic
structures of the interface might only be reasonable with a certain
thickness of bulk materials. Therefore, I wonder whether it is possible
to define weights for atomic quantities in loss functions. This allows
us to add higher weights for the atomic information for the regions of
interest and probably makes the model "more focused" on the region of
interest.
In this PR, I add the keyword `enable_atomic_weight` to the loss
function of the tensor model. In principle, it could be generalised to
any atomic quantity, e.g., atomic forces.
I would like to know the developers' comments/suggestions about this
feature. I can add support for other loss functions and finish unit
tests once we agree on this feature.

Best. 




<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced an optional parameter for atomic weights in loss
calculations, enhancing flexibility in the `TensorLoss` class.
- Added a suite of unit tests for the `TensorLoss` functionality,
ensuring consistency between TensorFlow and PyTorch implementations.

- **Bug Fixes**
- Updated logic for local loss calculations to ensure correct
application of atomic weights based on user input.

- **Documentation**
- Improved clarity of documentation for several function arguments,
including the addition of a new argument related to atomic weights.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

* delete sub_var_name

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* recover to property key

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix conflict

* Fix UT

* Add document of property fitting

* Delete checkpoint

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add get_property_name to DeepEvalBackend

* pd: fix learning rate setting when resume (deepmodeling#4480)

"When resuming training, there is no need to add `self.start_step` to
the step count because Paddle uses `lr_sche.last_epoch` as the input for
`step`, which already records the `start_step` steps."

The learning rate is correct after the fix.


![22AD6874B74E437E9B133D75ABCC02FE](https://github.com/user-attachments/assets/1ad0ce71-6e1c-4de5-87dc-0daca1f6f038)



<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced training process with improved optimizer configuration and
learning rate adjustments.
	- Refined logging of training and validation results for clarity.
- Improved model saving logic to preserve the latest state during
interruptions.
- Enhanced tensorboard logging for detailed tracking of training
metrics.

- **Bug Fixes**
- Corrected lambda function for learning rate scheduler to reference
warmup steps accurately.

- **Chores**
- Streamlined data loading and handling for efficient training across
different tasks.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

* docs: update deepmd-gnn URL (deepmodeling#4482)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Updated guidelines for creating and integrating new models in the
DeePMD-kit framework.
- Added new sections on descriptors, fitting networks, and model
requirements.
	- Enhanced unit testing section with instructions for regression tests.
- Updated URL for the DeePMD-GNN plugin to reflect new repository
location.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Jinzhe Zeng <[email protected]>

* docs: update DPA-2 citation (deepmodeling#4483)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Updated references in the bibliography for the DPA-2 model to include
a new article entry for 2024.
	- Added a new reference for an attention-based descriptor.
  
- **Bug Fixes**
- Corrected reference links in documentation to point to updated DOI
links instead of arXiv.

- **Documentation**
- Revised entries in the credits and model documentation to reflect the
latest citations and details.
- Enhanced clarity and detail in fine-tuning documentation for
TensorFlow and PyTorch implementations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Jinzhe Zeng <[email protected]>

* docs: fix a minor typo on the title of `install-from-c-library.md` (deepmodeling#4484)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Updated formatting of the installation guide for the pre-compiled C
library.
- Icons for TensorFlow and JAX are now displayed together in the header.
	- Retained all installation instructions and compatibility notes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Jinzhe Zeng <[email protected]>

* fix: print dlerror if dlopen fails (deepmodeling#4485)

xref: deepmodeling/deepmd-gnn#44

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced error messages for library loading failures on non-Windows
platforms.
- Updated thread management environment variable checks for improved
compatibility.
- Added support for mixed types in tensor input handling, allowing for
more flexible configurations.

- **Bug Fixes**
	- Improved error reporting for dynamic library loading issues.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* change doc to py

* Add out_bias out_std doc

* change bias method to compute_stats_do_not_distinguish_types

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change var_name to property_name

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change logic of extensive bias

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add doc for newly added parameter

* change doc for compute_stats_do_not_distinguish_types

* try to fix dptest

* change all property to property_name

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix UT

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Delete key 'property' completely

* Fix UT

* Fix dptest UT

* pd: fix oom error (deepmodeling#4493)

Paddle uses `MemoryError` rather than the `RuntimeError` used in PyTorch; now
I can test DPA-1 and DPA-2 on a 16G V100...

![image](https://github.com/user-attachments/assets/42ead773-bf26-4195-8f67-404b151371de)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved detection of out-of-memory (OOM) errors to enhance
application stability.
- Ensured cached memory is cleared upon OOM errors, preventing potential
memory leaks.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

* pd: add missing `dp.eval()` in pd backend (deepmodeling#4488)

Switch to eval mode when evaluating the model; otherwise `self.training`
will be `True`, a backward graph will be created, and it will cause OOM.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced model evaluation state management to ensure correct behavior
during evaluation.

- **Bug Fixes**
- Improved type consistency in the `normalize_coord` function for better
computational accuracy.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

* [pre-commit.ci] pre-commit autoupdate (deepmodeling#4497)

<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.8.3 →
v0.8.4](astral-sh/ruff-pre-commit@v0.8.3...v0.8.4)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Delete attribute

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Solve comment

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Solve error

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* delete property_name in serialize

---------

Signed-off-by: Jinzhe Zeng <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Chenqqian Zhang <[email protected]>
Co-authored-by: Jia-Xin Zhu <[email protected]>
Co-authored-by: HydrogenSulfate <[email protected]>
Co-authored-by: Jinzhe Zeng <[email protected]>