
[WIP] pd: add CINN compiler for dpa2 training #4514

Open · wants to merge 2 commits into base: devel

Conversation

@HydrogenSulfate (Contributor) commented Dec 27, 2024

We verified the Paddle CINN compiler on the DPA-2 example (single A100-SXM (40 GB), CUDA 11.8, Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz × 160).

To enable the CINN compiler during training, prepend the environment variable CINN=1 to the command line, e.g. CINN=1 dp --pd train input_torch_medium.json.

Curves:

[image: training curves]

Performance

We tested with torch==2.6.0.dev20241219+cu118

| PyTorch (eager) | PyTorch (compile) | Paddle (eager) | Paddle (CINN compile) |
|---|---|---|---|
| 0.1078 s/step | compile does not support double-backward | 0.1130 s/step | 0.0638 s/step |

Accuracy details:

  • PyTorch: [image]

  • Paddle (eager mode): [image]

  • Paddle (CINN compiler): [image]

TODO:

  • Fix higher-order backward of control flow in Paddle

Summary by CodeRabbit

  • New Features

    • Introduced a new function to convert various input types to boolean values.
    • Added new variables for enhanced model configuration: CINN and DEFAULT_PRECISION.
  • Improvements

    • Updated JIT compilation handling for better performance during model training.
    • Enhanced learning rate handling to ensure precision is maintained.
    • Added profiling capabilities for performance monitoring during training.
  • Bug Fixes

    • Improved error handling for invalid input types in the new boolean conversion function.

deepmd/pd/train/training.py: Fixed
deepmd/pd/utils/env.py: Fixed

coderabbitai bot commented Dec 27, 2024

Warning

Rate limit exceeded

@HydrogenSulfate has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 25 minutes and 9 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 95b201d and 7ca2a9e.

📒 Files selected for processing (2)
  • deepmd/pd/train/training.py (6 hunks)
  • deepmd/pd/utils/env.py (2 hunks)
📝 Walkthrough

The pull request introduces modifications to the training.py and env.py files in the DeepMD project, focusing on enhancing JIT compilation, precision handling, and environment variable configuration. The changes include adding new imports, updating JIT compilation logic, introducing a new to_bool function for environment variable conversion, and modifying the enable_prim function to handle JIT mode more flexibly.

Changes

| File | Change Summary |
|------|----------------|
| deepmd/pd/train/training.py | Added imports for CINN and DEFAULT_PRECISION; updated the run method for JIT compilation and static graph optimization; modified learning rate handling with DEFAULT_PRECISION; introduced nvprof_context for performance profiling |
| deepmd/pd/utils/env.py | Added a to_bool function for converting input to boolean; updated the JIT and CINN environment variables using to_bool; modified enable_prim to handle JIT mode conditionally |

Sequence Diagram

sequenceDiagram
    participant Env as Environment
    participant Training as Training Module
    participant Model as Model
    
    Env->>Env: Configure JIT and CINN settings
    Env->>Training: Set precision and compilation strategy
    Training->>Model: Apply JIT compilation
    Model-->>Training: Optimize forward pass
    Training->>Training: Profile performance

The sequence diagram illustrates the configuration and optimization flow, showing how environment settings influence the training module and model compilation.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
deepmd/pd/train/training.py (2)

402-405: Remove or clarify commented-out code.

These lines comment out a previously raised NotImplementedError and a potential paddle.jit.to_static call. If you no longer need this logic, removing it altogether might reduce confusion and keep the file tidy. Otherwise, add a comment explaining why these lines are kept for future reference.

-# if JIT:
-#     raise NotImplementedError("JIT is not supported yet when training with Paddle")
-#     self.model = paddle.jit.to_static(self.model)

Also applies to: 406-406


925-926: Consider removing the extra commented-out code.

This snippet appears to comment out a JIT debugging break. If it's no longer needed, removing it can avoid potential confusion.

-# if JIT:
-#     break
🧰 Tools
🪛 GitHub Check: CodeQL

[notice] 925-926: Commented-out code
This comment appears to contain commented-out code.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bf79cc6 and 55f7ef6.

📒 Files selected for processing (2)
  • deepmd/pd/train/training.py (8 hunks)
  • deepmd/pd/utils/env.py (2 hunks)
🧰 Additional context used
🪛 GitHub Check: CodeQL
deepmd/pd/utils/env.py

[notice] 61-61: Unused global variable
The global variable 'CINN' is not used.

deepmd/pd/train/training.py

[notice] 925-926: Commented-out code
This comment appears to contain commented-out code.

🔇 Additional comments (7)
deepmd/pd/train/training.py (4)

56-57: Check the new imports for consistency.

The addition of CINN and DEFAULT_PRECISION is consistent with improvements to JIT compilation and precision handling. Nothing problematic is observed here; just ensure that CINN is successfully imported where used and that DEFAULT_PRECISION is consistently applied.


636-648: JIT and CINN integration logic looks good.

You're conditionally enabling JIT using jit.to_static and passing the build_strategy.build_cinn_pass = CINN. This is a clean approach, ensuring that CINN is only used if enabled. Just verify upstream usage to avoid unexpected behaviors if CINN is disabled at runtime.


688-692: Validate data fetching performance within profiling context.

Wrapping the data loading with nvprof_context helps profile overhead. Ensure that exceptions thrown within this block are properly handled so that the profiler is closed gracefully.


706-706: Precision usage for learning rate.

Using paddle.full([], pref_lr, DEFAULT_PRECISION) enforces consistent floating-point precision for the learning rate. This is beneficial for uniformity, especially in multi-task scenarios. Good practice!

Also applies to: 765-765, 815-815

deepmd/pd/utils/env.py (3)

36-50: Robust input validation in to_bool.

Excellent job handling integers, booleans, and string values thoroughly, with clear error messages and suitable lowercasing of strings. This ensures minimal confusion for environment variable parsing.
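
For readers who have not opened env.py, here is a minimal sketch of such a conversion helper, assuming only the behavior described above (the exact accepted strings, error messages, and call sites in the PR may differ):

```python
import os


def to_bool(value) -> bool:
    """Interpret an int, bool, or string as a boolean (sketch of the described behavior)."""
    if isinstance(value, bool):  # check bool before int, since bool is a subclass of int
        return value
    if isinstance(value, int):
        return value != 0
    if isinstance(value, str):
        lowered = value.lower()
        if lowered in ("1", "true", "yes", "on"):
            return True
        if lowered in ("0", "false", "no", "off", ""):
            return False
        raise ValueError(f"Cannot interpret string as bool: {value!r}")
    raise TypeError(f"Unsupported type for bool conversion: {type(value)}")


# Hypothetical usage for the environment variables mentioned in the walkthrough:
CINN = to_bool(os.environ.get("CINN", "0"))
JIT = to_bool(os.environ.get("JIT", "0"))
```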


60-61: CINN usage is valid despite static analysis hint.

While static analysis tools suggest that CINN is “unused,” it is in fact used within training.py (via build_strategy.build_cinn_pass). You can safely disregard that hint.

🧰 Tools
🪛 GitHub Check: CodeQL

[notice] 61-61: Unused global variable
The global variable 'CINN' is not used.


167-182: Flexible prim enabling for JIT and eager modes.

Switching between JIT mode and eager mode, while setting blacklists only in eager mode, is a neat approach to leverage performance improvements. This design offers more control and clarity to developers using either mode.
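
To make the reviewed behavior concrete, below is a rough sketch of such a conditional toggle. The Paddle internals core._set_prim_all_enabled and core.set_prim_eager_enabled are assumed here, and the eager-mode blacklist handling from the PR is only approximated, not reproduced verbatim:

```python
from paddle.framework import core


def enable_prim(enable: bool = True) -> None:
    """Sketch: enable primitive-operator decomposition differently for JIT and eager modes."""
    if JIT or CINN:  # flags assumed to come from this module's environment parsing
        # Static-graph (to_static) path: turn prim on for the whole program.
        core._set_prim_all_enabled(enable)
    else:
        # Eager path: enable prim; the PR additionally sets an op blacklist here,
        # which is omitted because the exact API is not shown in this review.
        core.set_prim_eager_enabled(enable)
```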

@HydrogenSulfate HydrogenSulfate changed the title pd: add CINN compiler for dpa2 [WIP] pd: add CINN compiler for dpa2 Dec 27, 2024

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
deepmd/pd/train/training.py (3)

709-709: Confirm precision alignment for learning rate.
Replacing a float with paddle.full([], pref_lr, DEFAULT_PRECISION) ensures consistent floating-point precision. Verify that using a 0-dimensional tensor is intentional and that the rest of the forward pass expects a scalar.


768-768: Avoid code duplication in validation logic.
This pattern repeats the same approach for creating a scalar tensor from pref_lr. Consider encapsulating the logic in a small helper to reduce repetitive calls and simplify maintenance (see the sketch after these comments).


818-818: Maintain consistent approach for multi-task calls.
Reusing paddle.full([], pref_lr, DEFAULT_PRECISION) in multiple task keys is valid. However, centralizing this snippet could prevent future errors in multi-task scenarios.
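
A minimal way to centralize that snippet, as suggested above; the helper name and call site are hypothetical, and DEFAULT_PRECISION is assumed to be the dtype string that the PR imports from deepmd/pd/utils/env.py:

```python
import paddle

DEFAULT_PRECISION = "float64"  # assumption; in the PR this comes from deepmd/pd/utils/env.py


def scalar_lr(pref_lr: float) -> paddle.Tensor:
    """Wrap a Python float into a 0-D tensor with the default training precision."""
    return paddle.full([], pref_lr, dtype=DEFAULT_PRECISION)


# Hypothetical call site, replacing repeated paddle.full([], pref_lr, DEFAULT_PRECISION):
# model_pred, loss, more_loss = self.wrapper(**input_dict, cur_lr=scalar_lr(pref_lr))
```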

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 55f7ef6 and 95b201d.

📒 Files selected for processing (2)
  • deepmd/pd/train/training.py (6 hunks)
  • deepmd/pd/utils/env.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/pd/utils/env.py
🔇 Additional comments (2)
deepmd/pd/train/training.py (2)

56-57: Use environment-driven feature flags carefully.
Importing CINN and DEFAULT_PRECISION helps toggle experimental optimizations and ensure matching data types throughout training. This is acceptable, but confirm that these environment variables or configurations are properly documented and tested to avoid accidental misconfigurations.


691-695: Validate data fetching within profiler context.
The usage of nvprof_context around get_data() is beneficial for profiling. Make sure exceptions (e.g., StopIteration) in data fetching are handled gracefully to preserve consistent profiling measurements.

Comment on lines +636 to +651
if CINN:
    from paddle import (
        jit,
        static,
    )

    build_strategy = static.BuildStrategy()
    build_strategy.build_cinn_pass: bool = CINN
    self.wrapper.forward = jit.to_static(
        full_graph=True, build_strategy=build_strategy
    )(self.wrapper.forward)
    log.info(
        "Enable CINN during training, there may be some additional "
        "compilation time in the first traning step."
    )


⚠️ Potential issue

Refine the CINN build strategy assignment.

build_strategy.build_cinn_pass: bool = CINN

Here the colon syntax is an annotated assignment: the value is still assigned at runtime, but annotating an attribute of another object is unconventional, is easy to misread as a bare type hint, and the annotation itself is simply discarded. Consider switching to

build_strategy.build_cinn_pass = CINN

so the intent to enable CINN is unambiguous.

- build_strategy.build_cinn_pass: bool = CINN
+ build_strategy.build_cinn_pass = CINN
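
For reference, a quick self-contained check of the Python semantics involved; the stub class below is purely illustrative and is not project code:

```python
class BuildStrategyStub:
    """Stand-in for paddle.static.BuildStrategy, used only to illustrate the syntax."""


stub = BuildStrategyStub()

# Annotated assignment on an attribute: the value IS stored at runtime,
# but the annotation is only evaluated and then thrown away.
stub.build_cinn_pass: bool = True
print(stub.build_cinn_pass)  # True

# Plain assignment performs exactly the same store and reads unambiguously.
stub.build_cinn_pass = True
```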

@HydrogenSulfate HydrogenSulfate changed the title [WIP] pd: add CINN compiler for dpa2 [WIP] pd: add CINN compiler for dpa2 training Dec 27, 2024