
Corrected docs to run JPQD in DDP mode #315

Merged
3 commits merged on Jun 7, 2023

Conversation

@ljaljushkin (Contributor) commented May 11, 2023

What does this PR do?

Corrects the command line for launching JPQD. Currently, it launches training in DataParallel (DP) mode, which is slower than DistributedDataParallel (DDP) and may have issues with movement pruning (see openvinotoolkit/nncf#1582).
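
For context, a minimal sketch of the two launch modes (the script name is from the example docs; the GPU count and the trailing `...` are illustrative placeholders, not values from this PR):

```
# DP: a plain `python` launch drives all visible GPUs from a single
# process via torch.nn.DataParallel
python run_audio_classification.py ...

# DDP: torchrun spawns one process per GPU, which is faster and avoids
# the movement-pruning issue tracked in openvinotoolkit/nncf#1582
torchrun --nproc-per-node=4 run_audio_classification.py ...
```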

Before submitting

  • This PR fixes a typo or improves the docs

@ljaljushkin (Contributor, Author)

@vuiseng9 @yujiepan-work @AlexKoff88 please take a look

@yujiepan-work (Contributor)

LGTM

@helena-intel (Collaborator)

Thanks @ljaljushkin! It would be good to update the example tests too (https://github.com/huggingface/optimum-intel/blob/main/tests/openvino/test_training_examples.py), so that the tests cover what we show in the docs (probably good to test both launch methods for at least one example). We can also do that in a separate PR.

@HuggingFaceDocBuilderDev commented May 14, 2023

The documentation is not available anymore as the PR was closed or merged.

@@ -92,4 +92,4 @@ python run_audio_classification.py \
     --seed 0
 ```

-This script should take about 3 hours on a single V100 GPU and produce a quantized Wav2Vec2-base model with ~80% structured sparsity in its linear layers. The model accuracy should converge to about 97.5%.
+This script should take about 3 hours on a single V100 GPU and produce a quantized Wav2Vec2-base model with ~80% structured sparsity in its linear layers. The model accuracy should converge to about 97.5%. To launch the script on multiple GPUs, specify `--nproc-per-node=<number of GPUs>`. Note that a different batch size and other hyperparameters may be required to achieve the same results as on a single GPU.
@ljaljushkin (Contributor, Author) commented on the diff

@vuiseng9 @yujiepan-work @AlexKoff88 mentioned the need for hyperparameter tuning in the multi-GPU case.
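
As a rough illustration of why retuning can be needed (hypothetical numbers, not from this PR; `--per_device_train_batch_size` is the standard Transformers TrainingArguments flag): with DDP, the effective batch size is the per-device batch size times the number of processes.

```
# Hypothetical numbers: per-device batch size 16 on one GPU gives an
# effective batch size of 16; the same value on 4 GPUs gives 16 * 4 = 64.
# To keep the effective batch size at 16 on 4 GPUs, divide by 4:
torchrun --nproc-per-node=4 run_audio_classification.py \
    --per_device_train_batch_size 4 \
    --seed 0
```

(Other arguments omitted; the learning rate may also need adjusting.)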

@ljaljushkin (Contributor, Author)

> (probably good to test both launch methods for at least one example). We can also do that in a separate PR.

Thanks @helena-intel! It makes sense. I've corrected tests here: #319

@echarlaix (Collaborator) left a comment

LGTM, thanks @ljaljushkin
