gpu: Fixed some problems in accelerate example
simo-tuomisto committed Oct 11, 2024
1 parent 7eb2f2d commit 201f05b
Showing 2 changed files with 6 additions and 5 deletions.
2 changes: 1 addition & 1 deletion content/examples/run_accelerate_cuda.sh

@@ -5,7 +5,7 @@
 #SBATCH --gpus-per-task=2
 #SBATCH --cpus-per-task=12
 #SBATCH --time=00:10:00
-#SBATCH --output=accelerate_run_parallel.out
+#SBATCH --output=accelerate_cuda.out

 export OMP_NUM_THREADS=$(( $SLURM_CPUS_PER_TASK / $SLURM_GPUS_ON_NODE ))
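The export line above divides the CPUs allotted to the task evenly across the GPUs on the node, so that each GPU worker process gets its own share of OpenMP threads. A minimal bash sketch of the arithmetic, assuming the header values from this script (12 CPUs per task, 2 GPUs); the variable assignments are illustrative stand-ins, since Slurm exports them itself inside a real job:

    # Stand-in values for illustration; inside a job, Slurm sets these.
    SLURM_CPUS_PER_TASK=12
    SLURM_GPUS_ON_NODE=2

    # 12 CPUs / 2 GPUs = 6 OpenMP threads per worker process.
    export OMP_NUM_THREADS=$(( $SLURM_CPUS_PER_TASK / $SLURM_GPUS_ON_NODE ))
    echo "$OMP_NUM_THREADS"   # prints 6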
9 changes: 5 additions & 4 deletions content/gpus.rst
@@ -101,9 +101,9 @@ and installs a few missing Python packages:

 The submission script that launches the container looks like this:

-:download:`run_accelerate_parallel.sh </examples/run_accelerate_parallel.sh>`:
+:download:`run_accelerate_cuda.sh </examples/run_accelerate_cuda.sh>`:

-.. literalinclude:: /examples/run_accelerate_parallel.sh
+.. literalinclude:: /examples/run_accelerate_cuda.sh
    :language: slurm

 .. tabs::
@@ -120,8 +120,9 @@

 .. code-block:: console

-   $ sbatch run_accelerate_parallel.sh
-   $ cat accelerate_run.out
+   $ wget https://raw.githubusercontent.com/huggingface/accelerate/refs/heads/main/examples/nlp_example.py
+   $ sbatch run_accelerate_cuda.sh
+   $ cat accelerate_cuda.out
    Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']
    You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
    You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
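Taken together, the new console lines show the intended workflow: fetch the example training script, submit the job, then read the output file. A minimal sketch of following the job while it runs, assuming the renamed script and output file from this commit (the job id is a made-up placeholder; squeue and tail are generic Slurm and shell tools, not part of the diff):

    $ wget https://raw.githubusercontent.com/huggingface/accelerate/refs/heads/main/examples/nlp_example.py
    $ sbatch run_accelerate_cuda.sh
    Submitted batch job 1234567
    $ squeue --me                    # wait until the job leaves the queue
    $ tail -f accelerate_cuda.out    # follow the training log as it is written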
